using System.Drawing;
using System.Drawing.Printing;

void testPrint()
{
    PrintDocument pd = new PrintDocument();
    pd.PrintPage += (thesender, ev) =>
    {
        //This is to keep the image within the margins of the page.
        ev.Graphics.DrawImage(Image.FromFile("Your Image Path"),
            new PointF(ev.MarginBounds.Left, ev.MarginBounds.Top));
    };
    pd.Print();
}
Monday, April 28, 2014
print image
C# Tutorial - Simple Threaded TCP Server Listen port
In this tutorial I'm going to show you how to build a threaded TCP server with C#. If you've ever worked with Windows sockets, you know how difficult this can sometimes be. However, thanks to the .NET framework, making one is a lot easier than it used to be.
What we'll be building today is a very simple server that accepts client connections and can send and receive data. The server spawns a thread for each client and can, in theory, accept as many connections as you want (although in practice this is limited because you can only spawn so many threads before Windows will get upset).
Let's just jump into some code. Below is the basic setup for our TCP server class.
using System;
using System.Text;
using System.Net.Sockets;
using System.Threading;
using System.Net;

namespace TCPServerTutorial
{
    class Server
    {
        private TcpListener tcpListener;
        private Thread listenThread;

        public Server()
        {
            this.tcpListener = new TcpListener(IPAddress.Any, 3000);
            this.listenThread = new Thread(new ThreadStart(ListenForClients));
            this.listenThread.Start();
        }
    }
}
So here's a basic server class - without the guts. We've got a TcpListener which does a good job of wrapping up the underlying socket communication, and a Thread which will be listening for client connections. You might have noticed the function ListenForClients that is used for our ThreadStart delegate. Let's see what that looks like.
private void ListenForClients()
{
    this.tcpListener.Start();

    while (true)
    {
        //blocks until a client has connected to the server
        TcpClient client = this.tcpListener.AcceptTcpClient();

        //create a thread to handle communication
        //with the connected client
        Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientComm));
        clientThread.Start(client);
    }
}
This function is pretty simple. First it starts our TcpListener and then sits in a loop accepting connections. The call to AcceptTcpClient will block until a client has connected, at which point we fire off a thread to handle communication with our new client. I used a ParameterizedThreadStart delegate so I could pass the TcpClient object returned by the AcceptTcpClient call to our new thread.
The function I used for the ParameterizedThreadStart is called HandleClientComm. This function is responsible for reading data from the client. Let's have a look at it.
private void HandleClientComm(object client)
{
    TcpClient tcpClient = (TcpClient)client;
    NetworkStream clientStream = tcpClient.GetStream();

    byte[] message = new byte[4096];
    int bytesRead;

    while (true)
    {
        bytesRead = 0;

        try
        {
            //blocks until a client sends a message
            bytesRead = clientStream.Read(message, 0, 4096);
        }
        catch
        {
            //a socket error has occurred
            break;
        }

        if (bytesRead == 0)
        {
            //the client has disconnected from the server
            break;
        }

        //message has successfully been received
        ASCIIEncoding encoder = new ASCIIEncoding();
        System.Diagnostics.Debug.WriteLine(encoder.GetString(message, 0, bytesRead));
    }

    tcpClient.Close();
}
The first thing we need to do is cast client to a TcpClient object, since the ParameterizedThreadStart delegate can only accept object parameters. Next, we get the NetworkStream from the TcpClient, which we'll be using to do our reading. After that we simply sit in a while(true) loop reading information from the client. The Read call will block indefinitely until a message from the client has been received. If you read zero bytes, you know the client has disconnected; otherwise, a message has been successfully received from the client. In my example code, I simply convert the byte array to a string and push it to the debug console. You will, of course, do something more interesting with the data - I hope. If the socket has an error or the client disconnects, you should call Close on the TcpClient object to free up any resources it was using.
Believe it or not, that's pretty much all you need to do to create a threaded server that accepts connections and reads data from clients. However, a server isn't very useful if it can't send data back, so let's look at how to send data to one of our connected clients.
NetworkStream clientStream = tcpClient.GetStream();
ASCIIEncoding encoder = new ASCIIEncoding();
byte[] buffer = encoder.GetBytes("Hello Client!");

clientStream.Write(buffer, 0, buffer.Length);
clientStream.Flush();
Do you remember the TcpClient object that was returned from the call AcceptTcpClient? Well, that's the object we'll be using to send data back to that client. That being said, you'll probably want to keep those objects around somewhere in your server. I usually keep a collection of TcpClient objects that I can use later. Sending data to connected clients is very simple. All you have to do is call Write on the the client's NetworkStream object and pass it the byte array you'd like to send.
Your TCP server is now finished. The hard part is defining a good protocol to use for sending information between the client and server. Application-level protocols are generally unique to each application, so I'm not going to go into any details - you'll just have to invent your own.
But what use is a server without a client to connect to it? This tutorial is mainly about the server, but here's a quick piece of code that shows you how to set up a basic TCP connection and send it a piece of data.
TcpClient client = new TcpClient();
IPEndPoint serverEndPoint = new IPEndPoint(IPAddress.Parse("127.0.0.1"), 3000);

client.Connect(serverEndPoint);

NetworkStream clientStream = client.GetStream();
ASCIIEncoding encoder = new ASCIIEncoding();
byte[] buffer = encoder.GetBytes("Hello Server!");

clientStream.Write(buffer, 0, buffer.Length);
clientStream.Flush();
The first thing we need to do is get the client connected to the server. We use the TcpClient.Connect method to do this. It needs the IPEndPoint of our server to make the connection - in this case I connect it to localhost on port 3000. I then simply send the server the string "Hello Server!".
One very important thing to remember is that one write from the client or server does not always equal one read on the receiving end. For instance, your client could send 10 bytes to the server, but the server may not get all 10 bytes the first time it reads. Using TCP, you're pretty much guaranteed to eventually get all 10 bytes, but it might take more than one read. You should keep that in mind when designing your protocol.
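One common way to cope with this (not covered in the original tutorial) is to frame each message with a length prefix and loop on Read until the whole frame has arrived. The helper names ReadExactly and ReadMessage below are hypothetical, and the 4-byte little-endian prefix is just one possible convention:

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

static class Framing
{
    // Loops until exactly 'count' bytes have been read, since one
    // Write on the sender may arrive as several Reads on our side.
    static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new IOException("Peer disconnected mid-message.");
            offset += read;
        }
        return buffer;
    }

    // Reads one framed message: a 4-byte length prefix, then the payload.
    public static string ReadMessage(NetworkStream stream)
    {
        int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        return Encoding.ASCII.GetString(ReadExactly(stream, length));
    }
}
```

On the sending side you would write BitConverter.GetBytes(payload.Length) first, then the payload bytes themselves.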
That's it! Now get out there and clog the tubes with your fancy new C# TCP servers.
Reference:
http://tech.pro/tutorial/704/csharp-tutorial-simple-threaded-tcp-server
10001 Things Every CS Major Should Do Before Graduating
Graduation season is here. Watching your seniors leave the greenhouse of campus to face the harshness of the real world, you may wonder: when my turn comes, will I be ready? Here are 10001 (binary) things you should have tried before leaving campus.
00000 Own your own domain - buying .com and .com.tw domains each has its own tricks; you should know how to find the best price
00001 Rent your own cloud server - AWS is nearly free for the first year; EC2 and S3 are two services you should at least get familiar with
00010 Install the Apache web server - learn to configure httpd.conf, for example redirecting www.domain.com to domain.com (90% of websites in Taiwan forget to do this)
00100 Build a product with Ruby on Rails - I believe Rails will take over LAMP's position; there are now more openings for Ruby engineers than for LAMP, many of them at excellent startups, so I encourage you to learn this framework well
00101 Solve a real problem for a classmate - talk to classmates, friends, and relatives outside CS, find a real problem in their life, work, or club that you can solve with code, then actually build the website and let them use it
00110 Write an iOS/Android app - if the problem above is better solved with a smartphone app, write an iPhone/Android app for them (NTU students can consider taking the smartphone development course taught by my partner, Prof. Mike Chen)
00111 Use the Facebook API - use Facebook Connect to let users log in to your site quickly and obtain their email and friend list, then figure out how to use that data to serve them better
01000 Use MongoDB (or another NoSQL database) - school teaches a lot about RDBMSs, but you should know that NoSQL is often a better fit these days, especially for large-scale websites
01001 Put AdSense on your site - you won't get rich from it, but you will learn a lot about the inner workings of online advertising
01010 Read lots of other people's code, articles, and books - besides practice, the way to improve at programming and writing is to read other people's work
01100 Contribute to an open source project - the open source world has its own culture, and you can only understand it by actually collaborating with them
01101 Learn to use a packet sniffer - listen to how your favorite game talks to its servers; you will learn a lot
01110 Configure your DSL router so a computer at home can act as a server - you will learn more about TCP/UDP ports
01111 Analyze data with MapReduce - this is one of the hottest topics right now, and you should try it (start here)
10000 Do a summer internship - summer is here, so go work at a company for two months and see what the real world looks like! Startups are an especially good choice, because you will get to touch more interesting things (don't know where to go? Write to me and I'll make an introduction: mr.jamie.blog [at] gmail.com)
10001 Talk to 10 alumni - before deciding whether to pursue a master's degree, get a job, or start a company, go talk to 10 CS alumni who have already graduated. See what they are doing, what they regret not learning, and what they think you should consider doing. Don't make the decision based only on your classmates' advice; they know as little about what they are doing as you do.
I hope the above helps those of you studying computer science.
http://mrjamie.cc/2011/06/27/things-all-cs-students-should-do/
Friday, April 25, 2014
WIFI issue on Windows Mobile Motorola MC9500 Series
If you have found a Wi-Fi problem with your Motorola MC3190 handheld scanner, this article could be the solution for you. We already have 10 monochrome scanners for scanning garments in the factory. Recently we bought 6 units of the Motorola MC3190 colour model with Windows CE 6.0 as its operating system. Although we are happy with the new colour handheld scanner, we had a problem with its Wi-Fi.
When we do a cold boot (press 1+9+power), the Wi-Fi is always disabled and we have to enable it before we can use it. This is not acceptable, because users would have to go into Windows first to enable the Wi-Fi.
After some googling I found that the problem is not unique; many people have the same problem. Searching further, I found a post about this problem from .Fret Developer (http://dotfret.blogspot.com/2010/10/wireless-radio-disabled-on-cold-boot-of.html). The blog owner is a savvy .NET developer.
"After a cold boot of your Motorola / Symbol MC9500 series device, if you find that your wireless radio is disabled, create the following .reg file and add it to the \Application folder on the device;"
REGEDIT4

[HKEY_LOCAL_MACHINE\Drivers\BuiltIn\WLAN]
"InitialState"=dword:00000000
It's easy to follow and it worked.
Several days later I got a .reg file from our supplier that came from Motorola. The file is called "fusion on.reg" and, as with the directions above, it must be put into the \Application folder of the scanner.
REGEDIT4

[HKEY_LOCAL_MACHINE\Drivers\BuiltIn\WLAN]
"InitialState"=dword:00000000
"Prefix"="WLP"
"Dll"="PmProxy.Dll"
"Order"=dword:00000008
"Index"=dword:00000001
"DeviceArrayIndex"=dword:00000004
"WLANSlot"=dword:00000000
"Flags"=dword:00000008
"IClass"=hex(7):\
7B,41,33,32,39,34,32,42,37,2D,39,32,30,43,2D,34,38,36,62,2D,42,30,45,36,\
2D,39,32,41,37,30,32,41,39,39,42,33,35,7D,00,7B,66,38,61,36,62,61,39,38,\
2D,30,38,37,61,2D,34,33,61,63,2D,61,39,64,38,2D,62,37,66,31,33,63,35,62,\
61,65,33,31,7D,00,7B,39,34,31,30,37,44,37,30,2D,33,34,43,46,2D,34,31,39,\
61,2D,42,41,42,41,2D,31,45,35,30,39,36,31,38,35,33,34,37,7D,00,00
"FriendlyName"="WLAN Proxy Driver for Motorola WLAN Adapter"
"SupportedStates"=dword:00000011
"InitFlags"=dword:00000000
"SlotResetOnResume"=dword:00000000
"SlotResetWait"=dword:000001F4

[HKEY_LOCAL_MACHINE\Drivers\BuiltIn\WLAN\ActivityEvents]
"PowerManager/SystemIdleTimerReset"=""
This solution also solved my problem. I don't really know the difference between the two files, but since this one came from my supplier, which is a Motorola distributor, and it seems more complete than the first one from the .fret blog, I decided to use this file.
Once again, without the .fret blog I might not have found the solution so quickly. Thank you.
http://freestuff2.hubpages.com/hub/Motorola-MC3190-WIFI-Problem-Solved
Wednesday, April 23, 2014
_default_ VirtualHost overlap on port 443, the first has precedence
Problem:
# /usr/local/etc/rc.d/apache22 onerestart
Performing sanity check on apache22 configuration:
[Wed Apr 23 15:43:04 2014] [warn] _default_ VirtualHost overlap on port 443, the first has precedence
Syntax OK
Solution:
Add this line to your httpd-ssl.conf:
NameVirtualHost *:443
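For reference, a minimal sketch of how the directive fits into httpd-ssl.conf with two name-based SSL virtual hosts is shown below. The server names, DocumentRoot paths, and certificate paths are placeholders, and note that on Apache 2.2 name-based SSL virtual hosts also require SNI support in both the server build and the clients:

```apache
# Tell Apache 2.2 that *:443 is used for name-based virtual hosts.
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName www.example.com
    DocumentRoot "/usr/local/www/site1"
    SSLEngine on
    SSLCertificateFile "/usr/local/etc/apache22/server.crt"
    SSLCertificateKeyFile "/usr/local/etc/apache22/server.key"
</VirtualHost>

<VirtualHost *:443>
    ServerName mail.example.com
    DocumentRoot "/usr/local/www/site2"
    SSLEngine on
    SSLCertificateFile "/usr/local/etc/apache22/server.crt"
    SSLCertificateKeyFile "/usr/local/etc/apache22/server.key"
</VirtualHost>
```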
Installing Nagios on FreeBSD 10
Nagios is a powerful monitoring system that enables organizations to identify and resolve IT infrastructure problems before they affect critical business processes.
The fork of Nagios into Icinga is a good thing, much in the same way that Quagga was a great fork of Zebra.
# uname -a
FreeBSD bsd10.local 10.0-RELEASE
Install Apache2.2
Install PHP5.4.27
Install MySQL5.5
On the Nagios server, install Nagios:
# cd /usr/ports/net-mgmt/nagios
# make config-recursive
# make config-recursive
# make install
Note: you only need to install Nagios on the machine that is going to act as a monitoring server. You do not need to install Nagios on the clients.
Add the www user to the nagios group:
# pw groupmod nagios -m www
# grep nagios /etc/group
nagios:*:181:www
Enable nagios to start on boot:
# echo 'nagios_enable="YES"' >> /etc/rc.conf
Now copy the sample files to the config files:
# cd /usr/local/etc/nagios/
# cp cgi.cfg-sample cgi.cfg
# cp nagios.cfg-sample nagios.cfg
# cp resource.cfg-sample resource.cfg
Move sample files to a sample folder:
# mkdir -p /usr/local/etc/nagios/sample
# mv /usr/local/etc/nagios/*-sample /usr/local/etc/nagios/sample
Navigate to /usr/local/etc/nagios/objects and do the same:
# cd /usr/local/etc/nagios/objects
# cp commands.cfg-sample commands.cfg
# cp contacts.cfg-sample contacts.cfg
# cp localhost.cfg-sample localhost.cfg
# cp printer.cfg-sample printer.cfg
# cp switch.cfg-sample switch.cfg
# cp templates.cfg-sample templates.cfg
# cp timeperiods.cfg-sample timeperiods.cfg
Move sample files to a sample folder:
# mkdir -p /usr/local/etc/nagios/objects/sample
# mv /usr/local/etc/nagios/objects/*-sample /usr/local/etc/nagios/objects/sample
Note: A sample configuration file for monitoring Windows servers can be found at /usr/ports/net-mgmt/nagios/work/nagios-3.2.3/sample-config/template-object/windows.cfg
Now check your Nagios configuration for errors:
# nagios -v /usr/local/etc/nagios/nagios.cfg
Create a Nagios Admin called "nagiosadmin":
# htpasswd -c /usr/local/etc/nagios/htpasswd.users nagiosadmin
Note: the -c parameter creates the htpasswd file. If the htpasswd file already exists, it is rewritten and truncated.
Note: you must name the admin "nagiosadmin", because it is the default admin name in the configuration files (see "grep -i 'admin' /usr/local/etc/nagios/*.cfg").
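The admin name is referenced by the authorization directives in cgi.cfg; if you ever do want a different admin account, these are the lines to change. A sketch of the stock defaults:

```cfg
# /usr/local/etc/nagios/cgi.cfg (excerpt)
use_authentication=1
authorized_for_system_information=nagiosadmin
authorized_for_configuration_information=nagiosadmin
authorized_for_system_commands=nagiosadmin
authorized_for_all_services=nagiosadmin
authorized_for_all_hosts=nagiosadmin
authorized_for_all_service_commands=nagiosadmin
authorized_for_all_host_commands=nagiosadmin
```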
Change permission:
# chown root:www /usr/local/etc/nagios/htpasswd.users
# chmod 440 /usr/local/etc/nagios/htpasswd.users
Create a Nagios user called "nagiosuser":
# htpasswd /usr/local/etc/nagios/htpasswd.users nagiosuser
Note: you do not need the -c parameter this time, since the htpasswd file has already been created.
Now add the Nagios settings to your Apache configuration:
# vim /usr/local/etc/apache22/Includes/nagios.conf

### [START] nagios
<Directory /usr/local/www/nagios>
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
    Allow from 192.168.6.112
    php_flag engine on
    php_admin_value open_basedir /usr/local/www/nagios/:/var/spool/nagios/
    AuthName "Nagios Access Ya"
    AuthType Basic
    AuthUserFile /usr/local/etc/nagios/htpasswd.users
    Require valid-user
</Directory>

<Directory /usr/local/www/nagios/cgi-bin>
    Options ExecCGI
</Directory>

ScriptAlias /nagios/cgi-bin/ /usr/local/www/nagios/cgi-bin/
Alias /nagios/ /usr/local/www/nagios/
### [END] nagios
Restart Apache:
# /usr/local/etc/rc.d/apache22 restart
Start Nagios:
# /usr/local/etc/rc.d/nagios start
On the Nagios Client, install nrpe2:
# cd /usr/ports/net-mgmt/nrpe
# make config-recursive
# make config-recursive
# make install
Make sure the NRPE configuration file exists:
# ls -l /usr/local/etc/nrpe.cfg
If nrpe.cfg does not exist:
# cp /usr/local/etc/nrpe.cfg.sample /usr/local/etc/nrpe.cfg
Change Permission:
# chmod 440 /usr/local/etc/nrpe.cfg
On the Nagios Client, add the Nagios Server's IP Address to allowed hosts:
# vi /usr/local/etc/nrpe.cfg
allowed_hosts=127.0.0.1,192.168.13.3
Note: comma-separated, with no spaces in between!
On the Nagios Client, enable nrpe2 to start on boot:
# echo 'nrpe2_enable="YES"' >> /etc/rc.conf
On the Nagios Client, start nrpe2:
# /usr/local/etc/rc.d/nrpe2 start
On the Nagios Client, make sure nrpe2 is running:
# ps auxww | grep nrpe
nagios 46166 0.0 0.1 14392 1860 - Is 4:47AM 0:00.00 /usr/local/sbin/nrpe2 -c /usr/local/etc/nrpe.cfg -d
On the Nagios Client, make sure the nrpe2 daemon is running:
# netstat -a | grep 5666
tcp4 0 0 *.5666 *.* LISTEN
tcp6 0 0 *.5666 *.* LISTEN
# sockstat | grep -E 'nagios|nrpe|5666'
nagios nrpe2 99457 3 dgram -> /var/run/logpriv
nagios nrpe2 99457 4 tcp6 *:5666 *:*
nagios nrpe2 99457 5 tcp4 *:5666 *:*
On the Nagios Client, run check_nrpe2 check. You should see the version number on success.
# /usr/local/libexec/nagios/check_nrpe2 -H localhost
NRPE v2.15
On the Nagios Client, you can test some of these by running the following commands:
# /usr/local/libexec/nagios/check_http -H localhost
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_users
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_load
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_hda1
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_sda1
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_total_procs
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_zombie_procs
Note: plugins are stored in /usr/local/libexec/nagios.
At this point, you are done installing and configuring NRPE on the remote host (Nagios Client). Now it's time to install a component and make some configuration entries on your monitoring server.
On the Nagios Server, install nrpe2:
# cd /usr/ports/net-mgmt/nrpe
# make install
Make sure the check_nrpe2 plugin can talk to the NRPE daemon on the remote host. Replace "192.168.13.156" in the command below with the IP address of the remote host that has NRPE installed. Run the following command on the Nagios Server:
# /usr/local/libexec/nagios/check_nrpe2 -H 192.168.13.156
NRPE v2.15
On the Nagios Server, run the following command for testing:
# /usr/local/libexec/nagios/check_nrpe2 -H 192.168.13.156 -c check_total_procs
Use a Browser to check:
http://192.168.13.2/nagios/
Edit the admin email:
# vim /usr/local/etc/nagios/nagios.cfg
admin_email=me@example.com
admin_pager=me@example.com
Note: Nagios never uses these values itself, but you can access them by using the $ADMINEMAIL$ and $ADMINPAGER$ macros in your notification commands.
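As an illustration (not from the original post), a notification command could reference these macros like this; the command name notify-admin-by-email and the mail binary path are assumptions:

```cfg
define command{
        command_name    notify-admin-by-email
        command_line    /usr/bin/printf "%b" "Host $HOSTNAME$ is $HOSTSTATE$\n" | /usr/bin/mail -s "Nagios alert" $ADMINEMAIL$
        }
```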
Define Generic Contact Template in templates.cfg:
Nagios installation gives a default generic contact template that can be used as a reference to build your contacts. Please note that all the directives mentioned in the generic-contact template below are mandatory. So, if you've decided not to use the generic-contact template definition in your contacts, you should define all these mandatory definitions inside your contacts yourself.
The following generic-contact is already available under /usr/local/etc/nagios/objects/templates.cfg. Also, the templates.cfg is included in the nagios.cfg by default as shown below.
Please note that any of these directives mentioned in the templates.cfg can be overridden when you define a real contact using this generic-template.
# grep templates /usr/local/etc/nagios/nagios.cfg
cfg_file=/usr/local/etc/nagios/objects/templates.cfg
Note: generic-contact is available under /usr/local/etc/nagios/objects/templates.cfg
Define Individual Contacts in contacts.cfg:
Once you've confirmed that the generic-contact template is defined properly, you can start defining individual contact definitions for all the people in your organization who should ever receive notifications from Nagios. Please note that just defining a contact doesn't mean they'll get notifications; later you have to associate the contact with either a service or host definition, as shown in the sections below. So feel free to define all possible contacts here (for example Developers, DBAs, Sysadmins, IT Manager, Customer Service Manager, Top Management, etc.).
Note: Define these contacts in /usr/local/etc/nagios/objects/contacts.cfg
Define Contact Groups with Multiple Contacts in contacts.cfg:
Once you've defined the individual contacts, you can also group them together to send the appropriate notifications. For example, only the DBAs need to be notified about a database-down service, so a db-admins group may be required. Likewise, maybe only the Unix system administrators need to be notified when Apache goes down, so a unix-admins group may be required. Feel free to define as many groups as you think are required. Later you can use these groups in the individual service and host definitions.
Note: Define contact groups in /usr/local/etc/nagios/objects/contacts.cfg
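A hedged sketch of what such definitions might look like in contacts.cfg; the contact name, alias, and email address below are made up for illustration, and the generic-contact template comes from templates.cfg as described above:

```cfg
# Individual contact, based on the generic-contact template.
define contact{
        contact_name    john
        use             generic-contact
        alias           John the DBA
        email           john@example.com
        }

# Group used to notify all database administrators at once.
define contactgroup{
        contactgroup_name       db-admins
        alias                   Database Administrators
        members                 john
        }
```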
Attach Contact Groups or Individual Contacts to Service and Host Definitions:
Once you've defined the individual contacts and contact groups, it is time to start attaching them to a specific host or service definition as shown below.
Note: Following host is defined under /usr/local/etc/nagios/objects/servers/email-server.cfg. This can be any host definition file.
Note: Following is defined under /usr/local/etc/nagios/objects/servers/db-server.cfg. This can be any host definition file.
We will create a new configuration file for all FreeBSD servers on the LAN:
# touch /usr/local/etc/nagios/objects/lan-freebsd-servers.cfg
# vi /usr/local/etc/nagios/objects/lan-freebsd-servers.cfg
Note: you can either edit the existing localhost.cfg or create the lan-freebsd-servers.cfg file.
Note: comma-separated, with no spaces in between!
Add other FreeBSD hosts on the LAN to the host group member list.
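The original post's definitions are not reproduced here, so below is a minimal sketch of what lan-freebsd-servers.cfg could contain. It reuses the remote host 192.168.13.156 (test-bsd) from earlier, and assumes the stock linux-server and generic-service templates from templates.cfg plus a check_nrpe2 command defined in commands.cfg:

```cfg
define host{
        use             linux-server    ; stock template from templates.cfg
        host_name       test-bsd
        alias           FreeBSD test server
        address         192.168.13.156
        }

define hostgroup{
        hostgroup_name  freebsd-servers
        alias           FreeBSD Servers on the LAN
        members         test-bsd        ; comma-separated, no spaces
        }

define service{
        use                     generic-service
        host_name               test-bsd
        service_description     Total Processes
        check_command           check_nrpe2!check_total_procs
        }
```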
# vi /usr/local/etc/nagios/objects/localhost.cfg
Remember to add host name to /etc/hosts:
# vi /etc/hosts
192.168.13.156 test-bsd
192.168.13.242 web1
192.168.13.108 bsd-sql
192.168.13.2 fw1
Define check_nrpe2 command in order to allow Nagios Server to run the check_nrpe2 command. Add following lines to commands.cfg:
# vi /usr/local/etc/nagios/objects/commands.cfg
Note: $USERn$ macros are defined in /usr/local/etc/nagios/resource.cfg.
Note: Standard macros that are available in Nagios are listed here http://nagios.sourceforge.net/docs/3_0/macrolist.html .
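The actual lines from the original post did not survive here; the conventional NRPE command definition, which this sketch assumes, looks like:

```cfg
# Run the check_nrpe2 plugin against the target host.
# $USER1$ is the plugin directory from resource.cfg;
# $ARG1$ is the remote command name (e.g. check_load).
define command{
        command_name    check_nrpe2
        command_line    $USER1$/check_nrpe2 -H $HOSTADDRESS$ -c $ARG1$
        }
```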
Add following line to nagios.cfg:
# vi /usr/local/etc/nagios/nagios.cfg
# Definitions for monitoring the freebsd servers on the lan.
cfg_file=/usr/local/etc/nagios/objects/lan-freebsd-servers.cfg
Now check your Nagios configuration for errors:
# /usr/local/bin/nagios -v /usr/local/etc/nagios/nagios.cfg
Restart nagios if everything was okay:
# /usr/local/etc/rc.d/nagios restart
On the Nagios Client, install check_mysql_health plugin:
# cd /usr/ports/net-mgmt/check_mysql_health
# make install
Note: there is a plugin called "check_mysql" in nagios-plugins-1.4.15_1,1. However, check_mysql_health seems better.
Go to your MySQL server and grant "no privileges" (USAGE) to a nagios user:
# mysql -u root -p
mysql> GRANT USAGE ON *.* TO 'nagios'@'localhost' IDENTIFIED BY 'nagios';
mysql> FLUSH PRIVILEGES;
mysql> exit
If you want to monitor MySQL replication status as well, grant the "REPLICATION CLIENT" privilege to the nagios user:
# mysql -u root -p
mysql> GRANT REPLICATION CLIENT ON *.* TO 'nagios'@'localhost' IDENTIFIED BY 'nagios';
mysql> FLUSH PRIVILEGES;
mysql> exit
# mysql -u nagios -p
mysql> show grants;
View check_mysql_health options:
# /usr/local/libexec/nagios/check_mysql_health -h
You can test some of these by running the following commands on Nagios Client:
# /usr/local/libexec/nagios/check_mysql_health --hostname localhost --username nagios --password nagios --mode uptime --warning 2 --critical 5
Note: the command above will trigger a WARNING if the MySQL uptime is greater than 2 minutes, and a CRITICAL if it is greater than 5 minutes.
Please note that the thresholds must be specified according to the Nagios plug-in development guidelines.
10 // means "Alarm, if > 10" (without colon).
90: // means "Alarm, if < 90" (with colon).
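To make the colon rule concrete, here is a small illustrative shell sketch (not part of check_mysql_health) of how such a threshold spec is interpreted; check_threshold is a made-up function name:

```shell
#!/bin/sh
# Interpret a simplified Nagios-style threshold spec:
#   "10"  -> alarm when the value is greater than 10
#   "90:" -> alarm when the value is less than 90
check_threshold() {
  value=$1
  spec=$2
  case "$spec" in
    *:) min=${spec%:}
        [ "$value" -lt "$min" ] && echo ALARM || echo OK ;;
    *)  [ "$value" -gt "$spec" ] && echo ALARM || echo OK ;;
  esac
}

check_threshold 95 90:   # prints OK    (95 is not below 90)
check_threshold 80 90:   # prints ALARM (80 is below 90)
check_threshold 12 10    # prints ALARM (12 exceeds 10)
check_threshold 5 10     # prints OK
```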
On Nagios Client, edit nrpe.cfg:
# vi /usr/local/etc/nrpe.cfg
### MySQL - hardcoded command arguments.
command[check_mysql_health_uptime]=/usr/local/libexec/nagios/check_mysql_health --hostname localhost --username nagios --password nagios --mode uptime
command[check_mysql_health_slave-io-running]=/usr/local/libexec/nagios/check_mysql_health --hostname localhost --username nagios --password nagios --mode slave-io-running
command[check_mysql_health_slave-sql-running]=/usr/local/libexec/nagios/check_mysql_health --hostname localhost --username nagios --password nagios --mode slave-sql-running
On Nagios Client, restart nrpe2:
# /usr/local/etc/rc.d/nrpe2 restart
You can test some of these by running the following commands on Nagios Client:
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_mysql_health_uptime
You can test some of these by running the following commands on Nagios Server:
# /usr/local/libexec/nagios/check_nrpe2 -H 192.168.13.108 -c check_mysql_health_uptime
# /usr/local/libexec/nagios/check_nrpe2 -H 192.168.13.108 -c check_mysql_health_slave-io-running
# /usr/local/libexec/nagios/check_nrpe2 -H 192.168.13.108 -c check_mysql_health_slave-sql-running
Check the system messages if it did not work:
# tail /var/log/messages
Reference:
http://www.wonkity.com/~wblock/docs/nagios.pdf
http://www.weithenn.org/cgi-bin/wiki.pl?Nagios-%E7%B6%B2%E8%B7%AF%E7%9B%A3%E6%8E%A7%E5%8F%8A%E5%91%8A%E8%AD%A6%E7%B3%BB%E7%B5%B1
http://nagios.sourceforge.net/docs/nrpe/NRPE.pdf
http://nagios.sourceforge.net/docs/3_0/macros.html
http://www.thegeekstuff.com/2009/06/4-steps-to-define-nagios-contacts-with-email-and-pager-notification/
Nagios is a powerful monitoring system that enables organizations to identify and resolve IT infrastructure problems before they affect critical business processes.
the fork of nagios to icinga is a good thing, much in the same way as quagga was a great fork of zebra.
# uname -a
FreeBSD bsd10.local 10.0-RELEASE
Install Apache2.2
Install PHP5.4.27
Install MySQL5.5
On the Nagios server, install Nagios:
# cd /usr/ports/net-mgmt/nagios
# make config-recursive
# make config-recursive
# make install
Note: you only need to install Nagios on the machine that is going to act as a monitoring server. You do not need to install Nagios on the clients.
Add the www user to the nagios group:
# pw groupmod nagios -m www
# grep nagios /etc/group
nagios:*:181:www
Enable nagios to start on boot:
# echo 'nagios_enable="YES"' >> /etc/rc.conf
Now copy the sample files to the config files:
# cd /usr/local/etc/nagios/
# cp cgi.cfg-sample cgi.cfg
# cp nagios.cfg-sample nagios.cfg
# cp resource.cfg-sample resource.cfg
Move sample files to a sample folder:
# mkdir -p /usr/local/etc/nagios/sample
# mv /usr/local/etc/nagios/*-sample /usr/local/etc/nagios/sample
Navigate to /usr/local/etc/nagios/objects and do the same:
# cd /usr/local/etc/nagios/objects
# cp commands.cfg-sample commands.cfg
# cp contacts.cfg-sample contacts.cfg
# cp localhost.cfg-sample localhost.cfg
# cp printer.cfg-sample printer.cfg
# cp switch.cfg-sample switch.cfg
# cp templates.cfg-sample templates.cfg
# cp timeperiods.cfg-sample timeperiods.cfg
Move sample files to a sample folder:
# mkdir -p /usr/local/etc/nagios/objects/sample
# mv /usr/local/etc/nagios/objects/*-sample /usr/local/etc/nagios/objects/sample
Note: A sample configuration file for monitoring windows servers can be found at /usr/ports/net-mgmt/nagios/work/nagios-3.2.3/sample-config/template-object/windows.cfg
Now check your Nagios configuration for errors:
# nagios -v /usr/local/etc/nagios/nagios.cfg
Create a Nagios Admin called "nagiosadmin":
# htpasswd -c /usr/local/etc/nagios/htpasswd.users nagiosadmin
Note: the -c parameter creates the htpasswd file. If htpasswd file already exists, it is rewritten and truncated.
Note: you must name the admin "nagiosadmin", because it is the default admin name in the configuration files (see "grep -i 'admin' /usr/local/etc/nagios/*.cfg").
Change permission:
# chown root:www /usr/local/etc/nagios/htpasswd.users
# chmod 440 /usr/local/etc/nagios/htpasswd.users
Create a Nagios user called "nagiosuser":
# htpasswd /usr/local/etc/nagios/htpasswd.users nagiosuser
Note: you do not need the -c parameter this time, since the htpasswd file has already been created.
Now add Nagios setting to your apache configuration:
# vim /usr/local/etc/apache22/Includes/nagios.conf
### [START] nagios
<Directory /usr/local/www/nagios>
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
    Allow from 192.168.6.112
    php_flag engine on
    php_admin_value open_basedir /usr/local/www/nagios/:/var/spool/nagios/
    AuthName "Nagios Access Ya"
    AuthType Basic
    AuthUserFile /usr/local/etc/nagios/htpasswd.users
    Require valid-user
</Directory>
<Directory /usr/local/www/nagios/cgi-bin>
    Options ExecCGI
</Directory>
ScriptAlias /nagios/cgi-bin/ /usr/local/www/nagios/cgi-bin/
Alias /nagios/ /usr/local/www/nagios/
### [END] nagios
Restart Apache:
# /usr/local/etc/rc.d/apache22 restart
Start Nagios:
# /usr/local/etc/rc.d/nagios start
On the Nagios Client, install nrpe2:
# cd /usr/ports/net-mgmt/nrpe
# make config-recursive
# make config-recursive
# make install
Make the Nagios configuration file:
# ls -l /usr/local/etc/nrpe.cfg
If nrpe.cfg does not exist:
# cp /usr/local/etc/nrpe.cfg.sample /usr/local/etc/nrpe.cfg
Change Permission:
# chmod 440 /usr/local/etc/nrpe.cfg
On the Nagios Client, add the Nagios Server's IP Address to allowed hosts:
# vi /usr/local/etc/nrpe.cfg
allowed_hosts=127.0.0.1,192.168.13.3
Note: the list is comma-separated, with no spaces in between!
On the Nagios Client, enable nrpe2 to start on boot:
# echo 'nrpe2_enable="YES"' >> /etc/rc.conf
On the Nagios Client, start nrpe2:
# /usr/local/etc/rc.d/nrpe2 start
On the Nagios Client, make sure nrpe2 is running:
# ps auxww | grep nrpe
nagios 46166 0.0 0.1 14392 1860 - Is 4:47AM 0:00.00 /usr/local/sbin/nrpe2 -c /usr/local/etc/nrpe.cfg -d
On the Nagios Client, make sure the nrpe2 daemon is running:
# netstat -a | grep 5666
tcp4 0 0 *.5666 *.* LISTEN
tcp6 0 0 *.5666 *.* LISTEN
# sockstat | grep -E 'nagios|nrpe|5666'
nagios nrpe2 99457 3 dgram -> /var/run/logpriv
nagios nrpe2 99457 4 tcp6 *:5666 *:*
nagios nrpe2 99457 5 tcp4 *:5666 *:*
On the Nagios Client, run check_nrpe2 check. You should see the version number on success.
# /usr/local/libexec/nagios/check_nrpe2 -H localhost
NRPE v2.15
On the Nagios Client, you can test some of these by running the following commands:
# /usr/local/libexec/nagios/check_http -H localhost
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_users
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_load
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_hda1
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_sda1
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_total_procs
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_zombie_procs
Note: plugins are stored in /usr/local/libexec/nagios.
At this point, you are done installing and configuring NRPE on the remote host (the Nagios Client). Now it's time to install a component and make some configuration entries on your monitoring server.
On the Nagios Server, install nrpe2:
# cd /usr/ports/net-mgmt/nrpe
# make install
Make sure the check_nrpe2 plugin can talk to the NRPE daemon on the remote host. Replace "192.168.13.156" in the command below with the IP address of the remote host that has NRPE installed. Run the following command on the Nagios Server:
# /usr/local/libexec/nagios/check_nrpe2 -H 192.168.13.156
NRPE v2.15
On the Nagios Server, run the following command for testing:
# /usr/local/libexec/nagios/check_nrpe2 -H 192.168.13.156 -c check_total_procs
Use a Browser to check:
http://192.168.13.2/nagios/
Edit the admin email:
# vim /usr/local/etc/nagios/nagios.cfg
admin_email=me@example.com
admin_pager=me@example.com
Note: Nagios never uses these values itself, but you can access them by using the $ADMINEMAIL$ and $ADMINPAGER$ macros in your notification commands.
Define Generic Contact Template in templates.cfg:
The Nagios installation provides a default generic contact template that can be used as a reference to build your contacts. Please note that all the directives mentioned in the generic-contact template below are mandatory. So, if you've decided not to use the generic-contact template in your contact definitions, you must define all of these mandatory directives in each contact yourself.
The following generic-contact template is already available under /usr/local/etc/nagios/objects/templates.cfg. Also, templates.cfg is included in nagios.cfg by default, as shown below.
Please note that any of these directives mentioned in the templates.cfg can be overridden when you define a real contact using this generic-template.
# grep templates /usr/local/etc/nagios/nagios.cfg
cfg_file=/usr/local/etc/nagios/objects/templates.cfg
Note: generic-contact is available under /usr/local/etc/nagios/objects/templates.cfg
define contact{
    name                            generic-contact
    service_notification_period     24x7
    host_notification_period        24x7
    service_notification_options    w,u,c,r,f,s
    host_notification_options       d,u,r,f,s
    service_notification_commands   notify-service-by-email
    host_notification_commands      notify-host-by-email
    register                        0
}
- name - This defines the name of the contact template (generic-contact).
- service_notification_period - This defines when nagios can send notification about services issues (for example, Apache down). By default this is 24×7 timeperiod, which is defined under /usr/local/etc/nagios/objects/timeperiods.cfg
- host_notification_period - This defines when nagios can send notification about host issues (for example, server crashed). By default, this is 24×7 timeperiod.
- service_notification_options - This defines the type of service notification that can be sent out. By default this defines all possible service states including flapping events. This also includes the scheduled service downtime activities.
- host_notification_options - This defines the type of host notifications that can be sent out. By default this defines all possible host states including flapping events. This also includes the scheduled host downtime activities.
- service_notification_commands - By default this defines that the contact should get notification about service issues (for example, database down) via email. You can also define additional commands and add it to this directive. For example, you can define your own notify-service-by-sms command.
- host_notification_commands - By default this defines that the contact should get notification about host issues (for example, host down) via email. You can also define additional commands and add it to this directive. For example, you can define your own notify-host-by-sms command.
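The text above mentions defining your own notify-service-by-sms command. As a sketch of what such a custom command could look like in commands.cfg (the /usr/local/bin/send_sms script and the command name are hypothetical; only the $...$ macros are standard Nagios macros):

```
define command{
    command_name    notify-service-by-sms
    command_line    /usr/local/bin/send_sms $CONTACTPAGER$ "$NOTIFICATIONTYPE$: $SERVICEDESC$ on $HOSTNAME$ is $SERVICESTATE$"
}
```

You would then add notify-service-by-sms to the service_notification_commands directive of the contacts who should receive SMS alerts.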
Define Individual Contacts in contacts.cfg:
Once you've confirmed that the generic-contact template is defined properly, you can start defining individual contact definitions for all the people in your organization who should ever receive notifications from Nagios. Please note that just defining a contact doesn't mean they'll get notifications; you must later associate the contact with a service or host definition, as shown in the sections below. So feel free to define all possible contacts here (for example, developers, DBAs, sysadmins, IT manager, customer service manager, top management, etc.).
Note: Define these contacts in /usr/local/etc/nagios/objects/contacts.cfg
define contact{
    contact_name    sgupta
    use             generic-contact
    alias           Sanjay Gupta (Developer)
    email           sgupta@thegeekstuff.com
    pager           333-333@pager.thegeekstuff.com
}

define contact{
    contact_name    jbourne
    use             generic-contact
    alias           Jason Bourne (Sysadmin)
    email           jbourne@thegeekstuff.com
}
Define Contact Groups with Multiple Contacts in contacts.cfg:
Once you've defined the individual contacts, you can also group them together to send the appropriate notifications. For example, only DBAs need to be notified about a database-down service, so a db-admins group may be required. Likewise, maybe only Unix system administrators need to be notified when Apache goes down, so a unix-admins group may be required. Feel free to define as many groups as you think are required. Later you can use these groups in the individual service and host definitions.
Note: Define contact groups in /usr/local/etc/nagios/objects/contacts.cfg
define contactgroup{
    contactgroup_name   db-admins
    alias               Database Administrators
    members             jsmith, jdoe, mraj
}

define contactgroup{
    contactgroup_name   unix-admins
    alias               Linux System Administrator
    members             jbourne, dpatel, mshankar
}
Attach Contact Groups or Individual Contacts to Service and Host Definitions:
Once you've defined the individual contacts and contact groups, it is time to start attaching them to a specific host or service definition as shown below.
Note: the following host is defined under /usr/local/etc/nagios/objects/servers/email-server.cfg. This can be any host definition file.
define host{
    use             linux-server
    host_name       email-server
    alias           Corporate Email Server
    address         192.168.1.14
    contact_groups  unix-admins
}
Note: the following service is defined under /usr/local/etc/nagios/objects/servers/db-server.cfg. This can be any service definition file.
define service{
    use                 generic-service
    host_name           prod-db
    service_description CPU Load
    contact_groups      unix-admins
    check_command       check_nrpe!check_load
}
We will create a new configuration file for all FreeBSD servers on the LAN:
# touch /usr/local/etc/nagios/objects/lan-freebsd-servers.cfg
# vi /usr/local/etc/nagios/objects/lan-freebsd-servers.cfg
Note: you can either edit the existing localhost.cfg or create the lan-freebsd-servers.cfg file.
###############################################################################
# LOCALHOST.CFG - SAMPLE OBJECT CONFIG FILE FOR MONITORING THIS MACHINE
#
# Last Modified: 03-03-2011
#
# NOTE: This config file is intended to serve as an *extremely* simple
#       example of how you can create configuration entries to monitor
#       the local (FreeBSD) machine.
###############################################################################

###############################################################################
# HOST DEFINITION
###############################################################################

# Define a host for the local machine
define host{
    use         freebsd-server     ; Inherit default values from a template
    host_name   test-bsd           ; The name we're giving to this host
    alias       My TEST BSD        ; A longer name associated with the host
    address     192.168.13.156     ; IP address of the host
}

define host{
    use         freebsd-server
    host_name   dev01
    alias       dev01
    address     192.168.13.157
}

define host{
    use         freebsd-server
    host_name   web1
    alias       Online Web
    address     192.168.13.242
}

define host{
    use         freebsd-server
    host_name   bsd-sql
    alias       Online SQL
    address     192.168.13.108
}

define host{
    use         freebsd-server
    host_name   fw1
    alias       Firewall Server
    address     192.168.13.2
}
###############################################################################
# SERVICE DEFINITIONS
###############################################################################

# Define a service to "ping" the local machine
define service{
    use                 generic-service     ; Name of service template to use
    host_name           test-bsd,web1,bsd-sql,fw1,dev01
    service_description PING
    check_command       check_ping!100.0,20%!500.0,60%
}

# Define a service to check SSH on the local machine.
# Disable notifications for this service by default, as not all users may have SSH enabled.
define service{
    use                 generic-service
    host_name           test-bsd,web1,bsd-sql
    service_description SSH
    check_command       check_ssh
    notifications_enabled 0
}

# Define a service to check HTTP.
# Disable notifications for this service by default, as not all users may have HTTP enabled.
define service{
    use                 generic-service
    host_name           web1
    service_description HTTP
    check_command       check_http
    contact_groups      admins
    notifications_enabled 1
}

### A more advanced definition for monitoring the HTTP service is shown below. This service
### definition will check to see if the /index.php URI contains the string "html". It will
### produce an error if the string isn't found, the URI isn't valid, or the web server takes
### longer than 5 seconds to respond.
### If you are checking a virtual server that uses 'host headers' you must supply the FQDN
### (fully qualified domain name) as the [host_name] argument.
define service{
    use                 generic-service
    host_name           web1
    service_description HTTP
    check_command       check_http!-u /index.php -t 5 -s "html"
    contact_groups      admins
    notifications_enabled 1
}

### Note: For more advanced monitoring, run the check_http plugin manually with --help
### as a command-line argument to see all the options you can give the plugin.
### # /usr/local/libexec/nagios/check_http --help
### # /usr/local/libexec/nagios/check_http -H localhost

# Define a service to check the number of currently logged in users.
define service{
    use                 generic-service
    host_name           test-bsd,web1,bsd-sql,fw1,dev01
    service_description Current Users
    check_command       check_nrpe2!check_users
}

# Define a service to check the root partition of the disk.
define service{
    use                 generic-service
    host_name           localhost,test-bsd,web1,bsd-sql,fw1,dev01
    service_description / partition
    check_command       check_nrpe2!check_root
}

# Define a service to check the /usr partition of the disk.
define service{
    use                 generic-service
    host_name           localhost,test-bsd,web1,bsd-sql,fw1,dev01
    service_description /usr partition
    check_command       check_nrpe2!check_usr
}

# Define a service to check the /var partition of the disk.
define service{
    use                 generic-service
    host_name           localhost,test-bsd,web1,bsd-sql,fw1,dev01
    service_description /var partition
    check_command       check_nrpe2!check_var
}

# Define a service to check the /tmp partition of the disk.
define service{
    use                 generic-service
    host_name           localhost,test-bsd,web1,bsd-sql,fw1,dev01
    service_description /tmp partition
    check_command       check_nrpe2!check_tmp
}

# Define a service to check the load.
define service{
    use                 generic-service
    host_name           test-bsd,web1,bsd-sql,fw1,dev01
    service_description Current Load
    check_command       check_nrpe2!check_load
}

# Define a service to check zombie processes.
define service{
    use                 generic-service
    host_name           test-bsd,web1,bsd-sql,fw1,dev01
    service_description Zombie Processes
    check_command       check_nrpe2!check_zombie_procs
}

# Define a service to check total processes.
define service{
    use                 generic-service
    host_name           test-bsd,web1,bsd-sql,fw1,dev01
    service_description Total Processes
    check_command       check_nrpe2!check_total_procs
}

# Define a service to check MySQL uptime.
define service{
    use                 generic-service
    host_name           bsd-sql
    service_description MySQL Uptime
    check_command       check_nrpe2!check_mysql_health_uptime
}

# Define a service to check MySQL slave IO running.
define service{
    use                 generic-service
    host_name           bsd-sql
    service_description MySQL Slave IO
    check_command       check_nrpe2!check_mysql_health_slave-io-running
}

# Define a service to check MySQL slave SQL running.
define service{
    use                 generic-service
    host_name           bsd-sql
    service_description MySQL Slave SQL
    check_command       check_nrpe2!check_mysql_health_slave-sql-running
}
Note: the host_name list is comma-separated, with no spaces in between!
Add other FreeBSD hosts on the LAN to the host group member list.
# vi /usr/local/etc/nagios/objects/localhost.cfg
define hostgroup{
hostgroup_name freebsd-servers ; The name of the hostgroup
alias FreeBSD Servers ; Long name of the group
members localhost,test-bsd,web1,bsd-sql,fw1 ; Comma separated list of hosts that belong to this group
}
Remember to add host name to /etc/hosts:
# vi /etc/hosts
192.168.13.156 test-bsd
192.168.13.242 web1
192.168.13.108 bsd-sql
192.168.13.2 fw1
Define check_nrpe2 command in order to allow Nagios Server to run the check_nrpe2 command. Add following lines to commands.cfg:
# vi /usr/local/etc/nagios/objects/commands.cfg
# 'check_nrpe2' command definition
define command{
    command_name    check_nrpe2
    command_line    $USER1$/check_nrpe2 -H $HOSTADDRESS$ -c $ARG1$
}
Note: $USERn$ macros are defined in /usr/local/etc/nagios/resource.cfg.
Note: Standard macros that are available in Nagios are listed here http://nagios.sourceforge.net/docs/3_0/macrolist.html .
Add following line to nagios.cfg:
# vi /usr/local/etc/nagios/nagios.cfg
# Definitions for monitoring the freebsd servers on the lan.
cfg_file=/usr/local/etc/nagios/objects/lan-freebsd-servers.cfg
Now check your Nagios configuration for errors:
# /usr/local/bin/nagios -v /usr/local/etc/nagios/nagios.cfg
Restart Nagios if everything is okay:
# /usr/local/etc/rc.d/nagios restart
On the Nagios Client, install check_mysql_health plugin:
# cd /usr/ports/net-mgmt/check_mysql_health
# make install
Note: there is a plugin called "check_mysql" in nagios-plugins-1.4.15_1,1; however, check_mysql_health seems better.
Go to your MySQL server, and grant USAGE ("no privileges") to a nagios user:
# mysql -u root -p
mysql> GRANT USAGE ON *.* TO 'nagios'@'localhost' IDENTIFIED BY 'nagios';
mysql> FLUSH PRIVILEGES;
mysql> exit
If you want to monitor MySQL replication status as well, grant the "REPLICATION CLIENT" privilege to the nagios user:
# mysql -u root -p
mysql> GRANT REPLICATION CLIENT ON *.* TO 'nagios'@'localhost' IDENTIFIED BY 'nagios';
mysql> FLUSH PRIVILEGES;
mysql> exit
# mysql -u nagios -p
mysql> show grants;
View check_mysql_health options:
# /usr/local/libexec/nagios/check_mysql_health -h
You can test some of these by running the following commands on Nagios Client:
# /usr/local/libexec/nagios/check_mysql_health --hostname localhost --username nagios --password nagios --mode uptime --warning 2 --critical 5
Note: the command above will trigger a WARNING if MySQL uptime is greater than 2 minutes, and a CRITICAL if it is greater than 5 minutes.
Please note that the thresholds must be specified according to the Nagios plug-in development guidelines:
10 means "alarm, if > 10" (without colon).
90: means "alarm, if < 90" (with colon).
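This threshold convention can be sketched in plain /bin/sh (a simplified illustration only; real plugins also accept full "m:n" and "@m:n" range forms, which are omitted here):

```shell
#!/bin/sh
# alert VALUE THRESHOLD -> prints "yes" if VALUE should raise an alarm
alert() {
    value=$1; t=$2
    case $t in
        *:) min=${t%:}                                        # "90:" -> alarm if value < 90
            [ "$value" -lt "$min" ] && echo yes || echo no ;;
        *)  [ "$value" -gt "$t" ] && echo yes || echo no ;;   # "10" -> alarm if value > 10
    esac
}
alert 15 10     # yes (15 > 10)
alert 5  10     # no
alert 80 90:    # yes (80 < 90)
alert 95 90:    # no
```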
On Nagios Client, edit nrpe.cfg:
# vi /usr/local/etc/nrpe.cfg
### MySQL - hardcoded command arguments.
command[check_mysql_health_uptime]=/usr/local/libexec/nagios/check_mysql_health --hostname localhost --username nagios --password nagios --mode uptime
command[check_mysql_health_slave-io-running]=/usr/local/libexec/nagios/check_mysql_health --hostname localhost --username nagios --password nagios --mode slave-io-running
command[check_mysql_health_slave-sql-running]=/usr/local/libexec/nagios/check_mysql_health --hostname localhost --username nagios --password nagios --mode slave-sql-running
On Nagios Client, restart nrpe2:
# /usr/local/etc/rc.d/nrpe2 restart
You can test some of these by running the following commands on Nagios Client:
# /usr/local/libexec/nagios/check_nrpe2 -H localhost -c check_mysql_health_uptime
You can test some of these by running the following commands on Nagios Server:
# /usr/local/libexec/nagios/check_nrpe2 -H 192.168.13.108 -c check_mysql_health_uptime
# /usr/local/libexec/nagios/check_nrpe2 -H 192.168.13.108 -c check_mysql_health_slave-io-running
# /usr/local/libexec/nagios/check_nrpe2 -H 192.168.13.108 -c check_mysql_health_slave-sql-running
Check system message if it did not work:
# tail /var/log/messages
Reference:
http://www.wonkity.com/~wblock/docs/nagios.pdf
http://www.weithenn.org/cgi-bin/wiki.pl?Nagios-%E7%B6%B2%E8%B7%AF%E7%9B%A3%E6%8E%A7%E5%8F%8A%E5%91%8A%E8%AD%A6%E7%B3%BB%E7%B5%B1
http://nagios.sourceforge.net/docs/nrpe/NRPE.pdf
http://nagios.sourceforge.net/docs/3_0/macros.html
http://www.thegeekstuff.com/2009/06/4-steps-to-define-nagios-contacts-with-email-and-pager-notification/
10 Money-Management Habits You Should Never Have
Freedom from money worries is almost everyone's dream, yet most office workers still struggle with managing their finances. In fact, reaching financial freedom doesn't require a rich dad or a head for math. Start with the small things and change your attitude and habits around money, and you too can build up your first pot of gold, then a second, and achieve a better life through sound money management.
01 Thinking "it's just pocket change"
You can't resist buying one more piece of clothing while shopping, you never skip a big dinner with friends, and before you know it you've watched three movies in a month... only at the end of the month do you realize how much you've spent. Not keeping records and spending unconsciously are key reasons money slips away.
02 Seeing a sale and thinking "it would be a waste not to buy"
Impulse purchases and a bargain-hunting mentality push spending beyond actual needs. Having no concept of budgeting or controlling expenses is the main way wallets bleed.
03 Habitually paying by credit card or in installments
If you always satisfy the desire of the moment, you only discover the burden when the bill arrives, sometimes with extra interest on top. Without a long-term plan for your money, you not only fail to save, your quality of life also suffers.
04 A wallet stuffed into chaos
Receipts, loyalty coupons, and credit cards go unsorted, with everything stuffed into the wallet. You can't remember when you last withdrew cash and simply withdraw more when the wallet is empty, so you never know where the money went, and hard-earned reward points often expire unused. Being fuzzy about money naturally means you can't hold onto the wealth in your wallet.
05 Not reading bills carefully and rushing to pay at the last minute
"Pay immediately" reflects a disciplined approach to money. Not taking the time to figure out why costs ran over shows you don't pay enough attention to where your money goes, and it's easy to fall into the vicious cycle of "I don't know why I always spend so much."
06 Rushing to invest without planning how to use your salary
With no planned allocation of income and expenses, you pour most of your money into investments in a hurry to "earn a bit more"; when an unexpected financial need arises, there are no savings to fall back on. Setting aside enough "idle money" that doesn't affect your basic living is the foundation of investing.
07 Buying all kinds of insurance on a salesperson's recommendation
Buying just because the salesperson says "coverage means protection," without doing the homework to understand the policies, you may end up with overlapping coverage and be crushed by the premiums.
08 Buying whatever others say "is making money"
If you can't explain why you invested and simply follow other people's trades, a gain feels like winning the lottery and a loss gets written off as bad luck. Without principles of your own, investments naturally tend to fail.
09 Watching the market all day, unsettled by every number
The moment things look bad or the index drops, you exit and stop your contributions. Short-term churning easily produces the opposite of what you want: buying when net value is high and selling when it's low. It's a losing proposition.
10 Deciding you're "no good at investing" and never studying it
If you always run from the topic of investing, then over the long run, even without overspending, you'll only ever accumulate a "dead salary." Don't forget that time is your greatest asset; spending ten minutes a day learning about investing is the starting point of steady wealth growth.
Bonus: save today, so you have choices tomorrow
Have you ever wondered where all your money is hiding? Some people's money hides in the closet; other people's money ends up in the fridge, the shoe cabinet, the dressing table... 謝依珊 (pen name Snow White), author of "I Work, I Saved a Million" (《我上班,我存到100萬》), turned her thinking around and brought those banknotes back out of the closet.
謝依珊 grew up in a civil servant's family and never had to worry about money; whatever she wanted, her parents would try to provide. "I really grew up like a princess," she says. At 28 she was still happily living paycheck to paycheck, with savings sitting at zero.
If you can't even save, don't dream of making big money
Then one day a relative about her age was crushed by debt, and she finally felt the importance of money: "The values you hold when you're young don't necessarily apply in your thirties. I don't want to reach a point where my life needs to change and have no options because I have no money." She came to understand that only by saving enough money can you give your future self the right to choose.
Hoping to get rich quickly and travel the world, she called her aunt to ask whether she should start by buying a small apartment as an investment, and got a stern lecture: she couldn't even save, yet she wanted to invest her way to wealth, which was putting the cart before the horse.
"If I had gone straight into investing in an apartment, maybe I'd have made money, maybe I'd have lost it, but my money would certainly have been locked up." Looking back, she feels fortunate she saved money the down-to-earth way.
Once she decided to start saving, 謝依珊 changed her behavior thoroughly.
Bringing money back from the closet, the ledger, and the finance books
First, change your lifestyle. "If you can't even manage your life, wanting to make big money is getting ahead of yourself," 謝依珊 says. Her closet used to overflow with clothes, "many with the tags still on; some I didn't even remember buying." Tidying the closet and clearing out what she didn't need showed her exactly what she owned, so she stopped unknowingly buying piles of duplicates.
Next, save half your salary. As soon as her salary of roughly NT$40,000 arrived, she immediately transferred more than half into another account; after fixed expenses such as rent, what remained was a monthly spending allowance of NT$10,000. To avoid overspending, she cut her clothing budget and wore only old clothes. "Give yourself a chance to find out what it feels like to save up a sum of money."
Cutting the clothing budget freed up NT$5,000 a month, and she occasionally put some of it toward courses at the vocational training bureau to pick up knowledge in other fields. She once took a home-renovation course: "The government subsidizes 80% of the cost. Why not use it?"
Then comes bookkeeping. "It's like jogging: do a little every day and build up a sensitivity to numbers." Writing entries by hand made them stick, and it also made her spend time recalling the day's purchases; over time, she became much more alert to her own spending.
Finally, learn about finance. She started with personal-finance books, newspapers, and magazines: "I don't need to become a stock operator or an expert, but at the very least I don't want to be cheated, and I want to make good use of my own assets." She also used each month's leftover money to practice buying odd lots of stock, which in turn got her interested in current events.
Her monthly salary, holiday bonuses, and occasional side-job income all went faithfully into savings, and two and a half years later she had saved her first NT$1,000,000.
"Once money is put in a good place, it pays you back double and flows back to you." To 謝依珊, the NT$1,000,000 is not an end but a stepping stone toward her dreams; it has since taken her to the United States, Russia, Turkey, and other countries.
Do you want to be the next 謝依珊? It's not too late to start now!
Wednesday, April 16, 2014
csh tcsh alias command line argument
# vi ~/.cshrc
alias ee 'echo \!:1 secondArg'
# source ~/.cshrc
# ee firstArg
firstArg secondArg
Alias argument selectors; the ability to define an alias to take arguments supplied to it and apply them to the commands that it refers to. Tcsh is the only shell that provides this feature.
- \!# - argument selector for all arguments, including the alias/command itself; arguments need not be supplied.
- \!* - argument selector for all arguments, excluding the alias/command; arguments need not be supplied.
- \!$ - argument selector for the last argument; argument need not be supplied, but if none is supplied, the alias name is considered to be the last argument.
- \!^ - argument selector for first argument; argument MUST be supplied.
- \!:n - argument selector for the nth argument; argument MUST be supplied; n=0 refers to the alias/command name.
- \!:m-n - argument selector for the arguments from the mth to the nth; arguments MUST be supplied.
- \!:n-$ - argument selector for the arguments from the nth to the last; at least argument n MUST be supplied.
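A few more sketches in the same style as the ee example above (illustrative aliases of my own; outputs are what the selector rules above imply for an interactive tcsh session):

```
alias last 'echo \!$'
alias swap 'echo \!:2 \!:1'
alias rest 'echo \!:2-$'
# last a b c
c
# swap a b
b a
# rest a b c d
b c d
```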
http://en.wikipedia.org/wiki/Tcsh
Tuesday, April 15, 2014
How to Design a Good API & Why it Matters
Summary
A well-written API can be a great asset to the organization that wrote it and to all that use it. Given the importance of good API design, surprisingly little has been written on the subject. In this talk (recorded at Javapolis), Java library designer Joshua Bloch teaches how to design good APIs, with many examples of what good and bad APIs look like.
http://www.infoq.com/presentations/effective-api-design
Where is weight stored in the database?
Weight is an attribute in Magento's EAV system.
Look at the table eav_attribute. Find the row with attribute code 'weight' and entity_type_id 4. (Entity type 4 means products.) In my table, this is row 64. This means the weight attribute is attribute 64.
Now look at catalog_product_entity_decimal. This is where all decimal attributes for products are stored, and weight is a decimal attribute. All the rows having attribute_id 64 are weight values. The entity_id values correspond to the products.
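Putting the two lookups together as queries, in the same style as the mysql sessions above (the attribute_id of 64 is just the value from this example, and the `value` column name is an assumption based on a stock Magento schema):

```
# mysql -u root -p
mysql> SELECT attribute_id FROM eav_attribute WHERE attribute_code = 'weight' AND entity_type_id = 4;
mysql> SELECT entity_id, value FROM catalog_product_entity_decimal WHERE attribute_id = 64;
```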
Reference:
http://www.magentocommerce.com/boards/viewthread/14761
Monday, April 14, 2014
Why is godaddy HTTPS/SSL certification so much cheaper than digicert, thawte, and verisign?
I am a novice on HTTPS/SSL but GoDaddy charges $12.99 and Digicert, thawte, and Verisign charge $100-1000+ for SSL certificates.
I must be missing something on the quality of the encryption or something. Can someone explain some of the basic differences that lead to these dramatically different prices?
Update: $12.99 is a sale price. Typically SSL certificates cost $89.99 on GoDaddy. Here's a link on GoDaddy that makes the very comparison this question asks about: http://www.godaddy.com/Compare/gdcompare_ssl.aspx?isc=sslqgo002c
Apart from unserious offerings, you can distinguish between cheaper domain-validated SSL certificates and the more expensive extended-validation (EV) SSL certificates.
Both certificates are technically the same (the connection is encrypted), but domain-validated certificates are cheaper because the seller only has to check the domain. EV certificates also require information about the owner of the domain, and the seller is supposed to check that this information is correct (more administrative effort).
Normally you can see the difference when you visit the site with a browser. Firefox, for example, will highlight the domain in blue for domain-validated SSL and green for extended-validation SSL.
Two examples:
https://accounts.google.com/ (domain-validated)
https://www.postfinance.ch/ (extended-validation)
In most cases the domain-validated certificate is fine; the user has no disadvantages, and the EV certificates are really (too) expensive.
I just found that GoDaddy doesn't allow "duplicate" certificates for your wildcard SSL.
That's a pity, since duplicates are often used when you manage a farm of servers and each one has its own private key / CSR.
(By comparison, DigiCert does allow them, and in unlimited number.)
To be quite honest, there is absolutely NO difference when it comes to SSL certificates. The only contributing factor is the EV / non-EV / wildcard tags.
EV == Extended Validation: the site is actively "pinged" by the Certificate Authority at the provided IP of the domain; a server-side script then compares the IP address of the ping response from the CA with the IP address YOU are visiting. This does NOT guarantee that there isn't a man-in-the-middle attack or net-wide DNS poisoning; it just ensures that the site you are viewing is the same one the CA sees.
Non-EV == no one is actively checking the domain's IP against a logged / provided IP for security purposes.
Wildcard == *.domain.com based Certificates are often used when people have a multitude of subdomains, or a set of subdomains that are ever-changing, but still need valid SSL encryption.
The truth behind SSL Certificates.
You can make your own. They are no less secure than any other certificate. The difference being a " self-signed " certificate is not " vouched for " by any third party.
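The "make your own" point is easy to try with OpenSSL. This is a minimal sketch; the file names and the subject (example.test) are illustrative, not from the original article:

```shell
# Generate a private key and a self-signed certificate in one step
# (no CA involved; nothing vouches for this cert).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
  -keyout example.key -out example.crt -days 365

# Inspect the subject: a DV cert typically carries only CN=<domain>,
# while an EV cert's subject also includes O= (organization) and
# other verified identity fields.
openssl x509 -in example.crt -noout -subject
```

Browsers will warn about this certificate only because no trusted third party signed it; the encryption itself is the same.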
The problem with SSL certificates is that they are extremely overpriced for what they are. There is absolutely NO guarantee that the site you are visiting belongs to whoever is listed on the certificate as owner, location, etc. This defeats the purpose of the third-party trust-chain model SSL was designed around.
ALL certificate authorities (CAs) that sell certificates want the user to believe their certificate is somehow better, when in fact they never check the information provided for the certificate unless there is an issue that might cost them revenue. This practice also defeats the purpose of the SSL trust-chain model.
I know of only ONE CA that genuinely validates its certificates: CACert.org.
For them to issue a "complete" certificate (business name, name, address, phone, etc.), you must meet one of their assurers FACE-TO-FACE.
However, most browsers do not include CACert.org, due to pressure from large incumbents like Thawte, Comodo, and Verisign.
So, to sum it all up:
The only difference between certificates is the behavior of the CA. Certificates can't really be trusted to verify anything other than that the connection to the site is using encryption.
At the end of the day, people think paying $100 - $1000 somehow equates to trustworthiness. This is NOT the case; it just means you are dealing with less sophisticated or less established crooks.
Reference:
http://webmasters.stackexchange.com/questions/28595/why-is-godaddy-https-ssl-certification-so-much-cheaper-than-digicert-thawte-an
I must be missing something on the quality of the encryption or something. Can someone explain some of the basic differences that lead to these dramatically different prices?
Wednesday, April 9, 2014
Cache Protection for RAID Controller Cards
Modern RAID controllers have integrated caches to increase performance. Without corresponding protective mechanisms, the content of these caches would be lost when a power failure occurs. For that reason, the cache content is often protected by a BBU or BBM (depending on the manufacturer, either the term Battery Backup Unit (BBU) or Battery Backup Module (BBM) is used). However, proper maintenance is required so that the BBU will actually work during a power failure; without such maintenance, complete data loss is a risk in the worst case.
Note: RAID controllers, which do not use a BBU to protect the cache (but instead copy the content of the cache to flash memory in the event of a power failure), do not require special cache protection maintenance (e.g. Adaptec ZMCP or LSI CacheVault).
Two types:
- battery-backed cache controllers (called a Battery Backup Unit (BBU) or a Battery Backup Module (BBM), depending on the manufacturer)
- flash-backed cache controllers
Most RAID controllers that support write caching will not enable it without a battery backup pack. Imagine the damage that 64 MB of cached writes never committed to disk would do to a volume.
Without write caching, a RAID5 controller's write performance drops by a factor of 5-10. (We had a Dell PERC 3 (the LSI one, not the Adaptec) that would sustain writes at about 8 GB/hour with the write cache off, but 70-90 GB/hour with write caching on.)
I do believe in using the batteries when available, but am not overly concerned if a server doesn't have one. In practice, I've noticed that cached writes have a very short life in the buffer; they make it to disk surprisingly quickly even on our heavily utilized servers. It also doesn't solve the issue of writes that were only partially handed to the card by the app and OS. Does it help? Yes, it will help minimize one particular case of data corruption, but there are still a LOT of other places for things to go wrong during a power outage.
RAID controller cards temporarily cache data from the host system until it is successfully written to the storage media. While cached, data can be lost if system power fails, jeopardizing the data’s permanent integrity. CacheVault® flash cache protection modules and battery backup units (BBUs) protect the integrity of cached data by storing cached data in non-volatile flash cache storage or by providing battery power to the controller.
- Lower total cost of ownership (TCO) with CacheVault technology by reducing hardware maintenance and disposal issues associated with lithium-ion batteries
- Battery backup units allow for higher ambient temperatures
- Provides additional peace of mind for all MegaRAID® controller cards
- Enjoy configuration flexibility with many chassis mounting options
Reference:
http://www.thomas-krenn.com/en/wiki/Battery_Backup_Unit_(BBU/BBM)_Maintenance_for_RAID_Controllers
http://serverfault.com/questions/203355/why-do-i-need-a-raid-battery-pack
http://www.lsi.com/products/raid-controllers/pages/cache-protection.aspx
FreeBSD 10: enlarge (resize) hard drive space
growfs on FreeBSD 10 can enlarge a filesystem while it is in use, which makes many tasks much simpler. For example, to enlarge a virtual disk (da1) in VMware, just grow the disk on the VMware side and then, with the system booted normally, run these commands in FreeBSD:
# gpart status
# gpart show
# gpart recover da1
# gpart resize -i 1 da1
# growfs da1p1
Answer yes at the final prompt and the first partition of da1, da1p1, will be resized to match the new disk size.
https://blog.pighead.cc/whsyu/2013/11/10/growfs-in-freebsd-10/
How to fix a corrupt portsnap snapshot
# portsnap fetch update
files/bffb72df84b5223114294ade66936e04e9bc1e7a089a063b9881efd5292f1b26.gz not found -- snapshot corrupt.
# rm -r /var/db/portsnap/*
# portsnap fetch extract update
Tuesday, April 8, 2014
Trunk (Port Aggregation) Study Notes - Zhou Tingting
Author: Zhou Tingting. Source: original / reprinted. Published: 2012-09-02. Views: 154
Trunk (port aggregation)
Concept: through software configuration, two or more physical ports are combined into a single logical path that transmits in parallel, increasing network bandwidth and throughput, substantially improving overall network capacity, and providing link redundancy. It can be used between switches, between a switch and a router, between a host and a switch, and between a host and a router.
Typical applications of TRUNK
1. Connecting to a server, giving the server dedicated high bandwidth.
2. Cascading switches: by giving up some ports, the bundled links provide high bandwidth for inter-switch traffic, raising network speed, breaking through bottlenecks, and thus greatly improving network performance.
3. Providing load balancing and fault tolerance. Because a trunk balances the traffic of its member switch ports and server interfaces in real time, if a port fails it is automatically removed from the trunk group and its traffic is redistributed across the remaining trunk ports, giving the system fault tolerance.
4. Carrying multiple VLANs, i.e. acting as a VLAN pipe.
Notes on configuring a TRUNK
Within a trunk, data always flows from a specific source to a specific destination, and a single designated link handles broadcast packets and packets with unknown destinations. When configuring a trunk, follow these rules:
1. Choose the correct number of trunk ports: it must be 2, 4, or 8.
2. Use ports from the same group. The ports on a switch are divided into groups, and all ports of a trunk must come from the same group.
3. Use consecutive ports. The ports of a trunk must be consecutive; for example, you can combine ports 4, 5, 6, and 7 into one aggregate.
4. Create only one trunk per port group. For example, the Allied Telesyn AT-8224XL Ethernet switch has three groups (assuming no expansion slot), so it can support three port aggregates; an expansion slot lets the switch support one more.
5. Maintain the cabling order by port number. When cabling, the most important thing is that both ends are wired identically: the lowest-numbered port on one switch must connect to the lowest-numbered port on the other, and so on in order. For example, suppose you aggregate from an OPF-8224E switch to an OPF-8288XL switch: on the OPF-8224E you chose ports 12, 13, 14, and 15 in the second group, and on the OPF-8288XL you chose ports 5, 6, 7, and 8 in the first group. To preserve the connection order, you must connect port 12 on the OPF-8224E to port 5 on the OPF-8288XL, port 13 to port 6, and so on.
6. Configure port parameters for the trunk. All ports in a trunk are automatically assumed to have the same configuration as the lowest-numbered port (for example, VLAN membership). If you create a trunk from ports 4, 5, 6, and 7, port 4 is the master port and its configuration is propagated to the other ports (5, 6, and 7). Once the ports have been configured as a trunk, you cannot modify any parameters of ports 5, 6, and 7, as that could conflict with port 4's settings.
7. Using expansion slots: some expansion slots support trunking; it depends on the number of ports on the module.
http://ce.sysu.edu.cn/hope/Item/85407.aspx
What is a Trunk? What is port aggregation (link aggregation)?
TRUNK means port aggregation (link aggregation): through software configuration, two or more physical ports are combined into a single logical path, increasing the bandwidth between switches and network nodes by merging the bandwidth of the member ports, giving a dedicated bandwidth several times that of a single port. A trunk is an encapsulation technique over a point-to-point link; both ends of the link can be switches, or a switch and a router, or a host and a switch or router. Port aggregation (trunking) allows switch-to-switch, switch-to-router, and host-to-switch/router connections to transmit over two or more ports in parallel for higher bandwidth and greater throughput, substantially improving overall network capacity.
In the general routing and switching world, VLAN port aggregation is also sometimes called TRUNK, though most vendors (e.g. Cisco) call it TRUNKING. TRUNKING in this sense is used to interconnect switches so that members of the same VLAN spanning multiple switches can communicate; the ports used to interconnect the switches are called trunk ports. Unlike ordinary switch cascading, trunking operates at OSI layer 2. If you define multiple VLANs on two switches (VLANs are also layer-2 constructs), then for the members of VLAN10 on both switches to communicate, you would need to take one of the ports assigned to VLAN10 on switch A and cascade it to a VLAN10 port on switch B, and likewise for VLAN20. With 10 VLANs on the switches you would need 10 cascade links, which is a very inefficient use of ports. When the switches support trunking, things become simple: a single cascade link between the two switches, with the corresponding ports set to trunk mode, can carry the traffic of all VLANs on the switches. Even with hundreds of VLANs configured, one port per switch suffices.
If VLANs with the same ID on different switches need to communicate, a shared trunk port is enough. If VLANs with different IDs (on the same switch or on different switches) need to communicate, a third-party router is required. Two things to note about VLAN assignment: first, the different VLAN groups each have their own VLAN ID; second, the switch ports assigned to a VLAN group also have a port ID. For example, with ports 1, 2, 3, and 4 in VLAN10 and ports 5, 6, 7, and 8 in VLAN20, I can set the port ID of ports 1, 3, and 4 to 10 and the port ID of port 2 to 20, and set the port ID of ports 5, 6, and 7 to 20 and the port ID of port 8 to 10. Then ports 1, 3, and 4 in VLAN10 can communicate with port 8 in VLAN20, and port 2 in VLAN10 can communicate with ports 5, 6, and 7 in VLAN20: even though the VLAN IDs differ, ports with the same port ID can communicate, while ports with the same VLAN ID but different port IDs cannot reach each other; for example, port 2 in VLAN10 cannot talk to ports 1, 3, and 4.
Reference
http://digdeeply.org/archives/1212254.html
Thursday, April 3, 2014
Apache Reverse Proxy with secure HTTPS SSL
Setting up a reverse proxy allows us to share the same public static IP address among multiple servers on the same LAN.
# vim /usr/local/etc/apache22/httpd.conf
Listen 80
Listen 443
LoadModule proxy_module libexec/apache22/mod_proxy.so
LoadModule proxy_http_module libexec/apache22/mod_proxy_http.so
# vim /usr/local/etc/apache22/extra/httpd-vhosts.conf
NameVirtualHost *:80
NameVirtualHost *:443

### Reverse proxy for the regular HTTP port 80 website.
<VirtualHost *:80>
    ServerName store.mydomain.com
    ProxyPreserveHost On
    ProxyRequests off
    ProxyPass / http://192.168.0.5:80/
    ProxyPassReverse / http://192.168.0.5:80/
    ErrorLog "/var/log/apache22/store.mydomain.com-error_log"
    CustomLog "/var/log/apache22/store.mydomain.com-access_log" common
</VirtualHost>

### Reverse proxy for the secure HTTPS port 443 website.
<VirtualHost *:443>
    ServerName store.mydomain.com:443
    ProxyPreserveHost On
    ProxyRequests off
    ProxyPass / https://192.168.0.5:443/
    ProxyPassReverse / https://192.168.0.5:443/
    SSLEngine On
    SSLProxyEngine On
    SSLCertificateFile "/usr/local/etc/apache22/ssl/store.mydomain.com.crt"
    SSLCertificateKeyFile "/usr/local/etc/apache22/ssl/store.mydomain.com.key"
    ErrorLog "/var/log/apache22/store.mydomain.com-error_log"
    CustomLog "/var/log/apache22/store.mydomain.com-access_log" common
</VirtualHost>
Reference:
http://blog.ijun.org/2014/03/difference-between-proxy-server-and.html
http://httpd.apache.org/docs/current/vhosts/examples.html
http://ubuntuguide.org/wiki/Apache2_reverse_proxies
http://stackoverflow.com/questions/16130303/apache-config-how-to-proxypass-http-requests-to-https
Wednesday, April 2, 2014
Setting up a Secure Subversion Server
Backing up your scripts.
# tar czvf - /usr/local/etc/www/data | ssh dru@192.168.2.2 "cat > www.tar.gz"
Preparing the System
In my scenario, it was important that only the members of the development team have access to the repository. We also chose to have the repository on a system separate from the actual web server and left it up to the web administrator to copy over files from the repository to the web server as he saw fit.
To accomplish this, start by creating a backup of the existing directory structure you wish to put under revision control, and send it securely to the repository server. In my case, I backed up the www data on the web server to an internal server at 192.168.2.2.
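The tar-over-ssh pipe above streams the archive without creating a temporary file on the source host. The same pattern can be exercised locally, with `cat` standing in for the ssh hop (the `data` directory and file names here are illustrative):

```shell
# Create some sample data to archive
mkdir -p data
echo "hello" > data/index.html

# Stream a gzipped tarball through a pipe, as the ssh command above does
tar czf - data | cat > www.tar.gz

# Verify the archive contents
tar tzf www.tar.gz
```

On the real hosts, replace `cat > www.tar.gz` with `ssh user@host "cat > www.tar.gz"` as shown above.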
Add a user called svn.
# pw useradd -n svn -s /bin/tcsh -w yes -d /home/svn -c "svn user" -m
# passwd svn
Next, on the repository system, create a new group called svn and add to it any existing user accounts that need access to the repository. For example, I added my existing web administrator as I created the group by running the following command:
# pw groupmod svn -m webadmin
Then, create a new user called svn and, if necessary, any missing user accounts that need access to the repository. Make sure each account is a member of the svn group and has a password and a valid shell. I used sysinstall to create user accounts for the new web developers. When I finished, I double-checked the membership of the svn group. It looked something like this:
# grep svn /etc/group
svn:*:3690:webadmin,devel1,devel2
Dealing with umask
Before installing Subversion, take a close look at the existing umask for the svn user. On my FreeBSD system it was:
# su - svn
% umask
022
In Unix, the umask value determines the default permissions of a newly created directory or file. It does this by defining which permissions to disable. If you remember:
r = 4
w = 2
x = 1
you'll see that this umask doesn't turn off any (0) permissions for the user (svn); it turns off write (2) for the group (svn); and it turns off write (2) for world.
Because the members of the svn group should be able to write to the repository, change that group 2 to a 0. If you don't want nongroup members even to be aware of the existence of the repository, also change the world 2 to a 7.
The easy part is changing the umask for the svn user's shell. If it uses csh:
# su - svn
svn # vi ~svn/.cshrc
# A righteous umask
umask 027
Note: the meaning of each umask:
umask 002 // File permission 664. Owner and Group can read/write. Others can only read.
umask 007 // File permission 660. Owner and Group can read/write. Others cannot read or write.
umask 027 // File permission 640. Owner can read/write. Group can only read. Others cannot read or write.
Note: I personally prefer umask 027, for a security reason: to prevent bad scripts from creating new scripts or modifying existing scripts on your server, you can have "svn update" run automatically from crontab to take care of source-code updates. You then want the svn user to be the only user with write permission; users in the www group will have read permission only.
then find the existing umask line and change it to either 002, 007 or 027.
If your svn user has a shell other than csh, make your edit in your chosen shell's configuration file.
Once you've saved your changes to ~svn/.cshrc (or wherever), don't forget to tell the shell:
svn # source ~svn/.cshrc
Repeat the umask command to verify that your changes have taken place.
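The arithmetic above can be checked directly in a shell. New files start from mode 666, so a umask of 027 yields 640 (`stat -c` is the GNU form; on FreeBSD use `stat -f '%Lp'` instead):

```shell
# Apply the umask, then create a fresh file
umask 027
rm -f umask_demo
touch umask_demo

# Show the resulting octal permissions: 666 with the 027 bits
# masked off leaves rw-r----- = 640
stat -c '%a' umask_demo
```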
Installing Subversion with the correct umask
If you chose a umask of 002, you can compile a wrapper into Subversion when you build it from the ports collection. If you chose a umask of 007 or 027, or prefer to install the precompiled version of Subversion, create a wrapper script to ensure that the Subversion binaries use your umask value.
To compile in a wrapper that sets a umask of 007 or 027:
# cd /usr/ports/devel/subversion
# make config-recursive
# make install clean
To compile in a wrapper that sets a umask of 002:
# cd /usr/ports/devel/subversion
# make -DWITH_SVNSERVE_WRAPPER install clean
Note: you do NOT need -DWITH_SVNSERVE_WRAPPER option if you decided to use umask of 007 or 027.
Make sure you DO uncheck this option:
[] BDB=off "db4 repository backend"
Because some people said:
- I never used a BDB repos myself, exactly because the Subversion manual warns strongly against it. So second this. – jfs Sep 23 at 8:30
- I also second this - we used to have Subversion BDB repository problems. Switching to the FSFS repository type helped. – Phill Sacre Sep 23 at 8:38
Alternatively, to install the precompiled binary:
# pkg_add -r subversion
Note: before installing by either method, finish reading the article. You may find some additional compile options that interest you.
If you didn't compile in the wrapper (that is, you chose a umask of 007 or 027), move the existing binary and create your own wrapper script:
# mv /usr/local/bin/svn /usr/local/bin/svn.orig
# vi /usr/local/bin/svn
Set your umask to either 002, 007 or 027 so that it is the same as the umask for your svn user.
Don't forget to make your wrapper script executable:
# chmod +x /usr/local/bin/svn
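The article tells you to create a wrapper script but doesn't show one. Here is a minimal sketch, written to the working directory for illustration; in practice it would be saved as /usr/local/bin/svn after the real binary has been moved to svn.orig:

```shell
# Write the wrapper: set the umask, then hand off to the real binary,
# passing all arguments through unchanged
cat > svn_wrapper <<'EOF'
#!/bin/sh
umask 027
exec /usr/local/bin/svn.orig "$@"
EOF
chmod +x svn_wrapper
cat svn_wrapper
```

Use umask 002 or 007 in the script instead if that is what you chose for the svn user.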
Repeat the same steps for proj2 files.
Creating the Repository
Create a central place to store all repositories.
# mkdir /usr/local/repositories
# chown svn:svn /usr/local/repositories
Now that your environment is set up properly, you're ready to create the repository itself.
Log in as the user svn to ensure that both the svn user and the svn group own the files you create in the repository.
# su - svn
svn # cd /usr/local/repositories
svn # svnadmin create proj1
svn # svnadmin create proj2
In this example, I've created two separate repositories, called "proj1" and "proj2". You can choose any names that are useful to you.
svnadmin create simply creates the directory infrastructure required by the Subversion tools:
svn # ls -F proj1 proj2
proj1:
README.txt conf/ db/ format hooks/ locks/
proj2:
README.txt conf/ db/ format hooks/ locks/
Notice that db directory? By default, Subversion uses databases to track changes to the files that you place under revision control. This means that you must import your data into those databases.
Edit svnserve.conf & passwd file:
svn # vi /usr/local/repositories/proj1/conf/svnserve.conf
[general]
anon-access = none
auth-access = write
password-db = passwd
svn # vi /usr/local/repositories/proj1/conf/passwd
[users]
danny = mypassword
Start SVN server as a stand-alone daemon
# /usr/local/bin/svnserve -d --listen-port=3690 --listen-host=0.0.0.0 -r /usr/local/repositories
Preparing Files to be imported
At that point, I untarred my backup so that I had some data to import. If you do this, don't restore directly into the /usr/local/repositories/proj1 directory. (It's a database, remember?) Instead, I first made a new directory structure:
# mkdir /usr/local/www/apache22/data/proj1
# cd /usr/local/www/apache22/data/proj1
# mkdir branches tags trunk
# cd trunk
# tar xzvf /full/path/to/www.tar.gz .
Importing the Data
Next, it's time to import the information from /usr/local/www/apache22/data/proj1 into the Subversion databases. To do so, use the svn import command:
# su - svn
svn # cd /usr/local/www/apache22/data
svn # svn import proj1 file:///usr/local/repositories/proj1 -m "initial import"
svn # svn import proj2 file:///usr/local/repositories/proj2 -m "initial import"
svn import is one of many svn commands available to users. Type svn help to see the names of all the available commands. If you insert one of those commands between svn and help, as in svn import help, you'll receive help on the syntax for that specified command.
After svn import, specify the name of the directory containing the data to import (proj1 or proj2). Your data doesn't have to be in the same directory; simply specify the full path to the data, but ensure that your svn user has permission to access the data you wish to import. Note: once you've successfully imported your data, you don't have to keep an original copy on disk. In my case, I issued the command rm -Rf www.
Next, notice the syntax I used when specifying the full path to the repository. Subversion supports multiple URL schemas or "repository access" RA modules. Verify which schemas your svn supports with:
# svn --version
svn, version 1.1.3 (r12730)
compiled Mar 20 2005, 11:04:16
Copyright (C) 2000-2004 CollabNet.
Subversion is open source software, see http://subversion.tigris.org/
This product includes software developed by CollabNet (http://www.Collab.Net/).
The following repository access (RA) modules are available:
* ra_dav : Module for accessing a repository via WebDAV (DeltaV) protocol.
- handles 'http' schema
- handles 'https' schema
* ra_local : Module for accessing a repository on local disk.
- handles 'file' schema
* ra_svn : Module for accessing a repository using the svn network protocol.
- handles svn schema
Because I wished to access the repository on the local disk, I used the file:/// schema. I also appended the repository name (proj1 or proj2) at the very end of the URL, as I wish that particular part of the repository to be available by that name. Yes, you can import multiple directory structures into the same Subversion repository, so give each one a name that is easy for you and your users to remember.
Finally, I used the -m message switch to append the comment "initial import" to the repository log. If I hadn't included this switch, svn would have opened the log for me in the user's default editor (vi) and asked me to add a comment before continuing.
This is a very important point. The whole reason to install a revision control system is to allow multiple users to modify files, possibly even simultaneously. It's up to each user to log clearly which changes they made to which files. It's your job to make your users aware of the importance of adding useful comments whenever an svn command prompts them to do so.
Edit /etc/rc.conf:
Just went through this (thank you). However, I came across an issue where my FreeBSD box was listening only on tcp6. I'm using this internally on my network, but without an IPv6 router that of course doesn't help. To make it work, I modified my rc.conf to listen on host 0.0.0.0 (telling it to use tcp4). Also, for anyone who wants it to start easily on boot, add this to your /etc/rc.conf (replacing the data dir, user, and group as necessary):
svnserve_enable="YES"
svnserve_flags="-d --listen-port=3690 --listen-host=0.0.0.0"
svnserve_data="/usr/local/repositories"
svnserve_user="svn"
svnserve_group="svn"
To make sure svn handles UTF-8 content correctly, make sure you have the following settings:
# vi ~svn/.cshrc
setenv LC_ALL en_US.UTF-8
setenv LANG en_US.UTF-8
Method 1:
Since the svn server stores everything in UTF-8 and crontab uses the /bin/sh shell, we need to add:
# vi /etc/crontab
*/5 * * * * svn export LC_ALL=en_US.UTF-8 && /usr/local/bin/svn update --username MYUSERNAME --password MYPASSWORD --non-interactive /www/web_hosting > /dev/null 2>&1
Method 2:
Run the command with tcsh:
# vi /etc/crontab
*/5 * * * * svn tcsh -c "/usr/local/bin/svn update --username MYUSERNAME --password MYPASSWORD --non-interactive /www/web_hosting" > /dev/null 2>&1
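Method 1 relies on `export VAR=value && command` working in /bin/sh: the variable is set and the command runs in the same shell environment. The effect can be checked anywhere:

```shell
# Export the locale and run a command in the same shell,
# exactly as the crontab line chains its commands
sh -c 'export LC_ALL=en_US.UTF-8 && printenv LC_ALL'
```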
Edit Firewall Rules:
# vi /usr/local/etc/ipfw.rules
### SVN
$IPF 250 allow tcp from 192.168.100.0/24 to any 3690 in
Refresh Firewall Rules:
# sh /usr/local/etc/ipfw.rules
Backup Repositories
There are at least four ways to back up repositories:
- SVN hotcopy
- SVN dump
- tar entire directory.
- svnsync (svnsync driven by a post-commit hook script is also great: it syncs at each commit)
SVN Hotcopy backup
# svnadmin hotcopy /usr/local/repositories/proj1 /home/bot/repositories/proj1
Note: the target (destination) directory must be an empty directory.
More: http://gala4th.blogspot.com/2009/08/svn.html
SVN Restore from hotcopy
Hotcopy produces a usable file-level repository. You should be able to use it as-is if the ownership and permissions are suitable. If you are running a server, you may have to copy it back to the location the server expects, or adjust the configuration to use the new location.
You must read carefully the following sections:
http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.reposadmin.maint.migrate
http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.reposadmin.maint.backup
http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.ref.svnadmin.c.dump
http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.ref.svnadmin.c.load
http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.ref.svnadmin.c.hotcopy
And you will be golden and all set.
As you read those links, then you must realize by now that restoring a
hotcopy which is basically a copy of all your repository its easy just
copy to your Subversion scope, you may need to change owner
permissions and change your hook-scripts (only if you were using
hook-scripts) after this you will be ready to go.
SVN Client - TortoiseSVN
On Windows, create folders:
C:\svn_repositories\proj1
C:\svn_repositories\proj2
Right click on each folder (proj1 & proj2) > check out > enter respectively:
svn://192.168.100.156/proj1
svn://192.168.100.156/proj2
or try command line:
# svn checkout svn://192.168.100.156/proj1 /usr/local/www/apache22/data/proj1
# svn update /usr/local/www/apache22/data/proj1
# svn update
# svn status /usr/local/www/apache22/data/proj1
# svn status -u
# svn add /usr/local/www/apache22/data/proj1/test111.php
# svn commit -m "LogMessage"
List all properties on files, dirs, or revisions:
# svn proplist /www/drupal6/sites
Print the value of a property on files, dirs, or revisions:
# svn propget svn:ignore /www/drupal6/sites
Edit a property with an external editor:
# svn propedit svn:ignore /www/drupal6/sites
*.local
*.com
*.ca
Set the value of a property on files, dirs, or revisions:
# cd /www/drupal6/sites
# svn propset svn:ignore *.local .
Note: you should consider use "Edit a property with an external editor" instead.
Deciding Upon a URL Schema
Congratulations! You now have a working repository. Now's the best time to take a closer look at the various URL schemas and choose the access method that best suits your needs.
Chapter 6 of the freely available e-book Version Control with Subversion gives details about the possible configurations. You can choose to install the book when you compile the FreeBSD port by adding -DWITH_BOOK to your make command.
If all of your users log in to the system either locally or through ssh, use the file:/// schema. Because users are "local" to the repository, this scenario doesn't open a TCP/IP port to listen for Subversion connections. However, it does require an active shell account for each user and assumes that your users are comfortable logging in to a Unix server. As with any shell account, your security depends upon your users choosing good passwords and you setting up repository permissions and group memberships correctly. Having users ssh to the system does ensure that they have encrypted sessions.
Another possibility is to integrate Subversion into an existing Apache server. By default, the FreeBSD port of Subversion compiles in SSL support, meaning your users can have the ability to access your repository securely from their browsers using the https:// schema. However, if you're running Apache 2.x instead of Apache 1.x, remember to pass the -DWITH_MOD_DAV_SVN option to make when you compile your FreeBSD port.
If you're considering giving browser access to your users, read carefully through the Apache httpd configuration section of the Subversion book first. You'll have to go through a fair bit of configuration; fortunately, the documentation is complete.
A third approach is to use svnserve to listen for network connections. The book suggests running this process either through inetd or as a stand-alone daemon. Both of these approaches allow either anonymous access or access once the system has authorized a user using CRAM-MD5. Clients connect to svnserve using the svn:// schema.
Anonymous access wasn't appropriate in my scenario, so I followed the configuration options for CRAM-MD5. However, I quickly discovered that CRAM-MD5 wasn't on my FreeBSD system. When a Google search failed to find a technique for integrating CRAM-MD5 with my Subversion binary, I decided to try the last option.
This was to invoke svnserve in tunnel mode, which allows user authentication through the normal SSH mechanism as well as any restrictions you have placed in your /etc/ssh/sshd_config file. For example, I could use the AllowUsers keyword to control which users can authenticate to the system. Note that this schema uses svn+ssh://.
The appeal of this method is that I could use an existing authentication scheme without forcing the user to actually be "on" the repository system. However, this network connection is unencrypted; the use of SSH is only to authenticate. If your data is sensitive, either have your users use file:// after sshing in or use https:// after you've properly configured Apache.
If you decide to use the svnserve server and you compiled in the wrapper, it created a binary called svnserve.bin. Users won't be able to access the repository until:
# cp /usr/local/bin/svnserve.bin /usr/local/bin/svnserve
That's it for this installment. In the next column, I'll show how to start accessing the repository as a client.
Dru Lavigne is a network and systems administrator, IT instructor, author and international speaker. She has over a decade of experience administering and teaching Netware, Microsoft, Cisco, Checkpoint, SCO, Solaris, Linux, and BSD systems. A prolific author, she pens the popular FreeBSD Basics column for O'Reilly and is author of BSD Hacks and The Best of FreeBSD Basics.
Reference:
http://blog.ijun.org/2010/01/setting-up-secure-subversion-server.html
http://onlamp.com/pub/a/bsd/2005/05/12/FreeBSD_Basics.html
http://blog.jostudio.net/2007/06/svn.html
Backing up your scripts.
# tar czvf - /usr/local/etc/www/data | ssh dru@192.168.2.2 "cat > www.tar.gz"
Preparing the System
In my scenario, it was important that only the members of the development team have access to the repository. We also chose to have the repository on a system separate from the actual web server and left it up to the web administrator to copy over files from the repository to the web server as he saw fit.
To accomplish this, start by creating a backup of the existing directory structure you wish to put under revision control, and send it securely to the repository server. In my case, I backed up the www data on the web server to an internal server at 192.168.2.2.
Add a user called svn.
# pw useradd -n svn -s /bin/tcsh -w yes -d /home/svn -c "svn user" -m
# passwd svn
Next, on the repository system, create a new group called svn and add to it any existing user accounts that need access to the repository. For example, I added my existing web administrator as I created the group by running the following command:
# pw groupmod svn -m webadmin
Then, if necessary, create any missing user accounts that need access to the repository. Make sure each account is a member of the svn group and has a password and a valid shell. I used sysinstall to create user accounts for the new web developers. When I finished, I double-checked the membership of the svn group. It looked something like this:
# grep svn /etc/group
svn:*:3690:webadmin,devel1,devel2
Dealing with umask
Before installing Subversion, take a close look at the existing umask for the svn user. On my FreeBSD system it was:
# su - svn
% umask
022
In Unix, the umask value determines the default permissions of a newly created directory or file. It does this by defining which permissions to disable. If you remember:
r = 4
w = 2
x = 1
you'll see that this umask doesn't turn off any (0) permissions for the user (svn); it turns off write (2) for the group (svn); and it turns off write (2) for world.
Because the members of the svn group should be able to write to the repository, change that group 2 to a 0. If you don't want nongroup members even to be aware of the existence of the repository, also change the world 2 to a 7.
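The arithmetic above is easy to verify in a scratch directory. This sketch assumes a POSIX shell with the standard ls and cut utilities; it shows the default modes produced under umask 027:

```sh
#!/bin/sh
# New files start from mode 666 and new directories from 777; the umask
# bits are switched off. With umask 027: files become 640, dirs 750.
tmp=$(mktemp -d) && cd "$tmp"
umask 027
touch f && mkdir d
ls -l f  | cut -c1-10   # → -rw-r-----  (640)
ls -ld d | cut -c1-10   # → drwxr-x---  (750)
```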
The easy part is changing the umask for the svn user's shell. If it uses csh:
# su - svn
svn # vi ~svn/.cshrc
# A righteous umask
umask 027
Note: the meaning of each umask:
umask 002 // File permission 664. Owner and Group can read/write. Others can only read.
umask 007 // File permission 660. Owner and Group can read/write. Others cannot read or write.
umask 027 // File permission 640. Owner can read/write. Group can read. Others cannot read or write.
Note: I personally prefer umask 027, for a security reason: to prevent bad scripts from creating new scripts or modifying existing ones on your server, you can have "svn update" run automatically from crontab to handle source code updates, and make the svn user the only user with write permission; www group users then have read-only access.
then find the existing umask line and change it to either 002, 007 or 027.
If your svn user has a shell other than csh, make your edit in your chosen shell's configuration file.
Once you've saved your changes to ~svn/.cshrc (or wherever), don't forget to tell the shell:
svn # source ~svn/.cshrc
Repeat the umask command to verify that your changes have taken place.
Installing Subversion with the correct umask
If you chose a umask of 002, you can compile a wrapper into Subversion when you build it from the ports collection. If you chose a umask of 007 or 027, or prefer to install the precompiled version of Subversion, create a wrapper script to ensure that the Subversion binaries use your umask value.
To install without the compiled-in wrapper (if you chose a umask of 007 or 027, you will create your own wrapper script later):
# cd /usr/ports/devel/subversion
# make config-recursive
# make install clean
To compile in a wrapper that sets a umask of 002:
# cd /usr/ports/devel/subversion
# make -DWITH_SVNSERVE_WRAPPER install clean
Note: you do NOT need -DWITH_SVNSERVE_WRAPPER option if you decided to use umask of 007 or 027.
Make sure you DO uncheck this option:
[] BDB=off "db4 repository backend"
Because some people have said:
- I never used a BDB repos myself, exactly because the Subversion manual warns strongly against it. So second this. – jfs Sep 23 at 8:30
- I also second this - we used to have Subversion BDB repository problems. Switching to the FSFS repository type helped. – Phill Sacre Sep 23 at 8:38
Alternatively, to install the precompiled binary:
# pkg_add -r subversion
Note: before installing by either method, finish reading the article. You may find some additional compile options that interest you.
If you didn't compile in your wrapper (that means you use a umask of 007 or 027), move your existing binary and create your own wrapper script:
# mv /usr/local/bin/svn /usr/local/bin/svn.orig
# vi /usr/local/bin/svn
#!/bin/sh
### initialize
svnarg=""
### use encoding utf-8 as default if run "svn ci" or "svn commit".
if [ "$1" != "help" ]; then
    for myarg in "$@"; do
        if [ "${myarg}" = "commit" ] || [ "${myarg}" = "ci" ]; then
            svnarg="--encoding utf-8"
            break
        fi
    done
fi
### wrapper script to set umask to 027 on subversion binaries
### Note: the meaning of each umask:
### umask 002 // File permission 664. Owner and Group can read/write. Others can only read.
### umask 007 // File permission 660. Owner and Group can read/write. Others cannot read or write.
### umask 027 // File permission 640. Owner can read/write. Group can read. Others cannot read or write.
umask 027
### svn command
/usr/local/bin/svn.orig ${svnarg} "$@"
Set your umask to either 002, 007 or 027 so that it is the same as the umask for your svn user.
Don't forget to make your wrapper script executable:
# chmod +x /usr/local/bin/svn
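If you want to sanity-check the wrapper's argument scan without touching the real svn binary, the logic can be isolated into a small function. The function name here is illustrative, not part of the wrapper itself:

```sh
#!/bin/sh
# Isolated copy of the wrapper's argument scan: it adds --encoding utf-8
# only when the subcommand is "commit" or "ci", and never for "help".
pick_flags() {
    svnarg=""
    if [ "$1" != "help" ]; then
        for myarg in "$@"; do
            if [ "$myarg" = "commit" ] || [ "$myarg" = "ci" ]; then
                svnarg="--encoding utf-8"
                break
            fi
        done
    fi
    printf '%s\n' "$svnarg"
}
pick_flags ci -m "some message"   # → --encoding utf-8
pick_flags update                 # → (prints an empty line)
```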
Repeat the same steps for proj2 files.
Creating the Repository
Create a central place to store all repositories.
# mkdir /usr/local/repositories
# chown svn:svn /usr/local/repositories
Now that your environment is set up properly, you're ready to create the repository itself.
Log in as the user svn to ensure that both the svn user and the svn group own the files you create in the repository.
# su - svn
svn # cd /usr/local/repositories
svn # svnadmin create proj1
svn # svnadmin create proj2
In this example, I've created two separate repositories, "proj1" and "proj2". You can choose any names that are useful to you.
svnadmin create simply creates the directory infrastructure required by the Subversion tools:
svn # ls -F proj1 proj2
proj1:
README.txt conf/ db/ format hooks/ locks/
proj2:
README.txt conf/ db/ format hooks/ locks/
Notice that db directory? By default, Subversion uses databases to track changes to the files that you place under revision control. This means that you must import your data into those databases.
Edit svnserve.conf & passwd file:
svn # vi /usr/local/repositories/proj1/conf/svnserve.conf
[general]
anon-access = none
auth-access = write
password-db = passwd
svn # vi /usr/local/repositories/proj1/conf/passwd
[users]
danny = mypassword
Start SVN server as a stand-alone daemon
# /usr/local/bin/svnserve -d --listen-port=3690 --listen-host=0.0.0.0 -r /usr/local/repositories
Preparing Files to be imported
At that point, I untarred my backup so that I had some data to import. If you do this, don't restore directly into the /usr/local/repositories/proj1 directory. (It's a database, remember?) Instead, I first made a new directory structure:
# mkdir /usr/local/www/apache22/data/proj1
# cd /usr/local/www/apache22/data/proj1
# mkdir branches tags trunk
# cd trunk
# tar xzvf /full/path/to/www.tar.gz
Importing the Data
Next, it's time to import the information from /usr/local/www/apache22/data/proj1 into the Subversion databases. To do so, use the svn import command:
# su - svn
svn # cd /usr/local/www/apache22/data
svn # svn import proj1 file:///usr/local/repositories/proj1 -m "initial import"
svn # svn import proj2 file:///usr/local/repositories/proj2 -m "initial import"
svn import is one of many svn commands available to users. Type svn help to see the names of all the available commands. If you add a command name after svn help, as in svn help import, you'll receive help on the syntax for that specific command.
After svn import, specify the name of the directory containing the data to import (proj1 or proj2). Your data doesn't have to be in the same directory; simply specify the full path to the data, but ensure that your svn user has permission to access the data you wish to import. Note: once you've successfully imported your data, you don't have to keep an original copy on disk. In my case, I issued the command rm -Rf www.
Next, notice the syntax I used when specifying the full path to the repository. Subversion supports multiple URL schemas or "repository access" RA modules. Verify which schemas your svn supports with:
# svn --version
svn, version 1.1.3 (r12730)
compiled Mar 20 2005, 11:04:16
Copyright (C) 2000-2004 CollabNet.
Subversion is open source software, see http://subversion.tigris.org/
This product includes software developed by CollabNet (http://www.Collab.Net/).
The following repository access (RA) modules are available:
* ra_dav : Module for accessing a repository via WebDAV (DeltaV) protocol.
- handles 'http' schema
- handles 'https' schema
* ra_local : Module for accessing a repository on local disk.
- handles 'file' schema
* ra_svn : Module for accessing a repository using the svn network protocol.
- handles svn schema
Because I wished to access the repository on the local disk, I used the file:/// schema. I also appended the project name (proj1 or proj2) at the very end of the URL, as I wish that particular part of the repository to be available by that name. Yes, you can import multiple directory structures into the same Subversion repository, so give each one a name that is easy for you and your users to remember.
Finally, I used the -m message switch to append the comment "initial import" to the repository log. If I hadn't included this switch, svn would have opened the log for me in the user's default editor (vi) and asked me to add a comment before continuing.
This is a very important point. The whole reason to install a revision control system is to allow multiple users to modify files, possibly even simultaneously. It's up to each user to log clearly which changes they made to which files. It's your job to make your users aware of the importance of adding useful comments whenever an svn command prompts them to do so.
Edit /etc/rc.conf:
Just went through this (thank you); however, I came across an issue where my FreeBSD box was listening only on tcp6. I'm using this internally on my network, but without a tcp6 router this of course doesn't help. To make it work, I modified my rc.conf to listen on host 0.0.0.0 (telling it to use tcp4). Also, for anyone who wants it to start easily on boot, add this to your /etc/rc.conf (replacing the data dir, user, and group as necessary):
svnserve_enable="YES"
svnserve_flags="-d --listen-port=3690 --listen-host=0.0.0.0"
svnserve_data="/usr/local/repositories"
svnserve_user="svn"
svnserve_group="svn"
To make sure svn handles UTF-8 content correctly, make sure you have the following settings:
# vi ~svn/.cshrc
setenv LC_ALL en_US.UTF-8
setenv LANG en_US.UTF-8
Method 1:
Since the svn server stores everything in UTF-8 and crontab runs commands with /bin/sh, we need to add:
# vi /etc/crontab
*/5 * * * * svn export LC_ALL=en_US.UTF-8 && /usr/local/bin/svn update --username MYUSERNAME --password MYPASSWORD --non-interactive /www/web_hosting > /dev/null 2>&1
Method 2:
Run the command with tcsh:
# vi /etc/crontab
*/5 * * * * svn tcsh -c "/usr/local/bin/svn update --username MYUSERNAME --password MYPASSWORD --non-interactive /www/web_hosting" > /dev/null 2>&1
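Both methods boil down to getting the locale into the environment of the svn process, since cron's /bin/sh never reads ~svn/.cshrc. A quick way to confirm that a per-command VAR=value assignment reaches the child process:

```sh
#!/bin/sh
# A VAR=value prefix exports the variable to that one command only,
# which is what the crontab entries above rely on.
LC_ALL=en_US.UTF-8 sh -c 'printf "%s\n" "$LC_ALL"'   # → en_US.UTF-8
```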
Edit Firewall Rules:
# vi /usr/local/etc/ipfw.rules
### SVN
$IPF 250 allow tcp from 192.168.100.0/24 to any 3690 in
Refresh Firewall Rules:
# sh /usr/local/etc/ipfw.rules
Backup Repositories
There are at least four ways to back up repositories:
- SVN hotcopy
- SVN dump
- tar the entire directory
- svnsync (combining svnsync with a post-commit hook script is also great: it syncs at each commit)
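The "tar entire directory" method can be sketched as a timestamped archive. Temp directories stand in here for the real repository and destination paths; note that tarring a live repository is only safe while no commit is in progress, so prefer svnadmin hotcopy on a busy server:

```sh
#!/bin/sh
# Timestamped tar backup of a repository directory. The temp dirs stand
# in for /usr/local/repositories/proj1 and the backup destination;
# adjust the paths for real use.
SRC=$(mktemp -d)              # stand-in for the repository directory
OUT=$(mktemp -d)              # stand-in for the backup destination
echo 5 > "$SRC/format"        # fake repository file
ARCHIVE="$OUT/proj1.$(date +%Y%m%d).tar.gz"
tar czf "$ARCHIVE" -C "$(dirname "$SRC")" "$(basename "$SRC")"
tar tzf "$ARCHIVE"            # lists the archived files
```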
SVN Hotcopy backup
# svnadmin hotcopy /usr/local/repositories/proj1 /home/bot/repositories/proj1
Note: the target (destination) directory must be an empty directory.
More: http://gala4th.blogspot.com/2009/08/svn.html
SVN Restore from hotcopy
Hotcopy should produce a usable file-level repository. You should be
able to use it as-is if the ownership and permissions are suitable. If
you are running a server, you may have to copy back to the location the
server expects or adjust the configuration to use the new location.
Read the following sections carefully:
http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.reposadmin.maint.migrate
http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.reposadmin.maint.backup
http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.ref.svnadmin.c.dump
http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.ref.svnadmin.c.load
http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.ref.svnadmin.c.hotcopy
As those links explain, restoring a hotcopy is straightforward, because a hotcopy is essentially a complete copy of your repository: copy it back into place under your Subversion root, adjust the owner and permissions if necessary, and update your hook scripts (only if you were using hook scripts). After that you are ready to go.
SVN Client - TortoiseSVN
On Windows, create folders:
C:\svn_repositories\proj1
C:\svn_repositories\proj2
Right click on each folder (proj1 & proj2) > check out > enter respectively:
svn://192.168.100.156/proj1
svn://192.168.100.156/proj2
or try command line:
# svn checkout svn://192.168.100.156/proj1 /usr/local/www/apache22/data/proj1
# svn update /usr/local/www/apache22/data/proj1
# svn update
# svn status /usr/local/www/apache22/data/proj1
# svn status -u
# svn add /usr/local/www/apache22/data/proj1/test111.php
# svn commit -m "LogMessage"
List all properties on files, dirs, or revisions:
# svn proplist /www/drupal6/sites
Print the value of a property on files, dirs, or revisions:
# svn propget svn:ignore /www/drupal6/sites
Edit a property with an external editor:
# svn propedit svn:ignore /www/drupal6/sites
*.local
*.com
*.ca
Set the value of a property on files, dirs, or revisions:
# cd /www/drupal6/sites
# svn propset svn:ignore *.local .
Note: you should consider using "Edit a property with an external editor" instead.
Deciding Upon a URL Schema
Congratulations! You now have a working repository. Now's the best time to take a closer look at the various URL schemas and choose the access method that best suits your needs.
Chapter 6 of the freely available e-book Version Control with Subversion gives details about the possible configurations. You can choose to install the book when you compile the FreeBSD port by adding -DWITH_BOOK to your make command.
If all of your users log in to the system either locally or through ssh, use the file:/// schema. Because users are "local" to the repository, this scenario doesn't open a TCP/IP port to listen for Subversion connections. However, it does require an active shell account for each user and assumes that your users are comfortable logging in to a Unix server. As with any shell account, your security depends upon your users choosing good passwords and you setting up repository permissions and group memberships correctly. Having users ssh to the system does ensure that they have encrypted sessions.
Another possibility is to integrate Subversion into an existing Apache server. By default, the FreeBSD port of Subversion compiles in SSL support, meaning your users can have the ability to access your repository securely from their browsers using the https:// schema. However, if you're running Apache 2.x instead of Apache 1.x, remember to pass the -DWITH_MOD_DAV_SVN option to make when you compile your FreeBSD port.
If you're considering giving browser access to your users, read carefully through the Apache httpd configuration section of the Subversion book first. You'll have to go through a fair bit of configuration; fortunately, the documentation is complete.
A third approach is to use svnserve to listen for network connections. The book suggests running this process either through inetd or as a stand-alone daemon. Both of these approaches allow either anonymous access or access once the system has authorized a user using CRAM-MD5. Clients connect to svnserve using the svn:// schema.
Anonymous access wasn't appropriate in my scenario, so I followed the configuration options for CRAM-MD5. However, I quickly discovered that CRAM-MD5 wasn't on my FreeBSD system. When a Google search failed to find a technique for integrating CRAM-MD5 with my Subversion binary, I decided to try the last option.
This was to invoke svnserve in tunnel mode, which allows user authentication through the normal SSH mechanism as well as any restrictions you have placed in your /etc/ssh/sshd_config file. For example, I could use the AllowUsers keyword to control which users can authenticate to the system. Note that this schema uses svn+ssh://.
The appeal of this method is that I could use an existing authentication scheme without forcing the user to actually be "on" the repository system. However, this network connection is unencrypted; the use of SSH is only to authenticate. If your data is sensitive, either have your users use file:// after sshing in or use https:// after you've properly configured Apache.
If you decide to use the svnserve server and you compiled in the wrapper, it created a binary called svnserve.bin. Users won't be able to access the repository until:
# cp /usr/local/bin/svnserve.bin /usr/local/bin/svnserve
That's it for this installment. In the next column, I'll show how to start accessing the repository as a client.
Dru Lavigne is a network and systems administrator, IT instructor, author and international speaker. She has over a decade of experience administering and teaching Netware, Microsoft, Cisco, Checkpoint, SCO, Solaris, Linux, and BSD systems. A prolific author, she pens the popular FreeBSD Basics column for O'Reilly and is author of BSD Hacks and The Best of FreeBSD Basics.
Reference:
http://blog.ijun.org/2010/01/setting-up-secure-subversion-server.html
http://onlamp.com/pub/a/bsd/2005/05/12/FreeBSD_Basics.html
http://blog.jostudio.net/2007/06/svn.html
Drupal modules to try
String Overrides
https://drupal.org/project/stringoverrides
WebForm
https://drupal.org/project/webform
Tuesday, April 1, 2014
Using a load balancer or reverse proxy
Add these two lines to your Drupal's settings.php file:
$conf = array(
  'reverse_proxy' => TRUE,
  // Filling this array, Drupal will trust the information stored in the
  // X-Forwarded-For header only if the remote IP address is one of these,
  // that is, the request reaches the web server from one of your reverse
  // proxies.
  'reverse_proxy_addresses' => array('192.168.0.6', '192.168.0.7'),
);
When running large Drupal installations, you may find yourself with a web server cluster that lives behind a load balancer. The pages here contain tips for configuring Drupal in this setup, as well as example configurations for various load balancers.
In addition to a large selection of commercial options, various open source load balancers exist: Pound, Varnish, ffproxy, tinyproxy, etc. Web servers (including Squid, Apache and NGINX) can also be configured as reverse proxies.
The basic layout you can expect in most high-availability environments will look something like this:
Browser ──HTTP/HTTPS──→ Reverse proxy ──HTTP──→ Web server 1 ─┐
                                      ──HTTP──→ Web server 2 ─┼─→ Database
                                      ──HTTP──→ Web server 3 ─┘
By way of explanation:
- Browsers will connect to a reverse proxy using HTTP or HTTPS. The proxy will in turn connect to web servers via HTTP.
- Web servers will likely be on private IP addresses. Use of a private network allows web servers to share a database and/or NFS server that need not be exposed to the Internet on a public IP address.
- If HTTPS is required, it is configured on the proxy, not the web server.
Most HTTP reverse proxies will also "clean" requests in some way. For example, they'll require that a browser include a valid User-Agent string, or that the requested URL contain standard characters or not exceed a certain length.
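As a toy illustration of that kind of request cleaning, the checks and limits below are invented for the example; real proxies implement this in their own configuration language:

```sh
#!/bin/sh
# Toy version of reverse-proxy request "cleaning": reject requests with
# an empty User-Agent or an over-long URL. The function name and the
# 2048-byte limit are illustrative only.
clean_request() {
    ua=$1
    url=$2
    [ -n "$ua" ]           || { echo reject; return 0; }
    [ "${#url}" -le 2048 ] || { echo reject; return 0; }
    echo accept
}
clean_request "Mozilla/5.0" "/index.php"   # → accept
clean_request ""            "/index.php"   # → reject
```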
In the case of Drupal, it is highly recommended that all web servers share identical copies of the Drupal DocumentRoot in use, to ensure version consistency between themes and modules. This may be achieved using an NFS mount to hold your Drupal files, or by using a revision control system (CVS, SVN, git, etc.) to maintain your files.
High availability
In order to achieve the maximum uptime, a high-availability design should have no single points of failure. For network connectivity, this may mean using BGP with multiple upstream providers, as well as perhaps using Link Aggregation (LACP) to maintain multiple physical network paths in your LAN. In the diagram above, the two server elements that need attention are the load balancer and the database.
A load balancer cannot easily be "clustered" because a single IP address usually needs to apply to a single machine. To address this issue, you may wish to read up on CARP (FreeBSD) and Heartbeat (Linux).
A database server generally needs access to a single repository of data. Various technologies exist to address this, including MySQL NDB and PgCluster. If you're willing to accept the possibility of less than 100% up-time while you recover from broken hardware, you should consider using transactional database replication to keep a live copy of your data on a secondary server. Read the documentation for your database server software to find out how to set this up.
Needless to say, always set up regular automated backups.
Note:
- If you plan to install Drupal 7 on a web server that browsers will reach only via HTTPS, there's an outstanding issue you'll want to check (#313145: Support X-Forwarded-Proto HTTP header). At this time, Drupal's AJAX callbacks use URLs based on the protocol used at the web server, regardless of the protocol used at the proxy. Your workaround is either this patch, or to set the "reverse_proxy" variable manually in your settings.php file. Unfortunately, as the Drupal installer relies on AJAX, your only other option is to install via HTTP instead of HTTPS.
Reference:
https://drupal.org/node/425990