Wednesday, December 16, 2009

An algorithm to find and resolve data differences between MySQL tables

I’ve been designing an algorithm to resolve data differences between MySQL tables, specifically so I can ‘patch’ a replication slave that has gotten slightly out of sync without completely re-initializing it. I intend to create a tool that can identify which rows are different and bring them into sync. I would like your thoughts on this.

Background and requirements

I see this as the next step in my recent series of posts on MySQL tools and techniques to keep replication running reliably and smoothly. Sometimes slaves “drift” a little bit, even when there don’t seem to be any issues with replication (this is one reason I submitted a bug report to add checksums on binlog events). Once a table differs on the slave, it gets more and more different from the master, possibly causing other tables to differ too.

I need a tool that, given a table known to differ on master and slave(s), will efficiently compare the tables and resolve the differences. Finding tables that differ is easy with MySQL Table Checksum, but I am not sure the best way to find which rows differ.

Here are my requirements. The algorithm needs to be:

  • Designed for statement-based replication, which means no temp tables, no expensive queries that will propagate to the slave, and so forth.
  • Efficient in terms of network load and server load, both when finding and when resolving differences: no huge scratch tables or un-indexed data, no high-impact INSERT .. SELECT locking, and so forth.
  • Efficient on the client-side where the tool is executed.
  • Must work well on “very large” tables.

Some things I assume:

  • Tables must have primary keys. Without primary keys, it’s hard or a waste of time at best, and a disaster at worst.
  • It is not a good idea to do this unless the fraction of rows that differ is very small. If much of the table is different, then mysqldump is a better idea.

Other tools I’ve found

I’ve found a number of tools that are either not complete or don’t quite address the need, but reading the source code has been very productive. There’s Giuseppe Maxia’s work in remote MySQL table comparison. I based the MySQL Table Checksum tool on some of this work. Read the comments on that link, and you’ll see some follow-up from Fabien Coelho, who wrote pg_comparator. The documentation for this tool is an excellent read, as it goes into great detail on the algorithm used.

There are also a few projects that don’t do what I’m looking for. datadiff does a two-way in-server comparison of two tables with OUTER JOIN, a fine technique but inherently limited to two tables on one server, and not practical for extremely large tables. coldiff is a more specialized variant of that tool. mysqldiff diffs the structure of two tables, which I mention for completeness though it is not the problem I’m trying to solve.

The Maxia/Coelho bottom-up algorithm

Without restating everything these smart people have written, here’s a high-level overview of the algorithm as presented by Maxia and Coelho:

  • Compute a “folding factor” based on the number of rows in the table and/or user parameters.
  • Build successive levels of checksum tables bottom-up, starting at a row-level granularity and decreasing granularity by the “folding factor” with each level, until the final table has a single row.

    • Each row in the first table contains key column(s), a checksum of the key column(s), and a checksum of the whole row.
    • Each row in an intermediate-level summary table contains checksums for a group of rows in the next more granular level of summary data.
    • Groups are defined by taking checksums from the previous level modulo the folding factor.

  • Beginning at the most aggregated level, walk the “tree” looking for the differences, homing in eventually on the offending rows.
The “folding factor” is really a “branching factor” for the tree of summary tables. If the factor is 128, each row in an intermediate summary table will contain the groupwise checksum of about 128 rows in the next most granular summary table.
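As a minimal in-memory sketch (Python stands in for the SQL summary tables; the hash function, the folding by key-checksum modulo a shrinking modulus, and the tiny folding factor are all illustrative, not the authors' exact implementation):

```python
import hashlib

def checksum(val):
    # Stand-in for the SQL MD5(CONCAT_WS(...)) row/group checksum.
    return hashlib.md5(str(val).encode()).hexdigest()

def build_summary_levels(rows, fold=4):
    """rows: {primary_key: row_tuple}. Returns the list of summary levels;
    level 0 holds one checksum per row, the last level a single row."""
    # Level 0: a key checksum and a whole-row checksum for every row.
    level = {int(checksum(pk), 16): checksum(row) for pk, row in rows.items()}
    levels = [level]
    modulus = max(len(rows) // fold, 1)
    while True:
        # Group by key-checksum modulo the current modulus, then checksum
        # each group to form the next, less granular level.
        groups = {}
        for kcs, cs in level.items():
            groups.setdefault(kcs % modulus, []).append(cs)
        level = {g: checksum("".join(sorted(cs))) for g, cs in groups.items()}
        levels.append(level)
        if modulus == 1:
            break  # the final level has a single row
        modulus = max(modulus // fold, 1)
    return levels
```

Two servers building these levels over identical data get identical top-level checksums, and any differing row changes every checksum on its path to the root — which is what makes the logarithmic search possible.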
This algorithm has many strengths. For example, it uses a logarithmic search to find rows that differ. It makes no assumptions about key distributions; the modulo operation on the checksum should randomize the distribution of which rows need to be fixed. It’s also very generic, which means it works pretty much the same on all tables. There’s no need to think about the “best way to do it” on a given table.
I am concerned about a few things, though. There’s a lot of data in all these summary tables. The first summary table contains as many rows as the table to analyze. If I were to calculate and store these rows for a table with lots of relatively narrow rows, I might be better off just copying the whole table from one server to the other. Also, creating these tables is not replication-friendly; the queries that run on the master will run on the slave too. This might not be a problem for everyone, but it would not be acceptable for my purposes.
The second part of the algorithm, walking the “tree” of summary tables to find rows that differ, doesn’t use any indexes in the implementations I’ve seen. Suppose I have a table with 128 million rows I want to analyze on two servers, using a branching factor of 128 (the default). The first checksum table has 128 million rows; the second has 1 million, and so on. Repeated scans on these tables will be inefficient, and given the randomization caused by the summary checksums, will cause lots of random I/O. Indexes could be added on the checksum modulo branching factor, but that’s another column, plus an index — this makes the table even bigger.
The checksum/modulo approach has another weakness. It defeats any optimizations I might be able to make based on knowledge of where in the table the rows differ. If the differences are grouped at the end of the table, for example in an append-only table that just missed a few inserts on the slave, the algorithm will distribute the “pointers” to these corrupt rows randomly through the summary tables, even though the rows really live near each other. Likewise, if my table contains client data and only one client is bad, the same situation will happen. This is a major issue, especially in some large tables I work with where we do things a client or account at a time. These and other spatial and temporal locality scenarios are realistic, because lots of real data is unevenly distributed. The checksum/modulo approach isn’t optimal for this.
Finally, the bottom-up approach doesn’t allow for early optimization or working in-memory. It builds the entire tree, then does the search. There’s no chance to “prune” the tree or try to keep a small working set. The flip side of this is actually a strength: assuming that the whole tree needs to be built, bottom-up is optimal. But most of my data isn’t like that. If much of the table is corrupt, I’m going to do a mysqldump instead, so I want to optimize for cases where I’ll be able to prune the tree.

One solution: a top-down approach

Given that I won’t even be looking at a table unless the global checksum has already found it differs, I am considering the following top-down approach, or some variation thereof:
  • Generate groupwise checksums for the whole table in a top-level grouping (more on that later).
  • If more than a certain fraction of the groups differ, quit. Too much of the table is different.
  • Otherwise descend depth-first into each group that has differences.
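The three steps above can be sketched client-side like this (Python; the dicts stand in for the master and slave tables, and grouping by floor(id / chunk) is just one hypothetical strategy from the grouping discussion below):

```python
import hashlib

def group_checksums(table, chunk):
    # The per-group checksums a server-side query would return,
    # grouping rows by FLOOR(id / chunk).
    groups = {}
    for pk, row in table.items():
        groups.setdefault(pk // chunk, []).append((pk, row))
    return {g: hashlib.md5(repr(sorted(rows)).encode()).hexdigest()
            for g, rows in groups.items()}

def find_diff_keys(master, slave, chunk=1000, top=True):
    """Depth-first descent into differing groups only; matching groups
    are pruned without their rows ever being read again."""
    m = group_checksums(master, chunk)
    s = group_checksums(slave, chunk)
    bad = sorted(g for g in set(m) | set(s) if m.get(g) != s.get(g))
    if top and len(bad) > 0.5 * max(len(m), len(s), 1):
        raise RuntimeError("too much of the table differs; use mysqldump")
    diffs = []
    for g in bad:
        lo, hi = g * chunk, (g + 1) * chunk
        sub_m = {k: v for k, v in master.items() if lo <= k < hi}
        sub_s = {k: v for k, v in slave.items() if lo <= k < hi}
        if chunk == 1:
            diffs.extend(set(sub_m) | set(sub_s))  # a single differing row
        else:
            diffs.extend(find_diff_keys(sub_m, sub_s,
                                        max(chunk // 10, 1), top=False))
    return sorted(diffs)
```

When the differences are clustered in one chunk, only that chunk's rows are re-read at each level — which is where the locality win comes from.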
I think this algorithm, with some tuning, will address most of my concerns above. In particular, it will allow a smart DBA to specify how the grouping and recursion should happen. The choice of grouping is actually the most complicated part.
I’d do this client-side, not server-side. I’d generate the checksums server-side, but then fetch them back to the client code and keep them in memory. Given a good grouping, this shouldn’t require much network traffic or memory client-side, and will avoid locks, eliminate scratch tables, and keep the queries from replicating.
In the best case, all other things being equal, it will require the server to read about as many rows as the bottom-up approach, but it will exploit locality — a client at a time, a day at a time, and so on. This is a huge help, in my opinion; reducing random I/O is a high priority for me.
Given all this, I think top-down is better if there are not many changes to resolve, or if they’re grouped tightly together.
Some of the weaknesses I see are complexity, a proliferation of recursion and grouping strategies, perhaps more network traffic, and susceptibility to edge cases. Whereas the bottom-up approach has identical best and worst cases for different distributions of corrupt rows (assuming the number of corrupt rows is constant), the top-down approach suffers if there’s no locality to exploit. I’m a bit worried about edge cases causing this to happen more than I think it ought to.
Finally, and this could be either a strength or weakness, this approach lets every level of the recursion have a different branching factor, which might be appropriate or not — the DBA needs to decide.

Smart grouping and recursion

I think the hardest part is choosing appropriate ways to group and “drill down” into the table. Here are some possible strategies:
  • Date groupings. We have a lot of data in InnoDB tables with day-first or week-first primary keys, which, as you know, create a day-first or week-first clustered index. The first checksum I’d run on these tables would be grouped by day.
  • Numeric groupings. Tables whose primary key is an auto-incremented number would probably be best grouped by division, for example, floor(id/5000) to group about 5000 neighboring rows together at a time.
  • Character groupings. If the primary key is a character string, I might group on the first few letters of the string.
  • Drill-down. Take, for example, one of our tables whose primary key is an ID (an auto-incremented number) plus a client account number. The best grouping for that table is by account number, then numerically by ID within each account. For the day-first table, I’d group by day, then account number, and then by ID.
  • Exploit append-only tables. If a table is append-only, then corruption is likely in the most recent data, and I might try to examine only that part of the table. If there are updates and deletes to existing rows, this approach might not work.
  • Use defaults if the DBA doesn’t specify anything. If there’s a multi-column primary key, recurse one column at a time. If a single-column key, look for another key whose cardinality is less, and recurse from that to the primary key instead.
I think the DBA will have to choose the best strategy on a table-by-table basis, because I can’t think of a good automatic way to do it. Even analyzing the index structures on the table, and then trying to decide which are good choices, is too risky to do automatically. For example, SHOW INDEX will show estimated index cardinalities, but they’re based on random dives into the index tree and can be off by an order of magnitude or more.
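To make those strategies concrete, here is a hypothetical helper that builds the kind of grouped-checksum query each strategy implies (table and column names are invented; the checksum expression uses MySQL's real MD5, CONCAT_WS, SUBSTRING and CONV functions):

```python
def checksum_query(table, group_expr, row_expr):
    """Build a grouped-checksum SELECT for one drill-down level.
    All table and column names in the examples below are illustrative."""
    return (f"SELECT {group_expr} AS grp, COUNT(*) AS cnt, "
            f"SUM(CONV(SUBSTRING({row_expr}, 1, 8), 16, 10)) AS sig "
            f"FROM {table} GROUP BY grp")

# Date grouping for a table with a day-first clustered primary key:
q_date = checksum_query("orders", "DATE(created)",
                        "MD5(CONCAT_WS('/', id, created, total))")
# Numeric grouping: about 5000 neighboring auto-increment ids per group:
q_num = checksum_query("orders", "FLOOR(id/5000)",
                       "MD5(CONCAT_WS('/', id, created, total))")
# Character grouping on the first letters of a string key:
q_char = checksum_query("users", "LEFT(login, 2)",
                        "MD5(CONCAT_WS('/', login, email))")
```

Each strategy only changes the grouping expression; the checksum machinery stays the same, so a tool can let the DBA supply the expression per table.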

How to resolve the differences

Again assuming that this reconciliation is taking place between a master and slave server, it’s important to fix the rows without causing more trouble while the fixing happens. For example, I don’t want to do something that’ll propagate to another slave that’s okay, and thereby mess it up, too.
Fixing the rows on the master, and letting the fixes propagate to the slave via the normal means, might actually be a good idea. If a row doesn’t exist or is different on the slave, REPLACE or INSERT .. ON DUPLICATE KEY UPDATE should fix the row on the slave without altering it on the master. If the row exists on the slave but not the master, DELETE on the master should delete it on the slave.
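A sketch of that repair step (hypothetical helper; quoting via Python's repr() is a simplification — a real tool must escape values properly, and REPLACE could equally be INSERT .. ON DUPLICATE KEY UPDATE):

```python
def fix_statements(table, key_col, master_rows, slave_rows):
    """Generate statements to execute ON THE MASTER so that the fixes
    replicate to every slave: REPLACE for rows missing or different on
    the slave, DELETE for rows the slave has but the master doesn't."""
    stmts = []
    for pk, row in master_rows.items():
        if slave_rows.get(pk) != row:
            vals = ", ".join(repr(v) for v in row)
            stmts.append(f"REPLACE INTO {table} VALUES ({vals})")
    for pk in slave_rows:
        if pk not in master_rows:
            stmts.append(f"DELETE FROM {table} WHERE {key_col} = {pk!r}")
    return stmts
```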
Peripheral benefits of this approach are that I don’t need to set up an account with write privileges on the slave. Also, if more than one slave has troubles with the same rows, this should fix them all at the same time.
Issues I need to research are whether the different number of rows affected on the slave will cause trouble, and whether this can be solved with a temporary slave-skip-errors setting. The manual may document this, but I can’t find it.

Next steps

I’m looking forward to your feedback, and then I plan to build a tool that’ll implement whatever algorithm emerges from that discussion. At this point, assuming the above algorithm is as good as we can come up with together, I’m planning to actually implement both top-down and bottom-up approaches in the tool, so the DBA can decide what to use. The tool will, like the rest of the scripts in the MySQL Toolkit, be command-line friendly (there are lots of proprietary “visual tools” to compare and sync tables, but they don’t interest me — plus, why would I ever trust customer data to something I can’t see source code for?). I also understand that not everyone has the same narrowly-defined use case of re-syncing a slave, so of course I’ll make the tool more generic.
For my own use, ideally I’ll be making sure the tool is rock-solid, then defining rules for tables that frequently drift, and running a cron job to automatically find which tables are different and fix them. If the MySQL Table Checksum tool finds a table is out of sync and I don’t have a rule for it, it’ll just notify me and not try to fix it.


In this article I proposed some ideas for a top-down, in-client, replication-centric way to compare a table known to differ on a master and slave, find the rows that differ, and resolve them. I’m thinking about building a tool to implement this algorithm, and would like your feedback on efficient ways to do this.

Nice article!

We have a similar tool called SQLyog Job Agent which incorporates most of what you have discussed in this article. Unfortunately, it is not open-source.

We are always trying to improve the algorithm and look forward to more articles on this topic!


6 Mar 07 at 2:32 am

Very interesting. I’ve already developed a simple tool to do just that, albeit based on a simple row-by-row comparison, with the corrective actions being inserts or updates applied directly on the slave.

It can recurse all the databases and tables to repair an entire database, or just operate on a single table.

I’d not considered doing the corrective action on the master, but it’s an interesting idea.

You’re absolutely correct that tables that lack a primary key are barely worth attempting, and so far my script ignores them.

One thing I’ve found in practice is that you must perform the synchronisation while replication is actually running. If you don’t, you will inevitably end up replicating ahead of the normal replication process and breaking it.

I’ve found that this tool is useful for bringing up a new replica when the master cannot be affected in any way, such as through table locking.

If you do develop a working implementation of your own, do let me know!


James Holden

6 Mar 07 at 7:48 am

Rohit, if I read your comment right, you’re subtly saying I’m heading in a good direction, which is encouraging :-)

James, you’ve reinforced my belief that lots of people need a tool that doesn’t disrupt replication.


6 Mar 07 at 8:57 am

I finally read the stuff (this article and the source code). The various discussions are very interesting.

I’m not sure that I fully understand the “index” issue with the bottom-up approach. Basically, each summary table is built (once) and then scanned just once, so the cost of building an index on some attribute would not be amortized. The only exception may be for bulk deletes or inserts, but that should not happen.

On the “exploit append-only tables” idea, the bottom-up approach can have a “where” clause on the initial table so that the comparison is only performed on part of the data. Moreover, if the candidate tuples are somehow an identifiable fraction of the table, it might be simpler to just download them directly for comparison; that would be a third algorithm :-)

Do you have performance figures with your tool in different settings?

Fabien Coelho

15 May 07 at 5:06 am

Hello Fabien. The issue with the indexing is not scans, but lookups from a child table to its parent tables, including the group-by queries. These happen potentially many times. I could benchmark with and without indexes fairly easily and see for sure, but after writing all the queries I’m satisfied the index is important.

The WHERE clause has proven to be very important, as you guessed.

I did some testing with real data, and the results are here: Comparison of Table Sync Algorithms.


15 May 07 at 7:57 am

I have a problem with an SQL query and hope you can correct it for me.
This is my query:

$queryselect = "select distinct student.id_student, first_name, last_name, promotion, class
from student, applicationrecieve where student.id_student applicationrecieve.id_student";

What I want is to select the rows from table student that do not exist in the table applicationrecieve.



27 Jun 07 at 5:57 am

Nice article. I’ve studied the algorithm before, but never understood it so clearly.
Actually I ran into the same need to keep my databases synchronized, and for some days I’ve been trying to build a suitable algorithm for that. The top-down algorithm is good but, as you mentioned, it is hard to find a good grouping. And what is to be done if there is only one indexed column, the primary key? So I find the bottom-up approach more suitable for my purpose, but with some differences: I’m going to build a B*-tree based on row checksums for each table, keep it saved locally (I think an XML structure is a good way), and, to save time and traffic, do the comparison locally too. I’m not sure it’s the best way, but I want to try.
Best regards.

Negruzzi Cristian

9 Oct 07 at 8:38 am

Please take a look at MySQL Table Sync in the MySQL Toolkit. It may save you a lot of work. I’ve implemented both algorithms there.


9 Oct 07 at 8:50 am

But by all means, explore your algorithm too! I don’t mean to say you shouldn’t. It may be a much better way.


9 Oct 07 at 8:52 am

i want the algorithm to find the table of any no.


14 Oct 08 at 8:01 am

Remote MySQL table comparison

Remote MySQL table comparison
by gmax

In the current August 2004 issue of SysAdmin there is an article about one of my favorite subjects, i.e. remote table comparison applied to MySQL databases.

The source code is actually working code (not as well documented as I would have liked, though, but space was not unlimited) with which you can compare two tables in two remote databases and see whether they differ at all; if they do, there are methods to find out which rows are different, without copying the data from one host to the other.
How this works is explained in detail in the article, so I won't repeat it here. However, since the article deals mostly with database issues, I would like to spend some words on the supporting Perl code, which does the magic of creating the not-so-obvious SQL queries that carry out the task. I won't examine the most complex script, the one that finds the detailed differences between tables, because I'd need to introduce too much background knowledge to explain it in full. Instead, I'd like to present a smaller function that tells you whether two tables differ at all, so you can take further action.
This is a slightly modified version of the source code published in the magazine for the same task. I am thinking about making a CPAN module out of the whole thing, but not right now.

Let's start with the algorithm used.

In short, if you want to compare two large records, you can make a signature of each record by joining its fields together and applying a checksum function such as MD5 or SHA1.
Comparing a whole table is trickier, because in standard SQL, without stored procedures and cursors, obtaining the checksum of a range of records is far from trivial. One possibility, though, is to make a checksum for each record and then sum them up; the result can be considered the table signature.

Here's the code.

sub get_table_CRC {
    my $dbh       = shift;
    my $tablename = shift;
    my $fields    = get_fields($dbh, $tablename);
    my $check_table = qq[
        SELECT
            COUNT(*) AS cnt,
            CONCAT(
                SUM(CONV(SUBSTRING(\@CRC:=MD5(
                    CONCAT_WS('/##/',$fields)),1,8),16,10)),
                SUM(CONV(SUBSTRING(\@CRC, 9,8),16,10)),
                SUM(CONV(SUBSTRING(\@CRC,17,8),16,10)),
                SUM(CONV(SUBSTRING(\@CRC,25,8),16,10))
            ) AS sig
        FROM $tablename ];
    # uncomment the following line to see the full query
    # print $check_table, $/;
    my ($count, $crc);
    eval { ($count, $crc) = $dbh->selectrow_array($check_table) };
    if ($@) {
        return undef;
    }
    return [$count, $crc];
}

The SQL part is complicated by the fact that MySQL can't handle arithmetic operations on numbers as large as an MD5 signature. To get over this problem, I split the MD5 string into four chunks using the SUBSTRING function and convert each from base 16 to base 10 using CONV. The result is a simple number that can be passed to SUM. The MD5 is calculated once per record and assigned to a global MySQL variable (@CRC). It is the signature of a string composed of all the fields in the record, with some adjustments to keep NULL values out of the equation.
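The chunk-and-convert step can be mirrored in Python to see why the arithmetic stays manageable (illustrative only; in the article the work happens entirely in SQL):

```python
import hashlib

def md5_chunk_sums(rows, sep="/##/"):
    """Mirror of the SQL technique: MD5 each row's joined fields, split the
    32-char hex digest into four 8-char chunks, convert each from base 16 to
    base 10 (CONV), and keep four running sums; their concatenation is the
    table signature."""
    sums = [0, 0, 0, 0]
    for row in rows:
        digest = hashlib.md5(sep.join(str(f) for f in row).encode()).hexdigest()
        for i in range(4):
            sums[i] += int(digest[i * 8:(i + 1) * 8], 16)  # CONV(..., 16, 10)
    return "".join(str(s) for s in sums)
```

Because addition is commutative, the signature is independent of row order, just as SUM() is in the query.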
Perl's biggest involvement in all this, apart form making the query, which is a bitch to create manually, is the get_fields function that reads the table structure and creates the list of fields to be be passed to MD5. If one field is nullable, a call to the COALESCE function will be used instead of its bare name.

sub get_fields {
    my ($dbh, $tablename) = @_;
    my $sth = $dbh->prepare(qq[describe $tablename]);
    $sth->execute();
    my @fields = ();
    while (my $row = $sth->fetchrow_hashref()) {
        my $field = "`$row->{Field}`";    # backticks
        # if the field is nullable, a COALESCE function is used
        # to prevent the whole CONCAT from becoming NULL
        if (lc $row->{Null} eq 'yes') {
            $field = qq[COALESCE($field,"#NULL#")];
        }
        push @fields, $field;
    }
    return join ",", @fields;
}

With these two functions ready, we can actually run a test on two tables in two different hosts.
To be sure that the function works as advertised, this script will actually create the tables in both hosts. The first time this script runs, the newly created tables have the same contents, and the result will be "no differences". If you run it a second time, one record will be altered, just barely, and the result will be different.

#!/usr/bin/perl -w
use strict;
use DBI;

# for this test, we create the database handlers directly in the script
my $dbh1 = DBI->connect('dbi:mysql:test;host=localhost',
    'localuser', 'localpassword', {RaiseError => 1});
my $dbh2 = DBI->connect('dbi:mysql:test;host=;port=13330',
    'remoteuser', 'remotepassword', {RaiseError => 1});

# this is the table to be created in both hosts
my $tablename = 'testcrc';
my $create = qq{create table if not exists $tablename
    (i int not null, j int, a char(1), b float)};

my ($table_exists) = $dbh1->selectrow_array(
    qq{SHOW TABLES LIKE '$tablename'});

if ($table_exists) {
    # table exists. Let's make a tiny change
    $dbh1->do(qq{update $tablename set j = j+1 where i = 50});
}
else {
    # table does not exist. Create and populate both tables
    $dbh1->do($create);
    $dbh2->do($create);
    my $insert = qq{insert into $tablename values (?, ?, ?, ?)};
    my $sth1 = $dbh1->prepare($insert);
    my $sth2 = $dbh2->prepare($insert);
    # populate both tables with the same values
    for ( 1 .. 100 ) {
        $sth1->execute($_, $_ * 100, chr(ord('A') + $_), 1 / 3);
        $sth2->execute($_, $_ * 100, chr(ord('A') + $_), 1 / 3);
    }
}

my %probes;
# gets the local table record count and CRC
$probes{'local'} = get_table_CRC($dbh1, $tablename)
    or die "wrong info: $DBI::errstr\n";
# gets the remote table record count and CRC
$probes{'remote'} = get_table_CRC($dbh2, $tablename)
    or die "wrong info: $DBI::errstr\n";

# checks the result and displays
print "LOCAL : @{$probes{'local'}}\nREMOTE: @{$probes{'remote'}}\n";
if ( ($probes{'local'}->[0] != $probes{'remote'}->[0])
  or ($probes{'local'}->[1] ne $probes{'remote'}->[1]) ) {
    print "there are differences\n";
}
else {
    print "NO DIFFERENCES\n";
}

The first run gives this result (the query is displayed only if you uncomment the relative print statement).

    SELECT
        COUNT(*) AS cnt,
        CONCAT(SUM(CONV(SUBSTRING(@CRC:=MD5(
            CONCAT_WS('/##/',`i`,COALESCE(`j`,"#NULL#"),
                COALESCE(`a`,"#NULL#"),
                COALESCE(`b`,"#NULL#"))),1,8),16,10)),
        SUM(CONV(SUBSTRING(@CRC, 9,8),16,10)),
        SUM(CONV(SUBSTRING(@CRC,17,8),16,10)),
        SUM(CONV(SUBSTRING(@CRC,25,8),16,10))
        ) AS sig
    FROM testcrc

LOCAL : 100 211184068521202404576502196746869468230923726643
REMOTE: 100 211184068521202404576502196746869468230923726643

The first number is a simple record count. The second one is a concatenated string from the four sums calculated on the MD5 chunks.
When we run the script for the second time, column "j" in record nr. 50 is increased by 1. This displays the following result:

LOCAL : 100 208246712599204490057913196197913536230317654094
REMOTE: 100 211184068521202404576502196746869468230923726643
there are differences

For the third run, we modify the source code and change "set j = j+1" to "set j = j-1". Again, the script reports that there are no differences.

You should notice that what we actually get from the get_table_CRC function is a few dozen bytes, even if the tables contain a million records. The database server may have some number crunching to perform, but you don't need to send gigabytes of data through the network.
This technique can save you hours of searching when you need to know whether two remote tables differ. More granularity is provided by the other functions described in the article.

Comments welcome

Re: Remote MySQL table comparison
by fab (Initiate) on Aug 25, 2004 at 12:07 UTC

I've also read this paper with great interest.

Although there are a few potential bugs in the algorithms and in the implementations presented, the idea is both simple and efficient.

Thus I've implemented a new version which hopefully solves some of the weaknesses I found and has a better theoretical behavior. It is dedicated to PostgreSQL, but may be adapted to other databases.

It is called pg_comparator, a tool for network and time efficient database table content comparison.

See the pg_comparator page for the Perl implementation.
Another programmer extended Maxia's work even further. Fabien Coelho changed and generalized Maxia's technique, introducing symmetry and avoiding some problems that might have caused too-frequent checksum collisions. This work grew into pg_comparator, and Coelho explained the technique further in a paper titled "Remote Comparison of Database Tables".

This existing literature mostly addressed how to find the differences between tables, not how to resolve them once found. I needed a tool that would not only find differences efficiently but then resolve them. I first began thinking about how to improve the technique in the article above, where I discussed a number of problems with the Maxia/Coelho "bottom-up" algorithm. After writing that article, I began to write this tool. I wanted to actually implement their algorithm, with some improvements, to be sure I understood it completely. I discovered it is not what I thought it was, and is considerably more complex than it appeared at first. Fabien Coelho was kind enough to address some questions over email.




He who hasn't hacked assembly language as a youth has no heart. He who does so as an adult has no brain. ~John Moore

If debugging is the process of removing bugs, then programming must be the process of putting them in. ~Author Unknown

If you cannot grok the overall structure of a program while taking a shower, e.g., with no external memory aids, you are not ready to code it. ~Richard Pattis

It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter. ~Nathaniel S. Borenstein

It's easy to cry "bug" when the truth is that you've got a complex system and sometimes it takes a while to get all the components to co-exist peacefully. ~Doug Vargas

It's okay to figure out murder mysteries, but you shouldn't need to figure out code. You should be able to read it. ~Steve McConnell

It's the only job I can think of where I get to be both an engineer and an artist. There's an incredible, rigorous, technical element to it, which I like because you have to do very precise thinking. On the other hand, it has a wildly creative side where the boundaries of imagination are the only real limitation. ~Andy Hertzfeld, about programming

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. ~C.A.R. Hoare, quoted by Donald Knuth

Programming is like sex. One mistake and you have to support it for the rest of your life. ~Michael Sinz

Programming is similar to a game of golf. The point is not getting the ball in the hole but how many strokes it takes. ~Harlan Mills

Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary. ~Author Unknown

Programming languages, like pizzas, come in only two sizes: too big and too small. ~Richard Pattis

Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to produce bigger and better idiots. So far, the universe is winning. ~Author Unknown

Programs for sale: fast, reliable, cheap - choose two. ~Author Unknown

Ready, fire, aim: the fast approach to software development. Ready, aim, aim, aim, aim: the slow approach to software development. ~Author Unknown

Reusing pieces of code is like picking off sentences from other people's stories and trying to make a magazine article. ~Bob Frankston

Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration. ~Stan Kelly-Bootle

The best performance improvement is the transition from the nonworking state to the working state. ~J. Osterhout

The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.... The computer resembles the magic of legend in this respect, too. If one character, one pause, of the incantation is not strictly in proper form, the magic doesn't work. Human beings are not accustomed to being perfect, and few areas of human activity demand it. Adjusting to the requirement for perfection is, I think, the most difficult part of learning to program. ~Frederick Brooks

The only way for errors to occur in a program is by being put there by the author. No other mechanisms are known. Programs can't acquire bugs by sitting around with other buggy programs. ~Harlan Mills

There are two ways to write error-free programs; only the third one works. ~Alan J. Perlis

One man's constant is another man's variable. ~Alan J. Perlis

There does not now, nor will there ever exist, a programming language in which it is the least bit hard to write bad programs. ~Lawrence Flon

We don't manage our time as well as we manage our space. There's an overhead of starting and an overhead of stopping a project because you kind of lose your momentum. And you've got to bracket and put aside all the things you're already doing. So you need reasonably large blocks of uninterrupted time if you're going to be successful at doing some of these things. That's why hackers tend to stay up late. If you stay up late and you have another hour of work to do, you can just stay up another hour later without running into a wall and having to stop. Whereas it might take three or four hours if you start over, you might finish if you just work that extra hour. If you're a morning person, the day always intrudes a fixed amount of time in the future. So it's much less efficient. Which is why I think computer people tend to be night people - because a machine doesn't get sleepy. ~Bill Joy

When a programming language is created that allows programmers to program in simple English, it will be discovered that programmers cannot speak English. ~Author Unknown

When debugging, novices insert corrective code; experts remove defective code. ~Richard Pattis

When you catch bugs early, you also get fewer compound bugs. Compound bugs are two separate bugs that interact: you trip going downstairs, and when you reach for the handrail it comes off in your hand. ~Paul Graham, "The Other Road Ahead," 2001

You cannot teach beginners top-down programming, because they don't know which end is up. ~C.A.R. Hoare

In programming, as in everything else, to be in error is to be reborn. ~Alan J. Perlis

The New Testament offers the basis for modern computer coding theory, in the form of an affirmation of the binary number system. "But let your communication be Yea, yea; nay, nay: for whatsoever is more than these cometh of evil." Matthew 5:37 ~Author Unknown

What I mean is that if you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your own mind. And the more slow and dim-witted your pupil, the more you have to break things down into more and more simple ideas. And that's really the essence of programming. By the time you've sorted out a complicated idea into little steps that even a stupid machine can deal with, you've certainly learned something about it yourself. ~Douglas Adams

Tuesday, December 15, 2009

MySQL – Multi Master Replication

There are two common types of replication in MySQL: Multi-Master and Master-Slave replication.

In Master-Slave replication, the Slave acts as a backup MySQL server that is continuously updated with data passed from the Master. The slave can also serve as a read-only server, which is useful for users who only need to generate reports from the data rather than write to it.

In Multi-Master replication, both MySQL servers replicate data to each other, and both are available online for data entry. Each server allows applications to write to its database and replicates those writes to the other master, and vice versa.

This post covers Multi-Master rather than Master-Slave configuration. Configuring Multi-Master replication in MySQL is fairly simple once you understand how it works and which problem is commonly encountered. The main problem with a Multi-Master setup is:

  1. Duplicate records caused by identical primary keys when both servers use auto increment.

This common problem can be avoided if you configure the servers properly. Configuring Multi-Master on Windows and on Linux is the same. Below are some hints for setting up Multi-Master MySQL replication:

Install MySQL Server

You may install the MySQL server either from your distribution's repository or from the packages provided by MySQL.

If you are using CentOS/Red Hat, you can use the "yum" command to install MySQL from the repository. To make sure none of the required components are left out during installation, use a wildcard. For example:

yum install "mysql*"

If you are a Windows user, you cannot use yum; download the installer from the MySQL web site instead.

Setting Up Master Server

Once installed, MySQL can run with default settings, but it will be an independent server with no master features enabled. To enable the master features on each installed MySQL server, do the following:

  • Create a MySQL user with replication privileges only.
  • Stop all MySQL servers.
  • Update each MySQL server's configuration file with some additional settings.
  • Repeat the steps above on the rest of the master servers.
  • Start all MySQL servers.

Create a mysql user with replication privileges:

Execute the following SQL command to create a new MySQL user that the other server will use when connecting as a slave. For example, with username "repluser" and password "slavepass":

GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%' IDENTIFIED BY 'slavepass';

Stop all MySQL servers:

For Linux, to stop the MySQL server, you may execute the following command:

service mysqld stop

For Windows, execute the services management console and stop the mysql instances.

Update the MySQL server configuration:

Add the following lines to the MySQL server configuration file (my.cnf on Linux, my.ini on Windows). If a setting already exists, update it. The values below are an example; adjust the server id, the offsets, and the other master's address to match your own setup:

server-id=1
log-bin=mysql-bin
log-slave-updates
replicate-same-server-id=0
auto_increment_increment=2
auto_increment_offset=1
master-host=<another master node IP>
master-user=repluser
master-password=slavepass
skip-slave-start
  1. Make sure there are no clashes on the server id. The server id must be unique for each MySQL master in the same replication setup.
  2. The auto_increment_increment and auto_increment_offset settings ensure that auto-increment values never clash between the master servers.
  3. replicate-same-server-id should be turned off by setting it to 0. This prevents a master from re-applying its own events when they come back via another master, which would cause an infinite loop or data consistency issues.
  4. Each master needs to be a slave of another master in order to receive the updated data from the other MySQL servers. This is set with the master-host setting. With only two MySQL servers (A and B), the arrangement is Server A >> Server B >> Server A.
  5. log-bin enables the master to log the queries executed on it by the web application or administrator.
  6. log-slave-updates makes the master also log the queries received from other MySQL masters. This is important if the topology has more than two master servers.
  7. The slave threads should not start automatically (the skip-slave-start option). Otherwise, if the other masters are not up yet, the slave will throw an error because it cannot connect to its master.

Start all MySQL servers

For Linux, to start the MySQL server, you may execute the following command:

service mysqld start

For Windows, execute the services management console and start the mysql instances.

Once all the settings are done and the master servers are started, the next step is to start the slave threads. To do so, log in to each master server and run the following commands:

> mysql

mysql> START SLAVE;

You can then verify that replication is running with SHOW SLAVE STATUS\G and check that both Slave_IO_Running and Slave_SQL_Running say "Yes".

Database replication lag

As explained in an earlier blog post, we recently started using MySQL master-slave replication in order to provide the scalability necessary to accommodate our growing demands. With one or more replicas of our database, we can instruct Drupal to distribute or load balance the SQL workload among different database servers.

MySQL's master-slave replication is an asynchronous replication model. Typically, all the mutator queries (like INSERT, UPDATE, DELETE) go to a single master, and the master propagates all updates to the slave servers without synchronization or communication. While the asynchronous nature has its advantages, it also means that the slaves might be (slightly) out of sync.
Consider the following pseudo-code:

$nid = node_save($data);

$node = node_load($nid);

Because node_save() executes a mutator query (an INSERT or UPDATE statement), it has to be executed on the master so that the master can propagate the changes to the slaves. Because node_load() uses a read-only query, it can go to the master or any of the available slaves. Because of the lack of synchronization between master and slaves, there is one obvious caveat: when we execute node_load(), the slaves might not have been updated yet. In other words, unless we force node_load() to query the master, we risk not being able to present visitors the data they just saved. In other cases, we risk introducing data inconsistencies due to race conditions.

So what is the best way to fix this?
  1. Our current solution is to execute all queries on the master, except for those that we know can't introduce race conditions. In our running example, this means that we'd choose to execute all node_load()s on the master, even in the absence of a node_save(). This limits our scalability, so it is nothing but a temporary solution until we have a better one in place.
  2. One way to fix this is to switch to a synchronous replication model. In such a model, all database changes will be synchronized across all servers to ensure that all replicas are in a consistent state. MySQL provides a synchronous replication model through the NDB cluster storage engine. Stability issues aside, MySQL's cluster technology works best when you avoid JOINs and sub-queries. Because Drupal is highly relational, we might have to rewrite large parts of our code base to get the most out of it.
  3. Replication and load balancing can be left to one of the available proxy layers, most notably Continuent's Sequoia and MySQL Proxy. Drupal connects to the proxy as if it were the actual database, and the proxy talks to the underlying databases. The proxy parses all queries and propagates mutator queries to all the underlying databases to make sure they remain in a consistent state, while reads are distributed only among the servers that are up to date. This solution is transparent to Drupal and should work with older versions of Drupal. The only downsides are that it is not trivial to set up and, based on my testing, it requires quite a bit of memory.
  4. We could use database partitioning and assign data to different shards in one way or another. This would reduce the replication lag to zero, but as we don't have that much data or database tables with millions of rows, I don't think partitioning would buy us much.
  5. Another solution is to rewrite large parts of Drupal so it is "replication lag"-aware. In its most naive form, the node_load() function in our running example would get a second parameter that specifies whether the query should be executed on the master or not. The call would then be changed to node_load($nid, TRUE) when preceded by a node_save().

    I already concluded through research that this is not commonly done, probably because such a solution still doesn't provide any guarantees across page requests.
  6. A notable exception is MediaWiki, the software behind Wikipedia, which has some documented best practices for dealing with replication lag. Specifically, they recommend querying the master (with a very fast query) to find out which version of the data must be retrieved from the slave. If that version is not yet available on the slave due to replication lag, they simply wait for it to become available. In our running example, each node would get a version number: node_load() would first retrieve the latest version number from the master and then use it to make sure it gets an up-to-date copy from the slave. If the right version isn't available yet, node_load() retries until it becomes available.
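The version-check trick described in point 6 can be sketched in a few lines of Python. The in-memory master and slave stores, the replicate() helper, and the node_save()/node_load() names reused from the running example are all hypothetical stand-ins for the real database layer:

```python
import time

master = {}  # nid -> {"data": ..., "version": ...}
slave = {}   # asynchronously updated copy of master

def replicate():
    """Stand-in for MySQL replication catching up."""
    slave.update(master)

def node_save(nid, data):
    row = master.get(nid, {"version": 0})
    master[nid] = {"data": data, "version": row["version"] + 1}

def node_load(nid, retries=3, delay=0.01):
    # Cheap query on the master: which version must the slave have?
    want = master[nid]["version"]
    for _ in range(retries):
        row = slave.get(nid)
        if row is not None and row["version"] >= want:
            return row["data"]      # slave has caught up
        time.sleep(delay)           # wait out the replication lag
        replicate()                 # (a real slave catches up on its own)
    return master[nid]["data"]      # last resort: read the master

node_save(1, "hello world")
print(node_load(1))
```

The point of the sketch is that the read blocks only while the slave is behind the version the master reported, rather than always going to the master.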


Another emerging SQL proxy comparable to Continuent's Sequoia is MySQL's own MySQL Proxy, which is slowly maturing. It offers a Lua interpreter that allows scripting the query flow, and it will eventually offer all the features required to solve the challenge outlined above in a much more lightweight fashion than the Continuent solution.
August 24, 2007 - 12:52

Jakub Suchy:

Isn't Sequoia only for Java applications?
August 27, 2007 - 15:13

Philippe Jadin:

As for the solution 6, if Drupal enforces new revisions on each node save, it should be possible to compare the version numbers. This would apply to nodes only though (what about taxonomy terms and other stuff ?)
August 24, 2007 - 13:09


Behold! The silver bullet :)

Another option would be for your slaves to mostly read their data out of memcache. Then you could modify the master and memcache at the same time. Things would get replicated on the DB level, so that any of the slaves could rebuild the memcache entry if it were to disappear. At the same time memcache is clustered and therefore there would be no replication lag.
August 24, 2007 - 13:35


A master - master setup with sticky sessions? Should work and be fairly simple to set up.
August 24, 2007 - 14:58


We did some tests with Drupal 5 & master-master replication and it's not that simple. The master-master setup itself is no problem, but master-master breaks the sequences concept (db_next_id()) in Drupal and it also causes problems in the caching tables. We quickly gave up with it. We still use master-master for high availability, but point all our frontend servers to the same DB-server. You still need to be able to handle all load with one server that way, but it makes it easier to failover.
Master-master should become easier with Drupal 6, I'll try to do some tests with it once I have time.
August 24, 2007 - 22:07


If we as a community/project were willing to shift our mental map and work on the various UI issues, revisioned everything (option 6) sounds like a win-win approach.

The Requiring node revisions thread from the devel list has some background discussion on this option. There are a lot of nay-sayers in that thread, and I counted myself among them until I read an excerpt from one of Derek's posts here:

> How often do we do a rollback? How often have you
> done a rollback of one of your blog posts? How often have you
> rolled back a change on your company website? If the answer is
> "never" or "once a year" this should be reflected in the UIs.

IMHO, those are the wrong questions. These are more appropriate:
- How often have you wondered who and when was the last person to modify something?
- How often do you wish you could have seen exactly what they changed?
- How often do you wonder how frequently something is being changed?

For myself, and probably a large majority of users, the answer to these is "all the time".
August 24, 2007 - 17:06


My knee-jerk reaction would be to keep the layers separate and ignorant of each other (that was my original reaction to the earlier post as well).

I'm guessing Drupal has an order of magnitude more reads than mutations. Therefore, my first try would be to see what a fully synchronous master-master setup would buy me, before introducing a lot of logic in the application layer to solve the database layer's scalability problem.

Both MySQL and PostgreSQL seem capable to provide master-master setups. Some numbers would be interesting too.
August 24, 2007 - 19:17


Memcached? When storing data into the master, update the cache. Read first from cache. Hard to get sync issues. If it gets slow, add more memcached's. Everyone's doing it. :)

Otherwise, if data doesn't exist on the slave (make new id for comment edits? Dunno...) just query the master... If your boxes are relatively fast you won't often end up in a situation where someone can post, then try to reload a page before replication can get to it.
August 25, 2007 - 05:46

Robert Douglass:

I like the idea of versioned data but I don't like the idea of blocking on synchronization. I'd feel better if the strategy were "Is the latest version of the data on the slave? If not, load from the master".

Versioned data also makes caching very efficient, especially memcache (as mentioned above).

Oh, and everything webchick said =) Not only would I be in favor of requiring revisions for node data, I would support making revisions tables for all of our first class objects.
August 25, 2007 - 08:14


You might consider it a hack, but it has worked very well for us if there is a write query done to send future reads for that DB to the master. It's only for that page load.
August 26, 2007 - 03:00

Jeremy Cole:

Hi Dries,

The one true solution here as suggested already is to use memcache for scaling reads, rather than replication. Invalidate the cache entry on node_save() and populate it on node_load() if not present. Given the already present caching infrastructure within Drupal, I would think it's quite easy to do.

Use dual master replication (master-master) for failover, writing to only one master at a time (thus removing any concerns about sequences, etc.). Typically this is done using IP takeover.


August 29, 2007 - 18:17

Jacques Mattheij:

Hi there, I'm pretty green when it comes to Drupal, so forgive me any ignorance.

I think that it all depends on who is doing the 'looking'. If the majority of your users are not logged in, they can go to any one of the slaves and it will never matter what they see; if it's outdated, it won't be by much. (Most of the time that content could come straight from the cache anyway, since anonymous users are all the same user and see exactly the same content.) A user who refreshes a page and is not himself/herself the cause of an update has no way of knowing whether a page really is the latest version or 1 or 2 seconds out of date; as long as the page is consistent, they'll be happy. (When stuff starts breaking, that's a different matter of course.)

As soon as someone logs in the situation changes dramatically because these are the users that have update rights (unless of course you allow your anonymous users to update content, in which case a warning message that their update will show up shortly should be provided, and which is fairly common on high volume sites.)

So, as long as the anonymous / logged-in user division is reasonable, you'll get a big payoff without any penalties. You'd have to put a figure on that ratio to be able to decide whether the scenario is feasible.

Another possibility is to 'bind' the user to the machine that they land on, use a load distribution machine that passes through the complete HTTP request to the 'bound' node, update the bound node (which will give the user an immediate response and will always show consistent output) and the master, the master should take care of replicating to the other nodes. Sequence numbers would still have to be assigned centrally.

Another issue is sequence numbers.

I think I can see why Drupal uses sequence numbers in a separate table (portability ?) but that makes atomic inserts a lot harder and creates a headache during synchronization, possibly it would be better to document which dbms's/table types support auto_increment and to handle the ones that don't in single node configurations only.

Best regards,

Jacques Mattheij
December 13, 2007 - 21:12

MySQL Replication Tips And Tricks

Until recently, I was a student employee at the Oregon State University Open Source Lab. My career there ended, like many, with that painful process known as graduation. I got invaluable experience at the lab, not least the knowledge gained as their main (only) database administrator. One of my great pleasures in that position was learning how to configure MySQL replication and manage clusters of replicating database servers. Even the simple case of a single master and a single slave has its edge cases. There are endless tips and tricks to make long-term maintenance easier and to let you make full use of this technology. Running replicating database servers for Drupal in particular has several tricks to it:
Replication Settings - Timewarp

A major issue with replication in MySQL is that it has a tendency to fall behind. There are various tricks you can employ to help with this, and some big ones for Drupal specifically. First, if you are going to use the search module, you will need to stop the replication of the temporary tables it uses. These tables are replicated to the slave because they are technically volatile queries, but they are never used on the slave. They are therefore pure replication overhead, and because replication is single-threaded they block the slave's replication thread until they are finished. To stop them, add something like the following to the my.cnf on the slave (the table pattern is an example; check which temporary tables your search module actually creates):

replicate-wild-ignore-table=drupal.temp\_search\_%

where "drupal" is the name of the database containing your site. This simply tells MySQL to ignore all replicated commands pertaining to these tables, i.e. the CREATE TEMPORARY TABLE statements used by the search module, specifically in the do_search function.

Another major reason for replication falling behind is the Drupal cron. Drupal is missing some indexes on the tables that get swept by cron for cleansing. This usually doesn't matter but, as noted above, replication is single-threaded: going through these tables without indexes takes far too long and stalls the replication thread. The big problem tables here are history and watchdog, watchdog in particular. You will need to add an index to the timestamp columns of these tables or replication will eventually start stalling. For example:

ALTER TABLE watchdog ADD INDEX (timestamp);
ALTER TABLE history ADD INDEX (timestamp);
Failover - Preparing For Disaster

It is useful to be able to fail back and forth between your master and slave. You can do this by changing the actual site configuration, changing a DNS pointer, or using something like Linux-HA to get true IP-based failover. However you decide to implement it, failover is not an instant process, and there is the possibility (indeed the likelihood) that for a moment you will have processes writing to both databases. To prepare for this, you need to configure auto-increment offsets. These allow auto_increment columns on the slave and the master to choose different numbers and thus not conflict. You do this with an offset and an increment. For example, you might do this:

auto_increment_increment=2
auto_increment_offset=2

This means that the auto_increment columns all advance in blocks of 2, and this particular server offsets 2 into that block to choose its ID numbers. The other server would be set up in exactly the same way, but with a different offset, perhaps 1. This is not a perfect solution for Drupal, at least at the moment, but it helps significantly. Down the road this will become much more effective. As it is, I would avoid the DNS methods of failover, which prolong the "purgatory" period where both databases may be written to.
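The effect of these two settings can be illustrated with a short Python sketch. This is a simplification of how MySQL actually allocates auto-increment values, but it shows why two servers with the same increment and different offsets never hand out the same ID:

```python
def auto_increment_ids(offset, increment, count):
    # Each server only ever hands out values of the form
    # offset + n * increment, so two servers sharing an increment
    # but using different offsets occupy disjoint ID ranges.
    return [offset + n * increment for n in range(count)]

server_a = auto_increment_ids(2, 2, 5)  # offset 2: 2, 4, 6, 8, 10
server_b = auto_increment_ids(1, 2, 5)  # offset 1: 1, 3, 5, 7, 9
print(server_a, server_b)
assert not set(server_a) & set(server_b)  # no collisions
```

With a larger increment (say 10), up to ten servers could write concurrently without colliding, at the cost of sparser ID sequences.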
Replication Checking - Reporting

The other major issue with replication is that there is no data security. I mean absolutely none. If there is line noise or other stack errors, the DB will gracefully accept them and never question the data it's being given. For this reason, I highly recommend you check out Maatkit. A tool included in this kit is mk-table-checksum, which lets you run a checksum across the tables on the master and the slave. There are several ways to do this, one being just a straight-up run on both boxes and a comparison. However, you can also create a special table on the master and run a checksum there that is inserted into this table and replicated to the slave; the checksums are then checked on the slave as they replicate back. This tool is very cool and has uncovered a number of issues for me in the past. I recommend running it weekly via a cron job and having the results emailed to you if they are non-empty.

A note: because of slave lag, seeing differences in the session and search tables is fairly common.

A small note: you may want to define a report host on your slave and master. You do this by adding report-host=masterIP on the slave and report-host=slaveIP on the master, where masterIP is the master's IP address and likewise for slaveIP. This makes the boxes report their hostname and some other information to each other and allows you to query SHOW SLAVE HOSTS. While this seems semi-pointless, it ends up being quite useful.
Nagios

I highly recommend using Nagios to watch replication status. Having replication stop without your knowing is a huge issue, and it does tend to happen. There is a timeout that causes replication to stop if a query stalls. This is a good thing, but it can leave your slave very divergent and your users very confused. It is absolutely critical that someone be paged when this happens, so that they can assess the situation and either fail over to one server or restart replication.
Excluding some SQL statements from replication
Submitted by Dimitar (not verified) on Sun, 09/14/2008 - 00:45.

The main table in the master database is truncated every 48 hours. Is it possible to exclude the TRUNCATE statement from the replication process?

I mean that this statement should either be excluded from the binary logs, or the slave should be configured to ignore it.



Submitted by Ryan Lowe (not verified) on Wed, 07/30/2008 - 23:42.

Don't forget to use Cacti as a monitoring tool (good MySQL templates are available), which can monitor binary and relay log stats, network traffic, and replication, among many other things.

quick correction
Submitted by Anonymous (not verified) on Thu, 07/31/2008 - 08:27.

quick correction: it is mk-table-checksum not mysql-table-checksum

Submitted by nnewton on Thu, 07/31/2008 - 13:51.

Good point, thanks. I really dislike that they renamed everything.

How to Tunnel Remote Desktop Through SSH on a Windows Computer

How to Tunnel Remote Desktop

Through SSH on a Windows Computer

Why me and why now?

CAE has been charged with implementing the College of Engineering Network Security Policy. As part of the security measures, the College has set up a firewall, which blocks access to the College's network on certain ports.

Those wishing to access their office (or lab) computer can do so via "Windows Remote Desktop", although not directly.  The method described below provides a secure (encrypted via SSH) method to gain access to a remote desktop (computer) behind the College's firewall.  This procedure is called tunneling. For details on how to remotely connect to a CAE Desktop, see the CAE Remote Desktop page on the CAE web site.

What you need

Setting up PuTTY

  1. Start PuTTY (double-click on putty.exe). You will see a window similar to this one:

  2. Next, enable compression. Select SSH protocol level 2 as the default in the SSH subcategory for better security:

  3. Configure the "tunneling". In the example below, we are tunneling the remote desktop port on the local machine,
    through the gateway to the Remote Desktop port on the fictitious remote server
    "" (enter the name or IP address of your computer in place of this name).
    The name is resolved from the remote gateway machine, so it can be a hostname not visible to the user machine.
    Depending on your operating system, what you enter into "Source Port" may be
    different from the example shown:
    • Windows XP
    • Other Windows Platforms: 3389
    For more information on why this is necessary, see
    this page 

    • The source port is the port on the user machine to which you will address connections that you intend to have tunneled.
    • The destination defines a host and a port to which the remote gateway's sshd will connect incoming traffic from the user machine. When you click on
    • Add, the results are displayed like this:

  4. Go back to the Session subcategory, identify the gateway host's IP address or name (in the example below we used as the gateway, although it could be any computer with ssh allowed through the firewall), make sure that the SSH button is filled, name your session (in this case "Tunnel to my Remote Desktop") and save it:

    Whenever you need the tunnel to appear, you can start PuTTY and double-click that session.

Starting Remote Desktop

  1. Start PuTTY and then click on the session that you saved earlier;  this will start the SSH connection.

  2. Login to the gateway computer when prompted (in this case, the gateway computer is '') and when the login process is done, you can minimize the active PuTTY session (you don't need to type anything more, but you need to keep the program running).

  3. Start your Remote Desktop program as usual.  Instead of entering the name of the computer
    that you want to connect to, you must type in the address and port that Putty is forwarding to.
    Depending on your operating system, this may be different from what is shown in the example:
    • Windows XP:
    • Other Windows Platforms:
    This will connect you to the computer that was specified in PuTTY (in this case, the fictional computer

  4. Voila! You are now connected to your Remote Desktop computer through an SSH tunnel!

  5. After you are done using Remote Desktop, exit from the program as normal and then you may close the PuTTY program.

Linux Tips

Mike Chirico ( or (
Copyright (C) 2004 (GNU Free Documentation License) 
Last Updated: Sat Nov  7 10:07:03 EST 2009

The latest version of this document can be found at:
  or text version ( if you have trouble downloading the full document:
  over 140 pages )

For tips on Gmail with Postix and Fetchmail

For tips on using SQLite (over 25 pages)

For tips on MySQL reference:

For a recommended reading list

For tips on upgrading RedHat 9 or 8.0 to 2.6.x src kernel

For tips on Comcast Email with Home Linux Box

  **Note, if you want email notification after every 50 new tips have been
    added, then, click on the following link:

TIP 1:

     Is NTP Working?

     STEP 1 (Test the current server):

          Try issuing the following command:

          $ ntpq -pn

           remote refid st t when poll reach delay offset jitter
           =================================================== 16 u - 64 0 0.000 0.000 4000.00

          The above is an example of a problem.
          Compare it to a working configuration.

          $ ntpq -pn

           remote refid st t when poll reach delay offset jitter
           + 2 u 107 128 377 25.642 3.350 1.012
  10 l 40 64 377 0.000 0.000 0.008
           + 3 u 34 128 377 21.138 6.118 0.398
           * .USNO. 1 u 110 128 377 33.69 9.533 3.534

     STEP 2 (Configure the /etc/ntp.conf):

          $ cat /etc/ntp.conf

            # My simple client-only ntp configuration.
            # ping -a shows the IP address
            # which is used in the restrict below
            server # local clock
            fudge stratum 10
            driftfile /etc/ntp/drift
            restrict default ignore
            restrict mask
            authenticate no

     STEP 3 (Configure /etc/ntp/step-tickers):

          The values for server above are placed in the "/etc/ntp/step-tickers" file

          $ cat /etc/ntp/step-tickers


          The startup script /etc/rc.d/init.d/ntpd will grab the servers in this
          file and execute the ntpdate command as follows:

             /usr/sbin/ntpdate -s -b -p 8

          Why? Because if the time is off, ntpd will not start; the command above sets the
          clock first. If the system time deviates from the true time by more than 1000 seconds,
          the ntpd daemon will enter panic mode and exit.

     STEP 4 (Restart the service and check):

          Issue the restart command

            /etc/init.d/ntpd restart

          check the values for "ntpq -pn",
          which should match step 1.

             ntpq -pn


          Time is always stored in the kernel as the number of seconds since
          midnight of the 1st of January 1970 UTC, regardless of whether the
          hardware clock is stored as UTC or not.  Conversions to local time
          are done at run-time. So, it's easy to get the time in different
          timezones for only the current session as follows:

              $ export TZ=EST
              $ date
              Mon Aug  2 10:34:04 EST 2004

              $ export TZ=NET
              $ date
              Mon Aug  2 15:34:18 NET 2004
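The same session-local timezone switch can be done from a script. A minimal Python sketch (Unix only, since time.tzset() is not available on Windows):

```python
import os
import time

# Switch this process's timezone for the current session only,
# mirroring `export TZ=EST` at the shell (EST+5 = UTC minus 5 hours).
os.environ["TZ"] = "EST+5"
time.tzset()

# The epoch (0 seconds) is midnight, 1 Jan 1970 UTC, which is
# 19:00 on 31 Dec 1969 in EST.
t = time.localtime(0)
print(time.strftime("%a %b %d %H:%M:%S %Z %Y", t))
# → Wed Dec 31 19:00:00 EST 1969
```

The change affects only this process; the system clock itself still stores seconds since the epoch in UTC, exactly as described above.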

          The following are possible values for TZ:

              Hours From Greenwich Mean Time (GMT) Value Description
              0 GMT Greenwich Mean Time
              +1 ECT European Central Time
              +2 EET European Eastern Time
              +2 ART
              +3 EAT Saudi Arabia
              +3.5 MET Iran
              +4 NET
              +5 PLT West Asia
              +5.5 IST India
              +6 BST Central Asia
              +7 VST Bangkok
              +8 CTT China
              +9 JST Japan
              +9.5 ACT Central Australia
              +10 AET Eastern Australia
              +11 SST Central Pacific
              +12 NST New Zealand
              -11 MIT Samoa
              -10 HST Hawaii
              -9 AST Alaska
              -8 PST Pacific Standard Time
              -7 PNT Arizona
              -7 MST Mountain Standard Time
              -6 CST Central Standard Time
              -5 EST Eastern Standard Time
              -5 IET Indiana East
              -4 PRT Atlantic Standard Time
              -3.5 CNT Newfoundland
              -3 AGT Eastern South America
              -3 BET Eastern South America
              -1 CAT Azores

              DST timezone

              0      BST for British Summer.
              +400   ADT for Atlantic Daylight.
              +500   EDT for Eastern Daylight.
              +600   CDT for Central Daylight.
              +700   MDT for Mountain Daylight.
              +800   PDT for Pacific Daylight.
              +900   YDT for Yukon Daylight.
              +1000  HDT for Hawaii Daylight.
              -100   MEST for Middle European Summer,
                         MESZ for Middle European Summer,
                         SST for Swedish Summer and FST for French Summer.
              -700   WADT for West Australian Daylight.
              -1000  EADT for Eastern Australian Daylight.
              -1200  NZDT for New Zealand Daylight.

     The following is an example of setting the TZ environment variable with
     explicit rules for when the daylight saving changes go into effect.

               $ export TZ=EST+5EDT,M4.1.0/2,M10.5.0/2

      Take a look at the last field, "M10.5.0/2". What does it mean? Here is
      the relevant excerpt from the tzset(3) man page:

        Mm.w.d This  specifies  day  d (0 <= d <= 6) of week w (1 <= w <= 5) of
              month m (1 <= m <= 12).  Week 1 is the first week in which day d
              occurs and week 5 is the last week in which day d occurs.  Day 0
              is a Sunday.

              The time fields specify when, in the local time  currently  in
              effect, the  change  to  the  other  time  occurs.   If omitted,
              the default is  02:00:00.

       So this is what it means. M10 stands for October, the 5 means the last
       week in October in which the day occurs, and the 0 in M10.5.0/2 is
       Sunday: together, the last Sunday in October. To see that it is the
       fifth week, see the calendar below. The time change occurs at 2am.
       (Special Note: In 2007, DST was extended. See TIP 230).

                         Su Mo Tu We Th Fr Sa
                                         1  2
                          3  4  5  6  7  8  9
                         10 11 12 13 14 15 16
                         17 18 19 20 21 22 23
                         24 25 26 27 28 29 30
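       This transition rule can be checked directly with GNU date, since glibc
       honors the rules in TZ. A sketch, assuming GNU date and the 2004 rules
       above:

```shell
# With the rule M10.5.0/2, noon on Sat 2004-10-30 is still daylight
# time (EDT), while noon on Sun 2004-10-31 is back to standard (EST).
TZ='EST+5EDT,M4.1.0/2,M10.5.0/2' date -d '2004-10-30 12:00' +%Z   # EDT
TZ='EST+5EDT,M4.1.0/2,M10.5.0/2' date -d '2004-10-31 12:00' +%Z   # EST
```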

        Prove it. Take the following program, sunrise, which calculates
        sunrise and sunset for a given latitude and longitude. This program
        can be downloaded from the following location:

       Below is a bash script that will run the program for the next 100 days.

           #  program: next100days  Mike Chirico
           #  download:
           #  This will calculate the sunrise and sunset for
           #  latitude   39.95  Note must convert to degrees
           #  longitude  75.15  Note must convert to degrees
           lat=39.95
           long=75.15
           for (( i=0; i <= 100; i++ ))
           do
             sunrise `date -d "+$i day" "+%Y %m %d"` $lat $long
           done

       Take a look at the following sample output.

           $ export TZ=EST+5EDT,M4.1.0/2,M10.5.0/2
           $ ./next100days

          Sunrise  08-24-2004  06:21:12   Sunset 08-24-2004  19:43:42
          Sunrise  08-25-2004  06:22:09   Sunset 08-25-2004  19:42:12
          Sunrise  08-26-2004  06:23:06   Sunset 08-26-2004  19:40:41
          Sunrise  08-27-2004  06:24:03   Sunset 08-27-2004  19:39:09
          Sunrise  08-28-2004  06:25:00   Sunset 08-28-2004  19:37:37
          Sunrise  08-29-2004  06:25:56   Sunset 08-29-2004  19:36:04
          Sunrise  08-30-2004  06:26:53   Sunset 08-30-2004  19:34:31
          Sunrise  08-31-2004  06:27:50   Sunset 08-31-2004  19:32:57
          Sunrise  09-01-2004  06:28:46   Sunset 09-01-2004  19:31:22
          Sunrise  09-02-2004  06:29:43   Sunset 09-02-2004  19:29:47
          ..[values omitted ]
          Sunrise  10-28-2004  07:25:31   Sunset 10-28-2004  18:02:34
          Sunrise  10-29-2004  07:26:38   Sunset 10-29-2004  18:01:19
          Sunrise  10-30-2004  07:27:46   Sunset 10-30-2004  18:00:06
          Sunrise  10-31-2004  06:28:53   Sunset 10-31-2004  16:58:54
          Sunrise  11-01-2004  06:30:01   Sunset 11-01-2004  16:57:44
          Sunrise  11-02-2004  06:31:10   Sunset 11-02-2004  16:56:35

       Compare 10-30-2004 with 10-31-2004. Sunrise is an hour earlier because
       daylight saving time has ended, just as predicted.

       There is an easier way to switch between timezones. Take a look at the
       directory zoneinfo as follows:

            $ ls /usr/share/zoneinfo

            Africa      Chile    Factory    Iceland      Mexico    posix       UCT
            America     CST6CDT  GB         Indian       Mideast   posixrules  Universal
            Antarctica  Cuba     GB-Eire    Iran         MST       PRC         US
            Arctic      EET      GMT  MST7MDT   PST8PDT     UTC
            Asia        Egypt    GMT0       Israel       Navajo    right       WET
            Atlantic    Eire     GMT-0      Jamaica      NZ        ROC         W-SU
            Australia   EST      GMT+0      Japan        NZ-CHAT   ROK
            Brazil      EST5EDT  Greenwich  Kwajalein    Pacific   Singapore   Zulu
            Canada      Etc      Hongkong   Libya        Poland    SystemV
            CET         Europe   HST        MET          Portugal  Turkey

        TZ can be set to any one of these files. Some of these are directories
        and contain subdirectories, such as ./posix/America. This way you do
        not have to enter the timezone, offset, and DST range, since it has
        already been calculated.

           $ export TZ=:/usr/share/zoneinfo/posix/America/Aruba
           $ export TZ=:/usr/share/zoneinfo/Egypt
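        On glibc systems a bare zone name relative to /usr/share/zoneinfo
        works as well, so the full path is optional. A sketch (assumes the
        tzdata files are installed):

```shell
# These are equivalent ways to select a zone from the zoneinfo database.
TZ=:/usr/share/zoneinfo/UTC date +%Z     # prints UTC
TZ=UTC date +%Z                          # same, using the bare name
TZ=America/New_York date +%Z             # prints EST or EDT
```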


          Also see  (TIP 27).
          Also see  (TIP 103) using chrony, which is very similar to ntpd.
          Note time settings can usually be found in /etc/sysconfig/clock

TIP 2:

     cpio works like tar, only better.

     STEP 1 (Create two directories with data: ../dir1 and ../dir2)

          mkdir -p ../dir1
          mkdir -p ../dir2
          cp /etc/*.conf ../dir1/.
          cp /etc/*.cnf ../dir2/.

          This will back up all your .conf and .cnf files.

     STEP 2 (Piping the files to tar)

          cpio works like tar but can take input
          from the "find" command.

           $ find ../dir1/ | cpio -o --format=tar > test.tar
           $ find ../dir1/ | cpio -o -H tar > test2.tar

          Same command without the ">"

           $ find ../dir1/ | cpio -o --format=tar -F test.tar
           $ find ../dir1/ | cpio -o -H tar -F test2.tar

          Using append

           $ find ../dir1/ | cpio -o --format=tar -F test.tar
           $ find ../dir2/ | cpio -o --format=tar --append -F test.tar

     STEP 3 (List contents of the tar file)

          $ cpio -it < test.tar
          $ cpio -it -F test.tar

     STEP 4 (Extract the contents)

          $ cpio -i -F test.tar

TIP 3:

     Working with tar. The basics with encryption.

     STEP 1 (Using the tar command on the directory /stuff)

          Suppose you have a directory /stuff
          To tar everything in stuff to create a ".tar" file.

          $ tar -cvf stuff.tar stuff

          Which will create "stuff.tar".

     STEP 2 (Using the tar command to create a ".tar.gz" of /stuff)

          $ tar -czf stuff.tar.gz stuff

     STEP 3 (List the files in the archive)

          $ tar -tzf stuff.tar.gz
          $ tar -tf stuff.tar

     STEP 4 (A way to list specific files)

          Note, pipe the results to a file and edit

           $ tar -tzf stuff.tar.gz > mout

          Then, edit mout to only include the files you want

           $ tar -T mout -xzf stuff.tar.gz

          The above command will only get the files in mout.
          Of course, if you want them all

           $ tar -xzf stuff.tar.gz


          To create an encrypted archive with openssl:

           $ tar -zcvf - stuff|openssl des3 -salt -k secretpassword | dd of=stuff.des3

          This will create stuff.des3...don't forget the password you
          put in place of secretpassword. To decrypt and extract it:

            $ dd if=stuff.des3 |openssl des3 -d -k secretpassword|tar zxf -

     NOTE:  the "-" at the end tells tar to read the archive from
            stdin... this will extract everything.
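     On newer openssl releases the same pattern works with a stronger cipher
     and proper key derivation; -pbkdf2 requires openssl 1.1.1 or later, so
     treat this as a sketch rather than a drop-in replacement:

```shell
# Encrypt: tar to stdout, AES-256 with PBKDF2 key derivation.
tar -czf - stuff | openssl enc -aes-256-cbc -pbkdf2 -salt \
    -k secretpassword -out stuff.aes

# Decrypt and extract.
openssl enc -d -aes-256-cbc -pbkdf2 -k secretpassword -in stuff.aes | tar -xzf -
```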

TIP 4:

     Creating a Virtual File System and Mounting it with a Loopback Device.

     STEP 1 (Construct a 10MB file)

           $ dd if=/dev/zero of=/tmp/disk-image count=20480

          By default dd uses a block size of 512 bytes, so the size will be
          20480*512 bytes, or 10MB.

     STEP 2 (Make an ext2 or ext3 file system) -- ext2 shown here.

            $ mke2fs -q /tmp/disk-image

          or if you want ext3

           $ mkfs -t ext3 -q /tmp/disk-image

          yes, you can even use reiser, but you'll need to create a bigger
          disk image. Something like "dd if=/dev/zero of=/tmp/disk-image count=50480".

           $ mkfs -t reiserfs -q /tmp/disk-image

          Hit yes for confirmation.  It only asks this because it's a regular
          file rather than a block device.
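          The confirmation prompt can be skipped with -F, which forces mke2fs
          to work on a regular file; handy in scripts. A sketch (the image
          path is just an example):

```shell
# Build a small ext2 image without any prompting; no root needed
# for this step (root is only needed later, to mount it).
dd if=/dev/zero of=/tmp/disk-image count=20480 2>/dev/null
mke2fs -q -F /tmp/disk-image
file /tmp/disk-image      # reports an ext2 filesystem image
```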

     STEP 3 (Create a directory "virtual-fs" and mount. This has to be done as root)

           $ mkdir /virtual-fs
           $ mount -o loop=/dev/loop0 /tmp/disk-image /virtual-fs

         SPECIAL NOTE: if you mount a second device you will have to increase the
                       loop count: loop=/dev/loop1, loop=/dev/loop2, ... loop=/dev/loopn

          Now it operates just like a disk. This virtual filesystem can be mounted
          when the system boots by adding the following to the "/etc/fstab" file. Then,
          to mount, just type "mount /virtual-fs".

                 /tmp/disk-image /virtual-fs ext2               rw,loop=/dev/loop0 0 0

     STEP 4 (When done, umount it)

           $ umount /virtual-fs

     SPECIAL NOTE: If you are using Fedora core 2, in the /etc/fstab you can take
              advantage of acl properties for this mount. Note the acl next to the
              rw entry. This is shown here with ext3.

                 /tmp/disk-image     /virtual-fs ext3    rw,acl,loop=/dev/loop1 0 0

              Also, if you are using Fedora core 2 and above, you can mount the file
              on a cryptoloop.

                $ dd if=/dev/urandom of=disk-aes count=20480

                $ modprobe loop
                $ modprobe cryptoloop
                $ modprobe aes

                $ losetup -e aes /dev/loop0 disk-aes
                $ mkfs -t ext2 /dev/loop0
                $ mount -o loop,encryption=aes disk-aes <mount point>

              If you do not have Fedora core 2, then, you can build the kernel from source
              with some of the following options (not complete, yet)

                      Cryptographic API Support (CONFIG_CRYPTO)
                      generic loop cryptographic (CONFIG_CRYPTOLOOP)
                      Cryptographic ciphers (CONFIG_CIPHERS)
                      Enable one or more  ciphers  (CONFIG CIPHER .*) such as AES.

     HELPFUL INFORMATION: It is possible to bind mount partitions, or associate the
                     mounted partition to a directory name.

                  # mount --bind  /virtual-fs      /home/mchirico/vfs

             Also, if you want to see what filesystems are currently mounted, "cat" the
             file "/etc/mtab"

                  $ cat /etc/mtab

     Also see TIP 91.

TIP 5:

     Setting up 2 IP addresses on "One" NIC. This example is on ethernet.

     STEP 1 (The settings for the initial IP address)

           $ cat /etc/sysconfig/network-scripts/ifcfg-eth0


     STEP 2 (2nd IP address: )

           $ cat /etc/sysconfig/network-scripts/ifcfg-eth0:1


     SUMMARY  Note, in STEP 1 the filename is "ifcfg-eth0", whereas in
              STEP 2 it's "ifcfg-eth0:1". Also note the matching
              entries for "DEVICE=...".  And, obviously, the
              "IPADDR" is different as well.

TIP 6:

     Sharing Directories Among Several Users.

     Several people are working on a project in "/home/share"
     and they need to create documents and programs so that
     others in the group can edit and execute these documents
     as needed. Also see (TIP 186) for adding existing users
     to groups.

       $  /usr/sbin/groupadd share
       $  chown -R root.share /home/share
       $  /usr/bin/gpasswd -a <username> share
       $  chmod 2775 /home/share

       $  ls -ld /home/share
             drwxrwsr-x    2 root     share        4096 Nov  8 16:19 /home/share
                   ^---------- Note the s bit, which was set with the chmod 2775

       $  cat /etc/group
          ...          ^------- users are added to this group.

     The user may need to login again to get access. Or, if the user is currently
     logged in, they can run the following command:

       $ su - <username>

     Note, the above step is recommended over  "newgrp - share" since currently
     newgrp in FC2,FC3, and FC4 gets access to the group but the umask is not
     correctly formed.

     As root you  can test their account.

        $ su - <username>   "You need the '-' to pick up their environment:  '$ su - chirico' "

     Note: SUID, SGID, Sticky bit. Only the left-most octal digit is examined, and "chmod 755" is used
           as an example of the full command. But, anything else could be used as well. Normally
           you'd want executable permissions.

        Octal digit  Binary value      Meaning                           Example usage
            0           000       all cleared                             $ chmod 0755 or chmod 755
            1           001       sticky                                  $ chmod 1755
            2           010       setgid                                  $ chmod 2755
            3           011       setgid, sticky                          $ chmod 3755
            4           100       setuid                                  $ chmod 4755
            5           101       setuid, sticky                          $ chmod 5755
            6           110       setuid, setgid                          $ chmod 6755
            7           111       setuid, setgid, sticky                  $ chmod 7755

     A few examples applied to a directory below. In the first example all users in the group can
     add files to directory "dirA" and they can delete their own files. Users cannot delete other
     user's files.

        Sticky bit:
           $ chmod 1770  dirA

     Below files created within the directory have the group ID of the directory, rather than that
     of the default group setting for the user who created the file.

        Set group ID bit:
           $ chmod 2755  dirB
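      The mode bits can be verified with stat after each chmod. A sketch on
      scratch directories (GNU stat assumed):

```shell
# %a prints the octal mode, including the leading special-bit digit.
mkdir -p /tmp/dirA /tmp/dirB
chmod 1770 /tmp/dirA
chmod 2755 /tmp/dirB
stat -c '%a' /tmp/dirA    # prints 1770 (sticky bit set)
stat -c '%a' /tmp/dirB    # prints 2755 (setgid bit set)
```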

TIP 7:

     Getting Information on Commands

     The "info" is a great utility for getting information about the system.
     Here's a quick key on using "info" from the terminal prompt.

       'q' exits.
       'u' moves up to the table of contents of the current section.
       'n' moves to the next chapter.
       'p' moves to the previous chapter.
       'space' goes into the selected section.

      The following is a good starting point:

        $ info coreutils

      Need to find out what a certain program does?

        $ whatis  open
       open   (2)  - open and possibly create a file or device
       open   (3)  - perl pragma to set default PerlIO layers for input and output
       open   (3pm)  - perl pragma to set default PerlIO layers for input and output
       open   (n)  - Open a file-based or command pipeline channel

       To get specific information about the open command

        $ man 2 open

       also try 'keyword' search, which is the same as the apropos command.
       For example, to find all the man pages on selinux, type the following:

        $ man -k selinux

        or do a full-word search on the name, same as the whatis command.

        $ man -f <some string>

       This is a hint once you are inside man.

        space      moves forward one page
        b          moves backward
        y          scrolls up one line "yikes, I missed it!"
        g          goes to the beginning
        q          quits
         /<string>  search; repeat the search with n
        m          mark, enter a letter like "a", then, ' to go back
        '          enter a letter that is marked.

       To get section numbers

        $ man 8 ping

       Note the numbers are used as follows
         (This is OpenBSD)

         1  General Commands
         2  System Calls and Error Numbers
         3  C Libraries
         3p perl
         4  Devices and device drivers
         5  File Formats and config files
         6  Game instructions
         7  Miscellaneous information
         8  System maintenance
         9  Kernel internals

        To find the man page file directly, e.g. for the "ls" command:

         $ whereis -m ls
         ls: /usr/share/man/man1/ls.1.gz /usr/share/man/man1/ls.1 /usr/share/man/man1p/ls.1p

       To read this file directly, do the following:

         $ man /usr/share/man/man1/ls.1.gz

       If you want to know the manpath, execute manpath.

         $ manpath

TIP 8:

     How to Put a "Running Job" in the Background.

      You're running a job at the terminal prompt, and it's taking
      a very long time. You want to put the job in the background.

       "CTL - z" Temporarily suspends the job
       $ jobs     This will list all the jobs
       $ bg %jobnumber (bg %1)  To run in the background
       $ fg %jobnumber          To bring back in the foreground

     Need to kill all jobs -- say you're using several suspended
     emacs sessions and you just want everything to exit.

       $ kill -9  `jobs -p`

     The "jobs -p" gives the process number of each job, and the
      kill -9 kills everything. Yes, sometimes "kill -9" is excessive
      and you should issue a "kill -15" that allows jobs to clean up.
      However, for emacs sessions, I prefer "kill -9" and haven't had
      a problem.

     Sometimes you need to list the process id along with job
     information. For instance, here's process id with the listing.

       $ jobs -l

     Note you can also renice a job, or give it lower priority.

        $ nice -n +15 find . -ctime 2 -type f  -exec ls {} \; > last48hours
          (press ctl-z to suspend it)
        $ bg

      So above, ctl-z suspends the job, then bg runs it in the background.
      Now, if you want to lower the priority further, you just renice it,
      once you know the process id.

       $ jobs -pl
   [1]+ 29388 Running                 nice -n +15 find . -ctime 2 -exec ls -l {} \; >mout &

       $ renice +30 -p 29388
        29388: old priority 15, new priority 19

      19 was the lowest priority for this job. You cannot increase
      the priority unless you are root.
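       In a script the same pieces fit together without ctl-z: start the job
       with '&', grab its pid from $!, and wait for it. A sketch:

```shell
# Start a background job, capture its pid, optionally renice it,
# then block until it finishes.
sleep 2 &
pid=$!
jobs -l                                 # shows the running job with its pid
renice -n 10 -p "$pid" >/dev/null 2>&1  # lower priority; non-root can only go down
wait "$pid"
echo done
```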

TIP 9:

     Need to Delete a File for Good -- not even GOD can recover.

     You have a file "secret".  The following makes it so no one
     can read it.  If the file was 12 bytes, it's now 4096 after it
     has been over written 100 times.  There's no way to recover this.

       $ shred -n 100 -z secret

     Want to remove the file? Use the "u" option.

        $ shred -n 100 -z -u secret

     It can be applied to a device

       $ shred -n 100 -z -u /dev/fd0

        CAUTION: Note that shred relies on a very important assumption: that the file system overwrites
        data in place.  This is the traditional way to do things, but many modern file system designs
        do not satisfy this assumption.  The following are examples of file systems on which shred is
        not effective, or is not guaranteed to be effective in all file system modes:

       * log-structured or journaled file systems, such as those supplied with

              AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, etc.)

     Also see (TIP 52).
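      A quick scratch-file run shows the -u behaviour; this is safe to try
      since the file is disposable (the path is just an example):

```shell
# Overwrite a scratch file with 5 passes, zero it, then unlink it.
echo topsecret > /tmp/secret-demo
shred -n 5 -z -u /tmp/secret-demo
ls /tmp/secret-demo 2>/dev/null || echo gone    # prints gone
```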

TIP 10:

     Who and What is doing What on Your System - finding open sockets,
     files etc.

       $ lsof
          or as root
       $ watch lsof -i

     To list all open Internet files, use:

       $ lsof -i -U

      You can also get very specific about ports. Do this as root for
      low-numbered ports.

       $ lsof -i TCP:3306

     Or, look at UDP ports as follows:

       $ lsof -i UDP:1812

       (See TIP 118)

     Also try fuser. Suppose you have a mounted file-system, and you need
     to umount it. To list the users on the file-system /work

       $ fuser -u /work

     To kill all processes accessing the file system /work  in  any way.

       $ fuser -km /work

     Or better yet, maybe you want to eject a cdrom on /mnt/cdrom

       $ fuser -km /mnt/cdrom

     If you need IO load information about your system, you can execute
     iostat. But note, the very first iostat gives a snapshot since
     the last boot. You typically want the following command, which gives
     you 3 outputs every 5 seconds.

       $ iostat -xtc 5 3
       Linux 2.6.12-1.1376_FC3smp (       10/05/2005

       Time: 07:05:04 PM
       avg-cpu:  %user   %nice %system %iowait   %idle
                  0.97    0.06    1.94    0.62   96.41

       Time: 07:05:09 PM
       avg-cpu:  %user   %nice %system %iowait   %idle
                  0.60    0.00    1.70    0.00   97.70

       Time: 07:05:14 PM
       avg-cpu:  %user   %nice %system %iowait   %idle
                  1.00    0.00    1.60    0.00   97.39

     vmstat reports memory statistics. See tip 241 for vmstat for
     I/O subsystem total statistics.

       $ vmstat
       $ ifconfig
       $ cat /proc/sys/vm/.. (entries under here)

      *NOTE: (TIP 77) shows sample usage of "ifconfig". Also
       (TIP 84) shows sample output of "$ cat /proc/cpuinfo". You can download iostat
       and other packages from (
       You also may want to look at iozone (TIP 178).


       $ cat /proc/meminfo
       $ cat /proc/stat

       $ cat /proc/uptime
       1078623.55 1048008.34       First number is the number of seconds since boot.
                                   The second number is the number of idle seconds.

       $ cat /proc/loadavg
        0.25 0.14 0.10 1/166 7778   This shows the load average over 1, 5,
                                    and 15 minutes, then 1 currently running
                                    process out of 166 total. The 7778 is
                                    the last process id used.
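      Since /proc/loadavg is a single well-defined line, the fields can be
      read straight into shell variables. A sketch:

```shell
# Fields: 1-min, 5-min, 15-min load, running/total procs, last pid.
read one five fifteen procs lastpid < /proc/loadavg
echo "1-min load: $one  running/total: $procs  last pid: $lastpid"
```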

     Or current process open file descriptors

         $ ls -l /proc/self/fd
            lrwx------    1 chirico  chirico        64 Jun 29 13:17 0 -> /dev/pts/51
            lrwx------    1 chirico  chirico        64 Jun 29 13:17 1 -> /dev/pts/51
            lrwx------    1 chirico  chirico        64 Jun 29 13:17 2 -> /dev/pts/51
            lr-x------    1 chirico  chirico        64 Jun 29 13:17 3 -> /proc/26667/fd

      So you could, $ echo "stuff" > /dev/pts/51, to get output. Note, tree is also
      helpful here:

         $ tree /proc/self

            |-- auxv
            |-- cmdline
            |-- cwd -> /work/souptonuts/documentation/theBook
            |-- environ
            |-- exe -> /usr/bin/tree
            |-- fd
            |   |-- 0 -> /dev/pts/51
            |   |-- 1 -> /dev/pts/51
            |   |-- 2 -> /dev/pts/51
            |   `-- 3 -> /proc/26668/fd
            |-- maps
            |-- mem
            |-- mounts
            |-- root -> /
            |-- stat
            |-- statm
            |-- status
            |-- task
            |   `-- 26668
            |       |-- auxv
            |       |-- cmdline
            |       |-- cwd -> /work/souptonuts/documentation/theBook
            |       |-- environ
            |       |-- exe -> /usr/bin/tree
            |       |-- fd
            |       |   |-- 0 -> /dev/pts/51
            |       |   |-- 1 -> /dev/pts/51
            |       |   |-- 2 -> /dev/pts/51
            |       |   `-- 3 -> /proc/26668/task/26668/fd
            |       |-- maps
            |       |-- mem
            |       |-- mounts
            |       |-- root -> /
            |       |-- stat
            |       |-- statm
            |       |-- status
            |       `-- wchan
            `-- wchan

            10 directories, 28 files

     Need a listing of the system settings?

       $ sysctl -a

     Need IPC (Shared Memory Segments, Semaphore Arrays, Message Queue) status

       $ ipcs
       $ ipcs -l  "This gives limits"

     Need to "watch" everything a user does?  The following watches donkey.

       $ watch lsof -u donkey

      Or, to see what is going on in directory "/work/junk"

       $ watch lsof +D /work/junk

TIP 11:

     How to make a File "immutable" or "unalterable" -- it cannot be changed
     or deleted even by root. Note this works on (ext2/ext3) filesystems.
     And, yes, root can delete after it's changed back.

     As root:

       $ chattr +i filename

     And to change it back:

       $ chattr -i filename

     List attributes

       $ lsattr filename

TIP 12:

     SSH - How to Generate the Key Pair.

     On the local server

       $  ssh-keygen -t dsa -b 2048

     This will create the two files:

            .ssh/id_dsa (Private key)
             .ssh/  (Public key you can share)

     Next insert ".ssh/" on the remote server
     in the file  ".ssh/authorized_keys" and ".ssh/authorized_keys2"
     and change the  permission of each file to (chmod 600). Plus, make
     sure the directory ".ssh" exists on the remote computer with 700 rights.
     Ok, assuming is the remote server and "donkey" is the
     account on that remote server.

       $ ssh donkey@ "mkdir -p .ssh"
       $ ssh donkey@ "chmod 700 .ssh"
       $ scp ./.ssh/  donkey@

      Now connect to that remote server "" and add .ssh/
      to both "authorized_keys" and "authorized_keys2".  When done, set the
      permissions
        (This is on the remote server)

         $ chmod 600 .ssh/authorized_key*

     Next, go back to the local server and issue the following:

        $ ssh-agent $SHELL
        $ ssh-add

     The "ssh-add" will allow you to enter the passphrase and it will
     save it for the current login session.

     You don't have to enter a password when running "ssh-keygen" above. But,
     remember anyone with root access can "su - <username>" and then connect
     to your computers.  It's harder, however, not impossible, for root to do
     this if  you have a password.

     (Reference TIP 151)

TIP 13:

     Securing the System: Don't allow root to login remotely.  Instead,
     the admin could login as another account, then, "su -".  However,
     root can still login "from the local terminal".

     In the "/etc/ssh/sshd_config" file change the following lines:

        Protocol 2
        PermitRootLogin no
        PermitEmptyPasswords no

     Then, restart ssh

        /etc/init.d/sshd restart

      Why would you want to do this?  It makes it impossible for anyone to
      guess or keep trying the root password remotely.  This is especially
      good for computers on the Internet. So, even if the "root" password is
      known, they can't get access to the system remotely.  Only from the
      terminal, which is locked in your computer room. However, if anyone has
      an account on the server, then, they can login under their account and
      then "su -".

      Suppose you only want a limited number of users:  "mchirico" and "donkey".
      Add the following line to "/etc/ssh/sshd_config". Note, this allows access
      for mchirico and donkey, but everyone else is denied.

         #  Once you add AllowUsers - everyone else is denied.
         AllowUsers mchirico donkey

TIP 14:

     Keep Logs Longer with Less Space.

      Normally logs rotate monthly, overwriting all the old data.  Here's a
      sample "/etc/logrotate.conf" that will keep 12 months of backups,
      compressing the logfiles:

       $ cat /etc/logrotate.conf

             # see "man logrotate" for details
             # rotate log files weekly
             #chirico changes to monthly

             # keep 4 weeks worth of backlogs
             # keep 12 months of backup
             rotate 12

             # create new (empty) log files after rotating old ones

             # uncomment this if you want your log files compressed

             # RPM packages drop log rotation information into this directory
             include /etc/logrotate.d

             # no packages own wtmp -- we'll rotate them here
             /var/log/wtmp {
                 create 0664 root utmp
                 rotate 1

             # system-specific logs may be also be configured here.

       Note: see tip 1. The clock should always be correctly set.

TIP 15:

     What Network Services are Running?

          $ netstat -tanup

      or if you just want tcp services

           $ netstat -tanp

      Or, to pick out just the listening services:

           $ netstat -ap|grep LISTEN|less

     This can be helpful to determine the services running.

     Need stats on dropped UDP packets?

          $ netstat -s -u

     or TCP

          $ netstat -s -t

     or summary of everything

          $ netstat -s

     or looking for error rates on the interface?

          $ netstat -i

     Listening interfaces?

          $ netstat -l

     (Tip above provided by Amos Shapira)

     Also see TIP 77.
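      On current Linux distributions netstat comes from the deprecated
      net-tools package; ss from iproute2 covers the same ground. A rough
      mapping of the commands above:

```shell
# ss equivalents of the netstat commands above.
ss -tanp        # TCP sockets, numeric, with owning process (netstat -tanp)
ss -s           # protocol summary (netstat -s)
ss -l           # listening sockets (netstat -l)
```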

TIP 16:

     Apache: Creating and Using an ".htaccess" File

     Below is a sample ".htaccess" file which goes in
     "/usr/local/apache/htdocs/chirico/alpha/.htaccess" for this

         AuthUserFile /usr/local/apache/htdocs/chirico/alpha/.htpasswd
         AuthGroupFile /dev/null
         AuthName "Your Name and regular password required"
         AuthType Basic

          <Limit GET POST>

          require valid-user
          </Limit>

    In order for this to work /usr/local/apache/conf/httpd.conf must
    have the following line in it:

        <Directory /usr/local/apache/htdocs/chirico/alpha>
            AllowOverride FileInfo AuthConfig Limit
            Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
                Order allow,deny
                Allow from all

            <LimitExcept GET POST OPTIONS PROPFIND>
                Order deny,allow
                Deny from all
            </LimitExcept>
        </Directory>

    Also, a password file must be created

      $ /usr/local/apache/bin/htpasswd -c .htpasswd chirico

    And enter the user names and passwords.

    Next Reload Apache:

      $ /etc/init.d/httpd reload

    (Reference TIP 213 limit access to certain directories based on IP address).

TIP 17:

     Working with "mt" Commands: reading and writing to tape.

     The following assumes the tape device is "/dev/st0"

     STEP 1 ( rewind the tape)

          # mt -f /dev/nst0 rewind

     STEP 2 (check to see if you are at block 0)

          # mt -f /dev/nst0 tell
            At block 0.

     STEP 3 (Backup "tar compress"  directories "one"  and "two")

          # tar -czf /dev/nst0 one two

     STEP 4 (Check to see what block you are at)

           # mt -f /dev/nst0 tell

       You should get something like block 2 at this point.

     STEP 5 (Rewind the tape)

           # mt -f /dev/nst0 rewind

     STEP 6 (List the files)

           # tar -tzf /dev/nst0

      STEP 7 (Restore directory "one"  into directory "junk").  Note, you
           have to first rewind the tape, since the last operation moved
           ahead 2 blocks. Check this with "mt -f /dev/nst0 tell".

           # cd junk
           # mt -f /dev/nst0 rewind
           # mt -f /dev/nst0 tell
              At block 0.
           # tar -xzf /dev/nst0 one

     STEP 8 (Next, take a look to see what block the tape is at)

           # mt -f /dev/nst0 tell
              At block 2.

     STEP 9 (Now backup directories three  and four)

           # tar -czf /dev/nst0 three four

       After backing up the files, the tape should be past block 2.
       Check this.

           # mt -f /dev/nst0 tell
             At block 4.

          Currently the following exist on the tape:

                At block 0:  the tar archive of "one" and "two"

                At block 2:  the tar archive of "three" and "four"

                At block 4:
                    (* This is empty -- end of data *)

     A few notes. You can set the blocking factor and a label
     with tar. For example:

      $ tar --label="temp label" --create  --blocking-factor=128 --file=/dev/nst0 Notes

     But note if you try to read it with the default, incorrect blocking
     factor, then, you will get the following error:

        $ tar -t   --file=/dev/nst0
        tar: /dev/nst0: Cannot read: Cannot allocate memory
        tar: At beginning of tape, quitting now
        tar: Error is not recoverable: exiting now

     However this is easily fixed with the correct blocking factor

         $ mt -f /dev/nst0 rewind
         $ tar -t --blocking-factor=128 --file=/dev/nst0
         temp label

     Take advantage of the label command.

         $ MYCOMMENTS="Big_important_tape"
         $ tar --label="$(date +%F)+${MYCOMMENTS}" --create --file=/dev/nst0 Notes

     Writing to tape on a remote computer

         $ tar cvzf - ./tmp | ssh -l chirico <remotehost> '(mt -f /dev/nst0 rewind; dd of=/dev/st0 )'

     Restoring the contents from tape on a remote computer

         $ ssh -l chirico <remotehost> '(mt -f /dev/nst0 rewind; dd if=/dev/st0 )'|tar xzf -

     Getting data off of tape with dd command with odd blocking factor. Just set ibs very high

         $ mt -f /dev/nst0 rewind
         $ tar --label="Contents of Notes" --create  --blocking-factor=128 --file=/dev/nst0 Notes
         $ mt -f /dev/nst0 rewind
         $ dd ibs=1048576 if=/dev/st0 of=notes.tar

     The above will probably work with ibs=64k as well

        (Also see TIP 136)

TIP 18:

     Encrypting Data to Tape using "tar" and "openssl".

     The following shows an example of writing the contents of "tapetest" to tape:

        $ tar zcvf - tapetest|openssl des3 -salt  -k secretpassword | dd of=/dev/st0

     Reading the data back:

        $ dd if=/dev/st0|openssl des3 -d -k secretpassword|tar xzf -

TIP 19:

     Mounting an ISO Image as a Filesystem -- this is great if you don't have the DVD
         hardware but need to get at the data.  The following shows an example of
         mounting the Fedora Core 2 DVD image as a file.

         $ mkdir /iso0
         $ mount -o loop -t iso9660 /FC2-i386-DVD.iso  /iso0

     Or to mount automatically at boot, add the following to "/etc/fstab"

         /FC2-i386-DVD.iso /iso0     iso9660 rw,loop  0 0


TIP 20:

     Getting Information about the Hard drive and list all PCI devices.

                $ hdparm /dev/hda

                   multcount    = 16 (on)
                   IO_support   =  0 (default 16-bit)
                   unmaskirq    =  0 (off)
                   using_dma    =  1 (on)
                   keepsettings =  0 (off)
                   readonly     =  0 (off)
                   readahead    = 256 (on)
                   geometry     = 16383/255/63, sectors = 234375000, start = 0

            or for SCSI

                $ hdparm /dev/sda

            Try it with the -i option for information

                $ hdparm -i /dev/hda


                Model=IC35L120AVV207-1, FwRev=V24OA66A, SerialNo=VNVD09G4CZ6E0T
                Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
                RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=52
                BuffType=DualPortCache, BuffSize=7965kB, MaxMultSect=16, MultSect=16
                CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=234375000
                IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120}
                PIO modes:  pio0 pio1 pio2 pio3 pio4
                DMA modes:  mdma0 mdma1 mdma2
                UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5
                AdvancedPM=yes: disabled (255) WriteCache=enabled
                Drive conforms to: ATA/ATAPI-6 T13 1410D revision 3a:  2 3 4 5 6

            How fast is your drive?

                $ hdparm -tT /dev/hda

                Timing buffer-cache reads:   128 MB in  0.41 seconds =315.32 MB/sec
                Timing buffered disk reads:  64 MB in  1.19 seconds = 53.65 MB/sec

            Need to find your device?

                $ mount
                $ cat /proc/partitions
                $ dmesg | egrep '^(s|h)d'

                   which for my system lists:

                      hda: IC35L120AVV207-1, ATA DISK drive
                      hdc: Lite-On LTN486S 48x Max, ATAPI CD/DVD-ROM drive
                      hda: max request size: 1024KiB
                      hda: 234375000 sectors (120000 MB) w/7965KiB Cache, CHS=16383/255/63, UDMA(100)

             By the way, if you want to turn on dma

                 $ hdparm -d1 /dev/hda
                   setting using_dma to 1 (on)
                   using_dma    =  1 (on)

         (Also see TIP 122 )

     List all PCI devices

                $ lspci -v

          00:00.0 Host bridge: Intel Corp. 82845G/GL [Brookdale-G] Chipset Host Bridge (rev
                  Subsystem: Dell Computer Corporation: Unknown device 0160
                  Flags: bus master, fast devsel, latency 0
                  Memory at f0000000 (32-bit, prefetchable) [size=128M]
                  Capabilities: <available only to root>

              ... lots more ...

           Note, there is also lspci -vv for even more information.

          (Also see TIP 200)

TIP 21:

     Setting up "cron" Jobs.

     If you want to use the emacs editor for editing cron jobs, then
     set the following in your "/home/user/.bash_profile":

        export EDITOR=emacs

     Then, to edit cron jobs

        $ crontab -e

     You may want to put in the following header

        #MINUTE(0-59) HOUR(0-23) DAYOFMONTH(1-31) MONTHOFYEAR(1-12) DAYOFWEEK(0-6) Note 0=Sun and 7=Sun
        #14,15 10 * * 0   /usr/bin/somecommmand  >/dev/null 2>&1

     The sample "commented out command" will run at 10:14 and 10:15 every Sunday.  There will
     be no "mail" sent to the user because of the ">/dev/null 2>&1" entry.

        $ crontab -l

     The above will list all cron jobs. Or if you're root

        $ crontab -l -u <username>
        $ crontab -e -u <username>

     Reference "man 5 crontab":

        The time and date fields are:

                     field          allowed values
                     -----          --------------
                     minute         0-59
                     hour           0-23
                     day of month   1-31
                     month          1-12 (or names, see below)
                     day of week    0-7 (0 or 7 is Sun, or use names)

              A field may be an asterisk (*), which always stands for ``first-last''.

              Ranges of numbers are allowed.  Ranges are two numbers separated with a
              hyphen.   The  specified  range is inclusive.  For example, 8-11 for an
              ``hours'' entry specifies execution at hours 8, 9, 10 and 11.

              Lists are allowed.  A list is a set of numbers (or ranges) separated by
              commas.  Examples: ``1,2,5,9'', ``0-4,8-12''.

              Ranges can include "steps", so "1-9/2" is the same as "1,3,5,7,9".

     Note, you can run just every 5 minutes as follows:

              */5 * * * * /etc/mrtg/domrtg  >/dev/null 2>&1

     To run jobs hourly, daily, weekly or monthly you can add shell scripts into the
     appropriate directory:

           /etc/cron.hourly
           /etc/cron.daily
           /etc/cron.weekly
           /etc/cron.monthly

     Note that the above are pre-configured schedules set in "/etc/crontab", so
     if you want, you can change the schedule. Below is my /etc/crontab:

           $ cat /etc/crontab

           # run-parts
           01 * * * * root run-parts /etc/cron.hourly
           02 4 * * * root run-parts /etc/cron.daily
           22 4 * * 0 root run-parts /etc/cron.weekly
           42 4 1 * * root run-parts /etc/cron.monthly

TIP 22:

     Keeping Files in Sync Between Servers.

     The remote computer is "<remotehost>" and has the account "donkey".  You want
     to "keep in sync" the files under "/home/cu2000/Logs" on the remote computer
     with files on "/home/chirico/dev/MEDIA_Server" on the local computer.

       $ rsync -Lae ssh donkey@<remotehost>:/home/cu2000/Logs /home/chirico/dev/MEDIA_Server

     "rsync" is a convenient command for keeping files in sync, and as shown here will work
     through ssh.  The -L option tells rsync to treat symbolic links like ordinary files.


TIP 23:

     Looking up the Spelling of a Word.

        $ look <partial spelling>

     so the following will list all words that
     start with stuff

        $ look stuff

     It helps to have a large "linuxwords" dictionary.  You can download
     a much bigger dictionary from the following:


     Note: vim users can set up the .vimrc file with the following. Now when you type
       CTL-X CTL-T in insert mode, you'll get a thesaurus lookup.

           set dictionary+=/usr/share/dict/words
           set thesaurus+=/usr/share/dict/words

     Or, you can call aspell with the F6 key after putting the following entry in your
     .vimrc file

           :nmap <F6> :w<CR>:!aspell -e -c %<CR>:e<CR>

     Now, hit F6 when you're in vim, and you'll get a spell checker.

     There is also an X Windows dictionary that runs with the following command.

           $ gnome-dictionary

TIP 24:

     Find out if a Command is Aliased.

         $ type -all <command>


         $ type -all ls
            ls is aliased to `ls --color=tty'
            ls is /bin/ls

TIP 25:

     Create a Terminal Calculator

      Put the following in your .bashrc file

            function calc {
              echo "${1}" | bc -l;
            }

      Or, run it at the shell prompt. Now
      "calc" from the shell will work as follows:

            $ calc 3+45

      All functions  with a "(" or ")" must be enclosed
      in quotes.  For instance, to get the sin of .4

            $ calc "s(.4)"

          (See TIP 115 using the expr command)

TIP 26:

     Kill a User and All Their Current Processes.

        #  This program will kill all processes from a
        #  user.  The user name is read from the command line.
        #  This program also demonstrates reading a bash variable
        #  into an awk script.
        #  Usage: kill9user <user>
        kill -9 `ps aux|awk -v var=$1 '$1==var { print $2 }'`

    or, if you don't want to create the above script, the command
    below will kill the user "donkey" and all of his processes.

        $ kill -9 `ps aux|awk -v var="donkey" '$1==var { print $2 }'`
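    The "-v" technique used above -- reading a bash variable into an awk
    script -- can be tried safely on canned input before pointing it at live
    processes. The user names and numbers below are made up for the demo:

```shell
# demonstrate awk -v on fixed input: print the second field
# only on lines whose first field matches $target
target="alice"
printf 'alice 101\nbob 202\nalice 303\n' | \
    awk -v var="$target" '$1==var { print $2 }'
# prints: 101 and 303 (alice's lines only)
```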

    Check their cron jobs and "at" jobs, if you have a security issue.

          $ crontab -u <user> -e

    Lock the account:

          $ passwd -l <user>

    Remove all authorized_keys

          $ rm /home/user/.shosts
          $ rm /home/user/.rhosts
          $ rm -rf /home/user/.ssh
          $ rm /home/user/.forward

      or consider

          $ mv /home/user  /home/safeuser

    Change the shell

          $ chsh -s /bin/true <user>

    Do an inventory

          $ find / -user <user>  > list_of_user_files

    NOTE: Also see (TIP 10).

    To see all users except the current user, use the following. Do
    not use the dash: "ps -aux" is wrong, but the following is correct:

          $ ps aux| awk '!/'${USER}'/{printf("%s \n",$0)}'

     or (ww = wide, wide output)

          $ ps auwwx| awk '!/'${USER}'/{printf("%s \n",$0)}'

    The following codes may be useful:

       D    Uninterruptible sleep (usually IO)
       R    Running or runnable (on run queue)
       S    Interruptible sleep (waiting for an event to complete)
       T    Stopped, either by a job control signal or because it is being traced.
       W    paging (not valid since the 2.6.xx kernel)
       X    dead (should never be seen)
       Z    Defunct ("zombie") process, terminated but not reaped by its parent.

    For BSD formats and when the stat keyword is used, additional
       characters may be displayed:

       <    high-priority (not nice to other users)
       N    low-priority (nice to other users)
       L    has pages locked into memory (for real-time and custom IO)
       s    is a session leader
       l    is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
       +    is in the foreground process group

    Also see TIP 28. and TIP 89.

TIP 27:

     Format Dates for Logs and Files

         $ date "+%m%d%y %A,%B %d %Y %X"
             061704 Thursday,June 17 2004 07:13:40 PM

         $ date "+%m%d%Y"

         $ date -d '1 day ago' "+%m%d%Y"

         $ date -d '3 months 1 day  2 hour  15 minutes 2 seconds ago'

      or to go into the future remove the "ago"

         $ date -d '3 months 1 day  2 hour  15 minutes 2 seconds '

            Also the following works:

                $ date -d '+2 year +1 month -1 week  +3 day -8 hour +2 min -5 seconds'

      Quick question: If there are 100,000,000 stars in the visible sky, and you can
      count them, round the clock, at a rate of a star per second starting now, when
      would you finish counting?  Would you still be alive?

                $ date -d '+100000000 seconds'

      Sooner than you think!
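      The arithmetic behind that answer can be checked with bash integer
      math (a rough figure that ignores leap years):

```shell
# 100,000,000 seconds expressed in days and years
secs=100000000
days=$(( secs / 86400 ))    # 86400 seconds per day
years=$(( days / 365 ))
echo "$days days, about $years years"
# prints: 1157 days, about 3 years
```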

      This can be assigned to variables

         $ mdate=`date -d '3 months 1 day  2 hour  15 minutes 2 seconds ' "+%m%d%Y_%A_%B_%D_%Y_%X"  `
         $ echo $mdate
             09182004_Saturday_September_09/18/04_2004_09:40:41 PM
             ^---- Easy to sort   ^-------^----- Easy to read

      See TIP 28 below.

      See TIP 87 when working with large delta time changes -40 years, or -200 years ago, or even
      1,000,000 days into the future.

      Also see (TIP 1) for working with time zones.

      See TIP 240 for converting epoch seconds to local time.

TIP 28:

     Need Ascii Codes? For instance, for printing quotes:

                         awk 'BEGIN { msg = "Don\047t Panic!"; printf "%s \n",msg }'
                         awk 'BEGIN { msg = "Don\x027t Panic!"; printf "%s \n",msg }'

     It's better to use \047, because certain characters that follow \x027 may cause problems.

     For example, take a look at the following two lines. The first one prints a "}" caused
     by the extra D in \x027D, and so does not work as expected.

                      awk 'BEGIN {printf("The D causes problems \x027D\n")}'

            However, the line below works fine:

                      awk 'BEGIN {printf("The D does not cause problems \047D\n")}'

     Or if you wanted to use the date command in "awk" to print date.time.nanosecond.timezone for
     each line of a file "test".

     The following date can be used in awk because the single quotes are enclosed within the
     double quotes.

             date '+%m%d%Y.%H%M%S.%N%z'

       $ awk 'BEGIN { "date '+%m%d%Y.%H%M%S.%N%z'" | getline MyDate  } { print MyDate,$0 }' < test

     But it's also possible to replace  "+"  with  \x2B,  "%" with \x25, and "d" with \x64 as follows:

       $ awk 'BEGIN { "date \x27\x2B\x25m\x25\x64\x25Y.\x25H\x25M\x25S.\x25N\x25z\x27" | getline MyDate  } { print MyDate,$0 }' < test

             07062004.113820.346033000-0400 bob 71
             07062004.113820.346033000-0400 tom 43
             07062004.113820.346033000-0400 sal 34
             07062004.113820.346033000-0400 bob 89
             07062004.113820.346033000-0400 tom 66
             07062004.113820.346033000-0400 sal 99

     For this example it's not needed because single quotes are used inside of double quotes; however, there may be times when
     hex replacement is easier.

       $ man ascii

            Oct   Dec   Hex   Char           Oct   Dec   Hex   Char
            000   0     00    NUL '\0'       100   64    40    @
            001   1     01    SOH            101   65    41    A
            002   2     02    STX            102   66    42    B
            003   3     03    ETX            103   67    43    C
            004   4     04    EOT            104   68    44    D
            005   5     05    ENQ            105   69    45    E
            006   6     06    ACK            106   70    46    F
            007   7     07    BEL '\a'       107   71    47    G
            010   8     08    BS  '\b'       110   72    48    H
            011   9     09    HT  '\t'       111   73    49    I
            012   10    0A    LF  '\n'       112   74    4A    J
            013   11    0B    VT  '\v'       113   75    4B    K
            014   12    0C    FF  '\f'       114   76    4C    L
            015   13    0D    CR  '\r'       115   77    4D    M
            016   14    0E    SO             116   78    4E    N
            017   15    0F    SI             117   79    4F    O
            020   16    10    DLE            120   80    50    P
            021   17    11    DC1            121   81    51    Q
            022   18    12    DC2            122   82    52    R
            023   19    13    DC3            123   83    53    S
            024   20    14    DC4            124   84    54    T
            025   21    15    NAK            125   85    55    U
            026   22    16    SYN            126   86    56    V
            027   23    17    ETB            127   87    57    W
            030   24    18    CAN            130   88    58    X
            031   25    19    EM             131   89    59    Y
            032   26    1A    SUB            132   90    5A    Z
            033   27    1B    ESC            133   91    5B    [
            034   28    1C    FS             134   92    5C    \   '\\'
            035   29    1D    GS             135   93    5D    ]
            036   30    1E    RS             136   94    5E    ^
            037   31    1F    US             137   95    5F    _
            040   32    20    SPACE          140   96    60    `
            041   33    21    !              141   97    61    a
            042   34    22    "              142   98    62    b
            043   35    23    #              143   99    63    c
            044   36    24    $              144   100   64    d
            045   37    25    %              145   101   65    e
            046   38    26    &              146   102   66    f
            047   39    27    '              147   103   67    g
            050   40    28    (              150   104   68    h
            051   41    29    )              151   105   69    i
            052   42    2A    *              152   106   6A    j
            053   43    2B    +              153   107   6B    k
            054   44    2C    ,              154   108   6C    l
            055   45    2D    -              155   109   6D    m
            056   46    2E    .              156   110   6E    n
            057   47    2F    /              157   111   6F    o
            060   48    30    0              160   112   70    p
            061   49    31    1              161   113   71    q
            062   50    32    2              162   114   72    r
            063   51    33    3              163   115   73    s
            064   52    34    4              164   116   74    t
            065   53    35    5              165   117   75    u
            066   54    36    6              166   118   76    v
            067   55    37    7              167   119   77    w
            070   56    38    8              170   120   78    x
            071   57    39    9              171   121   79    y
            072   58    3A    :              172   122   7A    z
            073   59    3B    ;              173   123   7B    {
            074   60    3C    <              174   124   7C    |
            075   61    3D    =              175   125   7D    }
            076   62    3E    >              176   126   7E    ~
            077   63    3F    ?              177   127   7F    DEL

TIP 29:

     Need a WWW Browser for the Terminal Session? Try lynx or elinks.

         $ lynx

     Or to read all these tips, with the latest updates

      $ lynx

     Or, better yet elinks.

         $ elinks http://somepage.

     You can get elinks at the following site:


TIP 30:

    screen - screen manager with VT100/ANSI terminal emulation

         This is an excellent utility. But if you work a lot in Emacs,
         then, you should place the following in your ~/.bashrc

             alias s='screen -e^Pa -D -R'

         After logging in again (or sourcing .bashrc),
         type the following to load "screen":

             $ s

         If you're not using the alias command above, substitute
         CTL-a for CTL-p below:

             CTL-p CTL-c        To get a new session
             CTL-p "            To list sessions, and arrow keys to move
             CTL-p SHFT-A       To name sessions
             CTL-p S            To split screens
             CTL-p Q            To unsplit screens
             CTL-p TAB          To switch between screens
             CTL-p :resize n    To resize screen to n rows, on split screen

         Screen is very powerful.  Should you become disconnected, you can
         still resume work after logging in.

             $ man screen

         The above command will give you more information.

TIP 31:

     Need to Find the Factors of a Number?

           $ factor 2345678992
                2345678992: 2 2 2 2 6581 22277

     It's a quick way to find out if a number is prime

           $ factor 7867
                7867: 7867
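     That check can be wrapped in a small shell function ("is_prime" is just
     a name made up here): a prime's factor output lists exactly one factor
     after the colon.

```shell
# is_prime: succeeds if factor reports exactly one factor
# (NF-1 subtracts the "N:" label field from the field count)
is_prime() {
    [ "$(factor "$1" | awk '{print NF-1}')" -eq 1 ]
}

is_prime 7867 && echo "7867 is prime"
is_prime 7868 || echo "7868 is composite"
```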

TIP 32:

     Less is More -- piping to less to scroll backward and forward

      For large "ls" listings try the following, then use the arrow keys
      to move up and down the list.

           $ ls /some_large_dir/ | less


           $ cat some_large_file | less


           $ less some_large_file

TIP 33:

     C "indent" Settings for Kernel Development

           $ indent -kr -i8  program.c

TIP 34:

     FTP auto-login.  "ftp" to a site and have the password stored.

     For instance, here's a sample ".netrc" file in a user's home
     directory for uploading to sourceforge. Note, sourceforge will
     take any password, typically an email address, for the login "anonymous".

           $ cat ~/.netrc
               machine login anonymous password
               default login anonymous password user@site

     It might be a good idea to change the rights on this file

           $ chmod 0400 ~/.netrc

         #  Sample ftp automated script to download
         #  file to ${dwnld}
         cd ${dwnld}
         ftp << FTPSTRING
         prompt off
         cd /pub/usenet-by-group/news.answers/unix-faq/faq
         mget contents
         mget diff
         mget part*
         bye
         FTPSTRING

     Sourceforge uses an anonymous login with an email address as
     a password. Below is the automated script I use for uploading 
     binary files.

        # ftp sourceforge auto upload
        #   Usage: ./ <filename>

        # machine user anonymous
        ftp -n -u <<  FTPSTRING
        user anonymous
        cd incoming
        put ${1}
        bye
        FTPSTRING

      (Also see TIP 114 for ncftpget, which is a very powerful restarting
                            ftp program)

TIP 35:

     Bash Brace Expansion

           $ echo f{ee,ie,oe,um}
                fee fie foe fum

     This works with almost any command

           $ mkdir -p /work/junk/{one,two,three,four}
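     Brace expansion also handles numeric ranges, and ranges combine with
     prefixes and suffixes (the zero-padded form assumes bash 4 or later):

```shell
echo {1..5}              # prints: 1 2 3 4 5
echo img_{01..03}.png    # prints: img_01.png img_02.png img_03.png
echo file.txt{,.bak}     # prints: file.txt file.txt.bak
```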

TIP 36:

     Getting a List of User Accounts on the System

           $ cut -d: -f1 /etc/passwd | sort

     Note (Thanks to Philip Vanmontfort) you can also do the following:

           $ getent passwd|cut -d: -f1|sort

TIP 37:

     Editing a Bash Command

      Try typing a long command, say the one below; then type "fc" for
      an easy way to edit the command.

           $ find /etc -iname '*.cnf' -exec grep -H 'log' {} \;
           $ fc

      "fc" will bring the last command typed into an editor, "emacs" if
      that's the default editor. Type "fc -l" to list last few commands.

      To search for a command, try typing "CTL-r" at the shell prompt.
      "CTL-t" transposes characters -- say "sl" was typed but you want "ls".

      Hints when using "fc" in emacs:

           ESC-b     move one word backward
           ESC-f     move one word forward
           ESC-DEL   kill one word backward
           CTL-k     kill point to end
           CTL-y     un-yank killed region at point

TIP 38:

     Moving around Directories.

     Change to the home directory:
          $ cd ~
          $ cd

     To go back to the last directory
          $ cd -

     Instead of "cd" to a directory try "pushd", and you can see
     a list of directories.

          $ pushd /etc
          $ pushd /usr/local

      Then, to get back "popd" or "popd 1"

      To list all the directories pushed on the stack
      use the "dirs -v" command.

          $ dirs -v
           0  /usr/local
           1  /etc
           2  /work/souptonuts/documentation/theBook

      Now, if you "pushd +1" you will be moved to "/etc", since it
      is number "1" on the stack, and this directory will become the
      top of the stack.

          $ pwd
          $ pushd +1
          $ pwd

          $ dirs -v
           0  /etc
           1  /work/souptonuts/documentation/theBook
           2  /usr/local
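      The same stack walk can be run non-interactively as a quick sketch
      (the directories below were chosen only because they exist on most
      systems):

```shell
cd /tmp
pushd /usr > /dev/null     # stack: /usr /tmp
pushd /etc > /dev/null     # stack: /etc /usr /tmp
dirs -v                    # list the stack with index numbers
pushd +1 > /dev/null       # rotate: entry 1 (/usr) comes to the top
pwd                        # now in /usr
```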

TIP 39:

     Need an Underscore after a Variable?

       Enclose the variable in "{}".

          $ echo ${UID}_

       Compare to

          $ echo $UID_

       Also try the following:

                $ m="my stuff here"
                $ echo -e ${m// /'\n'}

TIP 40:

     Bash Variable Offset and String Operators

        $ r="this is stuff"
        $ echo ${r:3}
        $ echo ${r:5:2}

      Note, ${varname:offset:length}

         ${varname:?message}  If varname exists and isn't null, return its value; else,
                              print varname and the message.

           $ r="new stuff"
           $ echo ${r:? "that's r for you"}
               new stuff
           $ unset r
           $ echo ${r:? "that's r for you"}
               bash: r:  that's r for you

         ${varname:+word}    If varname exists and is not null, return word. Else, return null.

         ${varname:-word}    If varname exists and is not null, return its value. Else, return word.
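      A quick way to see ":-" and ":+" in action (the variable and words
      here are made up for the demo):

```shell
unset name
echo "${name:-guest}"     # name unset: prints the fallback "guest"
name="carol"
echo "${name:-guest}"     # name set: prints "carol"
echo "${name:+present}"   # name set: prints "present"
```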

      Working with arrays in bash - bash arrays.

           $ unset p
           $ p=(one two three)

           $ echo -e "${p[@]}"
           one two three


           $ echo -e "${p[*]}"
           one two three

           $ echo -e "${#p[@]}"
           3

           $ echo -e "${p[0]}"
           one

           $ echo -e "${p[1]}"
           two

            Also see (TIP 95)

TIP 41:

     Loops in Bash

      The command below loops through directories listed in $PATH.

          $ path=$PATH:
          $ while [ $path ]; do echo " ${path%%:*} "; path=${path#*:}; done

       The command below will also loop through directories in your path.

          $ for dir in $PATH
          > do
          > ls -ld $dir
          > done
              drwxr-xr-x    2 root     root         4096 Jun 10 20:16 /usr/local/bin
              drwxr-xr-x    2 root     root         4096 Jun 13 23:12 /bin
              drwxr-xr-x    3 root     root        40960 Jun 12 08:00 /usr/bin
              drwxr-xr-x    2 root     root         4096 Feb 14 03:12 /usr/X11R6/bin
              drwxrwxr-x    2 chirico  chirico      4096 Jun  6 13:06 /home/chirico/bin

     Other ways of doing loops:

        for (( i=1; i <= 20; i++ ))
        do
            echo -n "$i "
        done

     Note, to do it all on one line, do the following:

        $ for (( i=1; i <= 20; i++)); do echo -n "$i"; done

     Below, is an example of declaring i an integer so that you do not
     have to preface with let.

          $ declare -i i
          $ i=5;
          $ while (( $i > 1 )); do
          > i=i-1
          > echo $i
          > done

     You can also use "while [ $i -gt 1 ]; do"  in place of "while (( $i > 1 )); do"

     To get a listing of all declared values

          $ declare -i

     Try putting a few words in the file "test"

         $ while read filename; do echo  "- $filename "; done < test |nl -w1

     Or, using an array

                declare -a Array=(one two three)
                for i in `seq ${#Array[@]}`
                do
                  echo ${Array[$i-1]}
                done

         Also see (TIP 95 and TIP 133).

TIP 42:

     "diff" and "patch".

        You have created a program "prog.c", saved as this name and also copied
        to  "prog.c.old". You post "prog.c" to users.  Next, you make changes
        to prog.c

          $ diff -c prog.c.old prog.c > prog.patch

        Now, users can get the latest updates by running.

          $ patch < prog.patch

        By the way, you can make backups of your data easily.

          $ cp /etc/fstab{,.bak}

        Now, you do your edits to "/etc/fstab" and if you need
        to go back to the original, you can find it at "/etc/fstab.bak".

        Also consider sdiff with the -s option, which suppresses identical
        lines, to compare differences side-by-side between two files. An
        example is listed below.

          $ sdiff -s file1 file2

TIP 43:

     "cat" the Contents of Files Listed in a File, in That Order.

       SETUP (Assume you have the following)

              $ cat file_of_files
                  file1
                  file2
              $ cat file1
                  This is the data in file1

              $ cat file2
                  This is the data in file2

       So there are 3 files here: "file_of_files", which contains the names of
       the other files, in this case "file1" and "file2".  The contents of
       "file1" and "file2" are shown above.

               $ cat file_of_files|xargs cat
                    This is the data in  file1
                    This is the data in  file2

     Also see (TIP 44, TIP 62 and TIP 235).

TIP 44:

     Columns and Rows -- getting anything you want.

     Assume you have the following file.

        $ cat data
           1 2 3
           4 5
           6 7 8 9 10
           11 12
           13 14

     How do you get everything in 2 columns?

        $ cat data|tr ' ' '\n'|xargs -l2
           1 2
           3 4
           5 6
           7 8
           9 10
           11 12
           13 14

    Three columns?

        $ cat data|tr ' ' '\n'|xargs -l3
           1 2 3
           4 5 6
           7 8 9
           10 11 12
           13 14

    What's the row sum of the "three columns?"

        $ cat data|tr ' ' '\n'|xargs -l3|tr ' ' '+'|bc


        $ tr ' ' '\n' < data |xargs -l3|tr ' ' '+'|bc

    NOTE "Steven Heiner's rule":

             cat one_file | program

           can always be rewritten as

             program < one_file

   Note: thanks to Steven Heiner, the above can be
       shortened as follows:

               $ tr ' ' '\n' < data|xargs -l3|tr ' ' '+'|bc

          Need to "tr" from the stdin?

               $ tr "xy" "yx"| ... | ...

       But there is the "Stephane CHAZELAS" condition here:

         "Note that tr, sed, and awk may fail on files containing '\0';
          sed and awk have unspecified behaviors if the input
          doesn't end in a '\n' (or to sum up, cat works for
          binary and text files, text utilities such as sed or awk
          work only for text files)."

TIP 45:

     Auto Directory Spelling Corrections.

      To turn this on:

           $ shopt -s cdspell

      Now misspell a directory in the cd command.

           $ cd /usk/local
                   ^-------- still gets you to /usr/local

      What other options can you set? The following will list
      all the options:

           $ shopt -p

TIP 46:

     Record Everything Printed on Your Terminal Screen.

            $ script -a <filename>

     Now start doing stuff and "everything" is appended to <filename>.
     For example

            $ script installation

            $ (command)

            $ (result)

            $ ...

            $ ...

            $ (command)

            $ (result)

            $ exit

     The whole session log is in the installation file, which you can
     later read and/or clean up and add to your documentation.

     This command can also be used to redirect the contents to another user,
     but you must be root to do this.

     Step 1 - find out what pts they are using.

            $ w

     Step 2 - Run script on that pts. After running this command below
              everything you type will appear on their screen.

            $ script /dev/pts/4

     Thanks to Jacques.GARNIER-EXTERIEUR@EU.RHODIA.COM for his contribution
     to this tip.

     Also reference TIP 208.

TIP 47:

     Monitor all Network Traffic Except Your Current ssh Connection.

           $ tcpdump -i eth0 -nN -vvv -xX -s 1500 port not 22

       Or to filter out port 123 as well, capturing the full length of each
       packet (-s 0), use the following:

           $ tcpdump -i eth0 -nN -vvv -xX -s 0 port not 22  and port not 123

       Or to filter on only a certain host, say

           $ tcpdump -i eth0 -nN -vvv -xX  port not 22 and host

     If you just want IP addresses and a little bit of data, use
     this. The "-c 20" stops after 20 packets.

           $ tcpdump -i eth0 -nN  -s 1500 port not 22 -c 20

     If you're looking for signs of DOS attacks, the following shows just the SYN
     packets on all interfaces:

           $ tcpdump 'tcp[13] & 2 == 2'
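
     Why this works: byte 13 of the TCP header holds the flag bits
     (FIN=1, SYN=2, RST=4, PSH=8, ACK=16), and the filter tests only
     the SYN bit, so SYN-ACKs match too. Shell arithmetic shows the mask:

```shell
# A SYN-only packet has flags 0x02; a SYN-ACK has 0x12 (SYN|ACK)
echo $(( 0x02 & 2 ))   # prints 2: matches
echo $(( 0x12 & 2 ))   # prints 2: SYN-ACK also matches
# To match SYN packets with no ACK bit, use: tcpdump 'tcp[13] == 2'
```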

TIP 48:

     Where are the GNU Reference Manuals?


     Also worth a look is the "Linux Documentation Project"


     and Red Hat manuals


TIP 49:

     Setting or Changing the Library Path.

     The following contains the settings to be added or deleted


     After this file is edited, you must run the following:

           $ ldconfig

     See "man ldconfig" for more information.

TIP 50:

     Working with Libraries in C

     Assume the following 3 programs:

      $ cat ./src/test.c

         int test(int t)
         {
           return t;
         }

      $ cat ./src/prog1.c

          /* program: prog1.c
             dependencies: test.c

             compiling this program:
             gcc -o prog test.c prog1.c

             Note: the libpersonal include
             should be removed if NOT using the library. */

         #include <libpersonal.h>
         #include <stdio.h>
         int main(int argc, char **argv)
         {
           /* call the library function */
           printf("test(5) returns %d\n", test(5));
           return 0;
         }

      $ cat ./include/libpersonal.h

         extern int test(int);

     Prog1.c needs the test function in test.c.
     To compile so that both programs work together, do the following:

          $ cd src
          $ gcc -o prog test.c prog1.c -I../include

     However, if you want to create your own static library, then, run the following:

          $ mkdir -p ../lib
          $ gcc -c test.c  -o ../lib/test.o
          $ cd ../lib
          $ ar r libpersonal.a test.o
          $ ranlib libpersonal.a

     Or, the ar and ranlib commands can be combined as follows:

          $ ar rs libpersonal.a test.o

     To compile the program with the static library:

          $ cd ../src
          $ gcc -I../include -L../lib -o prog1 prog1.c -lpersonal

     The -I../include  tells  gcc to look in the ../include directory for
     libpersonal.h. and -L../lib, tells gcc to look for the
     "libpersonal.a" library.

           $ cd ..
           $ tree src lib include
           src
           |-- prog
           |-- prog1
           |-- prog1.c
           `-- test.c
           lib
           |-- libpersonal.a
           `-- test.o
           include
           `-- libpersonal.h

     This was a STATIC library. Often times you will want to use a SHARED
     or dynamic library.


     You must recompile test.c with -fpic option.

          $ cd ../lib
          $ gcc -c -fpic ../src/test.c -o test.o

     Next, create the shared library file.

          $ gcc -shared -o libpersonal.so test.o

     Now, compile the source prog1.c as follows:

          $ cd ../src
          $ gcc -Wl,-R../lib -L../lib -I../include -o prog2 prog1.c -lpersonal

     This should work fine. But, take a look at prog2 using the ldd command.

          $ ldd prog2
                libpersonal.so => ../lib/libpersonal.so (0x40017000)
                libc.so.6 => /lib/tls/libc.so.6 (0x42000000)
                /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

     If you move the program prog2 to a different location, it will not run.
     Instead you will get the following error:

           prog2: error while loading shared libraries: libpersonal.so:
                     cannot open shared object file: No such file or directory

     To fix this, you should specify the full path to the library, which in my
     case is rather long:

      $  gcc -Wl,-R/work/souptonuts/documentation/theBook/lib -L../lib -I../include -o prog2 prog1.c -lpersonal

     SPECIAL NOTE: The -R must always follow the -Wl (-Wl,-R<directory>); they always go together.

TIP 51:

     Actively Monitor a File and Send Email when Expression Occurs.

     This is a way to monitor "/var/log/messages" or any file for certain changes.
     The example below actively monitors "stuff" for the word "now"; as soon as
     "now" is added to the file, the contents of msg are sent to the user

          $ tail -f stuff | \
              awk ' /now/ { system("mail -s \"This is working\" < msg") }'

     Or, you can run a remote program, say one that gets headlines from Slashdot
     ("getslash.php"), under the account "chirico". Assuming you have ssh keys
     set up, the following will send mail from the output:

          $ ssh chirico@ "./bin/getslash.php"|mail -s "Slash cron Headlines"

     See (TIP 80) for scraping the headlines on Slashdot and how to get a copy of "getslash.php".  If you still
     want to use awk:

           $ ssh chirico@ "./bin/getslash.php"| \
                      awk '{ print $0 | "mail -s \x27 Slash Topics \x27 "}'

     Note the "\x27" is a quote.  Maybe you only want articles dealing with "Linux":

           $ ssh chirico@ "./bin/getslash.php"| \
                      awk '/Linux/{ print $0 | "mail -s \x27 Slash Topics \x27 "}'

     For $60, you can get a numeric display from "delcom engineering" that you can send messages and
     data to.  I get weather information off the internet and send it to this device.

     (Reference TIP 151 for ssh tips)

TIP 52:

     Need to Keep Secrets? Encrypt it.

      To Encrypt:

            $ openssl des3 -salt -in file.txt -out file.des3

      The above will prompt for a password, or you can put it in
      with a -k option, assuming you're on a trusted server.

      To Decrypt

            $  openssl des3 -d -salt -in file.des3 -out file.txt -k mypassword

      Need to encrypt what you type? Enter the following, then start typing
      and  ^D to end.

            $ openssl des3 -salt -out stuff.txt
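
      A full round trip, with a throwaway file and password made up for
      illustration, lets you verify the decrypted output matches:

```shell
# Encrypt, decrypt, and confirm the round trip is lossless
echo "my secret" > file.txt
openssl des3 -salt -in file.txt -out file.des3 -k mypassword
openssl des3 -d -salt -in file.des3 -out recovered.txt -k mypassword
cmp file.txt recovered.txt && echo "round trip OK"
```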

TIP 53:

     Check that a File has Not Been Tampered With: Use Cryptographic Hashing Function.

     The md5sum command is popular but dated:

              $ md5sum file

     Instead, use one of the following:

              $ openssl dgst -sha1 -c file

              $ openssl dgst -ripemd160 -c  file

     Each call gives a fixed-length string, or "message digest".

TIP 54:

     Need to View Information About a Secure Web Server? A SSL/TLS test.

           $ openssl s_client -connect

     Above will give a long listing of certificates.

     Note, it is also possible to get certificate information about a mail server

           $ openssl s_client -connect -showcerts

     When you do the above command you get two certificates. Copy and
     paste both certificates, taking the contents including the BEGIN
     and END lines shown below:

                 -----BEGIN CERTIFICATE-----
                 -----END CERTIFICATE-----

     Then create the files "comcast0.pem" and "comcast1.pem" from these certificates and
     put them in a directory "/home/donkey/.certs". Then, using the "c_rehash" script
     (found under "./tools" in the openssl source package), run

            $ c_rehash .certs
            Doing .certs
            comcast0.pem => 72f90dc0.0
            comcast1.pem => f73e89fd.0

     Now it's possible to have fetchmail work with these certs.

       # Sample .fetchmailrc file for Comcast
       # Check mail every 90 seconds
       set daemon 90
       set syslog
       set postmaster donkey
       #set bouncemail
       # Comcast email is zdonkey but computer account is  just donkey
       poll with proto POP3 and options no dns
              user 'zdonkey' with pass "somethin35"  is 'donkey' here options ssl sslcertck sslcertpath '/home/donkey/.certs'
       # currently not used
       mda '/usr/bin/procmail -d %T'


TIP 55:

     cp --parents. What does this option do?

     Assume you have the following directory structure

            |-- a
            |   `-- b
            |       |-- c
            |       |   `-- d
            |       |       |-- file1
            |       |       `-- file2
            |       `-- x
            |           `-- y
            |               `-- file3
            `-- newdir

     Issue the following command:

         $ cp --parents ./a/b/c/d/* ./newdir/

     Now you have the following:

            |-- a
            |   `-- b
            |       |-- c
            |       |   `-- d
            |       |       |-- file1
            |       |       `-- file2
            |       `-- x
            |           `-- y
            |               `-- file3
            `-- newdir
                `-- a
                    `-- b
                        `-- c
                            `-- d
                                |-- file1
                                `-- file2

     Note that you can't do this with "cp -r" because you'd pick up
     the x directory and its contents.

     You probably want to use the "cp --parents" command for directory
     surgery, where you need to be very specific about what you copy.
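
     The whole example above can be reproduced from scratch (directory
     and file names as in the tree shown):

```shell
# Build the sample tree, then copy d's files along with their path
mkdir -p a/b/c/d a/b/x/y newdir
touch a/b/c/d/file1 a/b/c/d/file2 a/b/x/y/file3
cp --parents ./a/b/c/d/* ./newdir/
ls newdir/a/b/c/d      # file1 and file2; the x branch was not copied
```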

TIP 56:

     Quickly Locating files.

     The "locate" command quickly searches the indexed database for files.  It just
     gives the name of the files; but, if you need more information use it as follows

         $ locate document|xargs ls -l

     The "locate" database may only get updated every 24 hours.  For more recent finds,
     use the "find" command.

TIP 57:

     Using the "find" Command.

     List only directories, at most 2 levels down, that have "net" in the name

       $ find /proc -maxdepth 2 -type d -iname '*net*'

     Find all *.c and *.h files starting from the current "." position.

       $ find . \( -iname '*.c'  -o -iname '*.h' \) -print

     Find all, but skip what's in "/CVS" and "/junk". Start from "/work"

       $ find /work \( -iregex '.*/CVS'  -o -iregex '.*/junk' \)  -prune -o -print

     Note -regex and -iregex work on the directory as well, which means
     you must consider the "./" that comes before all listings.

     Here is another example. Find all files except what is under the CVS, including
     CVS listings. Also exclude "#" and "~".

       $ find . -regex '.*' ! \( -regex '.*CVS.*'  -o -regex '.*[#|~].*' \)

     Find a *.c file, then run grep on it looking for "stdio.h"

       $ find . -iname '*.c' -exec grep -H 'stdio.h' {} \;
         sample output -->  ./prog1.c:#include <stdio.h>

                            ./test.c:#include <stdio.h>

     Looking for the disk-hog on the whole system?

       $ find /  -size +10000k 2>/dev/null

     Looking for files changed in the last 24 hours? Make sure you add the
     minus sign "-1", otherwise, you will only find files changed exactly
     24 hours from now. With the "-1" you get files changed from now back
     to 24 hours ago.

       $ find  . -ctime -1  -printf "%a %f\n"
       Wed Oct  6 12:51:56 2004 .
       Wed Oct  6 12:35:16 2004 How_to_Linux_and_Open_Source.txt

     Or if you just want files.

       $ find . -type f -ctime -1  -printf "%a %f\n"

     Details on file status change in the last 48 hours, current directory (also note "-atime -2").

       $ find . -ctime -2 -type f -exec ls -l {} \;

             NOTE: if you don't use -type f, you may get "." returned, which
             when run through "ls ." may list more than what you want.

             Also you may only want the current directory

       $ find . -maxdepth 1 -ctime -2 -type f -exec ls -l {} \;

     To find files modified within the last 5 to 10 minutes

       $ find . -mmin +5 -mmin -10 

     For more example "find" commands, reference the following looking
     for the latest version of "bashscripts.x.x.x.tar.gz":

     See "TIP 71" for examples of find using the inode feature. " $ find . -inum <inode> -exec rm -- '{}' \; "

     If you don't want error messages, redirect them with "> /dev/null 2>&1", or see
     "TIP 81".

TIP 58:

     Using the "rm" command.

     How do you remove a file that has the name "-".  For instance, if you run the command
     "$ cat > - " and type some text followed by ^d, how does the "-" file get deleted?

        $ rm -- -

     The "--" nullifies any rm options.

     How do you delete the directory "one", all its sub-directories, and any data?

        $ rm -rf ./one

     Note, to selectively delete stuff on a directory, use the find command "TIP 57".
     To delete by inode, see "TIP 71".

TIP 59:

     Giving ownership.

     How do you give the user "donkey" ownership to all directories and files under
     "./fordonkey" ?

          $ chown -R donkey ./fordonkey

TIP 60:

     Only Permit root login -- give others a message when they try to login.

     Create the file "/etc/nologin" with "nologin" containing the contents
     of the message.

TIP 61:

     Limits: file size, open files, pipe size, stack size, max memory size
             cpu time, plus others.

     To get a listing of current limits:

          $ ulimit -a
             core file size        (blocks, -c) 0
             data seg size         (kbytes, -d) unlimited
             file size             (blocks, -f) unlimited
             max locked memory     (kbytes, -l) unlimited
             max memory size       (kbytes, -m) unlimited
             open files                    (-n) 1024
             pipe size          (512 bytes, -p) 8
             stack size            (kbytes, -s) 8192
             cpu time             (seconds, -t) unlimited
             max user processes            (-u) 8179
             virtual memory        (kbytes, -v) unlimited

     Note, as a user you can decrease your limits in the current
     shell session; but you cannot increase them.  This can be ideal
     for testing programs.  But first you may want to create
     another shell "sh" so that you can go back to where you started.

          $ ulimit -f 10

     Now try

          $ yes >> out
             File size limit exceeded

     To set limits on users, make changes to "/etc/security/limits.conf"

           bozo   - maxlogins 1

     This will keep bozo from logging in more than once.

     To list hard limits:

          $ ulimit -Ha

     To list soft limits:

          $ ulimit -Sa
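
     Because a lowered limit sticks for the rest of the shell session,
     it is handy to experiment inside a child shell; a sketch:

```shell
# Lower the soft open-files limit only inside a child bash process
bash -c 'ulimit -S -n 64; ulimit -S -n'    # prints 64
# The parent shell's own limit is untouched
ulimit -S -n
```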

     To restrict user access by time or day, make changes to

     Also take a look at "/etc/profile" to see what other changes
     can be made, plus take a look under "/etc/security/*.conf" for
     other configuration files.

TIP 62:

     Stupid "cat" Tricks.

     Also see (TIP 43 and TIP 235).

     If you have multiple blank lines that you want to squeeze down to
     one line, then, try the following:

          $ cat -s <file>

     Want to number the lines?

          $ cat -n <file>

     Want to show tabs?

          $ cat -t <file>

     Need to mark ends of lines with "$"? The following was suggested by Amos Shapira:

          $ cat -e <file>

     Want to see all the ctl characters?

          /* ctlgen.c
             Program to generate ctl characters.

             Compile:

                gcc -o ctlgen ctlgen.c

             Run:

                ./ctlgen > mout

             Now see the characters:

                cat -v mout

             Here's a sample output:

                $ cat -v mout|tail
                    test M-v
                    test M-w
                    test M-x
                    test M-y
                    test M-z
                    test M-{
                    test M-|
                    test M-}
                    test M-~
                    test M-^?
          */

          #include <stdlib.h>
          #include <stdio.h>
          int main()
          {
            int i;

            for(i=0; i < 256; ++i)
              printf("test %c \n",i);

            return 0;
          }
TIP 63:

     Guard against SYN attacks and "ping".

     As root do the following:

          echo 1 > /proc/sys/net/ipv4/tcp_syncookies

     Want to disable "ping" ?

          echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

     Disable broadcast/multicast "ping" ?

          echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts

     And to enable again:

          echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all

TIP 64:

     Make changes to .bash_profile and need to update the current session?

       $ source .bash_profile

     With the above command, the user does not have to logout.

TIP 65:

     What are the Special Shell Variables?

        $#   The number of arguments.
        $@   All arguments, as separate words.
        $*   All arguments, as one word.
        $$   ID of the current process.
        $?   Exit status of the last command.
        $0,$1,..$9,${10},${11}...${N}    Positional parameters. After "9" you must use the ${k} syntax.

     Note that 0 is true. For example, if you execute the following test, which is true, you get zero.

         $  [[ -f /etc/passwd ]]
         $  echo $?
     And the following is false, which returns a 1.

         $  [[ -f /etc/passwdjabberwisnohere ]]
         $  echo $?

     So true=0 and false=1.
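
      This convention is what makes the && and || operators work; a small
      sketch with made-up file names:

```shell
# Exit status 0 ("true") runs the right side of &&;
# a nonzero status ("false") runs the right side of ||
touch present.txt
[[ -f present.txt ]] && echo "found it"     # status 0
[[ -f absent.txt ]]  || echo "not there"    # status 1 triggers ||
```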

     Sample program "mdo"  to show the difference between "$@" and "$*"

        function myarg
            echo "$# in myarg function"
        echo -e "$# parameters on the cmd line\n"
        echo -e "calling: myarg \"\$@\" and myarg \"\$*\"\n"
        myarg "$@"
        myarg "$*"
        echo -e "\ncalling: myarg \$@ and myarg \$* without quotes\n"
        myarg $@
        myarg $*

      The result of running "./mdo one two". Note that when quoted, myarg "$*",
      returns 1 ... all parameters are smushed together as one word.

            [chirico@third-fl-71 theBook]$ ./mdo one two
            2 parameters on the cmd line

            calling: myarg "$@" and myarg "$*"

            2 in myarg function
            1 in myarg function

            calling: myarg $@ and myarg $* without quotes

            2 in myarg function
            2 in myarg function

       Example program "mdo2" shows how the separator used by "$*" can be
       changed: quoted "$*" joins the parameters with the first character
       of IFS, so resetting IFS before each echo changes the output.

        IFS=' '; echo -e "$*\n"
        IFS=':'; echo -e "$*\n"
        IFS='-'; echo -e "$*\n"
        IFS='*'; echo -e "$*\n"

            [chirico@third-fl-71 theBook]$ ./mdo2 one two three four five
            one two three four five

            one:two:three:four:five

            one-two-three-four-five

            one*two*three*four*five

TIP 66:

     Replace all "x" with "y" and all "y" with "x" in file data.

        $ cat data
          x y
          y x

        $ tr "xy"  "yx" < data
          y x
          x y

TIP 67:

     On a Linux 2.6.x Kernel, how do you directly measure disk activity,
     and where is this information documented?

          o The information is documented in the kernel source

          o The new way of getting this info in 2.6.x is
              $ cat /sys/block/hda/stat
            151121 5694 1932358 796675 37867 76770 916994 8353762 0 800672 9150437

             Field  1 -- # of reads issued
                 This is the total number of reads completed successfully.
             Field  2 -- # of reads merged, field 6 -- # of writes merged
                 Reads and writes which are adjacent to each other may be merged for
                 efficiency.  Thus two 4K reads may become one 8K read before it is
                 ultimately handed to the disk, and so it will be counted (and queued)
                 as only one I/O.  This field lets you know how often this was done.
             Field  3 -- # of sectors read
                 This is the total number of sectors read successfully.
             Field  4 -- # of milliseconds spent reading
                 This is the total number of milliseconds spent by all reads (as
                 measured from __make_request() to end_that_request_last()).
             Field  5 -- # of writes completed
                 This is the total number of writes completed successfully.
             Field  7 -- # of sectors written
                 This is the total number of sectors written successfully.
             Field  8 -- # of milliseconds spent writing
                 This is the total number of milliseconds spent by all writes (as
                 measured from __make_request() to end_that_request_last()).
             Field  9 -- # of I/Os currently in progress
                 The only field that should go to zero. Incremented as requests are
                 given to appropriate request_queue_t and decremented as they finish.
             Field 10 -- # of milliseconds spent doing I/Os
                 This field increases so long as field 9 is nonzero.
             Field 11 -- weighted # of milliseconds spent doing I/Os
                 This field is incremented at each I/O start, I/O completion, I/O
                 merge, or read of these stats by the number of I/Os in progress
                 (field 9) times the number of milliseconds spent doing I/O since the
                 last update of this field.  This can provide an easy measure of both
                 I/O completion time and the backlog that may be accumulating.

       Note, this is device specific.
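
       The fields are whitespace-separated, so awk pulls them out easily;
       a sketch using the device name from the example above (substitute
       your own):

```shell
# Reads completed is field 1; sectors read is field 3 (512 bytes each)
awk '{ printf "%d reads, %.1f MB read\n", $1, $3 * 512 / 1048576 }' \
    /sys/block/hda/stat
```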

TIP 68:

     Passing Outbound Mail, plus Masquerading User and Hostname.

     Here's a specific example:

         How does one send and receive Comcast email from a home Linux box,
         which uses Comcast as the ISP, if the local account on the Linux
         box is different from the Comcast email.  For instance, the
         account on the Linux box is "chirico@third-fl-71" and the Comcast
         email account is "".  Note both the hostname and
         username are different.

         So, the user "chirico" using "mutt", "elm" or any email program would
         like to send out email to say ""; yet, donkey would
         see the email from "" and not "chirico@third-fl-71"
         but chirico@third-fl-71 would get the replies.

         For a full description of how to solve this problem, including related
         "", "site.config.m4", "genericstable", "genericsdomain",
         ".procmailrc", and ".forward" files,  reference the following:


         Included in the above link are instructions for building sendmail with
         "SASL" and "STARTTLS".

TIP 69:

     How do you remove just the last 2 lines from a file and save the result?

         $ sed  '$d' file | sed '$d' > savefile

     Or, as Amos Shapira pointed out, it's much easier with the head command.

         $ head -n -2 file

      And, of course, removing just the last line

         $ sed '$d' file > savefile

         (See REFERENCES (13))

     How do you remove extra spaces at the end of a line?

         $ sed 's/[ ]*$//g'

     How do you remove blank lines, or lines with just spaces and tabs,
     saving the original file as file.backup?

         $ perl -pi.backup -e "s/^(\s)*\n//"  file

     Or, you may want to remove empty spaces and tabs at the end of a line

         $ perl -pi.backup -e "s/(\s)*\n/\n/" file

     Or, you may want to convert dates of the format 01/23/2007 to the
     format 2007-01-23. This is MySQL's common date format.

         $ perl -pi.backup -e "s|(\d+)/(\d+)/(\d+)|\$3-\$1-\$2|" file

     Note, you need the backslash in \$3, etc., so as to not get bash shell
     interpolation.

TIP 70:

     Generating Random Numbers.

         $ od -vAn -N4 -tu4 < /dev/urandom

TIP 71:

     Deleting a File by its Inode Value.

       See (PROGRAMMING TIP 5) for creating the file, or

       $ cat > '\n\n\n\n\n\n\n'
         type some text

     To list the inode and display the characters.

       $ ls -libt *

     To remove by inode, note the "--" option.  This
     will keep any special characters in the file name from being
     interpreted as "rm" options.

       $ find . -inum <inode> -exec rm -- '{}' \;

     Or to check contents

       $ find . -inum <inode> -exec cat '{}' \;
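
     A full walk-through, with a deliberately awkward (made-up) file name:

```shell
# Create a file whose name starts with "-" (rm would read it as an option)
echo 'some text' > '-troublesome'
# Find its inode number (first column of ls -i)
ino=$(ls -i -- '-troublesome' | awk '{ print $1 }')
# Remove it by inode; "--" protects against option-like names
find . -inum "$ino" -exec rm -- '{}' \;
```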


TIP 72:

     Sending Attachments Using Mutt -- On the Command Line.

        $ mutt -s "See Attachment" -a file.doc < message.txt

          or to send just the attachment with an empty message body:

        $ echo | mutt -a sample.tar.gz


        Also see (TIP 51).

TIP 73:

     Want to find out what system calls a program makes?

         $ strace <program>

     Try this with "topen.c" (see PROGRAMMING TIP 5)

         $ strace  ./topen

TIP 74:

     RPM Usage Summary.

     Install. Full filename is needed.

         $ rpm -ivh Fedora/RPMS/postgresql-libs-7.4.2-1.i386.rpm

     To view list of files installed with a particular package.

         $ rpm -ql postgresql-libs

     Or, to get the file listing from a package that is not installed use the
     "-p" option.

         $ rpm -pql /iso0/Fedora/RPMS/libpcap-0.8.3-7.i386.rpm

    Note, you can also get specific listing. For example, suppose you
    want to view the changelog

         $ rpm -q --changelog audit
               * Tue Jan 13 2009 Steve Grubb <> 1.7.11-2
               - Add crypto event definitions

               * Sat Jan 10 2009 Steve Grubb <> 1.7.11-1
               - New upstream release

    Or, maybe you want to see what scripts are installed.

         $ rpm -q --scripts audit
               postinstall scriptlet (using /bin/sh):
               /sbin/chkconfig --add auditd
               preuninstall scriptlet (using /bin/sh):
               if [ $1 -eq 0 ]; then
                    /sbin/service auditd stop > /dev/null 2>&1
                    /sbin/chkconfig --del auditd
               fi
               postuninstall scriptlet (using /bin/sh):
               if [ $1 -ge 1 ]; then
                    /sbin/service auditd condrestart > /dev/null 2>&1 || :
               fi


     For dependencies listing, use the "R" option.

         $ rpm -qpR /iso0/Fedora/RPMS/libpcap-0.8.3-7.i386.rpm
               kernel >= 2.2.0
               rpmlib(CompressedFileNames) <= 3.0.4-1
               rpmlib(PayloadFilesHavePrefix) <= 4.0-1

     To check the integrity, use the "-K" option.

         $ rpm -K /iso0/Fedora/RPMS/libpcap-0.8.3-7.i386.rpm
               /iso0/Fedora/RPMS/libpcap-0.8.3-7.i386.rpm: (sha1) dsa sha1 md5 gpg OK

     To list all packages installed.

         $ rpm -qa

     To find out which package a file belongs to.

         $ rpm -qf /usr/lib/

     To view package information, including the source rpm. (See Tip 246 for more detail)

         $ rpm -qi sysstat

     To uninstall a package

         $ rpm -e <package>

     For building rpm packages reference the following:

     To verify md5 sum so that you know it downloaded ok

         $ rpm -K  *.rpm

     The following is a good reference:

TIP 75:

     Listing Output from a Bash Script.

     Add "set -x"

          set -x

     This will list the commands and their output as follows:

           + ls
           ChangeLog  CVS  data  test
           + date

           Thu Jul  1 20:41:04 EDT 2004
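
     The "+" trace goes to stderr, so it can be captured separately from
     the script's normal output; a quick sketch:

```shell
# set -x echoes each command (prefixed with +) to stderr
bash -c 'set -x; echo hello' 2>trace.log
cat trace.log        # + echo hello
```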

TIP 76:

     Using wget.

     Grab a webpage and pipe it to less. For example, suppose you wanted to page
     through the contents of all these tips, directly from the web.

      $ wget -O -|less

TIP 77:

     Finding IP address and MAC address.

       $ /sbin/ifconfig

     Note the following output "eth0" and "eth0:1" which means
     two IP addresses are tied to 1 NIC (Network Interface Card).

             eth0      Link encap:Ethernet  HWaddr 00:50:DA:60:5B:AD
                       inet addr:  Bcast:  Mask:
                       UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                       RX packets:982757 errors:116 dropped:0 overruns:0 frame:116
                       TX packets:439297 errors:0 dropped:0 overruns:0 carrier:0
                       collisions:0 txqueuelen:1000
                       RX bytes:693529078 (661.4 Mb)  TX bytes:78400296 (74.7 Mb)
                       Interrupt:10 Base address:0xa800

             eth0:1    Link encap:Ethernet  HWaddr 00:50:DA:60:5B:AD
                       inet addr:  Bcast:  Mask:
                       UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                       RX packets:982757 errors:116 dropped:0 overruns:0 frame:116
                       TX packets:439299 errors:0 dropped:0 overruns:0 carrier:0
                       collisions:0 txqueuelen:1000
                       RX bytes:693529078 (661.4 Mb)  TX bytes:78400636 (74.7 Mb)
                       Interrupt:10 Base address:0xa800

             lo        Link encap:Local Loopback
                       inet addr:  Mask:
                       UP LOOPBACK RUNNING  MTU:16436  Metric:1
                       RX packets:785 errors:0 dropped:0 overruns:0 frame:0
                       TX packets:785 errors:0 dropped:0 overruns:0 carrier:0
                       collisions:0 txqueuelen:0
                       RX bytes:2372833 (2.2 Mb)  TX bytes:2372833 (2.2 Mb)

TIP 78:

     DOS to UNIX and UNIX to DOS.

        $ dos2unix file.txt

     And to go the other way from UNIX to DOS

        $ unix2dos unixfile

     See the man page, since there are MAC options.

     NOTE: If you're working with DOS files, you'll probably want to use
           "zip" instead of "gzip" so users on Windows can unzip them.

              $ zip test.txt
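
     If dos2unix isn't installed, tr is a common fallback (the other
     direction needs sed, since tr can't insert characters):

```shell
# DOS lines end in \r\n; deleting the \r gives UNIX line endings
tr -d '\r' < file.txt > unixfile
# And back to DOS: append \r to the end of each line (GNU sed)
sed 's/$/\r/' unixfile > dosfile
```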

TIP 79:

     Need to Run Interactive Commands? Try "expect".

     This simple example waits for the input "hi", in some form before
     returning, immediately, "hello there!". Otherwise, it will wait for
     60 seconds, then, return "hello there!".

          #!/usr/bin/expect
          set timeout 60
          expect "hi\n"
          send "hello there!\n"


TIP 80:

     Using PHP as a Command Line Scripting Language.

     The following will grab the complete file from slashdot.

         #!/usr/bin/php -q
         <?php
         $fileName = "";
         $rss = file($fileName) or die ("Cannot open file $fileName\n");
         for ($index=0; $index < count($rss); $index++) {
              echo $rss[$index];
         }
         ?>

       Note, if you want an example that parses the XML of
       slashdot, then, download the following:


TIP 81:

     Discarding all output -- including stderr messages.

         $ ls  > /dev/null 2>&1

     Or sending all output to a file

         $ someprog > /tmp/file 2>&1

     Sometimes find displays a lot of errors when searching through
     directories that the user doesn't have access to. To discard
     error messages ("stderr", which is normally file descriptor "2"),
     use the following:

         $ find / -iname 'stuff' 2>/dev/null

          or to pipe results elsewhere

         $ find / -iname 'stuff' > /tmp/results_of_find  2>/dev/null

     Also see (TIP 118).
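
     Note that the order of redirections matters: "2>&1" duplicates
     whatever stdout points at in that moment, so it must come after
     the "> file" (the log file names below are made up):

```shell
# Correct: stdout goes to the file first, then stderr follows it
ls /no/such/dir > all.log 2>&1           # error lands in all.log

# Wrong order: stderr copies the terminal, *then* stdout moves
ls /no/such/dir 2>&1 > only_stdout.log   # error still hits the terminal
```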

TIP 82:

     Using MIX, D. Knuth's assembly language/machine-code instruction set, used in
     his books to illustrate his algorithms.

     Download the source:

       $ ./configure
       $ make
       $ make install

     Documentation can be found at the following link. The link on
     sourceforge is not correct, but, the one below works.

TIP 83:

     Gnuplot [ ].

     This software is ideal for printing graphs.

         gnuplot> set term png
         gnuplot> set output 'testcos.png'
         gnuplot> plot cos(x)*sin(x)
         gnuplot> exit

     Or the following command can be put into "file"

            $ cat > file
            set term png
            set output 'testcos.png'
            plot cos(x)*sin(x)

     Then, run as follows:

            $ gnuplot file

     Or, suppose you have the following file "/home/chirico/data". Comments
     with "#" are not read by gnuplot.

            # File /home/chirico/data
            2005-07-26  1    2.3    3
            2005-07-27  2    3.4    5
            2005-07-28  3    4    6.6
            2005-07-29  4    6    2.5

     And you have the following new "file"

            set term png
            set xdata time
            set timefmt "%Y-%m-%d "
            set format x "%Y/%m/%d"
            set output '/var/www/html/chirico/gnuplot/data.png'
            plot '/home/chirico/data' using 1:2 w linespoints  title '1st col', \
             '/home/chirico/data' using 1:3 w linespoints  title '2nd col', \
             '/home/chirico/data' using 1:4 w linespoints  title '3rd col'

     You can now get a graph of this data running the following:

            $ gnuplot file

TIP 84:

     CPU Information - speed, processor, cache.

            $ cat /proc/cpuinfo

               processor       : 0
               vendor_id       : GenuineIntel
               cpu family      : 15
               model           : 2
               model name      : Intel(R) Pentium(R) 4 CPU 2.20GHz
               stepping        : 9
               cpu MHz         : 2193.221
               cache size      : 512 KB
               fdiv_bug        : no
               hlt_bug         : no
               f00f_bug        : no
               coma_bug        : no
               fpu             : yes
               fpu_exception   : yes
               cpuid level     : 2
               wp              : yes
               flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr
               bogomips        : 4325.37

      "bogomips" is a rough but quick way to compare the speeds of two computers. True, it's a
      bogus reading, but it's a "good enough for government work" calculation.  See (TIP 10) for
      "vmstat" and "iostat".
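      Since /proc/cpuinfo is plain "key : value" text, awk can pull fields out of
      it. A minimal sketch; the here-doc sample below is made up so the example is
      self-contained, and on a real Linux box you would point the same awk at
      /proc/cpuinfo instead:

```shell
# Count "processor" entries and grab the model name from
# cpuinfo-style input.
summary=$(awk -F': *' '
    /^processor/  { n++ }
    /^model name/ { model = $2 }
    END { print n " cpu(s): " model }
' <<'EOF'
processor       : 0
model name      : Intel(R) Pentium(R) 4 CPU 2.20GHz
processor       : 1
model name      : Intel(R) Pentium(R) 4 CPU 2.20GHz
EOF
)
echo "$summary"
```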

TIP 85:

     POVRAY - Making Animated GIFs

     To see this in action, reference:

     These are the basic commands to create the animation:

        $ povray orbit.ini -Iorbit.pov
        $ convert -delay 20 *.ppm orbit.gif

     By the way, convert is a program from the ImageMagick package.

    The following is "orbit.pov"

          #include ""
          #include ""
          #include ""
          #include ""
          #include ""
          #include ""

          camera {
            location < 2, 3, -8 >
            look_at  < 0, 0, 0 >
            focal_point <0, 0, 0>
            blur_samples 20
          }

          light_source {
                  < 0, 10, 0>
                  color White
                  area_light <2,0,0>,<0,0,2>, 2, 2
                  adaptive 1
                  fade_distance 8
                  fade_power 1
          }

          sky_sphere {
                  pigment { color White }
          }

          plane { <0, 1, 0>, -1
                  texture {
                          pigment {
                                  checker color Blue, color White
                          }
                          finish {Phong_Glossy}
                  }
          }

          #declare ball0=
                  sphere {
                  <0.5, 0.5, 0>, 1
                  texture { pigment {Yellow} }
                  }

          #declare ball1=
                  sphere {
                  <3, 2, 0>, 0.5
                  texture { pigment {Blue} }
                  }

          #declare ball2=
                  sphere {
                  <3, 1, 0>, 1
                  texture { pigment {Green} }
                  }

          object {ball0 rotate 360*clock*y}
          object {ball1 rotate 720*clock*y}
          object {ball2 rotate 360*(1 - clock)*y}

    And, "orbit.ini" follows:






TIP 86:

     GPG --  GnuPG

                   (SCRIPT 4) on following link:

     Generate a key:

        $ gpg --gen-key

     Generate public key ID and fingerprint

        $ gpg --fingerprint

     Get a list of keys:

        $ gpg --list-keys

          pub  1024D/A11C1499 2004-07-15 Mike Chirico <>

          sub  1024g/E1A3C2B3 2004-07-15


        $ gpg -r Mike  --encrypt sample.txt

       This will produce "sample.txt.gpg", which is a binary file.  Note, I can use "Mike" because that's the
       name on the list of keys.

     Encrypt using "ASCII-armored text"  (--armor), which is probably what you want when sending "in" the body of an
     email, or some document.

        $ gpg -r Mike --encrypt --armor sample.txt
        $ gpg -r Mike -e -a sample.txt
        $ gpg --output somefile.asc -r Mike --encrypt --armor sample.txt

     The first two commands produce "sample.txt.asc"; the third writes "somefile.asc" instead. You can look at
     it, or "$ cat sample.txt.asc", without fear, since there are no binary characters. Yes, you could even
     compile a program "$ g++ -o test test.c", then "$ gpg --output test.asc -r Mike --encrypt --armor test".
     However, when decrypting, make sure to redirect the results:

            $ gpg --decrypt test.asc > test

     Export "public" key:

           $ gpg --armor --export Mike > m1.asc

     Signing the file "message.txt":

           $ gpg --clearsign message.txt

     Sending the key to the "key-server"

        First, list the keys.

                $ gpg --list-keys
                                 v------------------ Use this with "0x" in front -------
                  pub  1024D/A11C1499 2004-07-15 Mike Chirico  <>   |
                  sub  1024g/E1A3C2B3 2004-07-15                                        |
                $ gpg --send-keys 0xA11C1499

             The above sends it to the keyserver defined in "/home/chirico/.gnupg/gpg.conf".  Other key servers:


             When you go to your user-group meetings, you need to bring 2 forms of ID, and
             list your Key fingerprint. Shown below is the command for getting this fingerprint.

                $ gpg --fingerprint
                 pub   1024D/A11C1499 2004-07-15
                 Key fingerprint = 9D7F C80D BB7B 4BAB CCA4  1BE9 9056 5BEC A11C 1499
                 uid   Mike Chirico ( <>
                 sub   1024g/E1A3C2B3 2004-07-15

     Receiving keys:

        The following will retrieve my key

               $ gpg --recv-keys 0xA11C1499

     Special Note: If you get the error "gpg: Warning: using insecure memory", then
                   "chmod 4755 /path/to/gpg" to set setuid(root) permissions on the gpg binary.

     NOTE: If using mutt, just before sending with the "y" option, hit "p" to sign or encrypt.

     It's possible to create a gpg/pgp email from the command line. For a tutorial on this,
     reference (SCRIPT 4) at the following link:

TIP 87:

     Working with Dates: Steffen Beyer has developed a Perl and C module for working with dates

     This software can be downloaded from the following location:

         $ wget
         $ tar -xzvf Date-Calc-5.3.tar.gz
         $ cd Date-Calc-5.3
         $ cp ./examples/cal.c .
         $ gcc cal.c DateCalc.c -o mcal

     The file cal.c contains sample function calls from DateCalc.c.  Note, "DateCalc.c"
     is just a list of functions and includes for "DateCalc.h" and "ToolBox.h".

     Or, and this may be easier, just download the following:

     The above link contains a few examples.

TIP 88:

     Color patterns for mutt.

     The colors can be changed in the /home/user/.muttrc file. The first field begins with
     color, the second field is the foreground color, and the third field is the background
     color, or default.

     An example .muttrc for colors:

       # color patterns for mutt
       color normal     white          black # normal text
       color indicator  black          yellow  # actual message
       color tree       brightmagenta  default # thread arrows
       color status     brightyellow         default # status line
       color error      brightred      default # errors
       color message    magenta        default # info messages
       color signature  magenta        default # signature
       color attachment brightyellow   red     # MIME attachments
       color search     brightyellow   red     # search matches
       color tilde      brightmagenta  default # ~ at bottom of msg
       color markers    red            default # + at beginning of wrapped lines
       color hdrdefault cyan           default # default header lines
       color bold       red            default # hiliting bold patterns in body
       color underline  green          default # hiliting underlined patterns in body
       color quoted     cyan           default # quoted text
       color quoted1    magenta        default
       color quoted2    red            default
       color quoted3    green          default
       color quoted4    magenta           default
       color quoted5    cyan           default
       color quoted6    magenta        default
       color quoted7    red            default
       color quoted8    green          default
       color quoted9    cyan           default
       color body   cyan  default  "((ftp|http|https)://|news:)[^ >)\"\t]+"
       color body   cyan  default  "[-a-z_0-9.+]+@[-a-z_0-9.]+"
       color body   red   default  "(^| )\\*[-a-z0-9*]+\\*[,.?]?[ \n]"
       color body   green default  "(^| )_[-a-z0-9_]+_[,.?]?[\n]"
       color body   red   default  "(^| )\\*[-a-z0-9*]+\\*[,.?]?[ \n]"
       color body   green default  "(^| )_[-a-z0-9_]+_[,.?]?[ \n]"
       color index  cyan  default  ~F         # Flagged
       color index  red   default  ~N         # New
       color index  magenta    default  ~T         # Tagged
       color index  cyan       default  ~D         # Deleted

     Also see (TIP 190)

TIP 89:

     ps command in detail

     Here are the possible codes when using state "$ ps -e -o state,cmd"

                  D   uninterruptible sleep (usually IO)
                  R   runnable (on run queue)
                  S   sleeping
                  T   traced or stopped
                  Z   a defunct ("zombie") process

                  <    high-priority (not nice to other users)
                  N    low-priority (nice to other users)
                  L    has pages locked into memory (for real-time and custom IO)
                  s    is a session leader
                  l    is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
                  +    is in the foreground process group

    For instance:

     Note that -o selects a user-defined output format, and -e selects
     all processes.

       $ ps -e -o pid,state,start,time,etime,cmd

             9946 S 15:40:45 00:00:00    02:23:29 /bin/bash -i
             9985 T 15:41:24 00:00:01    02:22:50 emacs mout2
            10003 T 15:43:59 00:00:00    02:20:15 emacs NOTES
            10320 T 17:38:42 00:00:00       25:32 emacs stuff.c

     You may want the command below, without the -e, which will list only
     the processes under the current terminal.

       $ ps -o pid,state,start,time,etime,cmd

     Want to find what's impacting your load?

       $ ps -e -o %cpu,pid,state,start,time,etime,%cpu,%mem,cmd|sort -rn|less

       $ ps aux

            root         1  0.0  0.0  1380  480 ?        S    Aug04   0:00 init [3]
            root         2  0.0  0.0     0    0 ?        SWN  Aug04   0:00 [ksoftirqd/0]
            root         3  0.0  0.0     0    0 ?        SW<  Aug04   0:00 [events/0]
            root         4  0.0  0.0     0    0 ?        SW<  Aug04   0:00 [khelper]

     Or, if you want to see the environment, add the "e" option

       $ ps aeux

            chirico   2735  0.0  0.1  4400 1492 pts/0    S    Aug04   0:00 -bash USER=chirico LOGNAME=chirico HOME=/home/chirico PATH=/usr/
            chirico   2771  0.0  0.0  4328  924 pts/0    S    Aug04   0:00 screen -e^Pa -D -R HOSTNAME=third-fl-71.localdomain TERM=xterm S
            chirico   2772  0.0  0.6  9476 6352 ?        S    Aug04   0:54 SCREEN -e^Pa -D -R HOSTNAME=third-fl-71.localdomain TERM=xterm S
            chirico   2773  0.0  0.1  4432 1548 pts/1    S    Aug04   0:10 /bin/bash STY=2772.pts-0.third-fl-71 TERM=screen TERMCAP=SC|scre
            chirico   2797  0.0  0.1  4416 1496 pts/2    S    Aug04   0:00 /bin/bash STY=2772.pts-0.third-fl-71 TERM=screen TERMCAP=SC|scre
            root      2821  0.0  0.0  4100  952 pts/2    S    Aug04   0:00 su -
            root      2822  0.0  0.1  4384 1480 pts/2    S    Aug04   0:00 -bash
            chirico   2862  0.0  0.1  4428 1524 pts/3    S    Aug04   0:00 /bin/bash STY=2772.pts-0.third-fl-71 TERM=screen TERMCAP=SC|scre
            sporkey   2946  0.0  0.2  6836 2960 ?        S    Aug04   0:15 fetchmail
            chirico   2952  0.0  0.1  4436 1552 pts/5    S    Aug04   0:00 /bin/bash STY=2772.pts-0.third-fl-71 TERM=screen TERMCAP=SC|scre
            chirico   3880  0.0  0.1  4416 1496 pts/6    S    Aug05   0:00 /bin/bash STY=2772.pts-0.third-fl-71 TERM=screen TERMCAP=SC|scre
            root      3904  0.0  0.0  4100  956 pts/6    S    Aug05   0:00 su - donkey
            donkey    3905  0.0  0.1  4336 1452 pts/6    S    Aug05   0:00 -bash
            donkey    3938  0.0  0.2  6732 2856 ?        S    Aug05   0:14 fetchmail
            chirico   3944  0.0  0.1  4416 1496 pts/7    S    Aug05   0:00 /bin/bash STY=2772.pts-0.third-fl-71 TERM=screen TERMCAP=SC|scre

     There is also a -f "forest" option. Also note below " -bash" is the start of a login shell.

      $ ps aeuxwwf

     The ww option above gives a wide format with all variables. Use the above command if you plan
     to parse the output with a Perl script. Otherwise, it may be easier to do a quick read using
     the command below, without "ww".

      $ ps aeuxf

            root      2339  0.0  0.1  3512 1444 ?        S    Dec01   0:00 /usr/sbin/sshd
            root     25651  0.0  0.1  6764 1980 ?        S    Dec23   0:00  \_ /usr/sbin/sshd
            chirico  25653  0.0  0.2  6840 2236 ?        S    Dec23   0:14      \_ /usr/sbin/sshd
            chirico  25654  0.0  0.1  4364 1440 pts/4    S    Dec23   0:00          \_ -bash USER=chirico LOGNAME=chirico HOME=/home/chirico
            chirico  25690  0.0  0.0  4328  920 pts/4    S    Dec23   0:00              \_ screen -e^Pa -D -R HOSTNAME=third-fl-71.localdomain TERM=xterm
            root      2355  0.0  0.0  2068  904 ?        S    Dec01   0:00 xinetd -stayalive -pidfile /var/run/

     It is also possible to list processes by command name. For example, the following command will only list the emacs processes:

      $ ps -fC emacs
       UID        PID  PPID  C STIME TTY          TIME CMD
       chirico   5049  5020  0 May11 pts/13   00:00:00 emacs -nw Notes
       chirico  12368  5104  0 May12 pts/18   00:00:00 emacs -nw dnotify.c
       chirico  19792 18028  0 May13 pts/20   00:00:00 emacs -nw hello.c
       chirico  14034 27367  0 18:52 pts/8    00:00:00 emacs -nw How_to_Linux_and_Open_Source.txt

     You may also want to consider using top in batch mode. Here the "-n 1" means refresh once,
     and the "b" is for batch. The "fmt -s" is to put it in a more readable format.

       $ top -n 1 b |fmt  -s >>statfile
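     The ps output is also easy to post-process. Here is a hedged sketch that sums
     %MEM per user; the sample rows below are invented so the example is
     self-contained, and on a live system you would pipe "ps aux" straight into
     the same awk:

```shell
# Skip the header row, then add up column 4 (%MEM) per user (column 1).
mem=$(awk 'NR > 1 { m[$1] += $4 }
           END { for (u in m) printf "%s %.1f\n", u, m[u] }' <<'EOF'
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 1380 480 ? S Aug04 0:00 init
chirico 2735 0.0 0.2 4400 1492 pts/0 S Aug04 0:00 -bash
chirico 2771 0.0 0.3 4328 924 pts/0 S Aug04 0:00 screen
EOF
)
echo "$mem"
```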

TIP 90:

     Learning Assembly.

     Once you have written the source, assuming the file is "exit.s", it can be assembled and linked as follows:

            $ as exit.s -o exit.o
            $ ld exit.o -o exit

     Here is the program:

         #INPUT:  none
         #OUTPUT:         returns a status code. This can be viewed
         # by typing
         # echo $?
         # after running the program
         # %eax holds the system call number
         # (this is always the case)
         # %ebx holds the return status
                 .section .data
                 .section .text

                .globl _start
                _start:
                movl $1, %eax # this is the linux kernel command
                 # number (system call) for exiting
                 # a program
                 movl $0, %ebx # this is the status number we will
                 # return to the operating system.
                 # Change this around and it will
                 # return different things to
                 # echo $?
                 int $0x80 # this wakes up the kernel to run
                 # the exit command

     After running this program, you can get the exit code.

            $ echo $?

     That is about all it does; but, get the book for more details. The
     book is free.

TIP 91:

     Creating a sandbox for reiserfstune,debugreiserfs and ACL.  Also see TIP 4.

     Assume you have a reiserfs file system created from a disk file, which
     means you have done something like the following:

          # dd if=/dev/zero of=disk-rfs count=102400
          # losetup  /dev/loop4 ./disk-rfs
          # mkfs -t reiserfs /dev/loop4
          # mkdir /fs2
          # mount -o loop,acl ./disk-rfs /fs2

     Now, you can run reiserfstune. But, first you will need to unmount /fs2

          # umount /fs2
          # reiserfstune ./disk-rfs

     Or you can run the debug command

          # debugreiserfs -J ./disk-rfs

     Now, suppose you run through a lot of the debug options and you destroy this file.

     You can detach the loop device, recreate the file, rebuild the file system,
     and mount it again.

          # losetup -d /dev/loop4
          # dd if=/dev/zero of=disk-rfs count=102400
          # mkfs -t reiserfs -f ./disk-rfs
          # mount -o loop,acl ./disk-rfs /fs2

     Now, try working with some of the ACL options - you can only do this
     with the latest kernel and tools -- Fedora Core 2 will work.

     Assume you have 3 users, donkey, chirico and bozo2. You can give
     everyone rights to this file system as follows:

          # setfacl -R -m d:u:donkey:rwx,d:u:chirico:rwx,d:u:bozo2:rwx /fs2

TIP 92:

     SpamAssassin - Setup.

     Step 1.

           Installing the SpamAssassin CPAN utility. You will need to do this
           as root.
              $ su -

           Once you have root privileges invoke cpan.
              # perl -MCPAN -e shell


           Now install with prerequisites policy set to ask.
              cpan> o conf prerequisites_policy ask
              cpan> install Mail::SpamAssassin
           You will get lots of output as the necessary modules are downloaded and
           compiled and installed.

     Step 2.


            Edit the following "/etc/mail/spamassassin/"

            Here is a look at my file

                $ cat /etc/mail/spamassassin/

                # This is the right place to customize your installation of SpamAssassin.
                # See 'perldoc Mail::SpamAssassin::Conf' for details of what can be
                # tweaked.
                # rewrite_subject 0
                # report_safe 1
                # trusted_networks 212.17.35.
                # Below added from book
                # You may want to set this to 5, then, work your way down.
                # Currently I have this 3
                required_hits 3
                # This determines how spam is reported. Currently safe email is reported
                # in the message.
                report_safe 1
                # This will rewrite the subject line of spam messages.
                rewrite_subject 1
                # By default, SpamAssassin will run RBL checks.  If your ISP already
                # does this, set this to 1.
                skip_rbl_checks 0

     Step 3.

            Update .procmailrc.

            You should update the .procmailrc file as follows. Here is my /home/chirico/.procmailrc file.

                $ cat /home/chirico/.procmailrc

                #  Must have folder MailTRASH
                # Will get everything from this mail
                :0:
                * ^From:.*
                MailTRASH

                # Spamassassin
                :0fw
                * <300000
                | spamassassin


TIP 93:

     Make Graphs: using dot and neato.

       $ dot -Tpng dotfile -o myout.png

     To see the output reference the following:

     Where "dotfile" is the following:

       $ cat dotfile

       digraph g {
               node [shape = record];

               node0 [ label ="<f0> stuff | <f1> J | <f2> "];
               node1 [ label ="<f0> | <f1> E | <f2> "];
               node4 [ label ="<f0> | <f1> C | <f2> "];
               node6 [ label ="<f0> | <f1> I | <f2> "];
               node2 [ label ="<f0> | <f1> U | <f2> "];
               node5 [ label ="<f0> | <f1> N | <f2> "];
               node9 [ label ="<f0> | <f1> Y | <f2> "];
               node8 [ label ="<f0> | <f1> W | <f2> "];
               node10 [ label ="<f0> | <f1> Z | <f2> "];
               node7 [ label ="<f0> | <f1> A | <f2> "];
               node3 [ label ="<f0> | <f1> G | <f2> "];

               "node0":f0 -> "node1":f1;
               "node0":f2 -> "node2":f1;

               "node1":f0 -> "node4":f1;
               "node1":f2 -> "node6":f1;
               "node4":f0 -> "node7":f1;
               "node4":f2 -> "node3":f1;

               "node2":f0 -> "node5":f1;
               "node2":f2 -> "node9":f1;

               "node9":f0 -> "node8":f1;
               "node9":f2 -> "node10":f1;
       }

     Checkout the following article:

     To download this software

TIP 94:

     Makefile: working with conditions

     First note that all the indentations of the file must be
     a single tab. There cannot be any spaces, or make will
     not run.

       $ cat Makefile

        # Compiler flags
        sqliteLIB := $(shell ls /usr/local/lib/
        sqlite3LIB := $(shell ls /usr/local/lib/
        # all assumes sqlite and sqlite3 are installed

        all:
        ifeq ("$(sqlite3LIB)","/usr/local/lib/")
             @echo -e "True -- we found the file"
        else
             @echo "False -- we did not find the file"
        endif

     So, if I run make I will get the following output.

       $ make
       True -- we found the file

     This is because I have a file /usr/local/lib/ on my system.
     Note how the assignment is made, with the shell command

           sqlite3LIB := $(shell ls /usr/local/lib/
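     The same pattern can be tried without sqlite installed at all. This hedged
     sketch checks for a scratch file we create ourselves (all the /tmp names
     are invented for the demo); printf writes the literal tab that make
     requires before each recipe line:

```shell
# Generate a throwaway Makefile whose ifeq checks for a file
# we know exists, then run it.
touch /tmp/cond-demo.txt
printf 'FOUND := $(shell ls /tmp/cond-demo.txt 2>/dev/null)\n\nall:\nifeq ("$(FOUND)","/tmp/cond-demo.txt")\n\t@echo yes\nelse\n\t@echo no\nendif\n' > /tmp/cond-demo.mk
out=$(make -f /tmp/cond-demo.mk)
echo "$out"
```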

TIP 95:

     Bash: Conditional Expressions

          if [ -e /etc/ntp.conf ]
          then
              echo "You have the ntp config file"
          else
              echo "You do not have the ntp config file"
          fi

       Now using an AND condition inside the [ ]. By the way, above, you
       can put the "then" on the same line as the if "if [ -e /etc/ntp.conf ]; then"
       as long as you use the ";".

           if [ \( -e /etc/ntp.conf \) -a \( -e /etc/ntp/ntpservers \) ]
           then
               echo "You have ntp config and ntpservers"
             elif [ -e /etc/ntp.conf ]; then
               echo " You just have ntp.conf "
             elif [ -e /etc/ntp/ntpservers ]; then
               echo " You just have ntpservers "
             else
               echo " you have neither ntp.conf nor ntpservers"
           fi

        A few things to note above. The else-if statement is written as "elif", and when
        dealing with "(" you will need to escape it as "\(". By the way, "-o" (OR) can
        replace "-a" (AND). AND can be done as follows too.

           if [ -e /etc/ntp.conf ] && [ -e /etc/ntp/ntpservers ]
           then
               echo "You have ntp config and ntpservers"
             elif [ -e /etc/ntp.conf ]; then
               echo " You just have ntp.conf "
             elif [ -e /etc/ntp/ntpservers ]; then
               echo " You just have ntpservers "
             else
               echo " you have neither ntp.conf nor ntpservers"
           fi

          Conditional Expressions (files).

             -b file      True if file exists and is a block file
             -c file      True if file exists and is a character device file
             -d file      True if file exists and is a directory
             -e file      True if file exists
             -f file      True if file exists and is a regular file
             -g file      True if file exists and is set goup id
             -G file      True if owned by the effective group ID

             -k file      True if "sticky" bit is set and file exists
             -L file      True if file exists and is a symbolic link
             -n string    True if string is non-null

             -O file      True if file exists and is owned by the effective user ID

             -p file      True if file is a named pipe (FIFO)
             -r file      True if file is readable
             -s file      True if file has size > 0
             -S file      True if file exists and is a socket

             -t file      True if file is open and refers to a terminal.
             -u file      True if setuid bit is set
             -w file      True if file exists and is writeable
             -x file      True if file is executable
             -x dir       True if directory can be searched

             file1 -nt file2     True if file1 modification date newer than file2
             file1 -ot file2     True if file1 modification date older than file2
             file1 -ef file2     True if file1 and file2 have same inode

          Conditional Expressions (Integers).

             -lt  Less than
             -le  Less than or equal
             -eq  Equal
             -ge  Greater than or equal
             -gt  Greater than
             -ne  Not equal

          Example usage.

               while read num value; do
                if [ $num -gt  2 ]; then
                  echo $value
                fi
               done < somefile

          Conditional Expressions (Strings).

             str1 = str2      str1 matches str2
             str1 != str2     str1 does not match str2
             str1 < str2      str1 is less than str2
             str1 >  str2     str1 is greater than str2
             -n str1          str1 is not null (length greater than 0)
             -z str1          str1 is null (has length 0)
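          A short script exercising a couple of the tests above (the strings are
          arbitrary examples):

```shell
s1="apple"; s2="banana"
result=""
# -n : s1 is non-empty;  != : the two strings differ
if [ -n "$s1" ] && [ "$s1" != "$s2" ]; then
    result="non-empty and different"
fi
echo "$result"
```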

TIP 96:

     CVS: Working with cvs


       Creating a repository is normally done by the system admin. This
       is NOT creating a project to checkout, but the location where everything
       will be stored -- the initial repository!

             cvs -d repository_root_directory init

       Or here is a specific example:

             cvs -d /work/cvsREPOSITORY/   init

       Creating a directory tree from scratch. For a new project, the easiest thing to
       do is probably to create an empty directory structure, like this:

             $ mkdir sqlite_examples
             $ mkdir sqlite_examples/man
             $ mkdir sqlite_examples/testing

       After that, you use the import command to create the
       corresponding (empty) directory structure inside the repository:

             $ cd <directory>

             $ cvs -d repository_root_directory import  -m "Created directory structure" yoyodyne/dir yoyo start

       Or, here is a specific example.

             $  cd sqlite_examples
             $  cvs  -d /work/cvsREPOSITORY/ import -m 'test SQlite'  sqlite_examples sqlite_examples start

       Now, you can delete the directory sqlite_examples, or go to another directory and type
       the following:

             $ cvs -d /work/cvsREPOSITORY/ co sqlite_examples


           Two useful companion tools:

           1. cvsps
           2. cvsreport

         An example with cvsps, which you can find at the following link:

             $ cvsps -f README_sqlite_tutorial.html

TIP 97:

     Common vi and vim commands

           Command mode ESC

                dd       delete line
                u        undo
                y        yank (copy to buffer)
                p/P      put after cursor (p) / put before cursor (P)

                Ctl-g    show current line number
                shft-G   end of file
              n shft-G   move to line n

               /stuff/   search
                  n   repeat in same direction
                  N   repeat in opposite direction
                  /return  repeat search forward
                  ?return  repeat search backward

               "dyy  Yank current line to buffer d
               "a7yy Yank next 7 lines to buffer a
               :1,7ya a  Yank [ya] lines 1,7 to buffer a
               :1,7ya b  Yank [ya] lines 1,7 to buffer b

               :5 pu b   Put [pu] buffer b after line 5

               "dP   Put the content of buffer d before cursor
               "ap   Put the contents of buffer a after cursor

               :1,4 w! file2  Write lines 1,4 to file2

               :set nu     Display line numbers
               :set nonum  Turns off display

               :set ic     Ignore Case

               :e <filename> Edit a file in a new buffer

               :g/<reg exp>/p   Print matching regular expression

               :split <filename>
               :sp <filename>
               :split new

                    ctl-w w  To move between windows
                    ctl-w -  To change size
                    ctl-w v  Split windows vertically
                    ctl-w q  Close window

               :only       To view only 1 window

            vim dictionary - put the following command in ~/.vimrc

                   set dictionary+=/usr/share/dict/words
                   set thesaurus+=/usr/share/dict/words
               Now, start typing a word and press <ctl-x><ctl-k> to complete it;
               <ctl-n> moves forward through the matches and <ctl-p> moves back.


TIP 98:

     Using apt-get

          $ apt-get update
          $ apt-get -s install <package>    <---- if everything is ok, then remove the s

     Note you may want to use dpkg to purge if you have to do a reinstall.

          $ dpkg --purge exim4-base
          $ dpkg --purge exim4-config
          $ apt-get install exim4

          $ dpkg-reconfigure exim4-config

TIP 99:

     Mounting a cdrom on openbsd and installing packages

          $ mkdir -p /cdrom
          $ mount /dev/cd0a /cdrom
          $ cd /cdrom

     To add a package from the CD

          $ pkg_add -v <package>

     Mounting a cdrom on linux to a user's home sub-directory:

          $ mkdir -p /home/chirico/cdrom
          $ mount /dev/cdrom /home/chirico/cdrom

TIP 100:

    Creating a boot floppy for knoppix cd:

          $ dd if=/mnt/cdrom/KNOPPIX/boot.img of=/dev/fd0 bs=1440k


    For a lot of the knoppix how-to's

TIP 101:

    Diction and Style Tools for Linux

        $ diction mytext|less

    Or, this can be done interactively

        $ diction
        This is more text to read and you can do with it
        what you want.
        (stdin):1: This is more text to read and you [can -> (do not confuse with "may")] do with it what you want.

       Diction finds all sentences in a document that contain phrases from a
       database of frequently misused, bad, or wordy diction.  It further
       checks for double words.  If no files are given, the document is read
       from standard input.  Each phrase found is enclosed in [ ] (brackets).
       Suggestions and advice, if any, are printed headed by a right arrow ->.
       A sentence is a sequence of words that starts with a capitalised word
       and ends with a full stop, double colon, question mark, or exclamation
       mark.  A single letter followed by a dot is considered an abbreviation,
       so it does not terminate a sentence.  Various multi-letter abbreviations
       are recognized; they do not terminate a sentence either.

TIP 102:

    Using a mail alias.

       Suppose you want all root mail on your system to go to one account.

       In the following file:


       Add this line


       Next, run newaliases [/usr/bin/newaliases] as follows:

             $ newaliases

       Special note: It's possible to send mail to more than one address. Suppose you want
                     mail going to above, plus you want it going to user donkey
                     on the local system.

             root: donkey

TIP 103:

    Chrony - this service is similar to ntp. It keeps accurate time
            on your computer by syncing against a very accurate clock
            across a network with varying time delays.


        In the file "/etc/chrony/chrony.conf" add/replace the following


        Next start the chrony service

           $ /etc/init.d/chrony restart

     Next verify that this is working. It may take 20 or 30 minutes to update
     the clock.

     Shell command:
       # chronyc
       chronyc> sourcestats
       210 Number of sources = 3
       Name/IP Address            NP  NR  Span  Frequency   Freq Skew   Std Dev
        ========================================================================
                                    2   0    64       0.000    2000.000  4000ms
                                    2   0    66       0.000    2000.000  4000ms
        FS3.ECE.CMU.EDU             2   0    64       0.000    2000.000  4000ms

    It is probably best to let chrony do its work. However, if you want to
    set both the hardware and software clock, the following will work:

      Sets the hardware clock
        # hwclock --set --date="12/10/04 10:18:05"

      Sync the hardware clock to software
        # hwclock --hctosys

      Set the timezone -- pick one of the following
        # ln -sf /usr/share/zoneinfo/UTC /etc/localtime
        # ln -sf /usr/share/zoneinfo/US/Eastern /etc/localtime

      Set ZONE in /etc/sysconfig/clock


       or I use the following for my timezone 

          ZONE="America/New_York"

    Normally the system keeps accurate time with the software clock.

TIP 104:

    NFS mount

     SERVER (

       Make sure nfs is running on the server
           $ /etc/init.d/nfs restart

       At the server, the contents of /etc/exports allow
       2 computers ( and to access the home directory of
       this server. Note that read-write (rw) access is allowed.

          $ cat /etc/exports

       Or, if you have a lot of clients on 192.168.1.* then consider
       the following:
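
```shell
# Hypothetical /etc/exports entry for the whole 192.168.1.* network;
# the sync option is an assumption, rw comes from the tip above:
#
#   /home  192.168.1.0/24(rw,sync)
```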


       Next, still at the server, run the exportfs command

          $ exportfs -rv

       IPTABLES (lokkit). If you're using fedora with default lokkit firewall
                then you can put the following under "Other ports".

                  Other ports nfs:tcp nfs:udp

       If the above does not work or you are not using lokkit
        IPTABLES (values in /etc/sysconfig/iptables on SERVER )

        # NFS Need to accept fragmented packets and may not have header
        #             so you will not know where they are coming from
        -A INPUT -f -j ACCEPT
        -A INPUT -p tcp -m tcp -s -m multiport --dports 111,683,686,685,1026,2049,2219  -j ACCEPT
        -A INPUT -p tcp -s -d 0/0 --dport 32765:32768  -j ACCEPT
        -A INPUT -p udp -m udp -s -m multiport --dports 111,683,686,685,1026,2049,2219  -j ACCEPT
        -A INPUT -p udp -s -d 0/0 --dport 32765:32768  -j ACCEPT


     CLIENT1 (

          $ mkdir -p /home2

          $ cat /etc/fstab
                    /home2     nfs     rw 0 0

          $ mount -a -t nfs

        Or to do a one time mounting by hand

          $ mount -t nfs  /home2

        Now /home2 on the client will be /home on the server



        To monitor the client:

          $ nfsstat -c

           Also note you can "cat /proc/net/rpc/nfs" as well.

       To monitor the server (note the -s instead of the -c).

         $ nfsstat -s

           Also note you can "cat /proc/net/rpc/nfsd" as well.

       The following "cat" command is done on the NFS server, and shows which
       clients are mounting. This does not go with the examples above. By the way,
       "root_squash" is the default, and means that root access from the clients is
       denied. So, how does the client's root get access to these filesystems? You
       have to "su - <someuser>".

              $ cat /proc/fs/nfs/exports
              # Version 1.1
              # Path Client(Flags) # IPs

     (Reference: )

TIP 105:

      Ports used for Microsoft products

      To find out common port mappings, take a look at "/etc/services"

      To find an extensive list, reference

TIP 106:

      Man pages: If man pages are formatting incorrectly with PuTTY, try editing
      the "/etc/man.config" file with the following changes:

           NROFF /usr/bin/groff -Tlatin1 -mandoc
           NEQN /usr/bin/geqn -Tlatin1

         (Reference TIP 7 for using man)

TIP 107:

      Valgrind: check for memory leaks in your programs. (

       This is how you can run it on the program "a.out" for valgrind version 2.2.0

         $ valgrind --logfile=valgrind.output   --tool=memcheck ./a.out

       This is how you specify the logfile ("--log-file") for valgrind-3.0.1

         $ valgrind  --log-file=valgrind --leak-check=yes --tool=memcheck ./a.out

       With C++ programs built with gcc 3.4 and later that use the STL, export
       GLIBCXX_FORCE_NEW only when testing, to disable the allocator's memory
       caching. Remember to unset it for production, as it carries a performance
       penalty. Reference 

TIP 108:

      Runlevel Configuring.

        The program ntsysv, run as root, gives you an ncurses interface to
        choose what will run on your system at boot. The chkconfig program
        (man chkconfig) can list which programs are set to start in the
        chosen run level.

            # ntsysv

            # chkconfig 

       If at this moment you want to see what services are currently running,
       then, run the following command:

            # /sbin/service --status-all

       Note, you can also set these manually. For example, normally you will
       have files in "/etc/init.d/" that will take parameters like "start","stop"

        Take a look at "/etc/init.d/mysql"; this file will start and stop the
        mysql daemon. So, how does init know which run levels to use, and the
        order it gets loaded in relative to other programs? By the K<number>
        and S<number> prefixes.


            $ ls /etc/rc3.d/*mysql


       So here on my system the start value is 85. Looking in /etc/rc3.d, which is
       run level 3, any program with a lower number S84something will get loaded
       before mysql.

       I manually set the run level as follows for mysql.

            # cd /etc/rc3.d
            # ln -s ../init.d/mysql S85mysql
            # ln -s ../init.d/mysql K85mysql

            # cd /etc/rc5.d
            # ln -s ../init.d/mysql S85mysql
            # ln -s ../init.d/mysql K85mysql

        Note that I could have chosen other numbers as well. "ntsysv" gives
        you a graphical interface.

        This is how to do the same thing with "chkconfig" at the command prompt.

            # chkconfig --list mysqld
            mysqld          0:off   1:off   2:off   3:on    4:off   5:on    6:off

       Above you can see it's on. Here's how we would have turned this on with chkconfig.

            # chkconfig --level 35 mysqld on
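
        The S<number> ordering described above is purely lexical, which a
        plain sort demonstrates (the link names below are invented for
        illustration):

```shell
# Init runs the S-links in lexical order, so lower numbers start first.
# S84apache sorts (and therefore starts) before S85mysql.
printf '%s\n' S85mysql S84apache S99local | sort | head -n1
```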


TIP 109:

       File Alteration Monitor - Gamin a FAM replacement
       ******  EXAMPLE NOT COMPLETE *****

       Working with fam  - file alteration monitor.  Mail uses this to signify
           a change in a file's status.

        Below is the sample C program ftest.c which can be compiled as

              $ gcc -o ftest ftest.c  -lfam

        You will need to work with this as root

              #  ./ftest <somefile absolute path>


TIP 110:

       glibc - this is the main library used by C, and the following
        link below gives you examples on everything from sockets,math,
        date and time functions, user environment, and much more.

       How do you know which version of glibc you are running?

          #include <stdio.h>
          #include <gnu/libc-version.h>
          int main (void)
          {
            puts (gnu_get_libc_version ());
            return 0;
          }

       Thanks to Jorg Esser for pointing this out: there is a way to get the
       GNU C library version directly, by running the library itself as if it
       were a command.

 [chirico@v0 ~]$ /lib/

 GNU C Library stable release version 2.7, by Roland McGrath et al.
 Copyright (C) 2007 Free Software Foundation, Inc.
 This is free software; see the source for copying conditions.
 There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
 Compiled by GNU CC version 4.1.2 20070925 (Red Hat 4.1.2-32).
 Compiled on a Linux >>2.6.20-1.3001.fc6xen<< system on 2007-10-18.
 Available extensions:
  The C stubs add-on version 2.1.2.
  crypt add-on version 2.1 by Michael Glad and others
  GNU Libidn by Simon Josefsson
  Native POSIX Threads Library by Ulrich Drepper et al
  RT using linux kernel aio
  For bug reporting instructions, please see:
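
       If you'd rather not compile or execute anything unusual, two standard
       shell-level checks report the glibc version on glibc-based systems
       (exact output format varies by distribution):

```shell
# Both of these report the glibc version on glibc-based systems:
ldd --version | head -n1
getconf GNU_LIBC_VERSION
```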

TIP 111:

       nslookup and dig - query Internet name servers interactively.

         $ nslookup


      The nslookup command queries the dns server listed in "/etc/resolv.conf".
      However, you can force a certain dns server with "- server".  For example,
      the command below goes to the server named dilbert

         $ nslookup - dilbert


      dig gives you more information. You should probably use dig instead
      of nslookup.

      Below I am forcing the lookup from DNS of the name, and
      note that the query time is returned too.

         $ dig @  +qr

         ; <<>> DiG 9.2.1 <<>> @ +qr
         ;; global options:  printcmd
         ;; Sending:
         ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55908
         ;; flags: rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
         ;                   IN      A
         ;; Got answer:
         ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55908
         ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2
         ;                   IN      A
         ;; ANSWER SECTION:
                     5538    IN      A
         ;; AUTHORITY SECTION:
                     30599   IN      NS
                     30599   IN      NS
         ;; ADDITIONAL SECTION:
                     16022   IN      A
                     7       IN      A
         ;; Query time: 155 msec
         ;; SERVER:
         ;; WHEN: Thu Dec 23 07:48:23 2004
         ;; MSG SIZE  rcvd: 127

      So what if you wanted to know what name the IP address
      resolves to, when using dns

          $ dig @ -x
          ;; ANSWER SECTION:
 3600 IN   CNAME   210.0/
          210.0/ 3600 IN PTR

      Above you can see it resolved to ""

      It's also possible to get all the zone information. The following command
      queries my local dns for the zone information.

          $ dig @ axfr    

      Reference ( )
        Also see TIP 223.

TIP 112:

       Using GNU Autotools - so you can produce the familiar "./configure"  "make"  and "make install"
                             commands. There is also a "make dist".

                The program and the rest of this code can be found at

         A "" is required:

            bin_PROGRAMS = sprog
            sprog_SOURCES =
            sprog_LDADD = @INCLUDES@ @SQLIBOBJS@

         In addition, a "" file is required. Note, AC_CHECK_LIB will
         check the "" library for the "sqlite3_open" function. Note that
         "sqlite3" is a shortcut for "libsqlite3" by convention. If this library
         is not found, AC_CHECK_FILE looks for "/usr/local/lib/libsqlite3.a". If
         that is found, then "-lsqlite3" is added to the LIBS environment variable.
         Also, "-I/usr/local/include" and "-L/usr/local/lib" will be added on the
         command line. This is common when someone does not have the library in
         the path.  (See TIP 49)

            dnl Process this file with autoconf to produce a configure script.
            AM_INIT_AUTOMAKE(sqliteprog, 1.0)
            CXXFLAGS='-Wall -W -O2 -s -pipe'
            if test "$found" = "no"; then
              AC_CHECK_FILE(/usr/local/lib/libsqlite3.a, found=yes)
              if test "$found" = "yes"; then
                LIBS="$LIBS -lsqlite3"
                INCLUDES="$INCLUDES -I/usr/local/include"
              else
                echo "Are you SURE sqlite3 is installed?"
              fi
            fi

         To build the configure file, just run the following:

             $ aclocal
             $ autoconf
             $ touch NEWS README AUTHORS ChangeLog
             $ automake --add-missing

         Now if you want to make a tar.gz file "sqliteprog-1.0.tar.gz", then
         all you have to run is the following:

             $ make dist

         Note: did you ever want to save all the output from a ./configure? Well, it
               is automatically saved in the "config.log" file. In fact, this file may
               contain a lot more than what you saw on the screen.

               Also, you may need to rerun ./configure. But before you do, delete
               the "config.cache" file to get a clean build.

TIP 113:

       EMACS - common emacs commands.

         M is the ESC
         C or c is the Ctl

        Shell - when working in a shell. "M-x rename-uniquely" is good for split screen editing.

          M-x rename-uniquely   Use this for multiple shells (renames buffer so it's not the same shell)
          C-c C-z               Send job in background (when working in a shell)
          C-c C-o               commit-kill-output (gets rid of a lot of shell output)
          C-c C-r               reposition at beginning of output
          C-c C-e               reposition at end of output
          M-x send-invisible    Hide passwords - use this before typing a password

         Note: if the shell prompt does not show up correctly, then you may want to create a ".emacs_bash"
               file with the following contents:

                          PS1="emacs:\W \$ "

        Directories  (C-x d) give you a directory listing. You know all those annoying "~" and "#"
                     files that you get? You can easily delete these when in "dired" mode by hitting
                     "~", then "d" to flag them for deletion. Then, hit "x" to confirm deletion.

                     These are other commands that work on highlighted files in "dired" mode.

                          R   rename
                          v   view
                          Z   compress the file
                          +   create directory

        Other common commands:

          c-x l          list the line you are on, and how many lines in the document.
                         You will get something like: Page has 4881 lines (4440 + 442),
                         which means you are on the 4440 line.

          c-x r m   bookmark make (e.g. bookmarks named "notes" or "emacs")
          c-x r b   bookmark bounce (jump to a bookmark)
          c-x / <r> (save position in register <r>)
          c-x j <r> (jump to position in register <r>)
          c-x r SPC 1 (mark current point in register 1)
          c-x r j 1 (jump to marked point in register 1)
          c-x r t <string>  (insert string into register)
          c-x r s 1 (save marked region in register 1)
          c-x r i 1 (insert marked region)
          c-x c-o (delete all blank lines, except one)
          c-x z (repeat the last command ... stop with an a)
          c-x zz (repeat the last command twice)

          To copy a region into a register:
            go to the region, then C-x r r "name of register"
          To insert the register:
            C-x r i "name of register"

          c-x (    start macro
          c-x )    end macro
          c-x e    execute macro
          c-x m    mail
          c-c c-s  send
          C-x C-e  evaluate the preceding lisp expression, e.g.
          (insert "\n\nExtra Line of text")
          ;; chirico functions in .emacs
          ;; This creates an html template
          (defun my-html ()
            (interactive)
            (insert "<html>

          <META HTTP-EQUIV=\"Pragma\" CONTENT=\"no-cache\">
          <META HTTP-EQUIV=\"Expires\" CONTENT=\"-1\">
          <body bgcolor=\"#ffffff\">
          "))


     Backspace issues when using "emacs -nw"? Then try putting the following in your "~/.emacs" file

             (global-set-key "\C-d" 'backward-delete-char)
             (global-set-key "\C-h" 'backward-delete-char)
             (global-set-key (kbd "DEL") 'delete-char)

TIP 114:

        ncftpget - an intelligent ftp client ( Also
                   check your fedora or debian install. This package allows
                   you to easily download packages from ftp sites.

          This is an example of connect to an ftp site, with a subdirectory, and
          downloading all in one command.

           $ ncftpget

          Or if you want to get the fedora core 3 installs

           $ ncftpget*

TIP 115:

        expr - evaluate expressions. You can use this on the command line

           $ expr 6 + 4

         Note the spaces. Without spaces, you get the following:

           $ expr 6+4

         If you're using "*", you'll need a "\" before it

           $ expr 10 \* 10

         This also works for variables

           $ var1=34
           $ expr $var1 + 3


           $ var1=2
           $ var1=`expr $var1 \* 2`
           $ echo $var1

         see (TIP 25) you can get the cosine(.23)

           $ var1=`echo "c(.23)"|bc -l`
           $ echo $var1

         You can also do substrings:

           $ expr substr "BigBear" 4 4

         And length of strings

           $ mstr="12345"
           $ expr length $mstr

         Regular expressions

           $ expr "a3" : [a-z][1-9]

         Or you can get a bit fancy

           $ myexpr="[a-z][1-9]"
           $ echo $myexpr

           $ expr "a3" : $myexpr

         This may not be the best way to find out if it is Friday, but
         it seems to work. It's more of an exercise in xargs.

           $ date
           Fri Dec 31 16:44:47 EST 2004
           $ date|xargs -i expr {} : "[Fri]"
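
         The expr forms above can be combined in one script; a quick sketch:

```shell
sum=$(expr 6 + 4)                 # spaces are required around operators
prod=$(expr 10 \* 10)             # "*" must be escaped from the shell
sub=$(expr substr "BigBear" 4 4)  # characters 4 through 7 of the string
len=$(expr length "12345")
echo "$sum $prod $sub $len"       # -> 10 100 Bear 5
```

         Note that substr and length are GNU extensions to expr.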

TIP 116:

        eval - build and evaluate a command line stored in variables

           $ mypipe="|"
           $ eval ls $mypipe wc
           6       6     129

        Did you catch that? The above statement is the same as

           $ ls | wc

        Where "|" is put into the variable $mypipe

        (also see TIP 118)
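
        The same trick works with any command fragment stored in a variable;
        a small sketch:

```shell
mypipe="|"
# Without eval, the "|" would be passed to echo as a literal argument;
# eval re-parses the line, so the pipe takes effect.
eval echo one two three "$mypipe" wc -w     # -> 3
```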

TIP 117:

        lxr, glimpse, patchset - tools for reading the kernel source

          Note before going through all this trouble, you may find what
          you're looking for at the following site:


          This example puts some of the files in /home/src since my home
          partition is the largest. Plus, you do not want to overwrite
          the source in /usr/src/. If you want to put your files elsewhere,
          just substitute your desired directory for /home/src.

         patchset -- download and setup

            $ export SRCDIR=/home/src
            $ cd $SRCDIR
            $ wget
            $ export PATH=$PATH:$SRCDIR/patchset-0.5/bin

          Now edit "/home/src/patchset-0.5/etc/patchset.conf" and set WWW_USER to
          whatever your website runs as

                  export WWW_USER=nobody

          Getting the kernel source. The last step builds and asks a lot of questions. Enter
          yes to things that interest you, since this is what you will see in the source
          code. It is not going to build for booting. The "download -p" is for downloading
          a patch.

            $ download 2.6.10
            $ createset 2.6.10
            $ make-kernel -b 2.6.10

        glimpse -- download and setup

            $ mkdir -p /home/src/glimpse
            $ cd /home/src/glimpse
            $ wget
            $ tar -xzf glimpse-latest.tar.gz
            $ cd glimpse-4.18.0
            $ ./configure; make
            $ make install

        lxr -- download and setup

            $ mkdir -p /home/src/lxr
            $ cd /home/src/lxr
            $ wget
            $ cd lxr-0.3

          Edit "Makefile" and set PERLBIN to "/usr/bin/perl", or to wherever perl is
          on your system. Also set INSTALLPREFIX to "/var/www/lxr".  Then, as root
          do the following:

            $ make install

         Apache changes

          Next edit the apache httpd.conf. On my system it is
          "/usr/local/apache2/conf/httpd.conf", but if you did a fedora install
          I think this file is located at "/etc/httpd/conf/httpd.conf".

             Alias  /lxr/ "/var/www/lxr/"
             <Directory "/var/www/lxr/">
               Options ExecCGI Indexes Includes FollowSymLinks MultiViews
                AllowOverride all
                Order allow,deny
                Allow from all

              <Files ~ "(search|source|ident|diff|find)">
                    SetHandler cgi-script
              </Files>
             </Directory>

          lxr - continued "/var/www/lxr/http/lxr.conf" changes.  The following contains
                my lxr.conf with changes made to almost every variable. Make sure you use
                your website in place of

                 # Configuration file.
                 # Define typed variable "v", read valueset from file.
                 variable: v, Version, [/var/www/lxr/source/versions], [/var/www/lxr/source/defversion]
                 # Define typed variable "a".  First value is default.
                 variable: a, Architecture, (i386, alpha, m68k, mips, ppc, sparc, sparc64)
                 # Define the base url for the LXR files.
                 # These are the templates for the HTML heading, directory listing and
                 # footer, respectively.
                 htmlhead: /var/www/lxr/http/template-head
                 htmltail: /var/www/lxr/http/template-tail
                 htmldir:  /var/www/lxr/http/template-dir
                 # The source is here.
                 sourceroot: /var/www/lxr/source/$v/
                 srcrootname: Linux
                 # "#include <foo.h>" is mapped to this directory (in the LXR source
                 # tree)
                 incprefix: /include
                 # The database files go here.
                 dbdir: /var/www/lxr/source/$v/
                 # Glimpse can be found here.
                 glimpsebin: /usr/local/bin/glimpse
                 # The power of regexps.  This is pretty Linux-specific, but quite
                 # useful.  Tinker with it and see what it does.  (How's that for
                 # documentation?)
                 map: /include/asm[^\/]*/ /include/asm-$a/
                 map: /arch/[^\/]+/ /arch/$a/

         Now you should be ready to run "make-lxr". Make sure the path is set up to
         patchset, which is repeated here. The last step takes a while.

            $ export SRCDIR=/home/src
            $ cd $SRCDIR
            $ export PATH=$PATH:$SRCDIR/patchset-0.5/bin

            $  make-lxr 2.6.10

         Now you need to index the source. Below, the .glimpse_* files will be put in
         /root. Check out the -H option if you do not want them here on a temporary
         basis, or if you run out of room.

            $ glimpseindex -o -t -w 5000 /var/www/lxr/source/2.6.10 >& .glimpse_out

         Since the above put the files under /root/.glimpse_* they should be moved

            $ mv /root/.glimpse_* /var/www/lxr/source/2.6.10/.
            $ chown -R nobody.nobody ./.glimpse_*

TIP 118:

        exec - you can change standard output and input without starting a new process

          The exec below redirects the output from ls and date to a file. Nothing
          is shown on the terminal until "exec > /dev/tty" is performed

            $ exec > mfile
            $ ls
            $ date
            $ exec > /dev/tty

          This is an example of assigning file descriptor 3 to file "output3" for
          output, then, redirecting "ls" to this descriptor. Finally, file descriptor
          3 is used for input, and the contents are read into the cat command.

            $ exec 3>output3
            $ ls  >& 3
            $ exec 3<output3
            $ cat <&3

        Could you redirect the output to 3 files and stderr?

            $ exec 3>output3
            $ exec 4>output4
            $ exec 5>output5

            $ ls >& 3 >& 4 >& 5 >& 2   // Nope, can't do this.
            output3  output4  output5

         Instead, you should do the following:

            $ ls | tee output3 | tee output4 |tee output5

         Closing the "output" file descriptor

            $ exec 3>&-

         Closing the "input" file descriptor

            $ exec 3<&-

         See what is still open on 0-10

            $ lsof -a -p $$ -d 0-10
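
         Putting the descriptor juggling together -- open, write, close,
         reopen, read (the file name demo.txt is made up):

```shell
exec 3>demo.txt         # fd 3 now writes to demo.txt
echo "first"  >&3
echo "second" >&3
exec 3>&-               # close the write side
exec 3<demo.txt         # reopen the same file for reading on fd 3
read line <&3           # reads the first line
exec 3<&-               # close the read side
rm -f demo.txt
echo "$line"            # -> first
```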

         Recursion - the following counts to 5, then, quits.

            sleep 1
            declare -x n
            let n=${n:=0}+1
            [ $n -le 5 ] && echo "$n" &&  exec $0

         There are some real-life applications for this technique, as follows:

            declare -x N
            declare -x n
            N=${N:=$(od -vAn -N1 -tu4 < /dev/urandom)}
            let n=${n:=0}+1
            [ $(($n%2)) -eq 0 ] && echo "She Loves Me!" || echo "She Loves Me NOT!"
            [ $n -lt $N ] &&  exec $0

TIP 119:

        runlevel - need to know the current runlevel?

           $ who -r
           run-level 3  Dec 31 19:02                   last=S

        Need to know the architecture?

          $ arch

TIP 120:

        at - executes commands at a specified time.

         A few examples here. The 1970 program will run
         next August 2 even though the year 1970 has long passed.

          $ at 6:30am Jan 12 < program
          $ at noon tomorrow < program
          $ at 1970 pm August 2 < program

        This is an interactive way to use the command:

          $ at now + 6 minutes
          warning: commands will be executed using (in order) a) $SHELL b) login shell c) /bin/sh
          at> ls
          at> date > /tmp/5min
          at> ^D
          job 3 at 2005-01-01 08:50

        What jobs are in the queue?

          $ atq


          $ at -l

TIP 121:

        Creating a Manpage

         As root you can copy the following to /usr/local/man/man1/soup.1 which will
         give you a manpage for soup.

              .\" Manpage for souptonuts.
              .\" Contact to correct errors or omissions.
              .TH man 1 "04 January 2005" "1.0" "souptonuts man page"
              .SH NAME
              soup \- man page for souptonuts
              .SH SYNOPSIS
              .SH DESCRIPTION
              souptonuts is a collection of linux and open
              source tips.
              off for golf.
              .SH OPTIONS
              The souptonuts does not take any options.
              .SH SEE ALSO
              doughnut(1), golf(8)
              .SH BUGS
              No known bugs at this time.
              .SH AUTHOR
              Mike Chirico (

         So, to view this man page

            $ man soup

         It's also possible to compress it

            $ gzip /usr/local/man/man1/soup.1

         For plenty of examples look at the other man pages. Also the following
         is helpful. The last one is a tutorial "man 7 mdoc"

            $ man manpath
            $ man groff
            $ man 7 mdoc

TIP 122:

        dmesg - print out boot messages, or what is in the kernel ring buffer.

          If you missed the messages on boot-up, you can use dmesg to print them.

            $ dmesg > boot.msg

          Or to print, then, clear the ring

            # dmesg -c > boot.msg

          (also see TIP 20)

TIP 123:

        gnus - emacs email nntp news reader (comcast as example with NO TLS or SSL)

          First check that you can connect to the news group:

                 $ telnet 119
                 Connected to
                 Escape character is '^]'.
                 200 News.GigaNews.Com

          If you want to check for TLS or SSL see (TIP 54).

          Here is a very simple configuration example without encryption. It
          appears that comcast does not support ssl or TLS.

          In the "~/.emacs" file you would add the following to get comcast
          news groups

             (setq gnus-select-method '(nntp ""))

          Then, create an "~/.authinfo" file with the following settings using
          your own username and password.

             machine login  password borkeypass0rd

          Next create a "~/.newsrc" with your groups

             comp.lang.c++.moderated! 1-500
             comp.unix.programmer! 1-500
   ! 1-500
             gnu.emacs.gnus! 1-500

          Finally, create a "~/.gnus" with the following email settings for you

             (setq user-mail-address "")

             (defun my-message-mode-setup ()
               (setq fill-column 72))
             (add-hook 'message-mode-hook 'my-message-mode-setup)

          To get into gnus

              M-x gnus

          The following are common gnus commands

                RET  view the article under the cursor

                A A (shift-a, shift a): List all newsgroups known
                                        to the server.

                l (lower-case L)      : List only  subscribed groups
                                        with unread articles.

                L                     : List all newsgroups in .newsrc file.

                g                     : See if new articles have arrived.

            Some commands for reading

                n  next unread article

                p  previous article

                SPC  scroll down  moves to next unread
                     when at the bottom of the article

                del  scroll up

                F  follow-up to group on the article you are
                   reading now.

                f  follow-up to group without citing the article

                R  reply by mail and cite the article

                r  reply by mail without citing the article

                m  new mail

                a  new posting

                c  Catchup

                C-u / t  Show only young headers
                         / t without C-u limits the summary
                         to old headers

                T T  toggle threading

                C-u g  Display raw article
                       hit g to return to normal view

                t  Show all headers  it's a toggle

                W w  Wordwrap the current article

                W r  Decode ROT13  a toggle

                ^  fetch parent of article

                L  create a scorefile-entry based
                   on the current article (low score)
                   ? gives you information what each char means

                I  like L but high score

          Commands to send email

            C-c C-c  send message

            C-c C-d  save message as draft

            C-c C-k  kill message

            C-c C-m f  attach file

            M-q  reformat paragraph

TIP 124:

        Sending Email from telnet

           Note, if you are on the computer you can sometimes use the local loopback.
           In fact, sometimes you can only use the local loopback in
           place of ""

            1     [mchirico@soup Notes]$ telnet 25
            2     Trying
            3     Connected to
            4     Escape character is '^]'.
            5     220 ESMTP Postfix (Postfix-20010228-pl03) (Mandrake Linux)
            6     HELO
            7     HELO         // server echo
            8     250
            9     MAIL FROM:
           10     MAIL FROM:   // server echo
           11     250 Ok
           12     RCPT TO:
           13     RCPT TO:   // server echo
           14     250 Ok
           15     DATA
           16     DATA    // echo
           17     354 Enter mail, end with "." on a line by itself
           18     This is a test message
           19     This is a test message
           20     to send
           21     to send
           22     .
           23     250 2.0.0 j0B0uH3L018469 Message accepted for delivery

          Above on line 6 you can type in any domain name. Line 7 is an echo. All
          echoes are listed in the comment field.

TIP 125:

          IP forwarding, IP Masquerade

             # echo 1 > /proc/sys/net/ipv4/ip_forward
             # ipchains -F forward
             # ipchains -P forward DENY
             # ipchains -A forward -s -j MASQ
             # ipchains -A forward -i eth1 -j MASQ

          This assumes that your internal network is on eth1, and the
          internet is connected to eth0.

          (Also See TIP 182)

TIP 126:

         Setting KDE as the default desktop manager

               Edit "/etc/sysconfig/desktop" to include the two lines:
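
               The two lines were lost from this tip; on Red Hat-style systems
               they are typically the following (treat these as an assumption):

```shell
# Hypothetical /etc/sysconfig/desktop contents:
#
#   DESKTOP="KDE"
#   DISPLAYMANAGER="KDE"
```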


TIP 127:

        Have a file and you do not know what type it is (tar, gz, ASCII, binary)?
        Use the file command.  Below it is used on the file "mftp"

               $ file mftp
               mftp: Bourne-Again shell script text executable
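
        A reproducible sketch: create a small script and let file identify it
        (the file name is made up; the exact wording of the output varies by
        file version):

```shell
# Create a throwaway shell script, then identify it:
printf '#!/bin/sh\necho hello\n' > mftp
file mftp        # reports a shell script
rm -f mftp
```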

TIP 128:

        Software RAID: Two good references

        Note, you must setup grub for each RAID 1 device. Suppose you have
        2 SCSI drives (sda and sdb). By default grub is setup on sda; but, you
        need to enable it for sdb (/dev/hdb for ide) as follows:

           grub>device (hd0) /dev/sdb
           grub>root (hd0,0)
           grub>setup (hd0)

           Checking if "/boot/grub/stage1" exists... no
           Checking if "/grub/stage1" exists... yes
           Checking if "/grub/stage2" exists... yes
           Checking if "/grub/e2fs_stage1_5" exists.. yes
           Running "embed /grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded.
           Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2
         /grub/grub.conf"... succeeded.



        Checking to see if everything is working:

          $ cat /proc/mdstat

        Checking the drives

          $ sfdisk -d /dev/sdb
          $ sfdisk -d /dev/sda

          $ fdisk -l /dev/sda  "This will give general information"
          $ fdisk -l           "General information for all drives"

        Adding a drive back into the raid (the example below re-adds the partitions
                    of the first drive "sda"; substitute "sdb" if it is the second drive)

          $ raidhotadd /dev/md0 /dev/sda1
          $ raidhotadd /dev/md1 /dev/sda2
          $ raidhotadd /dev/md2 /dev/sda3

        This is an example of a "cat /proc/mdstat" on a working array. Note that
        there is a listing for both sda1[0] and sdb1[1]

            $ cat /proc/mdstat

                Personalities : [raid1]
                read_ahead 1024 sectors
                Event: 12
                md0 : active raid1 sda1[0] sdb1[1]
                      104320 blocks [2/2] [UU]

                md1 : active raid1 sda2[0] sdb2[1]
                      1044160 blocks [2/2] [UU]

                md2 : active raid1 sda3[0] sdb3[1]
                      34411136 blocks [2/2] [UU]

                unused devices: <none>

        Compare that to this where md2 is missing sdb3

            $ cat /proc/mdstat

                Personalities : [raid1]
                read_ahead 1024 sectors
                Event: 9
                md0 : active raid1 sda1[0] sdb1[1]
                      104320 blocks [2/2] [UU]

                md1 : active raid1 sda2[0] sdb2[1]
                      1044160 blocks [2/2] [UU]

                md2 : active raid1 sdb3[1]        <---- HERE
                      34411136 blocks [2/1] [_U]

                unused devices: <none>

        If you are rebuilding an array, you can watch it by doing the following:

           $ watch -n1 cat /proc/mdstat

        Need to know the raid setup?

           $ cat /etc/raidtab
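         The degraded-array check above can also be scripted. Below is a small
         sketch (the helper name "mdstat_degraded" is made up for this example,
         it is not a standard tool); it flags any md device whose status field,
         e.g. [_U], contains an underscore:

```shell
# Hypothetical helper: print each md device in an mdstat-format file whose
# status field (e.g. [UU] / [_U]) shows a missing member "_".
mdstat_degraded() {
    awk '/^md/ { dev = $1 }
         $NF ~ /^\[[U_]+\]$/ { if (index($NF, "_")) print dev }' "$1"
}

# Usage:  mdstat_degraded /proc/mdstat
```

         Run against the broken example above, this prints "md2"; it prints
         nothing when all arrays are healthy.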

TIP 129:

        Resetting Redhat Linux Passwords using GRUB

           1. At the GRUB menu, press 'e'
           2. Select the kernel line and press 'e' again
           3. Append 'single' to the kernel line, then press Enter
           4. Press 'b' to boot; once in single-user mode, run "passwd" to reset the password


TIP 130:

        mtr - Matt's traceroute. This is an advanced traceroute that keeps
        running statistics for each hop.
                  $ mtr

                                              Matt's traceroute  [v0.52]
                     third-fl-71.localdomain                            Thu Jan 20 11:05:57 2005
                     Keys:  D - Display mode    R - Restart statistics    Q - Quit
                                                            Packets               Pings
                     Hostname                            %Loss  Rcv  Snt  Last Best  Avg  Worst
                      1.                        0%    3    3     0    0    0      1
                      2. ???
                      3.    0%    3    3     8    7    7      8
                      4.    0%    2    2     8    8    8      8
                      5.    0%    2    2     8    8    8      8
                      6.                       0%    2    2    12   12   12     13
                      7.      0%    2    2    12   12   13     13
                      8.          0%    2    2    13   13   13     13
                      9.         0%    2    2    12   12   13     14
                     10. so-1-0-0.gar4.NewYork1.Level3.n    0%    2    2    14   14   37     61
                     11.    0%    2    2    13   12   13     13
                     12. ge-0-3-0.bbr2.Washington1.Level    0%    2    2    19   19   19     19
                     13. ge-1-1-51.car1.Washington1.Leve    0%    2    2    18   18   19     20
                     14.                         0%    2    2    21   19   20     21
                     15.    0%    2    2    21   20   20     21
                     16.            0%    2    2    23   21   22     23

TIP 131:

        chfn - change finger information

            $ chfn

          Next you are asked for a password and user information.

TIP 132:

        chsh - change login shell

          First, you may want to get a listing of all the possible shells.

             $ chsh -l


TIP 133:

        bash - working with binary, hex and base 3.

         First, the variable must be declared as an integer. Then
         specify the <base>#<value> form. The example below is 22 in
         base 3.

              $ declare -i n
              $ n=3#22
              $ echo $n
                8   Note 2*3+2=8

           Base 16 (hex)

              $ declare -i n2
              $ n2=16#a
              $ echo $n2
                10  Note hex a=10

           Base 8 (octal)

              $ declare -i n3
              $ n3=8#11
              $ echo $n3
                9   Note 8+1=9
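         A related note: in bash the base#value form also works directly inside
         arithmetic expansion, without declare. A quick sketch (bash assumed):

```shell
# base#value inside $(( )) -- no "declare -i" needed in bash
n=$((3#22))     # 2*3 + 2
n2=$((16#a))    # hex a
n3=$((8#11))    # octal 11
echo "$n $n2 $n3"   # → 8 10 9
```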

TIP 134:

        monitoring IP traffic. Try iptraf

TIP 135:

        enscript - convert text files to PostScript

TIP 136:

        dd and tar - blocking factor. How to determine the blocking factor, block size
                   so that tar and dd can work together.

        Step 1: Create a large file on local disk, in a directory "1" that will eventually
                be written to tape. This will be created with dd as follows:

                $ mkdir 1
                $ cd 1
                $ dd if=/dev/zero of=disk-image count=40960
                  40960+0 records in
                  40960+0 records out

                $ cd ..

        Step 2: tar the directory and contents to tape. First rewind the tape. These examples
                use /dev/nst0 as the location of the tape. Make sure to substitute your values
                if needed.

                $ mt -f /dev/nst0 rewind
                $ tar --label="Test 1" --create --blocking-factor=128 --file=/dev/nst0 1

        Step 3: Read data from the tape using a block size of 128k. If you get an I/O error, which
                could happen if you used a different blocking factor above, then you may need
                to increase bs to 256k, or 512k etc. as needed.

                $ mt -f /dev/nst0 rewind
                $ dd if=/dev/nst0 bs=128k of=testblocksz count=1
                 0+1 records in
                 0+1 records out

                $ ls -l testblocksz
                 -rw-r--r--    1 root     root        65536 Feb  9 10:41 testblocksz

                $ ls -lh testblocksz
                 -rw-r--r--    1 root     root          64k Feb  9 10:41 testblocksz

               Note above that the size 65536 is equal to 64k. That "h" switch in "ls" is for
               human readable.

       Step 4: tar uses a multiplier of 512*blocking-factor to get the block size:

                 512 * blocking-factor = block size used in dd command.

              Putting in the values, we see that

                 512 * 128 = 65536
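       The arithmetic above can be checked locally without a tape drive, since
       GNU tar pads an archive out to whole records of 512 * blocking-factor
       bytes. A sketch (the /tmp/blkdemo paths are made up for the demo):

```shell
mkdir -p /tmp/blkdemo/1
# 2048 blocks * 512 bytes = 1 MB of zeros
dd if=/dev/zero of=/tmp/blkdemo/1/disk-image count=2048 2>/dev/null
tar --create --blocking-factor=128 --file=/tmp/blkdemo/test.tar -C /tmp/blkdemo 1
# the archive size should be an exact multiple of 512*128 = 65536
size=$(wc -c < /tmp/blkdemo/test.tar)
echo $((size % 65536))   # → 0
```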

       Step 5: So what does this tell you? You can now use these numbers to "dd" files
              to tape. But, first tar will be used to create the file locally.

                $ tar --label="Test 1" --create --blocking-factor=128 --file=test.tar 1

       Step 6: Send this to tape with the dd command. Remember 64k is equal to 65536.

                $ mt -f /dev/nst0 rewind
                $ dd if=test.tar bs=64k of=/dev/nst0

       Step 7: Now test that it can be read with the tar command using blocking-factor=128.
             Note the "t" option in tar lists the archive contents. It will not write data.

                $ mt -f /dev/nst0 rewind
                $ tar -tvf /dev/nst0 --blocking-factor=128
                 V--------- 0/0               0 2005-02-09 10:38:20 Test 1--Volume Header--
                 drwxr-xr-x root/root         0 2005-02-09 10:34:10 1/
                 -rw-r--r-- root/root  20971520 2005-02-09 10:34:11 1/disk-image

       Step 8: Reading tape data with dd, using "ibs" to set the input block size.

                $ mt -f /dev/nst0 rewind
                $ dd if=/dev/nst0 of=outfromdd.tar ibs=64k
                 321+0 records in
                 41088+0 records out

       Step 9: Verify that outfromdd.tar can be read by tar with blocking-factor=128

                $ tar -tvf outfromdd.tar --blocking-factor=128
                 V--------- 0/0               0 2005-02-09 10:38:20 Test 1--Volume Header--
                 drwxr-xr-x root/root         0 2005-02-09 10:34:10 1/
                 -rw-r--r-- root/root  20971520 2005-02-09 10:34:11 1/disk-image

       PULLING FILES:  The dd command can be used to pull files.

              ssh target_address dd if=remotefile | dd of=localfile

           Or, a specific example of getting a file from a computer called hamlet.

              $ ssh root@hamlet  dd if=/home/cvs/test | dd of=/home/storage/test


             Go to end of data
              $ mt -f /dev/nst0  eod

             Previous record
              $ mt -f /dev/nst0  bsfm 1

             Forward record
              $ mt -f /dev/nst0  fsf 1

              $ mt -f /dev/nst0 rewind

              $ mt -f /dev/nst0 tell

         (Reference TIP 151 - for how to get around firewalls)

         Below is a script that I use to backup computers via ssh. The
         tape drive is on "nis" and the extra space is on "hamlet".

                # Program to backup server remotely
                # Assume remote server is nis, you are on squeezel
                # Recover from tape
                #   dd if=/dev/nst0 of=test.tar.gz bs=64k
                filename="support1.$(date "+%m%d%y%H%M").tar.gz"
                #tar cvzf - $DIRTOBACKUP | ssh  root@nis  '(mt -f /dev/nst0 rewind; dd of=/dev/nst0 bs=64k )'
                tar cvzf - $DIRTOBACKUP | ssh  support1@hamlet "dd of=/home/support1/backups/${filename} "

         Another example program, below, pushes the last ".tar.gz" file to tape:

                # Program to push files to tape
                # Notes on recovering from tape
                #   dd if=/dev/nst0 of=test.tar.gz ibs=64k
                #    or
                #  $ ssh root@tapeserver "mt -f /dev/nst0 rewind"
                #  $ ssh root@tapeserver "dd if=/dev/nst0 ibs=64k"|dd of=cvs1.tar.gz
                # First rewind tape
                ssh root@tapeserver 'mt -f /dev/nst0 rewind'
                # Grab only the last file
                file=$(find /home/cvs  -iname 'cvs*.tar.gz'|sort|tail -n 1)
                dd if=${file}|ssh root@tapeserver 'dd of=/dev/nst0 bs=64k'

TIP 137:

        Apache - redirecting pages. All changes are in httpd.conf

               RedirectMatch (.*)\.gif$ $1.jpg

               Redirect /service

          If more than one DNS record points to the server, then, it's 
          possible to redirect based upon which DNS entry was used in
          the web query.

          For example, a single web server has the following three
          DNS entries mapped to its single IP address.


          It's possible to redirect or rewrite the page delivered to
          the client with the following changes in httpd.conf

               RewriteCond  %{HTTP_HOST}  ^$
               RewriteRule  ^/$         [L]

               RewriteCond  %{HTTP_HOST}  ^$
               RewriteRule  ^/$         [L]

TIP 138:

        samba mounts via ssh - mounting a samba share through an ssh tunnel, going
               through an intermediate computer, that accepts ssh. We'll call this
               intermediate computer middle [], and we want to get to
               destination []. The user will be mchirico.

          STEP 1:

               $ mkdir -p /samba/share

          STEP 2:

            This has to be done as root, since we are using a lower port.

               $ ssh -N -L 139: mchirico@

          STEP 3:

              umount /samba/share
              /bin/mount -t smbfs -o  username=donkey,workgroup=donkeydomain,
                      netbiosname=homecpu //localhost/share /samba/share

TIP 139:

        Music on Fedora Core -- How to play music with "xmms".

           The following command will show the sound driver:

              $ lspci|grep -i audio

          STEP 1:

                Unmute amixer with the following command:

              $ amixer set Master 100% unmute
              $ amixer set PCM 100%  unmute

                Note you can also get a graphical interface with "alsamixer"

              $ alsamixer

                 h,F1   -- for help
                 Esc    -- exit
                 Tab    -- move to selections

          STEP 2:

                Test a sound file "*.au" with aplay. To quickly find files on your system use
                the "locate *.au" command.

                  $ aplay /usr/lib/python2.3/test/

          STEP 3:

                Install "xmms-mp3-1.2.10-9.2.1.fc3.rf.i386.rpm" which does not come with Fedora because
                of MP3 licensing restrictions.  The latest version of this package can be found
                at the following url:


                  $ rpm -ivh xmms-mp3-1.2.10-9.2.1.fc3.rf.i386.rpm

          STEP 4:

                Go to magnatune "", select a genre and make sure xmms
                is the default player.

TIP 140:

        Routing -- getting access to a network 1 hop away. You are currently on the 192 network
                   and you want access to another network through a computer straddling
                   the two, with /proc/sys/net/ipv4/ip_forward set to 1.

               $ route add -net netmask gw

         To undo:

               $ route del -net netmask gw

         Now you can ping

         Does not work?

         Go on to and execute the following commands:

               $ echo 1 > /proc/sys/net/ipv4/ip_forward
               $ cat /proc/sys/net/ipv4/ip_forward

         To look at the gateway, execute the following command.

               $ netstat -r



TIP 141:

        RAM disk -- creating a filesystem in RAM.

               $ mkfs -t ext3 -q /dev/ram1 4096
               $ mkdir -p /fsram
               $ mount /dev/ram1 /fsram -o defaults,rw

TIP 142:

        Create a Live Linux CDROM  using  BusyBox and OpenSSH.

            These steps are rather long. A complete tutorial is given at
            the following link:

TIP 143:

      SystemImager - software that automates Linux installs,
                        software distribution, and production deployment.

TIP 144:

      Mounted a filesystem in rescue mode, yet you cannot read and write?  Remount it read-write.

             $ mount -o remount,rw /

TIP 145:

      Nmap commands to check for Microsoft VPN connection.

         $ nmap -sO -p 47
         $ nmap -sS -p T:1723

      By the way, with nmap you can specify multiple ports. Below
      is an example of multiple ports; but, use the commands above
      for Microsoft VPN services.

         $ nmap -sS -p T:1723-3000

TIP 146:

      Perl and ssh - monitoring systems. The output from ssh can be parsed. Below is
      a simple procedure to read the ssh output into perl.

           $pid = open $readme, "ssh root\@hamlet df -lh|" or die "Could not ssh\n";
           while(<$readme>) {
             print $_;
           }
           close $readme;

      But note, you probably want to do something more complex. Below is a more robust
      example that bypasses all the fortune and header junk that you may encounter when
      logging in.

           $pid = open $readme, "ssh root\@hamlet df -lh 2>/dev/null|" or die "Could not ssh\n";
           while(<$readme>) {
             print $_;
           }
           close $readme;

      NO! you CANNOT do bidirectional communication with the open statement. Note the "|" before
      and after below, which cannot be done.

          # Cannot do this!
          $pid = open $readme, "|ssh root\@hamlet df -lh 2>/dev/null|" or die "Could not ssh\n";

      Below is a simple Perl example working with arrays:

           @ArrayOfArray = (
                    [ "ant", "bee" ],
                    [ "mouse", "mole", "rat" ],
                    [ "duck", "goose", "flamingo" ],
                    [ "rose","carnation","sunflower"],
           );

           for $i ( 0 .. $#ArrayOfArray ) {
               for $j ( 0 .. $#{$ArrayOfArray[$i]} ) {
                   print "Element $i $j is $ArrayOfArray[$i][$j]\n";
               }
           }

           # Or this is another way to list elements
           foreach( @ArrayOfArray ) {
              foreach $i (0..$#$_) {
                 print "$_->[$i] ";
              }
              print "\n";
           }

      Below is an example of working with Hash of Arrays:

           #  ./program < /etc/passwd
           while (<>) {
               next unless s/^(.*?):\s*//;
               $HoA{$1} = [ split(/:/) ];
           }
           for $i (keys %HoA ) {
               print "$i: @{ $HoA{$i} } \n";
           }

      Example of a regular expression. This is my most used regular expression - I like
      this sample. See the "" link at the end of this tip.

          "hot cross buns" =~ /cross/;
          print "Matched: <$`> $& <$'>\n";    # Matched: <hot > cross < buns>

          print "Left:    <$`>\n";            # Left:    <hot >
          print "Match:   <$&>\n";            # Match:   <cross>
          print "Right:   <$'>\n";            # Right:   < buns>

      If you're looking for Perl information, type "man perl", which will show you how
      to get even more information. Or better yet, take a look at the following


      For a quick example on using Perl with SQLite, see the following links:



      Standard input for files. This example will read from stdin, or open a file if given as
      an argument, and convert all "<" to "&lt;" and ">" to "&gt;", which can be handy when
      converting text files to html files. Note the "while(<>)" will take multiple file names
      on the command line.

           while(<>) {
               s/</&lt;/g;
               s/>/&gt;/g;
               print;
           }

      Perl Debugger is very useful for testing commands and works like an interpreter, just
      like python. So to get into the Perl Debugger execute the command below, "q" to quit.

          $ perl -de 0

      Reference TIP 170

TIP 147:

      shutdown - scheduling and canceling system shutdowns.

          # shutdown 8:00 -- Shutdown at 8:00

          # shutdown +13  -- Shutdown after 13min

          # shutdown -r now  -- Shutdown now and restart

          # shutdown -k +2  -- "The system is going DOWN to maintenance mode in 2 minutes!"
                              The above is only a warning.

          # shutdown -h now   -- Shutdown now and halt

          # shutdown -c    -- Cancel shutdown

TIP 148:

      ac -  print statistics about users' connect time

          $ ac -p    -- print hour usage by user (individual)
          $ ac -dy   -- print daily usage

       Options can also be combined

          $ ac -dyp

TIP 149:

      Smart Monitoring Tools:
      Disk failing? Or want to know the temperature of your hard-drive?

      For a good, quick tutorial, see the Linux Journal article

      Below are some common commands:

          $ smartctl -i /dev/hda

          $ smartctl -Hc /dev/hda

          $ smartctl -A /dev/hda

TIP 150:

      Monitor dhcp traffic - dhcpdump and tcpdump.

      Download dhcpdump

         $ wget
         $ ./configure
         $ make && make install

     Once it's installed, you can monitor all dhcp traffic as follows, if run as root.

         $ tcpdump -lenx -i eth0 -s 1500 port bootps or port bootpc| dhcpdump

     The above assumes you are using eth0 (ethernet port 0).

TIP 151:

      Port Forwarding with ssh.

      A sample .ssh/config file (note this must have chmod 600 rights)

           ## Server1 ##
               LocalForward 20000
               LocalForward 22000
               HostKeyAlias localhostKey227

      With the above "~/.ssh/config" file, after sshing into the server it
      is then possible to ssh into nearby computers directly.

         $ ssh -l mchirico
         $ scp -P 22000 authorized_keys* mchirico@localhost:.
         $ ssh -l mchirico localhost -p 22000

      For the complete article reference the following link:
        (Also see TIP 273)

TIP 152:

      Renaming files - suppose you want to rename all the ".htm" files to ".html"

          $ rename .htm .html *.htm

      Or, suppose you have files file1, file2, file3 ...

          $ touch file1 file2 file3 file4 file5 file6
          $ rename file file. file*

      The above command will give you "file.1", "file.2" ... "file.6"
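      The rename utility differs between distributions (some ship the util-linux
      version used above, others a Perl script with different syntax), so below is
      a plain-shell sketch of the same ".htm" to ".html" rename, run in a
      throwaway directory with made-up file names:

```shell
# demo in a scratch directory with hypothetical file names
dir=$(mktemp -d) && cd "$dir"
touch index.htm about.htm notes.txt
for f in *.htm; do
    [ -e "$f" ] || continue          # no-op when nothing matches
    mv -- "$f" "${f%.htm}.html"      # strip .htm suffix, append .html
done
ls   # → about.html index.html notes.txt
```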

TIP 153:

      Renaming files with Perl - this is taken from "Programming Perl 3rd Edition"

           # rename - change filenames
           $op = shift;
           for (@ARGV) {
               $was = $_;
               eval $op;
               die if $@;
               # next line calls built-in function, not the script
               rename($was,$_) unless $was eq $_;
           }

      The above Perl program can be used as follows:

           $ rename 's/\.orig$//'    *.orig
           $ rename 'y/A-Z/a-z/ unless /^Make/' *

      Also reference:

TIP 154:

      R project

      To start R, just type "R" at the command prompt and "q()" to quit. Below
      2 is raised to powers 0 through 6 and thrown into an array.

           $ R
           > N <- 2^(0:6)
           > N
           [1]  1  2  4  8 16 32 64

      There is a summary() command.

           > summary(N)
           Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
           1.00    3.00    8.00   18.14   24.00   64.00

      Note that the array index begins at 1 and not 0

           > N[1:3]
           [1] 1 2 4

TIP 155:

      ls - listing files by size, with the biggest file listed last

           $ ls --sort=size -lhr

      The above command sorts files by size, listing the contents in
      "h" human readable format in reverse order.

      Note the options:  --sort={none,time,size,extension}
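       A quick way to convince yourself of the sort order (the demo files below
       are made up; GNU ls assumed):

```shell
# two files of obviously different sizes in a scratch directory
dir=$(mktemp -d) && cd "$dir"
printf 'x'          > small       # 1 byte
printf '%8192s' x   > big         # 8192 bytes
ls --sort=size -r                 # smallest first, biggest listed last
```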

TIP 156:

     Perl - program to clean up old versions of files

       #   Copyright (c) GPL 2005 Mike Chirico
       # This program deletes old files from several directories
       # and within each directory there must be x number of copies
       # each y number of bytes

        sub delete_old_ones {
            my ($directory_and_file, $save_count, $bytes_in_file) = @_;
            # Don't change setting here of '-lt' (newest files listed first)
            $pid = open $readme, "ls -lt $directory_and_file|" or die "Could not execute\n";
            while(<$readme>) {
                my @fields = split;
                # Make sure we have $save_count good ones with data
                if ($fields[4] > $bytes_in_file && $save_count > 0) {
                    print "Kept files: $fields[4] $fields[8]\n";
                    $save_count--;
                    next;
                }
                # delete the old ones
                if ($save_count <= 0) {
                    print "Deleted files: $fields[4] $fields[8]\n";
                    unlink $fields[8];
                }
            }
            close $readme;
        }

        @AofA = (
           [ "/home/cvs/backups/*.gz", "6",196621 ],
           [ "/home/mail/backups/*.gz","5",34 ],
           [ "/home/snort/backups/*.gz","2",34 ],
           [ "/home/server1/backups/*.gz","2",34 ],
           [ "/home/actserver/backups/*.gz","2",34 ],
           [ "/home/server2/backups/*.gz","2",34 ],
        );

        foreach( @AofA ) {
            delete_old_ones( @$_ );
        }
     Reference TIP 170 and the following link:

TIP 157:

     Graphics and Visualization Software that runs on Linux

TIP 158:

     Keeping files in sync going both ways. Unlike rsync, this is not a one-way mirror.

     You will need ocaml installed first.

       $ wget
       $ tar -xzf ocaml-3.08.3.tar.gz
       $ cd ocaml-3.08.3

       $ ./configure
       $ make world
       $ make opt
       $ make install

     Next, get unison and put it in a different directory.

       $ wget
       $ tar -xzf unison-2.10.2.tar.gz
       $ cd unison-2.10.2
       $ make UISTYLE=text
       $ su
       # cp unison /usr/local/bin/.

     Note, you have to copy the file manually.

     See the following article []

TIP 159:

     Dump ext2/ext3 filesystem information with "dumpe2fs". Perform the mount command
     and query away.

       $ dumpe2fs /dev/sda1

TIP 160:

     sysreport - a script that generates an HTML report on the system configuration. It
     gathers information about the hardware and is somewhat redhat specific. The utility
     should be run as root.

       $ /usr/sbin/sysreport

     Note, this report is being replaced by the python program sosreport. Don't leave
     the results of this file in /tmp, as it contains sensitive system information. You
     may want to run this as a backup to critical files (boot, etc). Here's how to run it:

       $ mkdir -p /root/sos
       $ TMPDIR='/root/sos' sosreport -a --batch --no-progressbar

TIP 161:

     Key Bindings Using bind.  You can bind, say, ctl-t to a command.

     Add the following to you "~/.inputrc" file, just as it is typed below with quotes.

            "\C-t": "ls -l\n"

     Next, run the command

           $ bind -f .inputrc

     Or, you can do everything on the command line; however, it won't be there the next time
     you log in. Below is the way to do everything on the command line.

           $ bind -x '"\C-t":ls -l'

     To unbind use the "-r" option. Single quotes are not needed.

           $ bind -r "\C-t"

     Getting a list of all bindings can be done as follows, and note this can be redirected
     to the ".inputrc" file for further editing.

           $ bind -p > .inputrc

TIP 162:

     awk - common awk commands.

     Find device names "sd" or with major number 4 and device name "tty". Print the
     record number NR, plus the major number and minor number.

          $ awk '$2 == "sd"||$1 == 4 && $2 == "tty" { print NR,$1,$2}' /proc/devices

     Find device name equal to "sound".

          $ awk '/sound/{print NR,$1,$2}' /proc/devices

     Print the 5th record, first field, in file test

          $ awk 'NR==5{print $1}' test

     Print a record, skip 4 records, print a record etc from file1

          $ awk '(NR-1) % 4 == 0 {print $1}' file1

     Print all records except the last one from file1

          $ tac file1|awk 'NR > 1 {print $0}'|tac

     Print A,B,C ..Z on each line, cycling back to A if greater than 26 lines

          $ awk '{ print substr("ABCDEFGHIJKLMNOPQRSTUVWXYZ",(NR-1)%26+1,1),$0}' file1

     Number of bytes in a directory.

          $ ls -l|awk 'BEGIN{ c=0}{ c+=$5} END{ print c}'

     Remove duplicate, nonconsecutive lines. As an advantage over "sort|uniq",
     you can eliminate duplicate lines in an unsorted file.

          $ awk '! a[$0]++' file1

     Or the more efficient script

          $ awk '!($0 in a) {a[$0];print}' file1

     Print only the lines in file1 that have 80 characters or more

          $ awk 'length >= 80' file1

     Print line number 25 on an extremely large file -- note it has
     to be efficient and exit after printing line number 25.

          $ awk 'NR==25 {print; exit}'  verybigfile
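     A couple of the one-liners above, exercised against a small sample file
     (the file name and contents are made up for the demo):

```shell
printf 'a\nb\na\nc\nb\n' > /tmp/awkdemo.txt

# remove duplicate, nonconsecutive lines
awk '!($0 in a) {a[$0]; print}' /tmp/awkdemo.txt    # → a b c

# keep only lines of a minimum length (same idea as the length test above)
printf 'hi\nx\nyes\n' | awk 'length >= 2'           # → hi yes
```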

TIP 163:

     Configuring Remote Logging.  If you have several servers on the network, you can set up
     remote logging as follows.


        Firewall - allow UDP port 514 on the main server that will receive the logs.

          $ iptables -A INPUT -p udp -s  --dport 514 -j ACCEPT

        Edit "/etc/sysconfig/syslog" and add the "-r" option to SYSLOGD_OPTIONS as shown below.

          SYSLOGD_OPTIONS="-r -m 0"

        Note, the "-r" is to allow remote logging and "-m 0" specifies that the syslog process
        should not write regular -- MARK -- timestamp lines.  I prefer to only write timestamps for the clients.

        Next, restart the logging process

          $ service syslog restart


        Edit "/etc/syslog.conf" and add the ip address of the log server, or put in the hostname.

            *.* @

        Next, restart the logging process

          $ service syslog restart

TIP 164:

     kudzu - hardware on your system. To probe the hardware on your system without doing
             anything, issue the following command.

          $ kudzu -p

      But wait, a lot of this information is already recorded in the following file


      You can also use lspci to list all PCI devices.

          $ lspci

       Also, take a look at the script /usr/sbin/sysreport, since this script has a lot of
       info gathering commands. You can pick and choose what you want, or run the complete
       script.
      If you just want information on the NIC

          $ ip link show eth0
          2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
          link/ether 00:11:11:8a:be:3f brd ff:ff:ff:ff:ff:ff

TIP 165:

     cfengine - a very powerful agent for monitoring and administering both a single computer
              and multiple computers. [ ]

     The following is a quick example on downloading and installing cfengine.

          $ ncftpget
          $ md5sum cfengine-2.1.15.tar.gz
          f03de82709f84c3d6d916b6e557321f9  cfengine-2.1.15.tar.gz

          $ tar -xzf cfengine-2.1.15.tar.gz

     You need to have a current version of BerkeleyDB (
     Note that BerkeleyDB has a funny install. You cd to the "build_unix" directory, then
     run the configure script from there.
          Installing BerkeleyDB if needed:
                 $ wget
                 $ tar -xzf db-4.3.28.tar.gz
                 $ cd db-4.3.28/build_unix/
                 $ ../dist/configure
                 $ make
                 $ make install

     You also need a current version of OpenSSL. For instructions on how to install OpenSSL see

     See (TIP 49) on putting "/usr/local/BerkeleyDB.4.3/lib" in the "/etc/" file. Or
     once BerkeleyDB is installed, you can put the location on the command line as follows:

     Configuring cfengine with direct reference to BerkeleyDB.4.3.  First cd to the cfengine source.

          $ ./configure --with-berkeleydb=/usr/local/BerkeleyDB.4.3/lib
          $ make
          $ make install

     Next create the following directories:

          $ mkdir -p /var/cfengine/bin
          $ mkdir -p /var/cfengine/inputs

     Copy needed files (cfagent, cfdoc, cfenvd, cfenvgraph, cfexecd, cfkey, cfrun, cfservd, cfshow):

          $ cp /usr/local/sbin/cf* /var/cfengine/bin

     You'll also need to generate keys. As root, execute the following:

          $ cfkey

     The command above will write the public and private keys in /var/cfengine/ppkeys.

     You probably want (cfexecd, cfservd, and cfenvd) running on all servers. If you
     add the following to "/etc/rc.local" these daemons will start on reboot.

           # Lines in /etc/rc.local

     Also, make sure you run each command now as follows:

          $ /usr/local/sbin/cfexecd
          $ /usr/local/sbin/cfservd
          $ /usr/local/sbin/cfenvd

     Firewall settings must be adjusted to allow port 5308 for tcp/udp. My local network
     is, so I'm opening it up for all my computers.

          $ iptables -A INPUT -p udp -s  --dport 5308 -j ACCEPT
          $ iptables -A INPUT -p tcp -s  --dport 5308 -j ACCEPT

     A set of keys needs to be on the server and hosts. For example, my key on ""
     should be copied over to the server "" as follows:

     This is done from

          $ scp /var/cfengine/ppkeys/
          $ scp /var/cfengine/ppkeys/

     Also, "/var/cfengine/inputs/cfrun.hosts" on the server "" must contain
     all the computers that will get updated. This is "cfrun.hosts" on ""

     Once I'm done, from "" I can run the following test:

          $ cfrun -v

TIP 166:

     cfengine - a quick example. This example will be run as root. You create the file "cfagent.conf" in
     "/var/cfengine/inputs/". The example below will checksum all the files in /home/chirico/deleteme/tripwire,
     comment out any line containing "finger" in the file /tmp/testdir/stuff, and append
     the line "# Edit Change with cfengine " to that file.

           # /var/cfengine/inputs/cfagent.conf
           # You run this with the following:
           #   cfagent -vK

           control:
                  actionsequence = ( files tidy editfiles )
                  ChecksumDatabase = ( /var/cfengine/cache.db )
                  # Below, true to update md5
                  ChecksumUpdates = ( true )

           files:
                  /home/chirico/deleteme/tripwire checksum=md5 recurse=inf
                  /home/chirico/deleteme/tripwire/moredata checksum=md5 recurse=inf
                  #/home/chirico/deleteme/tripwire/compress  recurse=inf include=*.txt action=compress
                  # If the database isn't secure, nothing is secure...
                  /var/cfengine/cache.db  mode=600 owner=root action=fixall

           tidy:
                  /home/chirico/deleteme/tripwire pattern=*~ recurse=inf age=0
                  # You must put an age. 0 runs now.

           editfiles:
                  {  /tmp/testdir/stuff

                     HashCommentLinesContaining "finger"
                     AppendIfNoSuchLine "# Edit Change with cfengine "
                  }

     A few further notes on the above. The line "actionsequence = ( files tidy editfiles )" tells the order
     of execution: "files:" handles the checksum and permission actions, "tidy:" deletes files, and
     "editfiles:" does the editing of files.

     To run the example, execute the following command. The  "-K" causes the lock file to be ignored.

            $ cfagent -vK

TIP 167:

     Implementing Disk Quotas - a quick example that can easily be done on a live system for testing. There
     is no need to reboot, since you'll be creating a virtual filesystem.

     Do the following as root. First create a mount point.

            # mkdir -p /quota

     Next, create a 20 MB file. Since I have many of these files, I created a special directory "/usr/disk-img"

            # mkdir -p /usr/disk-img
            # dd if=/dev/zero of=/usr/disk-img/disk-quota.ext3 count=40960

     The dd command above creates a 20 MB file because, by default, dd uses a block size of 512 bytes. That makes
     the size: 40960*512 = 20971520 bytes.
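     The block-size arithmetic can be checked directly in the shell. This sketch (the /tmp
     path is arbitrary) reproduces the dd step at a smaller count of 1024 blocks so it runs
     quickly:

```shell
# Verify the dd block-size arithmetic: count * 512 bytes per block.
echo $((40960 * 512))          # bytes in the 20 MB quota file

# Reproduce the dd step at a smaller size (1024 blocks, arbitrary).
dd if=/dev/zero of=/tmp/dd-size-test count=1024 2>/dev/null
wc -c < /tmp/dd-size-test      # 1024 * 512 = 524288 bytes
rm -f /tmp/dd-size-test
```

     The same arithmetic applies to any count: unless bs= is given, the file size is
     count * 512 bytes.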

     Next, format this as an ext3 filesystem

            # /sbin/mkfs -t ext3 -q /usr/disk-img/disk-quota.ext3 -F

     Add the following line to "/etc/fstab"

            /usr/disk-img/disk-quota.ext3    /quota ext3    rw,loop,usrquota,grpquota  0 0

     Now, mount this filesystem

            # mount /quota

     Take a look at it:

            # ls -l /quota

     Now, run "quotacheck"

            # quotacheck -vug /quota

     You'll get errors the first time this is run, because you have no quota files.
     But, run it a second time and you'll see something similar to the following:

            # quotacheck -vug /quota
            quotacheck: Scanning /dev/loop2 [/quota] done
            quotacheck: Checked 3 directories and 4 files

     Now take a look at the files:

            # ls -l /quota
            total 26
            -rw-------  1 root root  6144 Jun 14 12:23
            -rw-------  1 root root  6144 Jun 14 12:23 aquota.user
            drwx------  2 root root 12288 Jun 14 12:18 lost+found

     Next use "edquota" to grant the user "chirico" a certain quota

            # edquota -f /quota chirico

     This will bring up an editor, and here I have edited so that user "chirico"
     has a soft limit of 120 blocks (120*512 = 61440 bytes, i.e. 60K) and a hard limit
     of 150 blocks, plus a soft limit of 2 inodes and a hard limit of 3.

             Disk quotas for user chirico (uid 500):
             Filesystem                   blocks       soft       hard     inodes     soft     hard
             /dev/loop2                        2        120        150         1        2         3

     Next, turn quotas on with the following command:

             $ quotaon /quota

     If you need to turn off quotas, the command is "quotaoff -a" for all filesystems. You'll run into
     errors if you try to run quotacheck (say "quotacheck -avug") because it tries to unmount and mount
     the filesystem; you need to turn off quotas first with "quotaoff /quota". Note you only need to run
     quotacheck once, or when doing maintenance after a system crash.

       To get a report on the quota, run "repquota" as follows:

           $ repquota /quota
            *** Report for user quotas on device /dev/loop0
            Block grace time: 7days; Inode grace time: 7days
                                    Block limits                File limits
            User            used    soft    hard  grace    used  soft  hard  grace
            root      --    1189       0       0              2     0     0
            chirico   -+      93       0       0              4     2     5  6days

       Note above that user "chirico" has used 4 on the file limits. This user has a hard
       limit of 5. So when this user tries to create 2 more files (bringing him over the limit of 5)
       he will get the following error, as demonstrated below.

           [chirico@squeezel chirico]$ touch one
           [chirico@squeezel chirico]$ touch two
           loop0: write failed, user file limit reached.
           touch: cannot touch `two': Disk quota exceeded

      Now, if repquota (run by root) is executed it shows the following:

           $ repquota /quota
           *** Report for user quotas on device /dev/loop0
           Block grace time: 7days; Inode grace time: 7days
                                   Block limits                File limits
           User            used    soft    hard  grace    used  soft  hard  grace
           root      --    1189       0       0              2     0     0
           chirico   -+      94       0       0              5     2     5  6days

      Note the "+" sign above. User "chirico" is above the File soft limits, and in this case
      above the hard limits.

       To warn users by sending email to them, run "warnquota", but you need to check that
       "/etc/quotatab" is set up correctly. For the example above, this file should
       look as follows:

            $ cat /etc/quotatab
            #  This is sample quotatab (/etc/quotatab)
            #  Here you can specify description of each device for user
            #  Comments begin with hash in the beginning of the line

            # Example of description
            /dev/loop0: This is loopback device

      Just run the following as root:

            $ warnquota

       By the way, if you want to change the grace period, it can only be done on a
       per-filesystem basis, not per user.

            $  edquota -t

      Users can run "quota" to see their usage as follows:

            [chirico@squeezel ~]$ quota
            Disk quotas for user chirico (uid 500):
                 Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
                 /dev/loop0      94       0       0               5      10      50

      As you can see from above, I changed my inode limit to 50.

      What about running this on the whole filesystem? Yes, below is an example where I'm running
      this on FC3, on the root of the filesystem "/". This assumes that you have installed the
      quota package. Try doing "rpm -q quota" to see if this package is installed.

        Step 1:

           Check to make sure the quota software is installed. You can either do a "whereis quota",
           or check for the rpm package.

             $ whereis quota
             quota: /usr/bin/quota /usr/share/man/man1/quota.1.gz

           Checking for the rpm package.

            $ rpm -q quota

        Step 2:

           Edit /etc/fstab and add usrquota and grpquota options for "/dev/VolGroup00/LogVol00",
           which is shown on the first line below:

             /dev/VolGroup00/LogVol00 /                      ext3    defaults,usrquota,grpquota        1 1
             LABEL=/boot             /boot                   ext3    defaults        1 2
             none                    /dev/pts                devpts  gid=5,mode=620  0 0
             none                    /dev/shm                tmpfs   defaults        0 0
             none                    /proc                   proc    defaults        0 0
             none                    /sys                    sysfs   defaults        0 0
             /dev/VolGroup00/LogVol01 swap                    swap    defaults        0 0

        Step 3:

           Remount the filesystem as follows:

             $ mount -o remount /

        Step 4:

           Run quotacheck with the "-m" option. Like the step above, this will have to be run with
           root privileges. This creates the quota database files, and it can take a long time on
           a large, full filesystem.

             $ quotacheck -cugm /

        Step 5:

           This step is optional, but it's good to know in case you need to recalculate quotas after a
           system crash. It's demonstrated here because at this point quotas have not been turned on.
           Again, note the "m" option below.

             $ quotacheck -avumg

        Step 6:

           Set limits for specific users or groups using the "edquota" command. Shown below is the command
           to set up quotas for user "chirico". This user has used 161560 blocks, with a soft
           limit of 1161560 and a hard limit of 900000; he has used 3085 inodes, with a soft limit of 10000
           and a hard limit of 12000.

             $ edquota -f / chirico

             Disk quotas for user chirico (uid 500):
               Filesystem                   blocks       soft       hard     inodes     soft     hard
               /dev/mapper/VolGroup00-LogVol00     161560    1161560     900000       3085    10000    12000

           You can put quotas on groups as well. The following is done as root. See (TIP 186 and TIP 6) for creating
           groups and adding users to groups.

             $ edquota -g share

           If you create a sharable directory for anyone in the group "share" (TIP 6), quota restrictions against
           group "share" will only apply to files added in the "/home/share" directory. When user "chirico" creates
           files in "/home/share" they also go against this user quota as well. However, when files are created in
           his home directory they do not go against the "share" group.

           Note - if you get errors when trying to run "edquota -g share", turn quotas off "quotaoff /" and
                  run "quotacheck -avugm". Then, turn the quotas back on "quotaon /".

           You can see the status of the group quota with the following command:

             $ quota -g share

        Step 7:

           Turn on quotas with the "quotaon" command. This command needs to be done with root privileges.

             $ quotaon /

        Step 8:

           Check the "/etc/quotatab" file for the correct entries. Note that the filesystem returned
           by the "mount" command needs to match what is in the "quotatab" file. I have noticed that this
           is not the case by default.

             $ mount
             /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw,usrquota,grpquota)

           So the "/etc/quotatab" must contain the following line.

             /dev/mapper/VolGroup00-LogVol00: This is the Volume group

        Step 9:

           Run "warnquota" as a check that the "/etc/quotatab" file is set up correctly.

              $ warnquota

        Step 10:

           Setup a daily cron job for running "warnquota". The following should be placed
           in "/etc/cron.daily"

               #!/bin/sh
               # Place this file in /etc/cron.daily
               # with rights 0755
               /usr/sbin/warnquota
               EXITVALUE=$?
               if [ $EXITVALUE != 0 ]; then
                  /usr/bin/logger -t warnquota "ALERT exited abnormally with [$EXITVALUE]"
               fi
               exit 0

        (TIP 6, TIP 186, and TIP 205)

TIP 168:

     rdist - remote file distribution client program. You can use this program in combination with
     ssh. This program does more than just copy files. Once a file has been copied, you can dictate
     other actions to be performed. Or you can hold off copying altogether if the destination is
     running low on inodes or disk space.

     For the purpose of this example, all commands will be run on "", and the
     computers that will be updated are "" and "". Obviously, you
     would substitute your own computer names.

     It helps to setup ssh keys on each computer first. Reference []
     and (TIP 12).

     Step 1: Create the Configuration file myDistfile

       Below is my sample "myDistfile". This file will access hosts "" using username chirico
       and "" with the username running this command, and copy the
       files "/home/chirico/file1" and "/home/chirico/file2" to the these two servers creating the
       directory ~/tmpdir if it doesn't exist. Once these files are updated, a mail check ("sendmail -bv")
       will be performed, and mail will be sent to "chirico@squeezel". This happens twice, once for each file.

        Note the line "/home/chirico/file2 ->", which copies the file "file2", renaming it to "tapedest" in the directory "/home/chirico". Once this file
        is copied, the rights are modified with "chmod +r".  Likewise, "/home/chirico/file2 ->"
        copies file2 again, renaming it to "closetdest".

                 # Contents of myDistfile
                 HOSTS = ( )

                 FILES = ( /home/chirico/file1 /home/chirico/file2 )

                 ${FILES} -> ${HOSTS}
                    # Directory tmpdir will be created if it doesn't exist
                    install tmpdir ;
                    special /home/chirico/file1  "/usr/sbin/sendmail -bv";
                    notify chirico@squeezel;

                 /home/chirico/file2 ->
                    install  /home/chirico/tapedest;
                    special  /home/chirico/tapedest "chmod +r /home/chirico/tapedest";

                 /home/chirico/file2 ->
                    install /home/chirico/closetdest;

     Step 2: Command from to run myDistfile above

       Below is the command that will execute the contents in "myDistfile". This command is run from the
       computer "". All output will go in the file "cmd1rdist.log".

           $ rdist -P /usr/local/bin/ssh -f ./myDistfile -l file=./cmd1rdist.log=all

       Obviously you want a secure copy (using scp), so the -P option uses ssh as your secure
       transport mechanism.

TIP 169:

     Restricting root logins (/etc/securetty). Ctrl-Alt-F3 will give you a prompt for tty3.
     Take a look at the contents of "/etc/securetty". To prevent root from logging in on
     this device, take tty3 out of this listing. Note, you can always log in as another
     user and then su to root.  Below is an example of the default
     "/etc/securetty" that allows root to log in everywhere.

         [root@squeezel ~]# cat /etc/securetty

TIP 170:

     Perl map function. Try the following to get a quick take on this function,
     which increments each value in the array @a:

         @a = (1,2,3);
         map {$_++} @a;
         map { print "$_\n" } @a;


     The two maps can also be chained:

          @a = (1,2,3);
          map { print "$_\n"} map {++$_} @a;

     And you can easily make modifications, like reversing the order

         @a = (1,2,3);
         map { print "$_\n"} reverse map {++$_} @a;

     Plus there is a grep() function that works on each element as well

         @a = (1,2,3);
         map { print "$_\n"} reverse grep{ $_ > 3} map {++$_} @a;

     To get only odd numbers in reverse order:

          @a = (1,2,3);
          map { print "$_\n"} reverse grep{ $_ % 2 } map {++$_} @a;
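     Roughly the same increment/filter/reverse pipeline can be sketched in shell
     (awk plus GNU tac assumed):

```shell
# Increment each value, keep only odd results, reverse the order --
# mirroring the Perl map/grep/reverse chain above.
printf '%s\n' 1 2 3 \
  | awk '{ print $1 + 1 }' \
  | awk '$1 % 2 == 1' \
  | tac                       # prints 3 (from 2 3 4, only 3 is odd)
```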


TIP 171:

     Perl - subroutine call and shifting through variables. A simple and useful example:

           sub test {
             my $mval;
             while( $mval = shift ) {
               print " $mval\n";
             }
           }

           test("one", "two", "three");

TIP 172:

     Tcp wrappers - First "/etc/hosts.allow" is checked, and if there is a matching entry in this file,
     no more checking is done.  If there are no matches in "/etc/hosts.allow", the "/etc/hosts.deny"
     file is checked, and if a match is found, that service is blocked for that host.

     Example "/etc/hosts.deny" file:


     The above file blocks access from that computer. It's also possible to run commands when
     someone from this computer tries to ssh in. This example sends mail.

         sshd: spawn (echo -e "%d %h %H %u"| /bin/mail -s 'hosts.deny entry' root)

     Of course, you can also run commands in "/etc/hosts.allow" if you want mail sent for a
     successful connection.
TIP 173:

     pgrep, pkill - look up or signal process based on name and other attributes.

     To quick find all instances of ssh running, for user root, execute the following

           $ pgrep -u root -l ssh

     To kill a process, or send a signal use the "pkill" option. For example, to
     make syslog reread its configuration file:

           $ pkill -HUP syslogd

     Another useful command is "pidof", which can tell you how many processes are running.
     This can be useful for detecting DOS attacks.

           $ pidof sshd
           4783 4781 30008 30006 29888 29886 2246

     Above there are 7 sshd's running. Reference "Tcpdump, Raw Socket and Libpcap Tutorial"
     at [].
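     A small sketch of turning that output into a count. The PID list is the sample from the
     tip, so the example is reproducible; the live pidof call is shown only as a comment:

```shell
# Count whitespace-separated PIDs with wc -w.
pids="4783 4781 30008 30006 29888 29886 2246"
count=$(echo "$pids" | wc -w)
echo "$count sshd processes"    # prints: 7 sshd processes

# In practice you would use:
#   count=$(pidof sshd | wc -w)
```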

TIP 174:

     Password Cracking - tools to check your users passwords:

      John The Ripper



TIP 175:

     Password Aging - setting the number of days a password is valid.

         $ chage -M 90 <username>

TIP 176:

     Kernel Performance Tuning - /Documentation/sysctl/vm.txt documents kernel settings to
     improve performance. Below are some examples.

        overcommit_memory:  0 -- default; the kernel estimates the amount of memory for malloc
                            1 -- kernel pretends there is always enough memory until it runs out
                            2 -- never overcommit

          $ cat /proc/sys/vm/overcommit_memory

            The Linux VM subsystem avoids excessive disk seeks by reading
            multiple pages on a page fault. The number of pages it reads
            is dependent on the amount of memory in your machine.
            The number of pages the kernel reads in at once is equal to
            2 ^ page-cluster. Values above 2 ^ 5 don't make much sense
            for swap because we only cluster swap data in 32-page groups.

          $ cat /proc/sys/vm/page-cluster

            This is used to force the Linux VM to keep a minimum number
            of kilobytes free.  The VM uses this number to compute a pages_min
            value for each lowmem zone in the system.  Each lowmem zone gets
            a number of reserved free pages based proportionally on its size.

          $ cat /proc/sys/vm/min_free_kbytes

             This file contains the maximum number of memory map areas a process
             may have. Memory map areas are used as a side-effect of calling
             malloc, directly by mmap and mprotect, and also when loading shared
             libraries.

            While most applications need less than a thousand maps, certain
            programs, particularly malloc debuggers, may consume lots of them,
            e.g., up to one or two maps per allocation.

            The default value is 65536.

          $ cat /proc/sys/vm/max_map_count
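           A minimal sketch of reading and changing one of these tunables (the value 2 is
           just an example, and changing it requires root):

```shell
# Read the current overcommit policy.
cat /proc/sys/vm/overcommit_memory

# To change it (as root):
#   echo 2 > /proc/sys/vm/overcommit_memory
# or persistently in /etc/sysctl.conf:
#   vm.overcommit_memory = 2
```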

        Also see

TIP 177:

     IO Scheduler - /Documentation/block/as-iosched.txt documents kernel settings for the disk I/O scheduler.

     If you're not sure what partitions you have "$ cat /proc/partitions". This example
     assumes hda, and you can see some of the kernel settings:

          $ ls /sys/block/hda/queue/iosched
           back_seek_max  back_seek_penalty  clear_elapsed  fifo_batch_expire  fifo_expire_async
           fifo_expire_sync  find_best_crq  key_type  quantum  queued


TIP 178:

     iozone -- getting data on disk performance ( This is a very
     comprehensive package.

           $ wget
           $ tar -xf iozone3_242.tar
           $ cd iozone3_242/src/current
           $ make linux

     At this point you should read the documentation. There is no "make install". You
     copy it to each filesystem you want to run this program on. Below are some quick
     start commands.

       Good comprehensive test.

           $ iozone -a

        I prefer this for small filesystems. It limits the file size to 10000 KB and does
        the output in operations per second (higher numbers mean a faster drive).

           $ ./iozone -a -s 10000 -O

TIP 179:

     history - bash command to get a history of all commands typed. Here is a way
     that you can get the date and time listed as well.

           $ HISTTIMEFORMAT="%y/%m/%d %T "

     Defining the environment variable above gives you the date/time info when you
     execute history:

           $ history
               175  05/06/30 12:51:46 grep '141.162.' mout > mout2
               176  05/06/30 12:51:48 e mout2
               177  05/06/30 12:56:59 ls
               178  05/06/30 12:57:02 ls
               179  05/06/30 12:57:39 ls
               180  05/06/30 12:57:49 ls -l
               181  05/06/30 13:01:10 history
               182  05/06/30 13:01:20 HISTTIMEFORMAT="%y/%m/%d %T "
               183  05/06/30 13:01:23 history
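     Since HISTTIMEFORMAT uses strftime(3) codes, you can preview a format with date(1).
     Making the setting permanent via ~/.bashrc is an assumption about your setup and is
     shown only as a comment:

```shell
# Preview the strftime format used by the tip above.
date +"%y/%m/%d %T"

# To make it permanent, add this line to ~/.bashrc:
#   export HISTTIMEFORMAT="%y/%m/%d %T "
# It only timestamps commands entered after it is set.
```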

TIP 180:

     .config - Fedora Core: getting the .config to rebuild the kernel. You can find
     the ".config" file at the following location:

          $ ls "/lib/modules/$(uname -r)/build/.config"

     Or, to see the contents

          $ cat "/lib/modules/$(uname -r)/build/.config"

     This can be important, if you're planning to build your own kernel.
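     As a sketch, you might back up that .config before a rebuild; the backup filename here
     is an arbitrary choice, and the else branch just reports a missing file:

```shell
# Save a copy of the running kernel's config before rebuilding.
cfg="/lib/modules/$(uname -r)/build/.config"
if [ -f "$cfg" ]; then
    cp "$cfg" "$HOME/config-$(uname -r)"
    echo "saved $cfg"
else
    echo "no .config at $cfg (kernel-devel not installed?)"
fi
```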

TIP 181:

     Listing control key settings.

          $ stty -a
          speed 38400 baud; rows 0; columns 0; line = 0;
          intr = ^C; quit = ^\; erase = <undef>; kill = <undef>; eof = ^D; eol = <undef>; eol2 = <undef>; start = ^Q;
          stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
          -parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
          -ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff -iuclc -ixany -imaxbel
          opost -olcuc -ocrnl -onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
          isig icanon iexten -echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke

TIP 182:

     iptables DNAT and SNAT. You have a webserver on When people query this webserver, you want them
     to goto, with no indication that they are going to another web server. In fact, they always make
     their web hits to

     The following is the iptables commands:

        $ echo 1 > /proc/sys/net/ipv4/ip_forward
        $ iptables -t nat -A PREROUTING -d -p tcp --dport 80 -j DNAT --to
        $ iptables -t nat -A POSTROUTING -d -s -p tcp --dport 80 -j SNAT --to

     Change to whatever source you expect the web browser to come in on. Below is the tcpdump showing
     all traffic is relayed via

         [root@closet iptables]# tcpdump -nN port 80

         17:34:58.790398 IP > S 3620106373:3620106373(0) win 16384 <mss 1460,nop,nop,sackOK>

         17:34:58.790465 IP > S 3620106373:3620106373(0) win 16384 <mss 1460,nop,nop,sackOK>
         17:34:58.790703 IP > S 1973665156:1973665156(0) ack 3620106374 win 5840 <mss 1460,nop,nop,sackOK>
         17:34:58.790720 IP > S 1973665156:1973665156(0) ack 3620106374 win 5840 <mss 1460,nop,nop,sackOK>

         17:34:58.790951 IP > . ack 1 win 17520
         17:34:58.790965 IP > . ack 1 win 17520
         17:34:58.791451 IP > P 1:327(326) ack 1 win 17520
         17:34:58.791472 IP > P 1:327(326) ack 1 win 17520
         17:34:58.791973 IP > . ack 327 win 6432

     Above, the web client is on "". You can see that the 1st server "" then goes out to
     the 2nd server "" on the second line. The third line shows the 2nd server "" responding to
     the 1st server, and the fourth line passes this data back to the web client "".

     Note: You can save your current iptables setting with the following command:

         $ iptables-save > iptables_store

       The big advantage is that you can store the counters as well.

         $ iptables-save -c > iptables_store_w_cnts

       To restore the file, use the following:

         $ iptables-restore -c < iptables_store_w_cnts

TIP 183:

     mailstats - display mail statistics. This program reads data from "/var/log/mail/statistics"

         [root@closet ~]# mailstats
         Statistics from Sat Jun 25 15:59:52 2005
          M   msgsfr  bytes_from   msgsto    bytes_to  msgsrej msgsdis msgsqur  Mailer
          4        1          2K        0          0K        0       0       0  esmtp
          9        0          0K        1          2K        0       0       0  local
          T        1          2K        1          2K        0       0       0
          C        1                    0                    0

TIP 184:

     Profiling C Applications - Assume you have the following program p1.c:

          /* Program  p1.c */
          #include <stdio.h>
          #include <stdlib.h>

          void t1(int i) { printf("t1:%d\n", i); }
          void t2(int j) { printf("t2:%d\n", j); }

          int main(void)
          {
                  int i, j;
                  for (i = 0; i < 5; ++i) {
                          t1(i);
                          for (j = 0; j < 2; ++j)
                                  t2(j);
                  }
                  return 0;
          }

     Compile the program as follows:

        $ gcc -pg -g -o p1 p1.c
        $ ./p1

     Next, to get the profile graph.

        $ gprof -p -b p1
        Flat profile:
        Each sample counts as 0.01 seconds.
         no time accumulated
          %   cumulative   self              self     total
         time   seconds   seconds    calls  Ts/call  Ts/call  name
          0.00      0.00     0.00       10     0.00     0.00  t2
          0.00      0.00     0.00        5     0.00     0.00  t1

     Above note the 10 calls to t2 and 5 calls to t1.

TIP 185:

     CDPATH - this is a bash variable like PATH that defines a search path
              for the cd command.

      Suppose you have the following directory structure:

           /home/chirico/stuff
           |-- dirA
           `-- dirB

      Assume you define CDPATH as follows:

              $ export CDPATH=.:/home/chirico/stuff
     Now, no matter what directory you are in if you use the cd command below
     you will automatically move to "/home/chirico/stuff/dirA".

             $ cd dirA

      Note you could be in "/etc" and will move directly to "/home/chirico/stuff/dirA".
      This variable has the same format as PATH - multiple entries are separated by a colon.
      If the current directory contains a sub-directory dirA, then it gets priority.
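      The CDPATH lookup can be sketched with a throwaway directory tree (the /tmp paths
      are arbitrary, not from the tip). When cd finds a directory through a non-empty
      CDPATH component, it prints the full path it changed to:

```shell
# Build a demo tree and resolve "cd dirA" through CDPATH.
mkdir -p /tmp/cdpath-demo/stuff/dirA
CDPATH=.:/tmp/cdpath-demo/stuff
cd /tmp                # somewhere unrelated
cd dirA                # resolved via CDPATH; cd prints the path
pwd                    # /tmp/cdpath-demo/stuff/dirA
cd /
rm -rf /tmp/cdpath-demo
```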

     The following is part of my .bash_profile


TIP 186:

     Groups - add groups and users to groups. The following shows how to create the group "share"
              and add the user "chirico" to this group. The following should be done as root, and
              assumes the account "chirico" already exists.

             $ groupadd share
             $ usermod -G share chirico

     Note the change made to "/etc/group" below:

             $ cat /etc/group|grep 'share'

     If the user chirico is currently logged in, he should run the following
     command to immediately have group "share" rights. Or, the next time he logs
     in he will have access to this group.

             $ newgrp share

     Reference the following (TIP 6, TIP 167).

TIP 187:

     oprofile - steps for running oprofile on Fedora.

     Step 1:

       Find out what version of the kernel you are running.

           $ uname -a
           Linux 2.6.12-1.1398_FC4 #1 Fri Jul 15 00:52:32 EDT 2005 i686 i686 i386 GNU/Linux

     Step 2:

       Download the source in a chosen directory. Above, I'm running 2.6.12-1, but I'm going to go for, since
       it's a little later. You want the signed file as well.

           $ wget
           $ wget

       Now, check the signature.

           $ gpg --verify linux- linux-

     Step 3:

       Unpack the file.

            $ tar -xzf linux-
            $ cd linux-

     Step 4:

        Copy the ".config" used to compile your previous kernel. You should find it
        in the following directory "/lib/modules/$(uname -r)/build/.config".

       Copy it to the linux- directory.

           $ cp "/lib/modules/$(uname -r)/build/.config" .

     Step 5:

        Run make as follows. It will ask a few questions during "make oldconfig". The
        make installs below will have to be done with root privileges.

          $ make oldconfig
          $ make bzImage
          $ make modules
          $ make modules_install
          $ make install

     Step 6:

        Edit the "/boot/grub/grub.conf" and set default = 0, as shown below in this
        example:
            title Fedora Core (
                    root (hd0,2)
                    kernel /vmlinuz- ro root=/dev/VolGroup00/LogVol00 rhgb quiet
                    initrd /initrd-
            title Fedora Core (2.6.12-1.1398_FC4)
                    root (hd0,2)
                    kernel /vmlinuz-2.6.12-1.1398_FC4 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
                    initrd /initrd-2.6.12-1.1398_FC4.img
            title Fedora Core (2.6.11-1.1369_FC4)
                    root (hd0,2)
                    kernel /vmlinuz-2.6.11-1.1369_FC4 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
                    initrd /initrd-2.6.11-1.1369_FC4.img
            title Other
                    rootnoverify (hd0,1)
                    chainloader +1

     Step 7:

       Shutdown with the restart option.

           $ shutdown -r now

     Step 8:

       Run opcontrol. The commands below are done as root.  My kernel was compiled in the following
       directory "/home/kernel/linux-", so I'll run opcontrol as follows:

           $ opcontrol --vmlinux=/home/kernel/linux-

       Now start.

           $ opcontrol --start
           Using 2.6+ OProfile kernel interface.
           Reading module info.
           Using log file /var/lib/oprofile/oprofiled.log
           Daemon started.
           Profiler running.

       Shutdown opcontrol.

           $ opcontrol --shutdown

       Run report.

           $ opreport

           CPU: CPU with timer interrupt, speed 0 MHz (estimated)
           Profiling through timer interrupt
             samples|      %|
              156088 99.8746 vmlinux
                  60  0.0384
                  30  0.0192 oprofiled
                  23  0.0147
                  13  0.0083 bash
                  12  0.0077 screen
                  10  0.0064 sshd
                   9  0.0058 ssh
                   6  0.0038 ip_tables
                   6  0.0038
                   5  0.0032 b44
                   5  0.0032 ext3
                   5  0.0032
                   4  0.0026 ip_conntrack
                   4  0.0026 jbd
                   2  0.0013 grep
                   1 6.4e-04
                   1 6.4e-04

         Reference the following for more documentation:

TIP 188:

     cyrus-imapd with Postfix using sasldb for authentication. For this example
     the server is and the user is chirico.

        Step 1:
                   $ yum install cyrus-imapd
                   $ yum install cyrus-imapd-utils
            You need "cyrus-imapd-utils" for cyradm.
        Step 2:
          Edit /etc/imapd.conf
               configdirectory: /var/lib/imap
               partition-default: /var/spool/imap
               admins: cyrus
               sievedir: /var/lib/imap/sieve
               sendmail: /usr/sbin/sendmail
               hashimapspool: true
               # Chirico Commented the below line
               # sasl_pwcheck_method: saslauthd
               # Because using sasldb
               sasl_pwcheck_method: auxprop
               sasl_auxprop_plugin: sasldb
               #  Chirico end change
               sasl_mech_list: PLAIN
               tls_cert_file: /usr/share/ssl/certs/cyrus-imapd.pem
               tls_key_file: /usr/share/ssl/certs/cyrus-imapd.pem
               tls_ca_file: /usr/share/ssl/certs/ca-bundle.crt
        Step 3:
           Create a user and password:
              $ saslpasswd2 -c -u `postconf -h myhostname` cyrus
              $ saslpasswd2 -c -u `postconf -h myhostname` chirico
              $ saslpasswd2 -c -u `postconf -h myhostname` allmail
           This will automatically create the file /etc/sasldb2. But look
           at the default rights, assuming you ran saslpasswd2 as root:
                 $ ls -l /etc/sasldb2
                 -rw-r-----  1 root root 12288 Jul 31 09:50 /etc/sasldb2
           We need to correct this in step 4.
        Step 4:
                 $ chown root.mail /etc/sasldb2
                 $ ls -l /etc/sasldb2
                 -rw-r-----  1 root mail 12288 Jul 31 09:50 /etc/sasldb2
        Step 5:
             Update "/etc/postfix/". Note in /etc/imapd.conf the configdirectory
             points to /var/lib/imap, and if I look at this directory I see the
             socket directory. However, after starting /etc/init.d/cyrus-imapd there
             will be a socket file "/var/lib/imap/socket/lmtp". (See step 6).
                  mailbox_transport = lmtp:unix:/var/lib/imap/socket/lmtp
                  mailbox_transport = cyrus
            Restart postfix.
                  /etc/init.d/postfix restart
        Step 6:

            Start cyrus-imapd and look for the socket file.
                   $ /etc/init.d/cyrus-imapd restart
                   Shutting down cyrus-imapd:                                 [  OK  ]
                   Starting cyrus-imapd: preparing databases... done.         [  OK  ]
            Now you should see the lmtp file:
                   $ ls -l /var/lib/imap/socket/lmtp
                   srwxrwxrwx  1 root root 0 Jul 31 10:04 /var/lib/imap/socket/lmtp
        Step 7:
            Add users. Note, you may have to go back to step 3 to add them to /etc/sasldb2
            as well.
                   $ su - cyrus
                   $ cyradm
         > cm user.chirico
         > quit
             Now go back to being root, and check that everything was created correctly.
                    $ ls -l /var/spool/imap/c/user/
                   total 8
                   drwx------  2 cyrus mail 4096 Jul 31 10:21 chirico
        Step 8:
             Run a mail test. We'll do this as root to the chirico account.
                   $ mail -s 'First test'  chirico
                   first test
             Now, still as root check the maillog. Normally everything should work.
                  $ tail /var/log/maillog
             However, I got the following error below.

                     Jul 31 10:29:03 tape postfix/cleanup[30124]: AE7CB1B34A4: message-id=<>
                     Jul 31 10:29:03 tape postfix/qmgr[30120]: AE7CB1B34A4: from=<>, size=315, nrcpt=1 (queue active)
                     Jul 31 10:29:03 tape pipe[30128]: fatal: pipe_comand: execvp /cyrus/bin/deliver: No such file or directory

              If you get a similar error, you may need to adjust the setting in /etc/postfix/
                 # This is the problem in /etc/postfix/
                 cyrus     unix  -       n       n       -       -       pipe
                   user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
             My deliver file is the following
                 $ ls -l /usr/lib/cyrus-imapd/deliver
                 -rwxr-xr-x  1 root root 846228 Apr  4 18:59 /usr/lib/cyrus-imapd/deliver
             So I need to change my /etc/postfix/ as follows:
                  # Fix: my deliver file is under /usr/lib/cyrus-imapd/deliver
                 cyrus     unix  -       n       n       -       -       pipe
                   user=cyrus argv=/usr/lib/cyrus-imapd/deliver -e -r ${sender} -m ${extension} ${user}
              If changes were needed, as in my case, restart postfix:
                 $ /etc/init.d/postfix restart
              Now, if everything works, you should start to see numbered files in the spool directory like "1." and "2.":
                 $ ls -l /var/spool/imap/c/user/chirico/
                 total 40
                 -rw-------  1 cyrus mail  545 Jul 31 10:44 1.
                 -rw-------  1 cyrus mail  547 Jul 31 10:45 2.
                 -rw-------  1 cyrus mail 1276 Jul 31 10:45 cyrus.cache
                 -rw-------  1 cyrus mail  153 Jul 31 10:21 cyrus.header
                 -rw-------  1 cyrus mail  196 Jul 31 10:45 cyrus.index

        Step 9:
             Local firewall.
                 # imap
                iptables -A INPUT -p udp -s  --dport 143 -j ACCEPT
                iptables -A INPUT -p tcp -s  --dport 143 -j ACCEPT

        Step 10:
             Configure cyrus-imapd to start for run-level 3 and 5.

                 # chkconfig --level 35 cyrus-imapd on

       HINTS - 

        Something to watch out for:

              If a user creates a .forward file in their shell account with the
              following entry, then mail will not get relayed to cyrus.

                  "|exec /usr/bin/procmail"

              The /var/log/maillog will show something like this:

                    to=<>, orig_to=<chirico>, relay=local, delay=0,
                          status=sent (delivered to command: exec /usr/bin/procmail)

             Remove the ".forward" file from their home directory and you'll get the following:

                    to=<>, relay=cyrus, delay=0,
                          status=sent (

              mutt with IMAP?  (See TIP 190)

TIP 189:

     expand - convert tabs to spaces in a file.

             $ expand How_to_Linux_and_Open_Source.txt > notabs
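             As a quick check (the file names below are made up), create a file
             containing a tab and verify that the expanded output has none:

```shell
# Illustrative example; file names are arbitrary.
printf 'col1\tcol2\n' > /tmp/tabbed.txt          # a line with one literal tab
expand -t 8 /tmp/tabbed.txt > /tmp/notabs.txt    # tabs become runs of spaces
if grep -q "$(printf '\t')" /tmp/notabs.txt; then
  echo "tabs remain"
else
  echo "no tabs left"
fi
```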

TIP 190:

     mutt with imap - assume you have set up IMAP (see TIP 188). Now how do you configure
                      your ".muttrc" file to connect securely and automatically to the IMAP server?

     Below is an example of my ".muttrc" file. For this example, assume my password is "S0m3paSSw0r9".

             $ cat .muttrc
             set spoolfile = "imaps://
             set imap_force_ssl=yes
             set certificate_file=~/.mutt/certificates/72d31154.0

     Now, you want to copy the certificate as a "file.pem" and run "c_rehash" to hash the
     file into a numbered symlink. See the following article on how to do this, under the
     fetchmail section.

     This is a quick summary of creating this key.

             $ openssl s_client -connect -showcerts > file.pem
             $ c_rehash ~/.mutt/certificates

TIP 191:

     Apache - CGI scripts.  There are two ways to enable CGI scripts. The second method is the
              preferred method.

           First way, the easy way. Look for the "httpd.conf" file. On Fedora Core, this file can be
           found under "/etc/httpd/conf/httpd.conf". Edit this file as follows to make 
           "" execute scripts.

                     ScriptAlias /chirico-cgi/ "/home/chirico/cgi-bin/"

           Second way, the better way. Instead of doing the above, make the following change in
           httpd.conf:

                     <Directory /home/chirico/cgi-bin>
                      Options +ExecCGI
                      SetHandler cgi-script
                     </Directory>

        Running a test script. Now copy the following test script into the directory "/home/chirico/cgi-bin"
        and change the rights to execute for the user running this.

                  #!/bin/bash
                  # Save as test.cgi, then:
                  #   chown apache.apache test.cgi
                  #   chmod 700 test.cgi
                  echo "Content-Type: text/html"
                  echo ""
                  echo "Hello world from user <b>`whoami`</b>! "

TIP 192:

     Bash - using getopts for your bash scripts.

                    # b and d take arguments
                    while getopts "ab:cd:" Option
                    do
                      case $Option in
                        a) echo "a = $OPTIND";;
                        b) echo "b = $OPTIND $OPTARG";;
                        c) echo "c = $OPTIND";;
                        d) echo "d = $OPTIND $OPTARG";;
                      esac
                    done
                    shift $(($OPTIND - 1))
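     As a runnable sketch, the same loop can be wrapped in a function so it is easy to
     exercise with sample arguments (the function name "parse" and its echo messages are
     invented for this demo; the option letters match the snippet above):

```shell
#!/bin/bash
# Hypothetical wrapper around the getopts loop shown above.
parse() {
  local OPTIND=1 Option
  while getopts "ab:cd:" Option; do
    case $Option in
      a) echo "a set" ;;
      b) echo "b = $OPTARG" ;;
      c) echo "c set" ;;
      d) echo "d = $OPTARG" ;;
      *) echo "usage: parse [-a] [-b arg] [-c] [-d arg] ..." >&2 ;;
    esac
  done
  shift $((OPTIND - 1))          # drop the parsed options
  echo "remaining: $*"           # whatever was left is positional
}

parse -a -b hello world
```

     Running it prints "a set", "b = hello", and "remaining: world".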

TIP 193:

     Sieve - creating sieve recipes with "sieveshell"

     The following sieve script puts all mail into the 
     folder jefferson. This assumes that I have already created the IMAP 
     directory, or mailbox (INBOX.jefferson), which can be done in mutt 
     with the "C" command. Below is an example of finding ""
     anywhere in the header. 

         # This is a file named jefferson.siv
         require ["fileinto"];
         if header :contains "Received" "from" {
           fileinto "INBOX.jefferson";
         }

     Now, from the command prompt execute "sieveshell" with the hostname of the
     imap server. My server is, so I would execute the following:

          $ sieveshell
          connecting to
          Please enter your password:****
          > put jefferson.siv
          > activate jefferson.siv
          > list
          jefferson.siv  <- active script
          > quit

     Note that "put" uploads the script, and you need to activate it.

     You can activate a sieve script for any user on your system if you are
     root. This is an example of activating a script for user chirico. Assume
     below the root prompt is "#".

          # sieveshell -a chirico -u chirico

     You can also automate everything from a bash script. But note after
     the -e the commands, and not a file with the commands, follows within
     quotes. This is the script I use for my home system.

         sieveshell -a chirico -u chirico -e 'deactivate
         delete chirico.siv
         put chirico.siv
         activate chirico.siv'


TIP 194:

     emacs - editing files remotely with tramp. Tramp comes with the latest version of emacs.
             That means if you're using Fedora Core 4, with emacs, you have tramp. This is 
             ideal for editing files on remote computers that do not have emacs installed.

             Edit the ".emacs" file and add the following line:

                 (require 'tramp)
                 (setq tramp-default-method "scp")

             Now, to edit a file on a remote computer, open a file (C-x C-f) and
             enter the following at the Find file: prompt:

                 Find file:/




TIP 195:

     trusted X11 forwarding - running gnome and KDE both on one screen at the same
             time, securely. The following assumes gnome is running on the current
             computer and "" has KDE.

              $ ssh -Y
              $ startkde

          Or assume you want to run gnome on ""

              $ ssh -Y
              $ gnome-session

          By default Fedora Core allows ForwardX11 over ssh. Note you want to use
          the -Y option above and NOT -X. 

          Suppose you want a remote "gnome-session" on ctl-alt-F12. Below is an 
          example of getting the remote computer, and you
          can still have the above configuration.

          First you must allow magic cookies for each server connection.
              $ MCOOKIE=$(mcookie)
              $ xauth add $(hostname)/unix:1 MIT-MAGIC-COOKIE-1 $MCOOKIE
              $ xauth add localhost/unix:1 MIT-MAGIC-COOKIE-1 $MCOOKIE
          Again, note that you have to add this for EACH connection. So if you wanted 2 as well

              $ MCOOKIE=$(mcookie)
              $ xauth add $(hostname)/unix:2 MIT-MAGIC-COOKIE-1 $MCOOKIE
              $ xauth add localhost/unix:2 MIT-MAGIC-COOKIE-1 $MCOOKIE

           Now start a new X session. If :1 is taken below,
           try :2. The vt12 is for switching to ctl-alt-F12.

              $ xinit -- :1 vt12

           Note, if you do not add the above cookies, you will get the following error:
               Xlib: connection to ":1.0" refused by server
               Xlib: No protocol specified

          The screen may be hard to read. At this point ssh -Y to the remote computer.

              $ ssh -Y
              $ gnome-session

           Yes, you will get errors about sound and some custom drivers if the remote
           computer has different hardware. After it loads, you can switch back and
           forth between sessions with (ctl-alt-F12) and (ctl-alt-F7).

TIP 196:

     Suspend ssh session - you have just sshed into a computer ("ssh -l user"), and you
          want to get back to the terminal prompt of the computer you started from. The escape
          character with ssh is "~" by default; press Enter, then "~" followed by "ctl-z" to suspend.

TIP 197:

     Quick way to send a text file by mail.

              $ sendmail -f < /etc/fstab

        Or you can use mutt and send a binary file

              $ mutt -s "Pictures of the Kids" -a kids.jpg < text.txt

TIP 198:

     size - determining the size of the text segment, data segment, and "bss" or uninitialized data segment.

              $ size /bin/sh /bin/bash
               text        data     bss     dec     hex filename
              586946      22444   18784  628174   995ce /bin/sh
              586946      22444   18784  628174   995ce /bin/bash

          Note above that "/bin/sh" and "/bin/bash" have equal text, data and bss numbers. It's
          highly likely that these are the same program.

              $ ls -l /bin/sh
               lrwxrwxrwx  1 root root 4 Jan 14  2005 /bin/sh -> bash

          Yep, it's the same program. Here's a further definition of each segment.

              Text segment: The machine instructions that the CPU executes. This is usually
                            read only and sharable.

              Data segment: Contains initialized variables in a program. You also know these
                            as declarations and definitions.

                               int max = 200;

              Uninitialized data segment: Think of this as a declaration only, or data that
                            is only initialized by the kernel to arithmetic 0 or null pointers
                            before program execution.

                               char s[10];
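          The symlink check above can be scripted. A small hedged sketch (what /bin/sh
          resolves to varies by distribution — on newer systems it often points to dash
          rather than bash):

```shell
# Resolve what /bin/sh actually is; the target differs by distro.
target=$(readlink -f /bin/sh)
echo "/bin/sh resolves to: $target"
```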

TIP 199:

     Using the at command.

        Below is a simple example of running a job at a set time that
        will send mail (-m) to the user that submitted it.

       We'll execute job1 defined as follows and set to be executable. 

           $ cat ./job1
           date >> /tmp/job1

       The at command is listed below. For queue "-q" names you can only
       specify one letter. Here we're using x. The letter determines the
       priority with "a" the highest.

           $ at -q x  -f ./job1 -m  11:54am
           job 3 at 2005-10-04 11:54

       Now, if you execute the atq command, you'll get the following.

           $ atq
           3       2005-10-04 11:54 x chirico

        It's also possible to enter jobs at the command line, ending
        the input with a ctl-d.

           $ at -q x  -m 12:08pm
           at> ls -l
           at> who
           at> date
           at> ^D

       Or for a job to execute 1 minute from now.

           $ at -q x -m `date -d '1 minute' +"%H:%M"`
           at> ls -l
           at> date

     Important points: The atd daemon must be running. To check if
      it's running do the following:

            $ /etc/init.d/atd status

      Also, if there is an /etc/at.allow file, then only users in that
      file will be allowed to execute at.

      If /etc/at.deny exists but is empty and there is no /etc/at.allow,
      then, everyone can execute the at command.
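       The relative-time trick above stands on its own. A sketch of computing run
       times for at with GNU date (the offsets here are just examples):

```shell
# GNU date can compute relative times in the HH:MM format at(1) accepts.
date -d '1 minute' +"%H:%M"                  # one minute from now
date -d 'tomorrow 09:00' +"%Y-%m-%d %H:%M"   # 9am tomorrow
```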

TIP 200:

     lsusb - displays all USB buses and all connected devices.

        $ lsusb
        Bus 005 Device 003: ID 413c:2010 Dell Computer Corp.
        Bus 005 Device 002: ID 413c:1003 Dell Computer Corp.
        Bus 005 Device 001: ID 0000:0000
        Bus 004 Device 001: ID 0000:0000
        Bus 003 Device 003: ID 0fc5:1227 Delcom Engineering
        Bus 003 Device 002: ID 046d:c016 Logitech, Inc. Optical Mouse
        Bus 003 Device 001: ID 0000:0000
        Bus 002 Device 001: ID 0000:0000
        Bus 001 Device 001: ID 0000:0000

TIP 201:

     Memory fragmentation - if you suspect workload memory fragmentation issues
     and you want to monitor the current state of your system, consider
     looking at the output from /proc/buddyinfo on recent kernels.

        $ cat /proc/buddyinfo
      Node 0, zone      DMA    541    218     42      2      0      0      0      1      1      1      0 
      Node 0, zone   Normal   2508   2614     52      1      5      5      0      1      1      1      0 
      Node 0, zone  HighMem      0      1      3      0      1      0      0      0      0      0      0 

     The following definition is taken from  ./Documentation/filesystems/proc.txt in the 
     Linux kernel source.

       Each column represents the number of pages of a certain order which are
       available.  In this case, there are 0 chunks of 2^0*PAGE_SIZE available in
       ZONE_DMA, 4 chunks of 2^1*PAGE_SIZE in ZONE_DMA, 101 chunks of 2^4*PAGE_SIZE
       available in ZONE_NORMAL, etc...
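       A small hedged helper to summarize the table (the field positions assume the
       "Node N, zone NAME counts..." layout shown above):

```shell
# Print each zone's name and its count of free order-0 (single-page) blocks.
# Field 4 is the zone name, field 5 the order-0 count in the layout above.
awk '{ print $4, "order-0 free pages:", $5 }' /proc/buddyinfo
```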

TIP 202:

     arp - the Linux ARP utility. This command displays and manipulates the kernel's
           Address Resolution Protocol cache.

     This is an example of the command.

        $ arp
        Address                  HWtype  HWaddress           Flags Mask            Iface
                                 ether   00:50:DA:60:5B:AD   C                     eth0
                                 ether   00:11:11:8A:BE:3F   C                     eth0
                                 ether   00:0F:66:47:15:73   C                     eth0

TIP 203:

     dbench - performance monitoring. 

     So, how does your system react when the load average is above 600? Have you ever seen a 
     computer with a load average of 600? Well, this could be your chance. 


     The following gives a load average of 10 on my system.

        $ dbench 34

     If you want a higher load, just increase the number.

TIP 204:

     /etc guide - a listing of common files in the /etc directory.

        /etc/exports: this file is used to configure NFS.

        /etc/ftpusers: the users on your system who are restricted from FTP login.

        /etc/motd: message of the day, which users see after login.

        /etc/named.conf: DNS config file.

        /etc/profile: system-wide shell environment and startup settings.

        /etc/inittab: this file contains runlevel start information.

        /etc/services: the services and their respective ports.

        /etc/shells: this contains the names of all shells installed on the system.

        /etc/passwd: this file contains user information.

        /etc/group: security group rights.

TIP 205: 

     logger - a command-line utility for writing to /var/log/messages or the
            other files defined in /etc/syslog.conf.

            $ logger -t TEST more of a test here

         This is what shows up in /var/log/messages
            Oct 28 07:15:50 squeezel TEST: more of a test here

TIP 206: 

     accton, lastcomm - turn process accounting on and report last commands. This is 
         a way to monitor users on your system. As root, you 
         would implement this as follows:

           $ accton -h
            Usage: accton [-hV] [file]
            [--help] [--version]

            The system's default process accounting file is /var/account/pacct.

         Note the default file location is /var/account/pacct so we'll turn
         it on system wide with the following command.

           $ accton /var/account/pacct

          Now take a look at this file. It will grow. To see commands that
          were executed, use the lastcomm command.

           $ lastcomm

         The above command gives output for all users. To get the data
         for user "chirico" execute the following command:

           $ lastcomm --user chirico

         You can also get a summary of commands with sa.

           [chirico@big ~]$ sa
           30       5.23re       0.00cp    10185k
           11       4.83re       0.00cp     8961k   ***other
            8       0.13re       0.00cp    19744k   nagios*
            4       0.00re       0.00cp     2542k   automount*
            3       0.00re       0.00cp      680k   sa
            2       0.13re       0.00cp    17424k   check_ping
            2       0.13re       0.00cp      978k   ping

         To turn off accounting, execute accton without a filename.

           $ accton

TIP 207:

     CPU Temperature on a laptop. The following is the temperature
       of my Dell laptop. 

           $ cat /proc/acpi/thermal_zone/THM/temperature
           temperature:             58 C


TIP 208:

     script -f with mkfifo to allow another user to view what you type
          in real-time.

        Step 1.  Create a fifo (first in first out) file that the other
              user can view. For this example create the file /tmp/scriptout

               [chirico@laptop ~]$ mkfifo /tmp/scriptout

         Step 2.  Have the second user, the voyeur, cat this file. Output will block
               for them until you complete step 3. The voyeur is
               executing the command below.
               [voyeur@laptop ~]$ cat /tmp/scriptout

        Step 3.  The original user runs the following command.

               [chirico@laptop ~]$ script -f  /tmp/scriptout
               Script started, file is /tmp/scriptout

             Now anything typed, including a vi session, will be displayed to the
             voyeur user in step 2. 

        See TIP 46.
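        The blocking behaviour in steps 2 and 3 can be demonstrated in a single
        shell (the file paths here are examples, not the ones from the tip):

```shell
# FIFO demo: the reader blocks until a writer opens the pipe.
fifo=/tmp/scriptout.demo
rm -f "$fifo"
mkfifo "$fifo"
cat "$fifo" > /tmp/fifo_seen.txt &    # "voyeur" side: blocks waiting for data
echo "hello, voyeur" > "$fifo"        # writer side: unblocks the reader
wait                                  # wait for the background cat to finish
cat /tmp/fifo_seen.txt
```

        The final cat prints "hello, voyeur", showing the reader received
        exactly what the writer sent through the FIFO.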

TIP 209:

     fsck forced on next reboot.  To do this, as root issue the following commands.

            $ cd /
            $ touch forcefsck

          Now reboot the system, and when it comes up fsck will be forced on the system.

            $ shutdown -r now

TIP 210:

     /dev/random and /dev/urandom differ in their random generating properties. /dev/random
        only returns bytes when enough noise has been generated from the entropy pool. In
        contrast /dev/urandom will always return bytes.

     Reference: (rand.c)
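     A quick demonstration that /dev/urandom never blocks (the byte count and file
     name are arbitrary):

```shell
# Request 16 bytes; /dev/urandom returns them immediately, regardless of
# how much entropy has been gathered.
dd if=/dev/urandom of=/tmp/rand.bin bs=16 count=1 2>/dev/null
wc -c < /tmp/rand.bin
```

     The wc command reports 16 bytes. The same request against /dev/random can
     stall on a quiet system while the entropy pool refills.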

TIP 211:

     Want to find out the speed and duplex (full or half) of your NIC? Use ethtool.

               [root@squeezel ~]# ethtool eth0
               Settings for eth0:
                       Supported ports: [ MII ]
                       Supported link modes:   10baseT/Half 10baseT/Full
                                               100baseT/Half 100baseT/Full
                                               1000baseT/Half 1000baseT/Full
                       Supports auto-negotiation: Yes
                       Advertised link modes:  10baseT/Half 10baseT/Full
                                               100baseT/Half 100baseT/Full
                                               1000baseT/Half 1000baseT/Full
                       Advertised auto-negotiation: Yes
                       Speed: 100Mb/s
                       Duplex: Full
                       Port: Twisted Pair
                       PHYAD: 1
                       Transceiver: internal
                       Auto-negotiation: on
                       Supports Wake-on: g
                       Wake-on: d
                       Current message level: 0x000000ff (255)
                       Link detected: yes

     Normally you do not want auto-negotiation unless it is done on
     both sides. Auto-negotiation is a protocol. It does NOT automatically 
     determine the configuration of the port on the other side of the Ethernet 
     cable and then match it. (Reference: "Network Warrior", section 3-2,
     by Gary A. Donahue.)

        $  ethtool -s eth1 autoneg off duplex full speed 100

TIP 212:

     rpm install hang? You might need to delete the lock state information. 

        $ nl /etc/rc.d/rc.sysinit | grep rpm
        720   rm -f /var/lib/rpm/__db* &> /dev/null

     Note the command 

        $ rm -f /var/lib/rpm/__db*

     because sometimes you will run "rpm -ivh somerpm" and it will just sit
     there. Removing these stale lock files usually clears the hang.

TIP 213:

     Apache - limit access to certain directories based on IP address in the
         httpd.conf file.

           You can do this completely from /etc/httpd/conf/httpd.conf; examples
           are shown below for multiple IP addresses. Note that all 3 settings
           are the same.


          However, the following is different

         only allows to

          Some complete settings in /etc/httpd/conf/httpd.conf 

             <Directory /var/www/html/chirico/>
                 Order allow,deny
                 Allow from      # All 10.
                 Allow from  # All 192.168
                 Allow from 127             # All 127.
             </Directory>

          Here's an example that only allows access to .html files
          and nothing else for a particular directory.

             <Directory "/var/www/html/chirico/protected">
             Satisfy All
             Order allow,deny
             Deny from all
             <Files *.html>
               Order deny,allow
               Allow from all
               Satisfy Any
             </Files>
             </Directory>


          Don't forget to reload httpd with the following command.
             $ /etc/init.d/httpd reload


TIP 214:

     Open Files - determining how many files are currently open.

             $ cat /proc/sys/fs/file-nr
             2030    263     104851
              |       |        \- maximum open file descriptors
              |       \- free allocated file descriptors
              \- total allocated file descriptors since boot

        Note the maximum number can be set or changed.

              $ cat /proc/sys/fs/file-max

        To change this

              $ echo "804854" > /proc/sys/fs/file-max
        Note lsof | wc -l will report higher numbers because this includes
        open files that are not using file descriptors such as directories,
        memory mapped files, and executable text files.

         (Also see the man page for this: man 5 proc.)
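         The three fields can be pulled apart with read (Linux-specific path; the
         variable names are mine):

```shell
# Split the three file-nr fields into shell variables, in the order
# described above: allocated, free, and the maximum.
read allocated free maximum < /proc/sys/fs/file-nr
echo "allocated=$allocated free=$free max=$maximum"
```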

TIP 215:

     Ctrl-Alt-Del will cause an immediate reboot, without syncing dirty buffers, if you
     set the value in /proc/sys/kernel/ctrl-alt-del to greater than 0.

               $ echo 1 > /proc/sys/kernel/ctrl-alt-del

        (Reference: man 5 proc)

TIP 216:

     Redefining keys in X using xev and xmodmap.  The program xev, run in an X terminal,
     will display information on mouse movements, keys pressed, and keys released.

        $ xev

     Now type shift-4 and you'll notice the event details below:

        KeyPress event, serial 29, synthetic NO, window 0x3800001,
            root 0x60, subw 0x0, time 55307049, (418,242), root:(428,339),
            state 0x1, keycode 13 (keysym 0x24, dollar), same_screen YES,
            XLookupString gives 1 bytes: (24) "$"
            XmbLookupString gives 1 bytes: (24) "$"
            XFilterEvent returns: False
        KeyRelease event, serial 29, synthetic NO, window 0x3800001,
            root 0x60, subw 0x0, time 55307184, (418,242), root:(428,339),
            state 0x1, keycode 13 (keysym 0x24, dollar), same_screen YES,
            XLookupString gives 1 bytes: (24) "$"
     So, if you want to redefine this key to, say, the copyright symbol (see
     /usr/X11R6/include/X11/keysymdef.h for keysym names), you would type the following.

        $ xmodmap -e 'keycode 13 = 4 copyright'

     To get the key back to the dollar, issue the following command.

        $ xmodmap -e 'keycode 13 = 4 dollar'

     By the way, it's possible to define multiple keysyms for a single key. You'll need
     to have a key defined as the Mode_switch. Perhaps you'd like to use the Windows key,
     the key with the Microsoft logo on it, since you're using Linux. This key is
     keycode 115.

        $ xmodmap -e 'keycode 115 = Mode_switch'

     Now you could assign three symbols to shift-4. For this example use the dollar, pound sterling, and yen signs.

        $ xmodmap -e 'keycode 13 = 4 dollar sterling yen'

     So pressing the keys gives you the following:

          shift-4          (dollar sign)
          Windows-4        (pound sign)
          Windows-shift-4  (yen sign)

     You could go crazy and redefine all your keys.

     (Thanks to hisham for this tip).

TIP 217:

     Threads - which version of threads are you using?

          $ getconf GNU_LIBPTHREAD_VERSION
          NPTL 2.3.90

     For a history on threads used with gcc reference the following:

     By the way, you can query all system settings with the
     following command:

          $ getconf -a
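     getconf answers many other queries as well; for example the system page size,
     which is available on virtually every system (the value varies by architecture,
     commonly 4096 on x86):

```shell
# Query the system page size in bytes.
getconf PAGE_SIZE
```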

TIP 218:

     Screenshots using ImageMagick. 

        If you want the entire screen, execute the following:

            $ import -window root screen.png

        Or to crosshair select the region with your mouse, execute 
        the following instead.

            $ import screen.png

        KDE has the ability to take screenshots with the command below.

            $ ksnapshot

        GNOME likewise has a command too.

            $ gnome-panel-screenshot --delay 6

        Visiting ImageMagick again, the xwininfo command gives window information, and the id can be
        used to capture images with the import command.

            $ xwininfo

          xwininfo: Please select the window about which you
          would like information by clicking the
          mouse in that window.

          xwininfo: Window id: 0x1e00007 "chirico@squeezel:/work/svn/souptonuts - Shell - Konsole"

          Absolute upper-left X:  4
          Absolute upper-left Y:  21
          Relative upper-left X:  0
          Relative upper-left Y:  0
          Width: 880
          Height: 510
          Depth: 24
          Visual Class: TrueColor
          Border width: 0
          Class: InputOutput
          Colormap: 0x20 (installed)
          Bit Gravity State: NorthWestGravity
          Window Gravity State: NorthWestGravity
          Backing Store State: NotUseful
          Save Under State: no
          Map State: IsViewable
          Override Redirect State: no
          Corners:  +4+21  -396+21  -396-493  +4-493
          -geometry 880x510+0+0

        Now use the import command with the Window id. My example is shown below.

             $ import -window 0x1e00007  id.miff

        And to quickly display this image that you just saved, use the display command.

             $ display id.miff 

TIP 219:

     File Access over SSH using FUSE (Filesystem in USErspace). This is a very good way to 
     mount a remote filesystem locally. It's  like a secure NFS mount, but you don't require 
     admin privileges on the remote computer. You do need to have fuse-sshfs installed on 
     the local computer that will perform the filesystem mount.

     The following works with Fedora Core 5. Only the users added to the fuse group can mount 
     external drives. Below the user chirico is being added to the group fuse.

             $ yum install fuse-sshfs
             $ usermod -a -G fuse chirico
     You'll need to reboot.

             $ shutdown -r now

     Next I'm going to mount the remote filesystem. This is done as user chirico
     on the local computer. I'm using root on the remote computer because I
     want to mount the complete drive.

             $ mkdir v0
             $ sshfs  v0
             $ cd v0
             $ ls -l
               bin   dev  home  lost+found     media  mnt  opt   q     sbin     srv  tmp  var
               boot  etc  lib   master_backup  misc   net  proc  root  selinux  sys  usr

     Now to unmount the filesystem

             $ fusermount -u /home/chirico/v0

     Yes, you can mount the filesystem on boot. Below shows an example entry for /etc/fstab, but
     this only allows users on the current system to view what is in /mnt/v0. 

                /mnt/v0 fuse defaults    0 0


TIP 220:

     OpenVPN - A full-featured SSL VPN solution. The following demonstrates
               a very simple OpenVPN setup between two Fedora Core 5 computers and

               As root install the package on both computers.

                  $ yum -y install openvpn

       Setup on

                  $ iptables -A INPUT -p udp -s  --dport 1194 -j ACCEPT
                  $ iptables -A INPUT -i tun+ -j ACCEPT
                   $ iptables -A INPUT -i tap+ -j ACCEPT
                  $ iptables -A FORWARD -i tap+ -j ACCEPT

             Note - make sure you have commented out the following line 
                    in /etc/sysconfig/iptables

                  # -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited

              Now, continuing with the commands that need to be executed, do one of the following
                  $ openvpn --remote  --dev tun1 --ifconfig --verb 9

              The above statement produces lots of debugging output. Once it's working you may want
              the following statement, without the --verb 9 option.

                  $ openvpn --remote  --dev tun1 --ifconfig

             After you finish the setup commands for immediately below, you'll be
             able to access as

       Setup on

                  $ iptables -A INPUT -p udp -s  --dport 1194 -j ACCEPT
                  $ iptables -A INPUT -i tun+ -j ACCEPT
                   $ iptables -A INPUT -i tap+ -j ACCEPT
                  $ iptables -A FORWARD -i tap+ -j ACCEPT

             Note - again, make sure you have commented out the following line 
                    in /etc/sysconfig/iptables

                  # -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited

              The openvpn commands are reversed from what is shown above.

                  $ openvpn --remote --dev tun1 --ifconfig --verb 9

                  $ openvpn --remote --dev tun1 --ifconfig

              Now you can access all services and ports, for services
              such as MySQL, secure Web, imap, etc. A quick test is nmap as follows:

                  $ nmap -A -T4

                  Starting Nmap 4.03 ( ) at 2006-05-20 13:54 EDT
                  Interesting ports on
                  (The 1671 ports scanned but not shown below are in state: closed)
                  PORT     STATE SERVICE VERSION
                  22/tcp   open  ssh     OpenSSH 4.3 (protocol 2.0)
                  111/tcp  open  rpcbind  2 (rpc #100000)
                  3306/tcp open  mysql   MySQL (unauthorized)

                  Nmap finished: 1 IP address (1 host up) scanned in 7.116 seconds

TIP 221:

     openssl - Some common commands.

             Finding the openssldir (Directory for OpenSSL files).

                  $ openssl version -a|grep OPENSSLDIR
                  OPENSSLDIR: "/etc/pki/tls"

              Connect to a secure SMTP server with STARTTLS.

                  $ openssl s_client -connect -starttls



TIP 222:

     Bash functions. This is easy, and I find it very useful to create bash functions
     for repeated commands. For example, suppose you want to create a quick bash function
     to cd to /var/log, tail messages and tail secure. You can create this function as follows:

                  [root@v5 log]# m()
                  > { cd /var/log
                  { cd /var/log
                  > tail messages
                  tail messages
                  > tail secure
                  tail secure
                  > }
      Above I'm typing m() and hitting return, then entering each command at the
      > prompt; the shell echoes each line back before the next prompt. After the
      closing }, the function is defined and can be run by typing m.
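
      For reuse across sessions, the same function can be written on one line and
      added to ~/.bashrc; this is a minimal sketch of the function above:

```shell
# One-line form of the m() function; add this to ~/.bashrc to keep it
# across sessions. && stops early if the cd fails.
m() { cd /var/log && tail messages && tail secure; }
```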

TIP 223:

     Stats on DNS Server. You can get stats on your DNS server. 

       The following works for BIND 9:

                  $ rndc stats

          On my system I see the output in "/var/named/chroot/var/named/data/named_stats.txt", which
          is an FC4 system. By the way, if you're using BIND 8, the command is "ndc stats", but that
          has a completely different output format.

       Format of the output

            +++ Statistics Dump +++ (1153791199)
            success 297621
            referral 32
            nxrrset 21953
            nxdomain 33742
            recursion 28243
            failure 54
            --- Statistics Dump --- (1153791199)

      The number (1153791199) can be converted with the date command.

                  $ date -d '1970-01-01 1153791199 sec'
                  Tue Jul 25 02:33:19 EDT 2006

       That's 1153791199 seconds since 1970-01-01 UTC. The result above is 4 hours
       fast relative to EDT because date interprets the 1970-01-01 base in local
       time; append "UTC" after "sec" to anchor the base correctly.
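
       GNU date also accepts an @epoch form, which sidesteps the base-date
       ambiguity entirely; -u prints the result in UTC:

```shell
# The @ prefix tells GNU date the argument is seconds since the epoch;
# -u prints the result in UTC rather than local time.
date -u -d @1153791199
# → Tue Jul 25 01:33:19 UTC 2006
```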

TIP 224:

     snmp - Simple Network Management Protocol. The following steps set up snmp on Fedora Core 5.

                  $ yum install net-snmp*

         Next add the following line in "/etc/snmp/snmpd.conf" at the bottom.

                  rocommunity pA33worD

         Start the snmp service.

                  $ /etc/init.d/snmpd restart

         Once started, from the command prompt, it's possible to get stats on the computer.

                  $ snmpwalk -v 1 -c pA33worD localhost system
                  $ snmpwalk -v 1 -c pA33worD localhost interface

                  $ snmpgetnext -v 1 -c pA33worD localhost sysUpTime
                  DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (26452) 0:04:24.52

          Note the Timeticks are in 100ths of a second. So the computer above has been running
          for 264.52 seconds.
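
          The conversion is just a division by 100, which awk can do from the
          command line; 26452 is the value from the snmpgetnext output above:

```shell
# SNMP Timeticks are hundredths of a second; divide by 100 to get seconds.
awk 'BEGIN { printf "%.2f seconds\n", 26452 / 100 }'
# → 264.52 seconds
```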

          Reference: TIP 225 shows how to use MRTG for gathering snmp stats.

TIP 225:

     MRTG - Multi Router Traffic Grapher.  

                  $ cfgmaker --output=/etc/mrtg/ \
                     --ifref=ip --global "workdir:/var/www/html/mrtg/stats"\


TIP 226:

     Back Trace - This is a method of getting a back trace for all processes on the system.
                  It assumes the following: a. the kernel was built with CONFIG_MAGIC_SYSRQ
                  enabled (which Fedora 5 kernels are) b. you have direct access to the
                  console keyboard.

             Step 1.

                  Ctl-Alt-F1 (This brings you to the text console)

             Step 2.

                   Press Alt-ScrollLock followed by Ctl-ScrollLock. You should see
                   a lot of text on the screen. Too fast to read, but don't worry, the text will
                   be in /var/log/messages at the end.

                  On my system the ScrollLock key is next to the NumLock key. 

TIP 227:

     Ext3 Tuning - One advantage of Ext3 over Ext2 is directory indexing, which improves file
                   access in directories containing large files or many files. Directory
                   indexing improves performance by using hashed binary trees to store
                   directory entries.

                   There are two ways to enable dir_index. First, find the device using the mount command:

                      $ mount

                      /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
                      proc on /proc type proc (rw)
                      sysfs on /sys type sysfs (rw)
                      devpts on /dev/pts type devpts (rw,gid=5,mode=620)
                      /dev/sda1 on /boot type ext3 (rw) <--- This is the one you want
                      tmpfs on /dev/shm type tmpfs (rw)
                      none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
                      sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
                      automount(pid2001) on /net type autofs (rw,fd=4,pgrp=2001,minproto=2,maxproto=4)                   

                  From the above command, the device used is /dev/sda1.  Using the tune2fs command,
                  directory indexing will only apply to directories created after running the 
                  command below.               

                       $ tune2fs -O dir_index /dev/sda1

                  However, if you want it to apply to all directories, use the e2fsck command as  
                  shown below:

                       $ e2fsck -D -f /dev/sda1

                  You'll need to bypass the warning message.

     Reference: "Tuning Journaling File Systems: A small amount of effort and time can yield big
                results", by Steve Best. Linux Magazine, September 10, 2006. This author also has
                a very good book titled "Linux Debugging and Performance Tuning."

TIP 228:

     NIC bonding - binding two or more NICs to one IP address to improve performance. The following
                instructions were done on Fedora Core 5.

         Step 1.
            Create the file ifcfg-bond0 with the IP address, netmask and gateway. Shown
            below is my file.

              $ cat /etc/sysconfig/network-scripts/ifcfg-bond0


         Step 2.

            Modify eth0, eth1 and eth2. Shown below are each one of my files. Note that
            you must comment out, or remove the ip address, netmask, gateway and hardware
            address from each one of these files, since settings should only come from
            the ifcfg-bond0 file above. I've chosen to comment out the lines, instead of 
             removing, should I decide to unbond my NICs sometime in the future.

              $ cat /etc/sysconfig/network-scripts/ifcfg-eth0

                  # Linksys Gigabit Network Adapter
                  # Settings for Bond

              $ cat /etc/sysconfig/network-scripts/ifcfg-eth1

                  # Linksys Gigabit Network Adapter
                  # Settings for bonding

              $ cat /etc/sysconfig/network-scripts/ifcfg-eth2

                  # Linksys Gigabit Network Adapter

         Step 3.

            Set the load parameters for bond0 bonding kernel module. Append the
            following lines to /etc/modprobe.conf 

                 # bonding commands 
                 alias bond0 bonding
                 options bond0 mode=balance-alb miimon=100

         Step 4.

            Load the bond driver module from the command prompt.

                 $ modprobe bonding

         Step 5.

             Restart the network, or restart the computer. Note I restarted the computer,
            since my NICs above had MAC assignments.

                 $ service network restart   # Or restart computer

             Take a look at the proc settings. 

                 $ cat /proc/net/bonding/bond0 
                 Ethernet Channel Bonding Driver: v3.0.3 (March 23, 2006)

                 Bonding Mode: adaptive load balancing
                 Primary Slave: None
                 Currently Active Slave: eth2
                 MII Status: up
                 MII Polling Interval (ms): 100
                 Up Delay (ms): 0
                 Down Delay (ms): 0

                 Slave Interface: eth2
                 MII Status: up
                 Link Failure Count: 0
                 Permanent HW addr: 00:13:72:80:62:f0


           Documentation for bonding can also be found in the kernel source.
TIP 229:

     /etc/nsswitch.conf - System Databases and Name Service Switch configuration file. 

         This file determines lookup order of services. For example, to match a name
         to an IP address, an entry can be put into the /etc/hosts file. Or a DNS query
          can be made. What's the order?  Normally, it's the entry in the /etc/hosts file,
          because /etc/nsswitch.conf contains the following setting:
            hosts:      files dns

         See man nsswitch.conf for more settings.
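
          The getent utility performs lookups through the nsswitch order, so it's
          a quick way to see which source answers (getent ships with glibc, so it
          should already be present on a Fedora/RedHat system):

```shell
# getent consults the sources listed in /etc/nsswitch.conf, in order;
# for "hosts" that normally means /etc/hosts first, then DNS.
getent hosts localhost
```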

TIP 230:

     Finding DST settings on the live system. In 2007 Daylight Saving Time was extended in the United 
     States, Canada, and Bermuda. Before this change we adjusted the clocks on the last Sunday in 
     October - Not anymore. We now change it on the first Sunday in November.

            $ zdump -v EST5EDT |grep '2007'

            EST5EDT  Sun Mar 11 06:59:59 2007 UTC = Sun Mar 11 01:59:59 2007 EST isdst=0 gmtoff=-18000
            EST5EDT  Sun Mar 11 07:00:00 2007 UTC = Sun Mar 11 03:00:00 2007 EDT isdst=1 gmtoff=-14400
            EST5EDT  Sun Nov  4 05:59:59 2007 UTC = Sun Nov  4 01:59:59 2007 EDT isdst=1 gmtoff=-14400
            EST5EDT  Sun Nov  4 06:00:00 2007 UTC = Sun Nov  4 01:00:00 2007 EST isdst=0 gmtoff=-18000

    Correct settings for EDT are shown above. Note, the months Mar and Nov.

    You can also run the same command by location.

           $ zdump -v /usr/share/zoneinfo/America/New_York|grep '2007'

    Note: This time conversion file can be created manually. For instructions on how to perform
          this task, execute the following command.

           $ man zic

          zic is the time zone compiler.

TIP 231:

     Qt - Compiling Qt 4 programs statically to run on remote systems that do
          not have Qt 4 libraries installed. You actually download the Qt 4 source
          and build it separately for static linking.

       Step 1 - Download Qt 4.

          You will download a separate version of Qt 4. Yes, even if you have
          Qt 4 installed on your system, you'll want to download another 
          version to statically compile your programs. I performed the
          following steps on my computer:

               $ mkdir -p /home/src/qt
               $ wget
               $ cd /home/src/qt
               $ tar -xzf qt-x11-opensource-src-4.2.2.tar.gz

          Note, make sure you get the latest version of Qt. When I wrote this it
          was 4.2.2. Check for updates.

       Step 2 - Compile Qt for static mode

          The next step is to compile Qt in static mode.

               $ cd /home/src/qt/qt-x11-opensource-src-4.2.2
               $ ./configure -static -prefix /home/src/qt/qt-x11-opensource-src-4.2.2
               $ make sub-src

          At this point Qt 4 is installed in static mode. 

       Step 3 - Set PATH

          Now set the PATH to reference this version.

               $ PATH=/home/src/qt/qt-x11-opensource-src-4.2.2/bin:$PATH
               $ export PATH

       Step 4 - Compile Your Source

          My program source is located in /home/chirico/widgetpaint

               $ cd /home/chirico/widgetpaint
               $ qmake -project
               $ qmake -config release
               $ make

TIP 232:

     SELinux - FC6 quick fix for problems, using system-config-securitylevel to
               fix simple problems. (Also see TIP 238).

               $ ssh -Y user@servertofix
               $ system-config-securitylevel
            You do not have to ssh into the computer as root. As long as X is
            running (run level 5, "init 5"), you can run the system-config command
            above and it will ask you for the root password. 

           Reference (TIP 238).

TIP 233:

     Mutt - tagging multiple messages and moving them to a different folder.

         If you want to tag multiple messages with mutt, use the capital T, when
         in mutt. 

                ~A  (To tag all messages. Note, enter the tilde "~" without quotes)
               ;s  (After entering ;s, you'll be asked where to save the message)

          From here you can create a new folder. If you're using IMAP mail boxes, then
          use C to create a mailbox.

          To purge messages marked for deletion without exiting mutt, enter "$", without the quotes.


TIP 234:

     Mutt - color coding message in mutt.

         The following is written in the .muttrc file. 

           color index brightblue default Poker
           color body brightyellow default Error

          Note, the first line will color blue all index entries containing
          the word Poker. The second operates on the body of the message,
          coloring yellow any text matching Error.
TIP 235:

     cat - header, stdin, and footer. (Working with /dev/fd/0 or -)

         If you have data from a command that you want preceded by
         the contents of a  header file and followed by data in 
         a footer file, then, the following command may help. 

          $ w|cat header /dev/fd/0 footer

         Above the output of the "w" command follows the contents of
         the header file. Note "/dev/fd/0" refers to stdin. Yes, you
         could use "-" in its place in this situation. However, if 
          "-" is used as the first argument, it will be interpreted as
          a command line option, whereas "/dev/fd/0" would not.
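
          A quick way to try this; the header and footer file names below are
          just illustrative:

```shell
# Build small header and footer files, then sandwich piped input
# between them; /dev/fd/0 stands for stdin.
printf 'BEGIN\n' > header
printf 'END\n' > footer
echo "body line" | cat header /dev/fd/0 footer
# → BEGIN, body line, END on three lines
```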

TIP 236:

     biosdecode - Querying the Bios from the command prompt.

          This command can be executed as follows as root:

         $ biosdecode

         SYSID present.
            Revision: 0
            Structure Table Address: 0x000F0411
            Number Of Structures: 1
         SMBIOS 2.3 present.
            Structure Table Length: 2570 bytes

TIP 237:

     emacs - commands in your ~/.emacs file to disable splash screen startup

            ;;disable splash screen and startup message
            (setq inhibit-startup-message t)
            (setq initial-scratch-message nil)

TIP 238:

      SELinux - fixing SELinux problems in the audit.log since the
             last reboot; and building a policy module to permit
             the denied operations.

          These instructions have only been tested on Fedora Core 7. The
          first step is to install checkpolicy, and audit. Normally audit
          is already installed.

            $ yum install checkpolicy
            $ yum install audit

            $ mkdir -p /root/selinux && cd /root/selinux
            $ audit2allow -M moduleName -l -i /var/log/audit/audit.log

            $ cp moduleName.pp /usr/share/selinux/targeted/.
            $ cd /usr/share/selinux/targeted/

            $ semodule -i moduleName.pp

         Note: You may need to load the module from /usr/share/selinux/targeted
         if you get the following error: "semodule:  Could not read file". This
         problem seems to be version dependent.

         Next, check to make sure the module is loaded.

            $ semodule -l

         Note, you may want to change the name "moduleName" to something more
         descriptive. You definitely need to change the name if you run this
         a second time, since each time this is run old changes are overwritten.

         It is also possible to do the steps independently. In fact, you could
         build the .te file by hand. Here's an example.

           [Need to finish - see banssh project]



         If you really get stuck, you may need to relabel all files on your system.
         First edit /etc/selinux/config and set to permissive mode. Next run the following
            $ touch /.autorelabel

         The following is an excellent reference for creating your own policies:

TIP 239:

      Yum Database Fix-up - you may have done a yum update, then, inadvertently 
      killed it.  It may be necessary to rebuild the database.

             $ rm /var/lib/rpm/__db*
             $ rpm --rebuilddb
             $ yum clean all

     Note, you may also run into the situation where you need to reinstall a package
     directly. The following example shows how to reinstall the sysstat package on
     fedora 8.

             $ wget
             $ rpm -ivh --replacepkgs sysstat-7.0.4-3.fc8.i386.rpm

TIP 240:

     Convert Epoch Seconds to the Current Time. Note, some programs like Nagios list 
     epoch seconds. Here's a way to do the conversion.

             $ date -d "1970-01-01 1184521826 sec GMT"
             Sun Jul 15 13:50:26 EDT 2007

      The above command converts epoch seconds 1184521826 to local time.

TIP 241:

     vmstat - For disk IO subsystem total statistics since last boot use the -D option

               $ vmstat -D
                  27 disks 
                   2 partitions 
             2766536 total reads
              526906 merged reads
            61184034 read sectors
            21233780 milli reading
             8849711 writes
             3719803 merged writes
           100480938 written sectors
           181253052 milli writing
                   0 inprogress IO
               12854 milli spent IO

      The last stat shows 12854 ms spent on IO. 
      Merged reads and merged writes happen when the kernel tries to 
      combine requests for contiguous regions on the disk for a performance
      gain.
     If you want more detailed totals, use the -d option.  

     An important note, vmstat can provide totals on disk performance whereas
     iostat provides data rate of change during the sample. 


TIP 242:

     htop - This is an excellent substitute for top. This program is easier
     to read, with better color coded output.

TIP 243:

     ls - hints. Although the -d option is often used to find directories, it
     can also be used with the wildcard ".*" to list all files beginning with a dot.

        $ ls -d .*
        . .bash_logout .config .eggcups .qt .redhat .sqlite_history
        .. .bash_history .bashrc .eclipse .emacs 

TIP 244:

     aureport - Getting a nice SELinux audit report. Options include [today, this-month,
                this-week ..etc]. And, if you get anything in the avc row, then, you
                can issue the --avc -i option.

       $ aureport --start today

        Summary Report
        Range of time in logs: 10/12/2007 10:09:05.572 - 10/24/2007 14:20:01.242
        Selected time for report: 10/24/2007 00:00:01 - 10/24/2007 14:20:01.242
        Number of changes in configuration: 0
        Number of changes to accounts, groups, or roles: 0
        Number of logins: 0
        Number of failed logins: 0
        Number of authentications: 1
        Number of failed authentications: 0
        Number of users: 1
        Number of terminals: 2
        Number of host names: 1
        Number of executables: 3
        Number of files: 0
        Number of AVC's: 0
        Number of MAC events: 0
        Number of failed syscalls: 0
        Number of anomaly events: 0
        Number of responses to anomaly events: 0
        Number of crypto events: 0
        Number of process IDs: 105
        Number of events: 111

TIP 245:

     Postfix - Sender Dependent Relay Host Maps. You would use this
             type of setup with Google Apps, where you're supporting
             local Linux email accounts with your domain MX record
             pointing to Google.

             sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relayhost
             smtp_sasl_auth_enable = yes
             smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
             smtp_sasl_security_options = noanonymous
             smtp_sender_dependent_authentication = yes

             #format: sender-address         relayhost

             #email                    email:password


TIP 246:

     Finding the source from an rpm file, using the audit package as an example.

        $ rpm -qi audit

        Name        : audit                        Relocations: (not relocatable)
        Version     : 1.5.6                             Vendor: Fedora Project
        Release     : 2.fc7                         Build Date: Mon 03 Sep 2007 11:42:01 AM EDT
        Install Date: Fri 12 Oct 2007 10:48:28 AM EDT      Build Host:
        Group       : System Environment/Daemons    Source RPM: audit-1.5.6-2.fc7.src.rpm
        Size        : 586509                           License: GPL
        Signature   : DSA/SHA1, Thu 06 Sep 2007 04:42:18 PM EDT, Key ID b44269d04f2a6fd2
        Packager    : Fedora Project
        URL         :
        Summary     : User space tools for 2.6 kernel auditing
        Description :
        The audit package contains the user space utilities for
        storing and searching the audit records generate by
        the audit subsystem in the Linux 2.6 kernel.        

     The above information gives you the source package name audit-1.5.6-2.fc7.src.rpm.
     Next, to find out your release version:

        $ cat /etc/redhat-release
        Fedora release 7 (Moonshine)

     To get the download location

        $ grep 'SRPMS' /etc/yum.repos.d/fedora-updates.repo 

    So, to get our file, we'd use the following command:

        $ wget

     Note - consider installing yum-utils and rpmdevtools, especially if you plan to rebuild
          the kernel from source.

        $ yum install yum-utils rpmdevtools

    You may also want to check for source packages in the following directory:


    To get the source of a package from yum, use yumdownloader. For example
    if you wanted to get the source from the yum-updatesd package, use the
    following command:

       $ yumdownloader --source yum-updatesd

    This will put the file yum-updatesd-0.9-1.fc9.src.rpm in the current directory.

TIP 247:

     Kernel source - pulling down the latest version of the
      kernel. This is Torvalds' daily snapshot.

        $ git clone git:// linux-2.6

      Once downloaded, use the following command to get updates:

        $ git pull

TIP 248:

     syscalls - want to know all the system calls available?

        $ man syscalls


TIP 249:
     Rute User's Tutorial and Exposition (Version 1.0.0) by Paul Sheer. This
     has a lot of Linux and programming tips:


TIP 250:
     dmidecode - Get serial numbers, PCI slots, and other system
     information that's normally stored in your computer's BIOS.
     Yes, you can do this from the command prompt as root:

           $ dmidecode


TIP 251:

     whatmask - This is a subnet mask notation conversion tool. Or a
     Tool for calculating available host address ranges with CIDR
     notation input.

     For example, suppose you want to calculate or confirm how
     to construct two equal subnets off of the 192.168.1 network,
     including netmask, and the first and last usable IP addresses.

        $ whatmask

                   IP Entered = ..................:
                   CIDR = ........................: /25
                   Netmask = .....................:
                   Netmask (hex) = ...............: 0xffffff80
                   Wildcard Bits = ...............:
                   Network Address = .............:
                   Broadcast Address = ...........:
                   Usable IP Addresses = .........: 126
                   First Usable IP Address = .....:
                   Last Usable IP Address = ......:

        $ whatmask

                   IP Entered = ..................:
                   CIDR = ........................: /25
                   Netmask = .....................:
                   Netmask (hex) = ...............: 0xffffff80
                   Wildcard Bits = ...............:
                   Network Address = .............:
                   Broadcast Address = ...........:
                   Usable IP Addresses = .........: 126
                   First Usable IP Address = .....:
                   Last Usable IP Address = ......:
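
     whatmask's usable-address figure can be cross-checked with plain shell
     arithmetic (this sketch is not part of whatmask):

```shell
# Usable hosts in a prefix: 2^(32 - prefix) minus the network and
# broadcast addresses. A /25 leaves 126 usable addresses.
prefix=25
hosts=$(( (1 << (32 - prefix)) - 2 ))
echo "/$prefix gives $hosts usable addresses"
# → /25 gives 126 usable addresses
```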

TIP 252:

     /etc/sysconfig/init  This file has settings for the interactive prompt during
              run level initialization (run levels are set in /etc/inittab).
              So, if you want to be prompted before loading everything from sshd, ntp,
              etc., then change the prompt setting below to yes.

              # Set to anything other than 'no' to allow hotkey interactive startup...

TIP 253:

     Need to change the localtime on your computer? Say you want it to be
     US Eastern. Just copy the time file (This assumes Fedora or RedHat).

        $ cp /usr/share/zoneinfo/US/Eastern /etc/localtime

TIP 254:

     You use putty from Windows; but, when you try to run tools like
     lokkit, mc, or any ncurses menu on your Linux box, the display is
     hard to read. To fix this, from Putty, select the following
     options (Window/Translation). Now under the box titled "Received
     data assumed to be in which character set:" choose UTF-8.

TIP 255:

     eth0, eth1, or eth10? If you're stuck and cannot figure out what device
     name your NIC is registering under after the kernel has loaded it at boot,
     then take a look under the following:

        [root@soekris00 network-scripts]# ls /sys/class/net/
        eth10  eth11  eth12  eth13  eth14  eth15  eth16  eth9  gre0  lo  tunl0

     Okay, but you want to start at eth0. In fact you can control which NIC
     starts at which device.  Here's how.

        $ udevinfo -a -p /sys/class/net/eth10
        looking at device '/class/net/eth10':

       Take the information above and create the following file


      And populate this file with the following information:


TIP 256:

     Compiling a kernel on a 64 bit computer for a 32 bit computer.
     I ran into this when building a custom kernel for the soekris device,
     where I needed to compile the kernel on my fast 64 bit computer.

     Use the ARCH=<param> option on both the menuconfig and bzImage targets.

          make ARCH=i386  menuconfig

     Note, even when filling in the .config parameters, you need to use
     the ARCH command above if you're compiling on a 64 bit computer
     for a 32 bit system.

          make ARCH=i386  bzImage
          make ARCH=i386  modules

TIP 257:

     Automatically loading a kernel module during boot. Copy the module
     under the /lib/modules/$(uname -r)/ directory.

         cp yourmodule.ko /lib/modules/$(uname -r)/.
         depmod -a

     Note depmod will register all modules under /lib/modules/$(uname -r)/
     provided the module has a newer timestamp. So if something isn't getting
     loaded, you may want to touch the file.

TIP 258:

     Generate a uuid: uuidgen - command-line utility to create a new UUID value

            $ uuidgen

     Each time this command is run a new uuid is generated.

TIP 259:

     Filesystem Hierarchy Standard - a reference to how a standard
     Unix filesystem is organized. This is needed reading for package
     maintainers.

TIP 260:

     Emacs - you have a file where you want to replace the returns
     hidden in the document with some other combination.

     For example, suppose you have the following text:

          This is a sample 

     And you want to convert it to the following

         This is a sample\

     Note, you're adding \ before the returns.

     You can do this in emacs as follows. The hidden return
     is ctl-q ctl-j.  So 

          esc-x replace-string ctl-q ctl-j <return>\ctl-q ctl-j

     This comes in handy for C string assignments.
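
     If you'd rather do the same thing non-interactively, sed can append the
     backslash to every line; this is an alternative to the emacs keystrokes
     above, not part of the original tip:

```shell
# Append a backslash to the end of every line, as replace-string does above.
printf 'This is a sample\nsecond line\n' | sed 's/$/\\/'
# → This is a sample\
#   second line\
```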


TIP 261:

     Changing Postfix to be the default on a Fedora installation.

       Step 1:

            $ /sbin/service sendmail stop
            $ chkconfig sendmail off
            $ alternatives --config mta

          You'll need to follow the instructions after executing the alternatives command.

       Step 2:

            $ /sbin/service postfix start

            $ /sbin/chkconfig --list postfix

TIP 262:    

     Commands for creating a swap file.

       Step 1: 

          Create the file. 

           This file will be 1024*524288 bytes (512 MB).  Generally it is a good
           idea to create the swap file twice as big as the amount of
           RAM installed, if you have under 1 GB of RAM.
           However, if you have larger amounts of RAM, it's best to run
           your own tests with free to see how you're using the swap.

              $ dd if=/dev/zero of=/swapfile0 bs=1024 count=524288

       Step 2: 

          Setup the swap area on the file you created.

             $ mkswap /swapfile0

       Step 3: 

          Enable the file for swapping

             $ swapon /swapfile0

       Step 4: 

          Permanently enable the swap file on boot. 
          Add the following lines to /etc/fstab.

             /swapfile0  swap  swap   defaults   0 0

       Step 5: 

           Check that the swap file is working with the free command. Also,
           reboot to make sure the swap file works on restart and that
           /etc/fstab was correctly configured.

           [root@soekris30 ~]# free -m
                             total       used       free     shared    buffers     cached
                Mem:           502        173        328          0         11        134
                -/+ buffers/cache:         27        474
                Swap:          511          0        511
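
      The steps above can be collected into one script. This is a sketch: the
      file size here is cut down to 1 MiB so it can be tried safely without
      root, and the root-only steps are left commented; the real tip uses
      count=524288 and the path /swapfile0.

```shell
# Step 1: create the file of zeros (demo size: 1024 blocks * 1024 bytes = 1 MiB;
# the tip itself uses count=524288 for 512 MB at /swapfile0).
dd if=/dev/zero of=./swapfile0 bs=1024 count=1024 2>/dev/null

# Steps 2-4 need root, so they are shown but not run here:
#   mkswap /swapfile0        # write the swap signature
#   swapon /swapfile0        # enable it
#   echo '/swapfile0  swap  swap   defaults   0 0' >> /etc/fstab

ls -l ./swapfile0
```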

TIP 263:    

      Commands for creating a bridge on your Linux box. Or basically
      this turns your Linux box into a router where you just plug in
      devices. This example sets the IP address of the bridge. Since
      this box is also a server, you'll need to set up the default
      gateway, which only affects this box.
             $ brctl addbr br0

             $ ifconfig eth0 down
             $ ifconfig eth1 down
              $ ifconfig eth2 down

              $ brctl addif br0 eth0
              $ brctl addif br0 eth1
              $ brctl addif br0 eth2

             $ ifconfig br0

             $ ifconfig eth0 up
             $ ifconfig eth1 up
             $ ifconfig eth2 up

             $ ifconfig br0 up

             $ route add default gw br0

     To find out if the bridge is working, use the netstat command.

             $ netstat -i
      Kernel Interface table
      Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
      br0        1500   0   105139      0      0      0    78613      0      0      0 BMRU
      eth0       1500   0   923738     13    370     13   737339      3      0      3 BMRU
      eth1       1500   0   143691      0      0      0   166542      4      0      4 BMRU
      eth2       1500   0   134115      0      0      0   220353      4      0      4 BMRU

      You might want to change your firewall settings to allow all
      traffic, which is what the first command below does by
      flushing any previous firewall settings. The next commands block
      the ports into this device. Now, my Linux box here is the Soekris
      net5501, so I'm blocking port 111 (both udp and tcp) to this
      device.

             $ iptables -F
             $ iptables -A INPUT -i br0 -p tcp  --dport 111 -d -m physdev  --physdev-is-in -j DROP
             $ iptables -A INPUT -i br0 -p udp  --dport 111 -d -m physdev  --physdev-is-in -j DROP

    Now you may want to block certain traffic going through this
    router. The example below prevents the device attached on eth2
    from sending packets to eth1 on port 111.
             $ iptables -A FORWARD -i br0 -p tcp --dport 111 -m physdev --physdev-in eth2 --physdev-out eth1 -j DROP

    Okay, so the above command blocks port 111 from eth2 to eth1. If
    you want to block all traffic from a device attached to this
    router, you may want to consider using ebtables, which operates
    at layer 2 (a lower level than iptables).

             $ ebtables -A FORWARD -s 00:0b:db:c3:39:24 -j DROP

TIP 264:    

     Traffic shaping - using the tc command to control network traffic. 

     The tc command works particularly well with bridging. Suppose we
     wanted to slow down traffic on eth1. First, let's get some
     readings before making changes.

       $ ping soekris10
       PING ( 56(84) bytes of data.
       64 bytes from ( icmp_seq=1 ttl=64 time=1.89 ms
       64 bytes from ( icmp_seq=2 ttl=64 time=0.445 ms
       64 bytes from ( icmp_seq=3 ttl=64 time=0.479 ms
       64 bytes from ( icmp_seq=4 ttl=64 time=0.458 ms

     Now the traffic is going to be delayed by 100 ms, with the
     following command. Note that soekris10 is connected to eth1.

       $ tc qdisc add dev eth1 root netem delay 100ms

     After this change, note the increase of roughly 100 ms in the
     ping times.
      $ ping soekris10
      PING ( 56(84) bytes of data.
      64 bytes from ( icmp_seq=1 ttl=64 time=203 ms
      64 bytes from ( icmp_seq=2 ttl=64 time=101 ms
      64 bytes from ( icmp_seq=3 ttl=64 time=101 ms

    You may want to change this setting back to what it was, which
    can be done with the following command:

      $ tc qdisc change dev eth1 root netem delay 0ms


TIP 265:    

     Consolidate duplicate files via hardlinks. hardlink is a package
     that automatically walks through files, on the same filesystem,
     looking for duplicates. When a duplicate is found, one file is
     chosen as the master and the other duplicates are linked to it.

     $ mkdir 1
     $ mkdir 2
     $ echo "stuff here" >1/file1
     $ cp 1/file1 2/.

     Now, you have two files that are the same; however, the timestamps
     do differ. To see what hardlink would find, use the -ncvv options.
     Note the (-n) option prevents changes from being made.

     $ hardlink  -ncvv .

     Directories 3
     Objects 5
     IFREG 2
     Mmaps 1
     Comparisons 1
     Would link 1
     Would save 4096

     Again, no changes have actually been made yet. We can verify this by looking at
     the inodes for the file.

     $ ls -i 1 2
     1:
     12738583 file1

     2:
     12738584 file1

     So 1/file1 has inode 12738583, which is different from 2/file1,
     which has 12738584.

     Okay, let's run the program for real, by taking out the -n option.

     $ hardlink  -cvv .
     Linked ./1/file1 to ./2/file1, saved 11

     Directories 3
     Objects 5
     IFREG 2
     Mmaps 1
     Comparisons 1
     Linked 1
     saved 4096

     Now that shows that it ran, and to really confirm, let's look at
     the inodes.

     $ ls -i 1 2
     1:
     12738583 file1

     2:
     12738583 file1

     Okay. They are the same. Now if this were a very large file, you'd
     see a decrease in disk usage, since both names point to the
     contents of one file.

     Interesting note: if you edit the file with emacs, it will not
     save the changes in both places. Because the default settings of
     emacs save the contents into a new file, you'll only get the
     changes in the file you're editing.

     If you had made a soft link (ln -s file1a file2a) instead, then
     changing one file with emacs would change the other ... just an
     important point to note.
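     The linking that hardlink performs can be reproduced by hand with
     ln, which makes the shared inode easy to verify. A minimal sketch
     (the demo directory name here is made up):

```shell
# Recreate the two identical files from above.
mkdir -p demo/1 demo/2
echo "stuff here" > demo/1/file1
cp demo/1/file1 demo/2/file1

# Replace the copy with a hard link, as hardlink -cvv would.
ln -f demo/1/file1 demo/2/file1

# Both names now share one inode; the data is stored only once.
stat -c %i demo/1/file1
stat -c %i demo/2/file1

rm -rf demo
```

     After the ln, both stat commands print the same inode number,
     matching what ls -i showed after running hardlink for real.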

TIP 266:    

     dstat - versatile tool for generating system resource statistics.


     An alternative to vmstat, with the advantage of comparing multiple
     stats side by side.

     Below are some useful commands:
        Total system output displayed and collected in the file fileout.

           dstat --time -av --output fileout

TIP 267:    

     Compiling C++ programs with the boost library.

           g++ -lboost_regex

     The example above links the regex library. There are over 70 such
     libraries. They can be linked using -lboost_libname, where
     libname is the name of the library.

TIP 268:    

     Hardening Red Hat Enterprise Linux 5. The following is a good talk
     by Steve Grubb.

     If that link does not exist, I have a copy of the pdf at the following:

     Also check out some of the other presentations from the 2008 Red Hat Summit.

TIP 269:    

     iotop - command to monitor I/O usage by processes or threads. This reading 
     comes directly from the kernel and requires the kernel to be compiled with 
     the CONFIG_TASKSTATS and CONFIG_TASK_IO_ACCOUNTING options. This is the case
     with the latest Fedora distributions.

TIP 270:    

     Process substitution - a way to combine multiple command pipes
     into a single command line. It's a way of avoiding tmp files.
     Here's a simple example. You have two files. You want the
     contents sorted, and you only want to list the lines that differ
     between the files. However, you don't want any temp files created
     that will later need to be cleaned up. Plus, you want it all done
     on one command line.

         $  cat a

         $ cat b

         $ comm -3 <(sort a | uniq) <(sort b | uniq)
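     Since the contents of a and b were not shown above, here is the
     same one-liner with made-up sample files, so the output can be
     seen:

```shell
# Sample input files (contents invented for illustration).
printf 'pear\napple\napple\n' > a
printf 'apple\ncherry\n'      > b

# -3 suppresses lines common to both files; sort | uniq feeds each
# side to comm in the sorted, de-duplicated form it requires.
comm -3 <(sort a | uniq) <(sort b | uniq)

rm -f a b
```

     Here comm prints cherry (tab-indented, unique to b) and pear
     (unique to a); the shared apple line is suppressed.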

TIP 271:    

     Common subversion commands - the most common everyday commands.

     The following is done with the real project banssh on Google Code.

     1. Checkout the latest version of the project. This will store 
        the project in the directory banssh-read-only. Just change
        this name, if you want something else.

        $ svn checkout banssh-read-only

     2. Revert your working copy back to version N (say version 334). You
        can pick any valid version number.

        $ svn update -r 334

     3. Get the latest update.

         $ svn update

     4. This requires write access, but suppose you want to add a tag for your
        release. This example will add release banssh-0.0.3

         $ svn copy \
             -m "Banssh release 0.0.3"

     5. Delete a file or files. Below, deleting a file.

         $ svn remove

     6. Getting a list of files in the repository. This example gets
        a list of files beginning with H.

         $ svn list H*

     7. Committing changes with a message.

         $ svn commit -m 'Sample msg'

     8. Adding files. This can also be applied to directories.

         $ svn add

     9. Suppose you've made changes to your file, but you haven't
        committed. You want to see what these changes are.

         $ svn diff

       Note you can also see all changes relative to a particular revision:

         $ svn diff -r 34 

     10. To list the log of commits. You may want to pipe the
       result to a file.
         $ svn log

     11. Need general information about the repository.

         $ svn info

  Path: .
  Repository Root:
  Repository UUID: 554197c9-0241-0830-1070-ccc24ce314de
  Revision: 427
  Node Kind: directory
  Schedule: normal
  Last Changed Author: mchirico
  Last Changed Rev: 426
  Last Changed Date: 2009-02-03 19:48:24 -0500 (Tue, 03 Feb 2009)

     12. Need more commands?

         $ svn help

TIP 272:    

     Difference between .bash_profile and .bashrc

             .bash_profile - commands inside this file only get executed by the login shell.

             .bashrc - commands inside this file only get executed when you run a subshell

             .bash_logout - only gets executed on logout, so it's good for deleting tmp files
                            or clearing history.

      Of course, it's very likely that commands from .bashrc will also get executed on
      login, since often .bashrc is called within .bash_profile. Look for the following:

               # Code in .bash_profile that calls .bashrc
               # Get the aliases and functions
               if [ -f ~/.bashrc ]; then
                    . ~/.bashrc
               fi
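      A quick way to see which category a given shell falls into is the
      login_shell shell option; this sketch shows why `bash -c` reads
      neither file while `bash -l` reads .bash_profile:

```shell
# A plain `bash -c` shell is neither a login shell nor interactive,
# so by default it reads none of these startup files.
bash -c 'shopt -q login_shell && echo login || echo non-login'

# `bash -l` forces a login shell, which does read .bash_profile.
bash -l -c 'shopt -q login_shell && echo login || echo non-login'
```

      The first command prints non-login and the second prints login.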

TIP 273:    

     Port forwarding with ssh and scp. Note the lowercase "p" for ssh and the 
     uppercase "P" for scp. 

     In the following example server2 is only accessible via server1. You are
     currently on a third computer, which can only reach server2 via server1.

       Step 1. 

           Set up the ssh connection. Connect to the first server, server1,
           but put the second server, server2, after the -L.

              ssh user1@server1 -L 22000:server2:22

       Step 2.        

           Now, in a new terminal window on your current computer, log in
           to port 22000. Note, you're running this command on your local
           computer, and it will go through server1 to log in to server2.

              ssh user2@localhost -p 22000

       Step 3.

           The following is an example of copying a file.

              scp -P 22000 file1 user2@localhost:

TIP 274:    

     Generating computer names, with preceding zeros, using the seq command.

     Suppose you have 1000 or so computers numbered as follows:


     And you need a quick way of generating the list of names, with
     numbers below 100 preceded by one or two zeros. Don't worry, there
     is a one-liner to do this.

            $ seq -f "server%03g" 999
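     Checking the two ends of the generated list confirms the padding:
     %03g zero-pads every number to three digits, and seq counts from 1
     by default.

```shell
# First few names and the last name generated by the one-liner.
seq -f "server%03g" 999 | head -3
seq -f "server%03g" 999 | tail -1
```

     This prints server001, server002, server003, and then server999.
     GNU seq's -w option (equal-width padding) gets a similar effect
     without a format string.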

TIP 275:    

     How to increase the I/O priority of a process.

          $ ionice -c1 -n0 -p <PID>


           "-c1"  signifies Real time scheduling

           "-n0"  is the highest priority. Compare to "-n4"

           "-p <PID>"  is the process ID   

     If you just want to see the I/O scheduling priority of a process,
     use the following command:

         $ ionice -p <PID>

TIP 276:    

     Extracting the contents of a cpio file. 

     The following command will extract the contents of a cpio file.

         $ cat soa_linux_x86_101310_disk1.cpio |cpio -idmv

TIP 277:    

     bonnie++  measuring disk performance. 

     The following program will aggressively measure disk performance.


     You can run this program with the following parameters:

           bonnie++ -n 0 -u 0 -r <physical RAM> -s <20x physical ram> -f -b -d <mounted array>

     Below is an example run as root:

           bonnie++ -n 0 -u 0 -r 512 -s 20480 -f -b -d .

TIP 278:    

     Wireless with Fedora - Broadcom Corporation BCM4312 802.11b/g

     You may have a problem getting your wireless card working with Linux. It's
     possible you may need to download and compile the driver.

     I did the following for the 64 bit driver.

       tar -xzf hybrid-portsrc-x86_64-v5_10_91_9.tar.gz
       make -C /lib/modules/$(uname -r)/build  M=`pwd`
       sudo cp wl.ko /lib/modules/$(uname -r)/.
       sudo depmod
       sudo modprobe wl

TIP 279:    

     Making the terminal window text larger or smaller. For example, if you're
     showing someone code, you may want to make the gnome-terminal text larger.

        ctrl-shift-+  (This makes it larger. Control, shift, and plus at the same time)

        ctrl--        (That is a control minus, to make it smaller)

TIP 280:    

     If you approach a terminal where someone is logged in, you can automatically
     log them out with the following command:




     Simple open command that restarts the close if a signal
     occurs.  Also note, the POSIX standards committee decided
     all new functions would not use errno and would instead
     directly return the error number from the function.

     A lot of functions return -1 on an error condition and then
     set errno to the value of the error.  This will still work
     for all the well-known functions; but it's changing.

     /* start of code open.c
        Compile: gcc -o open open.c

        Reference (Look for simple_but_common_x.x.x.tar.gz):
     */

     #include <stdio.h>
     #include <unistd.h>
     #include <sys/types.h>
     #include <sys/stat.h>
     #include <fcntl.h>
     #include <stdlib.h>
     #include <string.h>             /* for strerror(int errno) */
     #include <errno.h>

     #define BUFLEN 100

     int
     main (void)
     {
       int fp, error;
       char buf[BUFLEN + 1];

       if ((fp = open ("data", O_RDWR | O_CREAT, 0600)) == -1) {
           fprintf (stderr, "Can't open data: %s\n", strerror (errno));
           return 1;
       }

       snprintf (buf, BUFLEN, "123");
       write (fp, buf, strlen (buf));

       /* Restart close should a signal occur */
       while ((( error = close (fp) ) == -1) && (errno == EINTR));
       if (error == -1)
         perror ("Failed to close the file\n");

       return 0;
     }
     /* end of open.c */


     Example of setting the effective UID on a file

     /*  start of code uid_open.c
       gcc uid_open.c -o uid_open
       chown root.chirico uid_open
       chmod u+s uid_open

       Now you can run this as chirico and write to the
       root directory
     */

     #include <stdio.h>
     #include <stdlib.h>
     #include <sys/types.h>
     #include <sys/stat.h>
     #include <fcntl.h>
     #include <string.h>
     #include <unistd.h>

     int main()
     {
             int fd;

             if ((fd = open("/root/datajunk", O_RDWR | O_CREAT, 0600)) == -1) {
                     fprintf(stderr, "Can't open file \n");
                     return 1;
             }

             write(fd, "0123456", strlen("0123456"));
             close(fd);
             return 0;
     }
     /* end of code */


     Writing a C http post.

     For downloads reference:


     Writing a 2.6.x Kernel Module:

       Look for the latest version of "procreadwrite".  This is a 2.6 kernel
       module that demonstrates how to create /proc entries and write directly
       to user-land via tty.  It's updated to reflect the replacement of
       "current->tty" with "current->signal->tty".


     Creating a filename with '\n'.  This goes with (TIP 71)

         /**** topen.c ***********************************************************
           Filenames can be created with any character except the null character
           and a slash.

           This example creates a file with returns '\n\n'

           There's a way to remove a file by inode:

               $ ls -libt *

           And, once you know the inode

               $ find . -inum <num> -exec mv '{}' goodstuff \;

               $ find . -inum <num> -exec rm '{}' \;

               $ find . -inum <num> -exec cat '{}' \;

           Compile:

             gcc -o topen -Wall -W -O2 -s -pipe  topen.c
         *************************************************************************/

         #include <stdio.h>
         #include <unistd.h>
         #include <sys/types.h>
         #include <sys/stat.h>
         #include <fcntl.h>
         #include <stdlib.h>
         #include <string.h>             /* for strerror(int errno) */
         #include <errno.h>

         #define BUFLEN 100

         int
         main (void)
         {
           int fp, error;
           char buf[BUFLEN + 1];

           if ((fp = open ("\n\n\n\n\n\n\n\n\n", O_RDWR | O_CREAT, 0600)) == -1) {
               fprintf (stderr, "Can't open data: %s\n", strerror (errno));
               return 1;
           }

           snprintf (buf, BUFLEN, "123");
           write (fp, buf, strlen (buf));

           /* Restart close should a signal occur */
           while ((( error = close (fp) ) == -1) && (errno == EINTR));
           if (error == -1)
             perror ("Failed to close the file\n");

           return 0;
         }


     Working With The Lemon Parser Generator.


     copy command for std container output.

        #include <iostream>
        #include <list>
        #include <vector>
        #include <iterator>
        #include <algorithm>

        using namespace std;
        int main(void)
        {
          vector<int> v;
          list<int> l;

          /* fill the containers with some sample values */
          for (int i = 0; i < 5; ++i) {
            v.push_back(i);
            l.push_back(i * 10);
          }

          /* copy each container to stdout through an ostream_iterator */
          copy(v.begin(), v.end(), ostream_iterator<int>(cout, " "));
          cout << endl;
          copy(l.begin(), l.end(), ostream_iterator<int>(cout, " "));
          cout << endl;

          return 0;
        }





  /* Copyright (c) 2005 Mike Chirico

      Example of using virtual functions. Note the use of "initialization lists"
      for assigning the variables first and last.

      g++ -o virtualfunc -Wall -W -O2 -s -pipe virtualfunc.cpp
  */

  #include <iostream>
  #include <string>
  #include <list>
  #include <algorithm>
  #include <iterator>
  #include <functional>

  using namespace std;

  class Employee {
    string first,last;
  public:
    Employee(const string& fn="John",const  string& ln="Smith"): first(fn),last(ln) {}
    virtual void print() const {
      cout << "First name: " << first << ", Last name: " << last << endl;
    }
    virtual ~Employee() {}
  };

  class Manager : public Employee {
    int level;
    list<Employee*> subordinates;
  public:
    Manager(const string& fn="Ivan",const string& ln="Stedwick", int lvl=1): Employee(fn,ln), level(lvl) {}
    void print() const {
      cout << "Manager level: " << level << "  ";
      Employee::print();
      cout << "Supervises:" << endl;
      for (list<Employee*>::const_iterator it = subordinates.begin();
           it != subordinates.end(); ++it)
        (*it)->print();
      cout << endl << endl;
    }
    void addstaff(Employee& staff){ subordinates.push_back(&staff); }
    void addstaff(Employee* staff){ subordinates.push_back(staff); }
  };

   int main()
   {
     Employee p0("Lisa","Payne");
     Manager m0;

     m0.addstaff(new Employee("Zoe","Bear")); /* uses void addstaff(Employee* staff) */
     m0.addstaff(new Employee("Leah","Bopper"));
     m0.addstaff(new Employee("Abby","Chicken"));
     m0.addstaff(p0);  /* void addstaff(Employee& staff)  needed for this one */
     m0.addstaff(new Employee());

     m0.print();

     return 0;
   }


   /*  Named Constructor Idiom. */

   #include <iostream>
   #include <cmath>
   using namespace std;

   class Point {
   public:
     static Point rectangular(float x, float y);
     static Point polar(float radius, float angle);
     float get_x() { return x_; }
     float get_y() { return y_; }
   private:
     Point(float x, float y);
     float x_, y_;
   };

   inline Point::Point(float x, float y)
     : x_(x), y_(y) {}

   inline Point Point::rectangular(float x, float y)
   { return Point(x,y); }

   inline Point Point::polar(float radius, float angle)
   { return Point(radius*cos(angle),radius*sin(angle)); }

   int main(void)
   {
     Point p1 = Point::rectangular(5.7,1.2);
     Point p2 = Point::polar(5.7,1.2);

     cout << "(" << p1.get_x() << ", " << p1.get_y() << ")" << endl;
     cout << "(" << p2.get_x() << ", " << p2.get_y() << ")" << endl;

     return 0;
   }

   /* Copyright (c) 2004 GPL Mike Chirico

      Reference: "The C++ Programming Language", 3rd ed, by Stroustrup
      pg. 246.

      g++ -o table table.cpp
   */

   #include <iostream>
   #include <cstddef>

   class Name {
   public:
     const char* s;
   };

   class Table {
     Name *p;
     size_t sz;
   public:
     Table(size_t s=15) {
               p = new Name[sz=s];
               for(size_t i=0; i< sz; ++i) p[i].s="****";
     }
     Table(const Table &t);
     Table& operator=(const Table&);
     int prt();
     void asgn(const char* ts,size_t index);
     ~Table(){ delete[] p; }
   };

   Table::Table(const Table &t)
   {
     p = new Name[sz=t.sz];
     for(size_t i=0; i<sz; ++i) p[i]=t.p[i];
   }

   Table& Table::operator=(const Table &t)
   {
     if( this != &t) {
       delete[] p;
       p = new Name[sz=t.sz];
       for(size_t i=0; i<sz; ++i) p[i]=t.p[i];
     }
     return *this;
   }

   int Table::prt()
   {
     for(size_t i=0; i< sz; ++i) std::cout << p[i].s << " ";
     std::cout << std::endl;
     return 0;
   }

   /* asgn will increase the array of strings, if needed,
      to size index+1, and add the string ts to position index */
   void Table::asgn(const char* ts,size_t index)
   {
     if(index < sz ) {
       p[index].s=ts;
     }else if ( index >= sz ){
        Name *tp = p;

        p = new Name[index+1];

        for(size_t i=0; i< sz; ++i) p[i].s=tp[i].s;
        delete [] tp;
        for(size_t i=sz; i < index; ++i)p[i].s="****";
        p[index].s=ts;
        sz=index+1;
     }
   }

   int main(void)
   {
     Table t1;
     Table t2(5);

     t2.prt();
     t2.asgn("hello",2);
     t2.prt();

     // this is bigger than initial sz
     t2.asgn("world",9);
     t2.prt();

     t1 = t2;   // exercises operator=
     t1.prt();

     return 0;
   }

   The following is an example of creating a vector-like structure in C.

/* vector.c --
 * Copyright 2009  cwxstat LLC., Elkins Park, Pennsylvania.
 * All Rights Reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
 *
 * Authors:
 *     Mike Chirico
 *
 * This works a bit like C++'s vector.
 */

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>
#include <string.h>

typedef struct
{
 char **key;
 char **val;
 int argc;
} Key_val;

typedef struct
{
 char **key;
 Key_val **val;
 int argc;
} Vec;

Vec *
vecAdd(Vec * c, const char *key, Key_val * val)
{
 char *s = NULL;
 Key_val *v = NULL;
 char **t = NULL;
 Key_val **tC = NULL;

 s = (char *)malloc(sizeof(char) * (strlen(key) + 1));
 if (s == NULL)
  return NULL;

 v = val;

 strcpy(s, key);

 if (c == NULL) {
  c = (Vec *) malloc(sizeof(Vec));
  if (c == NULL)
   return NULL;
  c->key = NULL;
  c->val = NULL;
  c->argc = 0;
 }
 c->argc = c->argc + 1;
 t = (char **) realloc(c->key, sizeof(char *) * (long unsigned int)c->argc);
 if (t == NULL)
  return NULL;

 t[c->argc - 1] = s;
 c->key = t;

 tC = realloc(c->val, sizeof(Key_val *) * (long unsigned int) c->argc);
 if (tC == NULL)
  return NULL;
 tC[c->argc - 1] = v;
 c->val = tC;

 return c;
}

Key_val *
keyAdd(Key_val * c, const char *key, const char *val)
{
 char *s = NULL;
 char *v = NULL;
 char **t = NULL;

 s = (char *)malloc(sizeof(char) * (strlen(key) + 1));
 if (s == NULL)
  return NULL;
 v = (char *)malloc(sizeof(char) * (strlen(val) + 1));
 if (v == NULL)
  return NULL;

 strcpy(s, key);
 strcpy(v, val);

 if (c == NULL) {
  c = (Key_val *) malloc(sizeof(Key_val));
  if (c == NULL)
   return NULL;
  c->key = NULL;
  c->val = NULL;
  c->argc = 0;
 }
 c->argc = c->argc + 1;
 t = realloc(c->key, sizeof(char *) * (long unsigned int) c->argc);
 if (t == NULL)
  return NULL;

 t[c->argc - 1] = s;
 c->key = t;

 t = realloc(c->val, sizeof(char *) * (long unsigned int) c->argc);
 if (t == NULL)
  return NULL;
 t[c->argc - 1] = v;
 c->val = t;

 return c;
}

void
pr(Key_val * c)
{
 int i;

 if (c == NULL)
  return;
 for (i = 0; i < c->argc; ++i)
  printf("%s->%s\n", c->key[i], c->val[i]);
}

void
prV(Vec * c)
{
 int i;

 if (c == NULL)
  return;
 for (i = 0; i < c->argc; ++i) {
  printf("[%s]=>\n", c->key[i]);
  pr(c->val[i]);
 }
}

void
myfree(Key_val * c)
{
 int i;

 if (c == NULL)
  return;

 for (i = 0; i < c->argc; ++i) {
  free(c->key[i]);
  free(c->val[i]);
 }
 free(c->key);
 free(c->val);
 free(c);
}

void
myfreeV(Vec * c)
{
 int i;

 if (c == NULL)
  return;

 for (i = 0; i < c->argc; ++i) {
  free(c->key[i]);
  myfree(c->val[i]);
 }
 free(c->key);
 free(c->val);
 free(c);
}

char *
find(Key_val * c, const char *s)
{
 int i;
 for (i = 0; i < c->argc; ++i)
  if (strcmp(c->key[i], s) == 0)
   return c->val[i];

 return NULL;
}

/* Find a particular key_val in a vector given
   a vector key. */
Key_val *
findK(Vec * c, const char *s)
{
 int i;
 for (i = 0; i < c->argc; ++i)
  if (strcmp(c->key[i], s) == 0)
   return c->val[i];

 return NULL;
}

int
main(void)
{
 Key_val *k = NULL;
 Vec *v = NULL;
 char *s;

 k = keyAdd(k, "one", "1");
 k = keyAdd(k, "two", "2");
 k = keyAdd(k, "three", "3");
 k = keyAdd(k, "four", "4");
 v = vecAdd(v, "ONE", k);

 k = NULL;
 k = keyAdd(k, "twenty one", "21");
 k = keyAdd(k, "twenty two", "22");
 k = keyAdd(k, "twenty three", "23");
 k = keyAdd(k, "twenty four", "24");
 v = vecAdd(v, "TWO", k);

 prV(v);

 printf("\n\n ................ \n\n");

 /* Example returning key_val from the string found in vector v */
 k = findK(v, "ONE");

 s = (char *)malloc(sizeof(char) * 80);
 strcpy(s, "two");
 fprintf(stderr, "find(c,%s)=%s\n", s, find(k, s));
 strcpy(s, "four");
 fprintf(stderr, "find(c,%s)=%s\n", s, find(k, s));
 free(s);

 /* Note myfreeV calls myfree */
 myfreeV(v);

 return 0;
}



    "THE Java Programming Language, Fourth Edition", Ken Arnold, James Gosling,
    David Holmes. Prentice Hall. 2005

    "The Ruby Programming Language", David Flanagan, Yukihiro Matsumoto
    O'Reilly. 2008.

    "Essential Linux Device Drivers", Sreekrishnan Venkateswaran
    Prentice Hall. 2008.

    "Head First Object-Oriented Analysis & Design", Brett D. McLaughlin,
    Gary Pollice  and David West. O'Reilly. 2006.

    "Design Patterns: Elements of Reusable Object-Oriented Software",
    Erich Gamma,
     Richard Helm, Ralph Johnson, John Vlissides. Addison Wesley. 1995.

    "Head First Design Patterns", Bert Bates, Elisabeth Freeman,
    Eric Freeman,
     Kathy Sierra. O'Reilly. 2004.

    "The Definitive Guide to SQLite", Michael Owens. Apress.

    "Higher Order Perl, Trnasforming Programs with Programs", Mark
    Jason Dominus

    "Effective C++, 55 Specific Ways to Improve Your Programs and
    Designs", Scott Meyers.
        Third Edition.

    "C++ Common Knowledge, Essential Intermediate Programming", Stephen
    C. Dewhurst.

    "UNIX Network Programming, The Sockets Networking API", Volume 1,
    Third Edition.
        W. Richard Stevens, Bill Fenner, Andrew M. Rudoff.

    "UNIX Network Programming, Interprocess Communications", Volume 2,
    Second Edition.
        W. Richard Stevens.

    "UNIX SYSTEMS Programming, Communication, Concurrency, and Threads",
    Kay A. Robbins,
        Steven Robbins

    "Programming with POSIX Threads", David R. Butenhof. Addison-Wesley

    "The C++ Programming Language" Third Edition. Bjarne
    Stroustrup. Addison-Wesley.

    "C Programming Language" (2nd Edition), Second Edition, Kernighan
    and Ritchie

    "Advanced Linux Programming" by Mark Mitchell, Jeffrey Oldham,
    and Alex Samuel, of
        CodeSourcery LL. This book if free at the following resource:

    "Accelerated C++, Practical Programming by Example" Andrew Koenig,
    Barbara E. Moo.

    "C: A Reference Manual", Fifth Edition, Samuel P. Harbison, Guy
    L. Steele.

    "C++ Standard Library: A Tutorial and Reference, The", Nicolai
    M. Josuttis. Addison Wesley.

    "C++ Templates: The Complete Guide", David Vandevoorde, Nicolai
    M. Josuttis. Addison Wesley.

    "Exceptional C++: 47 Engineering Puzzles, Programming Problems,
    and Solutions", Herb Sutter.
     Addison Wesley.

    "More Exceptional C++", Herb Sutter.

    "Exceptional C++ Style: 40 New Engineering Puzzles, Programming
    Problems, and Solutions",
       Herb Sutter. Addison Wesley.

    "The Art of Computer Programming (TAOCP)", Vol 1,Vol 2, Vol 3. Donald
    E. Knuth. Addison-Wesley.

    "Programming Perl, 3rd Edition", Tom Christiansen, Jon Orwant,
    Larry Wall. O'Reilly.

    "Programming from the Ground Up", Jonathan Bartlett, Edited by
    Dominick Bruno, Jr.

    "Expert C Programming", Peter van der Linden, Prentice Hall PTR.

    "C++ Coding Standards 101 Rules, Guidelines, and Best Practices",
    by Herb Sutter and
       Andrei Alexandrescu.

    "Linux Kernel Development: A practical guide to the design and
    implementation of
       the Linux kernel", by Robert Love, Sams Publishing.

    "C++ Template Metaprogramming: Concepts, Tools, and Techniques from
    Boost and Beyond", by
       David Abrahams and Aleksey Gurtovoy. Addison Wesley.


     "Zen and the Art of Motorcycle Maintenance: An Inquiry into Values",
     Robert Pirsig.

     "Lila: An Inquiry Into Morals", Robert Pirsig.


    "Structure and Interpretation of Computer Programs", Harold Abelson,
    Gerald Jay Sussman,
      Julie Sussman. This book is free:


    Linux Networking-HOWTO (Previously the Net-3 Howto)


The following people made suggestions and corrections:
  - Jorge Fabregas, TIP 21
  - Malcolm Parsons, TIP 44
  - Andreas Haunschmidt, TIP 102, TIP 90
  - Jacques Garnier (Jacques.GARNIER-EXTERIEUR@EU.RHODIA.COM), TIP 46
  - Tobias Nix, TIP 12
  - Philip Vanmontfort, TIP 36
  - Jorg Esser, TIP 110


Linux Quota Tutorial

This tutorial walks you through implementing disk quotas for both users and groups on
Linux, using a virtual filesystem, which is a filesystem created from a disk file. Since quotas work on a
per-filesystem basis, this is a way to implement quotas on a subsection, or even multiple subsections, of
your drive, without reformatting. This tutorial also covers quotactl, quota's C interface, by way of
an example program that can store disk usage in a SQLite database for monitoring data usage over time.

Gmail on Home Linux Box using Postfix and Fetchmail
If you have a Google Gmail account, you can relay mail from your home Linux system. It's a good exercise in
configuring Postfix with TLS and SASL. Plus, you will learn how to bring down the mail safely, using
fetchmail with the "sslcertck" option.

Breaking Firewalls with OpenSSH and PuTTY
If the system administrator deliberately filters out all traffic except port 22 (ssh) to a single server,
it is very likely that you can still gain access to other computers behind the firewall. This article shows how remote
Linux and Windows users can gain access to firewalled samba, mail, and http servers. In essence, it shows how OpenSSH
and PuTTY can be used as a VPN solution for your home or workplace.

Create your own custom Live Linux CD

These steps will show you how to create a functioning Linux system,
with the latest 2.6 kernel compiled from source, and how to integrate
the BusyBox utilities including the installation of DHCP. Plus, how
to compile in the OpenSSH package on this CD based system.
On system boot-up a filesystem will be created and the
contents from the CD will be uncompressed and
completely loaded into RAM -- the CD could be removed
at this point for boot-up on a second computer. The remaining
functioning system will have full ssh capabilities.
You can take over any PC assuming, of course, you have
configured the kernel with the appropriate drivers and the
PC can boot from a CD.

SQLite Tutorial

This article explores the power and simplicity of sqlite3,
first by starting with common commands and triggers. Then the attach statement, combined with the union operation, is introduced in a
way that allows multiple tables, in separate databases, to be combined as one virtual table,
without the overhead of copying or moving data. Next, after making a brief mathematical case for how the sign
function defines the absolute value and IF conditions, the article demonstrates the amazingly powerful trick of
using this function in SQL select statements to solve complex queries with a single pass through the data.

Lemon Parser Tutorial
Lemon is a compact, thread safe, well-tested parser generator written by D. Richard Hipp. Using a parser generator,
along with a scanner like flex, can be advantageous because there is less code to write. You just write the grammar for the parser.
This article is an introduction to the Lemon Parser, complete with examples.


Mike Chirico, a father of triplets (all girls) lives outside of
Philadelphia, PA, USA. He has worked with Linux since 1996, has a Masters
in Computer Science and Mathematics from Villanova University, and has
worked in computer-related jobs from Wall Street to the University of
Pennsylvania. His hero is Paul Erdos, a brilliant number theorist who was
known for his open collaboration with others.

Mike's notes page is souptonuts. For
open source consulting needs, please send an email to
. All consulting work must include a donation to