Monday, June 24, 2013

[10,000 Hours of Practice] Repeat it until you are sick of it, and build results no one can shake

"Wolfgang Mozart could play the piano at 3, was performing on the violin at 4, and could compose at 5, because he was a musical prodigy." "The Williams sisters (Venus & Serena Williams) have superb physiques, stamina, and reflexes; they are born tennis players!" In casual conversation we often explain someone's outstanding performance with labels like "child prodigy" or "gifted genius." Those unbelievable miracles always seem to happen to other people, and "I have no talent, I'm just not as good" becomes the perfect cover story: before we have even tried, we fence off our own abilities in the back of our minds.
We have long been blinded by the halo of individual genius and the romance of overnight miracles, and we overlook the punishing training that many successful people quietly put in, beyond anything ordinary people imagine. Dig into Mozart's childhood and you find that he had already accumulated some 3,500 hours of practice before he turned six. Musical prodigy? Reducing outstanding performance to genetic determinism may just be an excuse for your own laziness.

The key is not talent, but practice
Psychologist Carol Dweck once ran an experiment on the "talent above all" mindset. She divided elementary school students of similar ages into a "talent" group, who agreed that ability is determined by innate gifts, and an "effort" group, who believed that effort can make you smarter, and then had them solve problems together.
The result: when the children in the talent group ran into hard problems, most simply gave up and refused to answer, while the children in the effort group kept probing and would not give up easily when facing a challenge; in the end they exceeded everyone's expectations and turned in better results than the talent group.
Dweck sees the talent group's "fixed mind-set" as the ultimate reason for their poor performance: believing you are simply worse than others, and believing deep down that you cannot get better, is what blocks growth. The effort group's "growth mind-set" is entirely different: people who firmly believe that effort makes them better turn every setback into an opportunity for self-improvement, and in doing so unexpectedly open up possibilities for learning, development, and adaptation.
Once you are finally willing to admit that talent is not what is tripping you up, how do you make up for everything you have let slide?
Matthew Syed, once Britain's top table-tennis player, observed that every athlete who excels in competition has been through roughly ten thousand hours of training.
Only by practicing longer than ordinary people and tasting more failure and frustration can you face each lonely, self-punishing challenge head-on; strict self-discipline and long accumulation, gathering results bit by bit, are what make it possible to reach the top. No field is an exception.

Always set yourself the next challenge
Even an ordinary person who practices tirelessly, masters the basic skills, and builds up consistency and concentration can deliver a "genius"-level performance at the critical moment. Beyond putting in a great deal of time, the quality of practice cannot be ignored, so "purposeful training" is the other point you have to think about.
Practicing only what we enjoy is human nature, but staying inside the comfort zone only leaves you standing still. Once you have a clear goal, what deserves your effort is the practice you "should do," not the practice you "like to do."
As Dell founder Michael Dell put it: "I don't like doing only what I like to do; I like doing what makes the company successful." Observe carefully the gap between each practice result and the performance you expected, draw useful feedback from it, and try to correct your mistakes, and you will gradually fix your weaknesses. Stick to these three steps, find a goal, practice, and get feedback, and anyone can aim for excellence.
The philosopher Aristotle said that excellence comes from training and habituation: we do not act rightly because we have virtue; rather, we have virtue because we have acted rightly. Only first-hand experience teaches you that excellence is not an act but a habit. Drill hard at "the right things," face your own results honestly, and do your best to remake yourself, just as Tadashi Yanai, Japan's richest man, famously said: "Doing it day after day, until you are sick of doing it, is the secret of success."
Finally, Matthew Syed observes: "The paradox of excellence is that it is built on necessary failure." How can we know how many times we will fall along the way? Heaven helps those who help themselves; push hard past limits that seem out of reach, and once you succeed, it will all have been worth it.

3 tips for keeping at it without slacking off

Tip 1: Don't fear setbacks!
Many people find failure an unbearably embarrassing experience, but the stories of countless successful people keep proving that "failure is the mother of success." The lessons drawn from each stumble are the most valuable. It may be the oldest of clichés, yet it has remained the unchanging rule of success for ages.

Tip 2: Strict self-discipline!
"Let me rest for a bit!" "I really want to slack off!" These are the thoughts you can least afford; people often lose everything to a moment of laziness. Strictly controlling the time and frequency of your practice not only keeps your performance at a steady standard, it also trains the willpower to fight your own inertia.
Tip 3: Don't worry about how others see you
Being seen as a genius brings psychological baggage: "I must keep performing brilliantly," "I must never fail." The moment you start caring about other people's opinions you get stuck, find it hard to surpass yourself, and end up ordinary. Shake off the finger-pointing and dare to take on challenges; that is what makes you better!
(Compiled and written by 陳書榕; edited by 劉揚銘. Adapted from the April 2012 issue of 《經理人月刊》 (Manager Today).)

Reference:
http://www.managertoday.com.tw/?p=12338

Saturday, June 22, 2013

jQuery UI DatePicker to show month year only

  $("#monthPicker").datepicker({
    dateFormat: 'yy-mm',
    changeMonth: true,
    changeYear: true,
    showButtonPanel: true,

    onClose: function(dateText, inst) {
      // The day grid is hidden (see the focus handler below), so read the
      // selected month/year directly from the picker's dropdowns.
      var month = $("#ui-datepicker-div .ui-datepicker-month :selected").val();
      var year = $("#ui-datepicker-div .ui-datepicker-year :selected").val();
      //$(this).val($.datepicker.formatDate('yy-mm', new Date(year, month, 1)));
      $(this).datepicker('setDate', new Date(year, month, 1));
    }
  });

  $("#monthPicker").focus(function () {
    $(".ui-datepicker-calendar").hide();
    //$("#ui-datepicker-div").position({
    //  my: "center top",
    //  at: "center bottom",
    //  of: $(this)
    //});
  });

Reference:
http://stackoverflow.com/questions/2208480/jquery-ui-datepicker-to-show-month-year-only
http://thiamteck.blogspot.ca/2011/03/jquery-ui-datepicker-with-month-and.html

Thursday, June 20, 2013

The Linux Programmer’s Toolbox

The Linux Programmer's Toolbox (Paperback) ~ John Fusco
http://www.amazon.com/Linux-Programmers-Toolbox-John-Fusco/dp/0132198576/ref=sr_1_1?ie=UTF8&s=books&qid=1259773513&sr=1-1


Table of Contents


Copyright

Dedication

Prentice Hall Open Source Software Development Series

Foreword
Preface

Who Should Read This Book

The Purpose of This Book

How to Read This Book

How This Book Is Organized

Acknowledgments

About the Author
Chapter 1. Downloading and Installing Open Source Tools

Section 1.1. Introduction

Section 1.2. What Is Open Source?
Section 1.3. What Does Open Source Mean to You?

Section 1.3.1. Finding Tools

Section 1.3.2. Distribution Formats
Section 1.4. An Introduction to Archive Files

Section 1.4.1. Identifying Archive Files

Section 1.4.2. Querying an Archive File

Section 1.4.3. Extracting Files from an Archive File
Section 1.5. Know Your Package Manager

Section 1.5.1. Choosing Source or Binary

Section 1.5.2. Working with Packages
Section 1.6. Some Words about Security and Packages

Section 1.6.1. The Need for Authentication

Section 1.6.2. Basic Package Authentication

Section 1.6.3. Package Authentication with Digital Signatures

Section 1.6.4. GPG Signatures with RPM

Section 1.6.5. When You Can’t Authenticate a Package
Section 1.7. Inspecting Package Contents

Section 1.7.1. How to Inspect Packages

Section 1.7.2. A Closer Look at RPM Packages

Section 1.7.3. A Closer Look at Debian Packages
Section 1.8. Keeping Packages up to Date

Section 1.8.1. Apt: Advanced Package Tool

Section 1.8.2. Yum: Yellowdog Updater Modified

Section 1.8.3. Synaptic: The GUI Front End for APT

Section 1.8.4. up2date: The Red Hat Package Updater
Section 1.9. Summary

Section 1.9.1. Tools Used in This Chapter

Section 1.9.2. Online References
Chapter 2. Building from Source

Section 2.1. Introduction
Section 2.2. Build Tools

Section 2.2.1. Background

Section 2.2.2. Understanding make

Section 2.2.3. How Programs Are Linked

Section 2.2.4. Understanding Libraries
Section 2.3. The Build Process

Section 2.3.1. The GNU Build Tools

Section 2.3.2. The configure Stage

Section 2.3.3. The Build Stage: make

Section 2.3.4. The Install Stage: make install
Section 2.4. Understanding Errors and Warnings
Section 2.4.1. Common Makefile Mistakes

Section 2.4.1.1. Shell Commands

Section 2.4.1.2. Missing Tabs

Section 2.4.1.3. VPATH Confusion

Section 2.4.2. Errors during the configure Stage

Section 2.4.3. Errors during the Build Stage

Section 2.4.4. Understanding Compiler Errors

Section 2.4.5. Understanding Compiler Warnings

Section 2.4.6. Understanding Linker Errors
Section 2.5. Summary

Section 2.5.1. Tools Used in This Chapter

Section 2.5.2. Online References
Chapter 3. Finding Help

Section 3.1. Introduction
Section 3.2. Online Help Tools

Section 3.2.1. The man Page

Section 3.2.2. man Organization

Section 3.2.3. Searching the man Pages: apropos

Section 3.2.4. Getting the Right man Page: whatis

Section 3.2.5. Things to Look for in the man Page

Section 3.2.6. Some Recommended man Pages

Section 3.2.7. GNU info

Section 3.2.8. Viewing info Pages

Section 3.2.9. Searching info Pages

Section 3.2.10. Recommended info Pages

Section 3.2.11. Desktop Help Tools
Section 3.3. Other Places to Look

Section 3.3.1. /usr/share/doc

Section 3.3.2. Cross Referencing and Indexing

Section 3.3.3. Package Queries
Section 3.4. Documentation Formats

Section 3.4.1. TeX/LaTeX/DVI

Section 3.4.2. Texinfo

Section 3.4.3. DocBook

Section 3.4.4. HTML

Section 3.4.5. PostScript

Section 3.4.6. Portable Document Format (PDF)

Section 3.4.7. troff
Section 3.5. Internet Sources of Information

Section 3.5.1. www.gnu.org

Section 3.5.2. SourceForge.net

Section 3.5.3. The Linux Documentation Project

Section 3.5.4. Usenet

Section 3.5.5. Mailing Lists

Section 3.5.6. Other Forums
Section 3.6. Finding Information about the Linux Kernel

Section 3.6.1. The Kernel Build

Section 3.6.2. Kernel Modules

Section 3.6.3. Miscellaneous Documentation
Section 3.7. Summary

Section 3.7.1. Tools Used in This Chapter

Section 3.7.2. Online Resources
Chapter 4. Editing and Maintaining Source Files

Section 4.1. Introduction
Section 4.2. The Text Editor

Section 4.2.1. The Default Editor

Section 4.2.2. What to Look for in a Text Editor

Section 4.2.3. The Big Two: vi and Emacs

Section 4.2.4. Vim: vi Improved
Section 4.2.5. Emacs

Section 4.2.5.1. Emacs Features

Section 4.2.5.2. Modes? What Modes?

Section 4.2.5.3. Emacs Commands and Shortcuts

Section 4.2.5.4. Cursor Movement

Section 4.2.5.5. Deleting, Cutting, and Pasting

Section 4.2.5.6. Search and Replace

Section 4.2.5.7. Browsing and Building Code with Emacs

Section 4.2.5.8. Text Mode Menus

Section 4.2.5.9. Customizing Emacs Settings

Section 4.2.5.10. Emacs for vi Users

Section 4.2.5.11. GUI Mode

Section 4.2.5.12. The Bottom Line on Emacs

Section 4.2.6. Attack of the Clones

Section 4.2.7. Some GUI Text Editors at a Glance

Section 4.2.8. Memory Usage

Section 4.2.9. Editor Summary
Section 4.3. Revision Control

Section 4.3.1. Revision Control Basics
Section 4.3.2. Defining Revision Control Terms

Section 4.3.2.1. Project

Section 4.3.2.2. Add/Remove

Section 4.3.2.3. Check In

Section 4.3.2.4. Check Out

Section 4.3.2.5. Branch

Section 4.3.2.6. Merge

Section 4.3.2.7. Label

Section 4.3.2.8. In Summary

Section 4.3.3. Supporting Tools

Section 4.3.4. Introducing diff and patch

Section 4.3.5. Reviewing and Merging Changes
Section 4.4. Source Code Beautifiers and Browsers

Section 4.4.1. The Indent Code Beautifier

Section 4.4.2. Astyle Artistic Style

Section 4.4.3. Analyzing Code with cflow

Section 4.4.4. Analyzing Code with ctags

Section 4.4.5. Browsing Code with cscope

Section 4.4.6. Browsing and Documenting Code with Doxygen
Section 4.4.7. Using the Compiler to Analyze Code

Section 4.4.7.1. Dependencies

Section 4.4.7.2. Macro Expansions
Section 4.5. Summary

Section 4.5.1. Tools Used in This Chapter

Section 4.5.2. References

Section 4.5.3. Online Resources
Chapter 5. What Every Developer Should Know about the Kernel

Section 5.1. Introduction
Section 5.2. User Mode versus Kernel Mode

Section 5.2.1. System Calls

Section 5.2.2. Moving Data between User Space and Kernel Space
Section 5.3. The Process Scheduler

Section 5.3.1. A Scheduling Primer

Section 5.3.2. Blocking, Preemption, and Yielding

Section 5.3.3. Scheduling Priority and Fairness

Section 5.3.4. Priorities and Nice Value

Section 5.3.5. Real-Time Priorities

Section 5.3.6. Creating Real-Time Processes

Section 5.3.7. Process States
Section 5.3.8. How Time Is Measured

Section 5.3.8.1. System Time Units

Section 5.3.8.2. The Kernel Clock Tick

Section 5.3.8.3. Timing Your Application
Section 5.4. Understanding Devices and Device Drivers

Section 5.4.1. Device Driver Types

Section 5.4.2. A Word about Kernel Modules

Section 5.4.3. Device Nodes

Section 5.4.4. Devices and I/O
Section 5.5. The I/O Scheduler

Section 5.5.1. The Linus Elevator (aka noop)

Section 5.5.2. Deadline I/O Scheduler

Section 5.5.3. Anticipatory I/O Scheduler

Section 5.5.4. Complete Fair Queuing I/O Scheduler

Section 5.5.5. Selecting an I/O Scheduler
Section 5.6. Memory Management in User Space

Section 5.6.1. Virtual Memory Explained
Section 5.6.2. Running out of Memory

Section 5.6.2.1. When a Process Runs out of Memory

Section 5.6.2.2. When the System Runs out of Memory

Section 5.6.2.3. Locking Down Memory
Section 5.7. Summary

Section 5.7.1. Tools Used in This Chapter

Section 5.7.2. APIs Discussed in This Chapter

Section 5.7.3. Online References

Section 5.7.4. References
Chapter 6. Understanding Processes

Section 6.1. Introduction
Section 6.2. Where Processes Come From

Section 6.2.1. fork and vfork

Section 6.2.2. Copy on Write

Section 6.2.3. clone
Section 6.3. The exec Functions

Section 6.3.1. Executable Scripts

Section 6.3.2. Executable Object Files

Section 6.3.3. Miscellaneous Binaries

Section 6.4. Process Synchronization with wait
Section 6.5. The Process Footprint

Section 6.5.1. File Descriptors

Section 6.5.2. Stack

Section 6.5.3. Resident and Locked Memory

Section 6.6. Setting Process Limits

Section 6.7. Processes and procfs
Section 6.8. Tools for Managing Processes

Section 6.8.1. Displaying Process Information with ps

Section 6.8.2. Advanced Process Information Using Formats

Section 6.8.3. Finding Processes by Name with ps and pgrep

Section 6.8.4. Watching Process Memory Usage with pmap

Section 6.8.5. Sending Signals to Processes by Name
Section 6.9. Summary

Section 6.9.1. System Calls and APIs Used in This Chapter

Section 6.9.2. Tools Used in This Chapter

Section 6.9.3. Online Resources
Chapter 7. Communication between Processes

Section 7.1. Introduction
Section 7.2. IPC Using Plain Files

Section 7.2.1. File Locking

Section 7.2.2. Drawbacks of Using Files for IPC
Section 7.3. Shared Memory

Section 7.3.1. Shared Memory with the POSIX API

Section 7.3.2. Shared Memory with the System V API
Section 7.4. Signals

Section 7.4.1. Sending Signals to a Process

Section 7.4.2. Handling a Signal

Section 7.4.3. The Signal Mask and Signal Handling

Section 7.4.4. Real-Time Signals

Section 7.4.5. Advanced Signals with sigqueue and sigaction

Section 7.5. Pipes
Section 7.6. Sockets
Section 7.6.1. Creating Sockets

Section 7.6.1.1. Socket Domains

Section 7.6.1.2. Socket Types

Section 7.6.1.3. Socket Protocols

Section 7.6.2. Local Socket Example Using socketpair

Section 7.6.3. Client/Server Example Using Local Sockets

Section 7.6.4. Client Server Using Network Sockets
Section 7.7. Message Queues

Section 7.7.1. The System V Message Queue

Section 7.7.2. The POSIX Message Queue

Section 7.7.3. Difference between POSIX Message Queues and System V Message Queues
Section 7.8. Semaphores

Section 7.8.1. Semaphores with the POSIX API

Section 7.8.2. Semaphores with the System V API
Section 7.9. Summary

Section 7.9.1. System Calls and APIs Used in This Chapter

Section 7.9.2. References

Section 7.9.3. Online Resources
Chapter 8. Debugging IPC with Shell Commands

Section 8.1. Introduction
Section 8.2. Tools for Working with Open Files

Section 8.2.1. lsof

Section 8.2.2. fuser

Section 8.2.3. ls

Section 8.2.4. file

Section 8.2.5. stat
Section 8.3. Dumping Data from a File

Section 8.3.1. The strings Command

Section 8.3.2. The xxd Command

Section 8.3.3. The hexdump Command

Section 8.3.4. The od Command
Section 8.4. Shell Tools for System V IPC

Section 8.4.1. System V Shared Memory

Section 8.4.2. System V Message Queues

Section 8.4.3. System V Semaphores
Section 8.5. Tools for Working with POSIX IPC

Section 8.5.1. POSIX Shared Memory

Section 8.5.2. POSIX Message Queues

Section 8.5.3. POSIX Semaphores

Section 8.6. Tools for Working with Signals
Section 8.7. Tools for Working with Pipes and Sockets

Section 8.7.1. Pipes and FIFOs

Section 8.7.2. Sockets

Section 8.8. Using Inodes to Identify Files and IPC Objects
Section 8.9. Summary

Section 8.9.1. Tools Used in This Chapter

Section 8.9.2. Online Resources
Chapter 9. Performance Tuning

Section 9.1. Introduction
Section 9.2. System Performance
Section 9.2.1. Memory Issues

Section 9.2.1.1. Page Faults

Section 9.2.1.2. Swapping

Section 9.2.2. CPU Utilization and Bus Contention

Section 9.2.3. Devices and Interrupts

Section 9.2.4. Tools for Finding System Performance Issues
Section 9.3. Application Performance

Section 9.3.1. The First Step with the time Command

Section 9.3.2. Understanding Your Processor Architecture with x86info

Section 9.3.3. Using Valgrind to Examine Instruction Efficiency

Section 9.3.4. Introducing ltrace

Section 9.3.5. Using strace to Monitor Program Performance

Section 9.3.6. Traditional Performance Tuning Tools: gcov and gprof

Section 9.3.7. Introducing OProfile
Section 9.4. Multiprocessor Performance

Section 9.4.1. Types of SMP Hardware

Section 9.4.2. Programming on an SMP Machine
Section 9.5. Summary

Section 9.5.1. Performance Issues in This Chapter

Section 9.5.2. Terms Introduced in This Chapter

Section 9.5.3. Tools Used in This Chapter

Section 9.5.4. Online Resources

Section 9.5.5. References
Chapter 10. Debugging

Section 10.1. Introduction
Section 10.2. The Most Basic Debugging Tool: printf

Section 10.2.1. Problems with Using printf

Section 10.2.2. Using printf Effectively

Section 10.2.3. Some Final Words on printf Debugging
Section 10.3. Getting Comfortable with the GNU Debugger: gdb

Section 10.3.1. Running Your Code with gdb

Section 10.3.2. Stopping and Restarting Execution
Section 10.3.3. Inspecting and Manipulating Data

Section 10.3.3.1. print Expression Syntax

Section 10.3.3.2. Print Examples

Section 10.3.3.3. Calling Functions from gdb

Section 10.3.3.4. Some Notes about the C++ and Templates

Section 10.3.3.5. Some Notes about the C++ Standard Template Library

Section 10.3.3.6. The display Command

Section 10.3.4. Attaching to a Running Process with gdb

Section 10.3.5. Debugging Core Files

Section 10.3.6. Debugging Multithreaded Programs with gdb

Section 10.3.7. Debugging Optimized Code
Section 10.4. Debugging Shared Objects

Section 10.4.1. When and Why to Use Shared Objects

Section 10.4.2. Creating Shared Objects

Section 10.4.3. Locating Shared Objects

Section 10.4.4. Overriding the Default Shared Object Locations

Section 10.4.5. Security Issues with Shared Objects

Section 10.4.6. Tools for Working with Shared Objects
Section 10.5. Looking for Memory Issues

Section 10.5.1. Double Free

Section 10.5.2. Memory Leaks

Section 10.5.3. Buffer Overflows

Section 10.5.4. glibc Tools

Section 10.5.5. Using Valgrind to Debug Memory Issues

Section 10.5.6. Looking for Overflows with Electric Fence
Section 10.6. Unconventional Techniques

Section 10.6.1. Creating Your Own Black Box

Section 10.6.2. Getting Backtraces at Runtime

Section 10.6.3. Forcing Core Dumps

Section 10.6.4. Using Signals

Section 10.6.5. Using procfs for Debugging
Section 10.7. Summary

Section 10.7.1. Tools Used in This Chapter

Section 10.7.2. Online Resources

Section 10.7.3. References

Debug Lecture


More: http://web.cecs.pdx.edu/~jrb/cs201/lectures/

1. overview

 why bugs?
 bugs happen due to 
  human carelessness
  time pressure
  entropy and chaos
   jurassic park ...  (your bug is a T-rex ...)

 the human brain
  don't underestimate the subconscious problem solving mechanism
  think about the problem
  what do you know
  binary search ...  when you are befuddled
   divide and conquer
  "all your assumptions are invalid"
    Joe Maybee
  don't whine too much
   "the o.s./compiler is broke"...  
   it happens, but it isn't the 1st case
  occam's razor

 superior engineers are willing and able to go to the next level
  down (note: black boxes are a fine theory for blaming
   problems on other people ...)

 you have to look inside the hood

2. tools 
 sw engineering ...
  SPECIFICATION AND DESIGN
  code walkthrus
  not our concern ... so much, but having other people
   look at your code can do wonders

 code analysis ...
  static
   cscope
   cflow
  dynamic 
   debuggers
   valgrind
   logging 
    printf pros/cons
      
    cons:
      you use it because that is all you know
      you are modifying the code and may introduce
     more bugs
     even in the printf statements

3. theory of debuggers
 what is a debugger for?
  peering at runtime behavior
  not debugging but observing
   debugger is at keyboard

 very high-level observations
  programs run at mega instructions per sec.
  we have to STOP them to try and understand
   look at runtime environment
   *this doesn't make sense* (but we have no
    better model)

  interpreted versus machine-code/compiled
   interpreted can be built into interpreter
    example: perl -d foo.pl

  compiled/assembled more complex
   includes machine-supported instruction set
   compiler symbol table
    map C/C++ line to machine code
    runtime process environment
    stack
     functions on stack
    heap (malloc'ed memory)
    text segment
    data segment
   compiler/linker (virtual memory)/cpu instructions/
    debugger/o.s. interactions
    
  keep in mind limitations discovered via physics/C.S. 
   this century

   Heisenberg - if you measure it, you modify its
    behavior
   Godel - mathematical black boxes are an abstraction;
    i.e. your computer can catch on fire.
   Turing - you can't write a program that will find
    all the HALT possibilities; i.e., there will
    always be one more bug.
 
debuggers
 all debuggers are alike
   one program wants to control another program's execution
   even down to the machine instruction level
   one instruction or HLL statement at a time
  see the other program's memory spaces
   stack/heap
  possibly change other program's memory spaces
   consider multi-user o.s. protection models ...
  basically you set breakpoints/run/see what happened
   and *think*

 breakpoints
  for machine/compiled code,  we need to be able to somehow
   say STOP here and return control to debugger

  breakpoint is typically a special instruction inserted by
   magic into the code that causes a sw trap
   and sends a "interrupt" to the debugger

  often text is modified ...  

 debugger modes
  single thread
   UNIX debugger execution model
    debugger is parent of debuggee child
  parallel (tasks or threads)
   unix attach
   depends on o.s. and debugger IPC models
  kernel (tricky ...  See Heisenberg)
   2 cpus with one under the control of the other

 the debugger cycle

the debug cycle in gdb terms:

 0.  think ...  analyze the problem
  what do you know about the problem.
  what do you NOT KNOW about the problem.
 1.  set a breakpoint 
   (gdb) break  line/function
 2.  run it to the breakpoint
   (gdb) run  or cont
OR singlestep with step/next
 3.  analyse (and try again) 
  analyze the stack
   (gdb) bt
  analyze the variables
   (gdb) print  x
  analyze where you are code wise (list)
   (gdb) list 
    list main
    list 101

-------------------------------------------------------------
core variation
 not rm -f core

 1. jim, he's dead ...  (there is no runtime phase)

 2. fire up the debugger on the core module
  % gdb mybomb core  

  3.  analyze as above
  (gdb) bt  <----------- the big ticket item

Note *where* the program died.  You can also run the program now;
often you want to do that: run it to the point just before where it
seems to die, and then step to/through the "spot of death".

Note:  you need to turn core dumps on if they are off

 % ulimit -a  <--- to check
 % ulimit -c unlimited  <---- to turn on
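
As a minimal, made-up example (the file name and the bug are invented for
illustration), a program like this dies with SIGSEGV; with core dumps enabled
you can then do "gdb mybomb core" and "bt" to see where:

 /* mybomb.c -- build with: gcc -g -o mybomb mybomb.c */
 #include <stdio.h>

 static int length(const char *s)
 {
     int n = 0;
     while (*s != '\0') {   /* dereferences s even when it is NULL */
         n++;
         s++;
     }
     return n;
 }

 int main(void)
 {
     char *msg = NULL;              /* the bug: never points at a string */
     printf("%d\n", length(msg));   /* SIGSEGV here -> core dump */
     return 0;
 }

Running "gdb mybomb core" and typing "bt" should show length() called from
main(), which is usually enough to spot the NULL argument.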

-------------------------------------------------------------
attach variation:  see handout
-------------------------------------------------------------
gdb  (this is a checklist)

 .a little history (very little)
 .basic commands (Appendix 1, see below)
 .debugging C++
 .debugging parallel processes
  use windows ...  one gdb, one thread/process
 .core debugging (above)
 .attach debugging (see handout)

--------------------------------------------------------------
Appendix 1:

gdb commands by function - simple guide
More important commands have a (*) by them.

Startup 
% gdb -help        print startup help, show switches
*% gdb object         normal debug 
*% gdb object core        core debug (must specify core file)
% gdb object pid        attach to running process
% gdb        use file command to load object 

Help
*(gdb) help        list command classes
(gdb) help running        list commands in one command class
(gdb) help run        bottom-level help for a command "run" 
(gdb) help info        list info commands (running program state)
(gdb) help info line        help for a particular info command
(gdb) help show        list show commands (gdb state)
(gdb) help show commands        specific help for a show command

Breakpoints
*(gdb) break main        set a breakpoint on a function
*(gdb) break 101        set a breakpoint on a line number
*(gdb) break basic.c:101        set breakpoint at file and line (or function)
*(gdb) info breakpoints        show breakpoints
*(gdb) delete 1        delete a breakpoint by number
(gdb) delete        delete all breakpoints (prompted)
(gdb) clear        delete breakpoints at current line
(gdb) clear function        delete breakpoints at function
(gdb) clear line        delete breakpoints at line
(gdb) disable 2        turn a breakpoint off, but don't remove it
(gdb) enable 2        turn disabled breakpoint back on
(gdb) tbreak function|line        set a temporary breakpoint
(gdb) commands break-no ... end        set gdb commands with breakpoint
(gdb) ignore break-no count        ignore bpt N-1 times before activation
(gdb) condition break-no expression         break only if condition is true
(gdb) condition 2 i == 20         example: break on breakpoint 2 if i equals 20
(gdb) watch expression        set software watchpoint on variable
(gdb) info watchpoints        show current watchpoints

Running the program
*(gdb) run        run the program with current arguments
*(gdb) run args redirection        run with args and redirection
(gdb) set args args...        set arguments for run 
(gdb) show args        show current arguments to run
*(gdb) cont        continue the program
*(gdb) step         single step the program; step into functions
(gdb) step count        singlestep count times
*(gdb) next        step but step over functions
(gdb) next count        next count times
*(gdb) CTRL-C        actually SIGINT, stop execution of current program 
*(gdb) attach process-id        attach to running program
*(gdb) detach        detach from running program
*(gdb) finish        finish current function's execution
(gdb) kill        kill current executing program 

Stack backtrace
*(gdb) bt        print stack backtrace
(gdb) frame        show current execution position
(gdb) up        move up stack trace  (towards main)
(gdb) down        move down stack trace (away from main)
*(gdb) info locals        print automatic variables in frame
(gdb) info args        print function parameters 

Browsing source
*(gdb) list 101        list 10 lines around line 101
*(gdb) list 1,10         list lines 1 to 10
*(gdb) list main  list lines around function 
*(gdb) list basic.c:main        list from another file basic.c
*(gdb) list -        list previous 10 lines
(gdb) list *0x22e4        list source at address
(gdb) cd dir        change current directory to dir
(gdb) pwd          print working directory
(gdb) search regexpr        forward search for regular expression
(gdb) reverse-search regexpr        backward search for regular expression
(gdb) dir dirname        add directory to source path
(gdb) dir        reset source path to nothing
(gdb) show directories        show source path

Browsing Data
*(gdb) print expression        print expression, added to value history
*(gdb) print/x expression        print in hex
(gdb) print array[i]@count        artificial array - print array range
(gdb) print $        print last value
(gdb) print *$->next        print thru list
(gdb) print $1        print value 1 from value history
(gdb) print ::gx        force scope to be global
(gdb) print 'basic.c'::gx        global scope in named file (>=4.6)
(gdb) print/x &main        print address of function
(gdb) x/countFormatSize address        low-level examine command
(gdb) x/x &gx        print gx in hex
(gdb) x/4wx &main    print 4 longs at start of main in hex
(gdb) x/gf &gd1      print double
(gdb) help x        show formats for x
*(gdb) info locals        print local automatics only
(gdb) info functions regexp        print function names
(gdb) info variables  regexp        print global variable names
*(gdb) ptype name        print type definition
(gdb) whatis expression       print type of expression
*(gdb) set variable = expression        assign value
(gdb) display expression        display expression result at stop
(gdb) undisplay        delete displays
(gdb) info display        show displays
(gdb) show values        print value history (>= gdb 4.0)
(gdb) info history        print value history (gdb 3.5)

Object File manipulation
(gdb) file object        load new file for debug (sym+exec)
(gdb) file object -readnow        no incremental symbol load
(gdb) file       discard sym+exec file info
(gdb) symbol-file object        load only symbol table
(gdb) exec-file object        specify object to run (not sym-file)
(gdb) core-file core        post-mortem debugging

Signal Control
(gdb) info signals        print signal setup
(gdb) handle signo actions         set debugger actions for signal
(gdb) handle INT print        print message when signal occurs
(gdb) handle INT noprint        don't print message
(gdb) handle INT stop        stop program when signal occurs
(gdb) handle INT nostop        don't stop program
(gdb) handle INT pass        allow program to receive signal
(gdb) handle INT nopass        debugger catches signal; program doesn't
(gdb) signal signo        continue and send signal to program
(gdb) signal 0        continue and send no signal to program

Machine-level Debug
(gdb) info registers        print registers sans floats
(gdb) info all-registers        print all registers
(gdb) print/x $pc        print one register
(gdb) stepi        single step at machine level
(gdb) si        single step at machine level
(gdb) nexti        single step (over functions) at machine level
(gdb) ni        single step (over functions) at machine level
(gdb) display/i $pc        print current instruction in display
(gdb) x/x &gx        print variable gx in hex
(gdb) info line 22        print addresses for object code for line 22
(gdb) info line *0x2c4e        print line number of object code at address
(gdb) x/10i main        disassemble first 10 instructions in main
(gdb) disassemble addr        disassemble code for function around addr

History Display
(gdb) show commands        print command history (>= gdb 4.0)
(gdb) info editing       print command history (gdb 3.5)
(gdb) ESC-CTRL-J        switch to vi edit mode from emacs edit mode
(gdb) set history expansion on       turn on c-shell like history

C++
(gdb) break class::member       set breakpoint on class member. may get menu
(gdb) list class::member        list member in class
(gdb) ptype class               print class members
(gdb) print *this        print contents of this pointer
(gdb) rbreak regexpr     useful for breakpoint on overloaded member name

Miscellaneous
(gdb) define command ... end        define user command
*(gdb) RETURN        repeat last command
*(gdb) shell command args        execute shell command 
*(gdb) source file        load gdb commands from file
*(gdb) quit        quit gdb

-------------------------------------------------------------
Appendix 2:

Henry Spencer - 10 commandments for C programmers 
------------------------------------------------- 
Commandments copyright (c) 1988 Henry Spencer, University of Toronto.
Used by permission.

1.  Thou shalt run lint frequently and study its pronouncements
with care, for verily its perception and judgement oft exceed thine.
(Modern amendment: use ANSI C with prototypes where possible).

2.  Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end.

3.  Thou shalt cast all function arguments to the expected type if they
are not of that type already, even when thou art convinced that
this is unnecessary, lest they take cruel vengeance upon thee
when thou least expect it.

4.  If thy header files fail to declare the return types of thy library
functions, thou shalt declare them thyself with the most meticulous
care, lest grievous harm befall thy program.

5. Thou shalt check the array bounds of all strings (indeed, all arrays),
for surely where thou typest *foo* someone someday shall type
*supercalifragilisticexpialidocious*.

6.  If a function be advertised to return an error code in the event of
difficulties, thou shalt check for that code, yea, even though the
checks triple the size of thy code and produce aches in thy
typing fingers, for if thou thinkest "it cannot happen to me," the
gods shall surely punish thee for thy arrogance.

7.  Thou shalt study the libraries and strive not to re-invent them without
cause, that thy code may be short and readable and thy days pleasant
and productive.

8. Thou shalt make thy program's purpose and structure clear to
thy fellow man by using the One True Brace Style*, even if thou
likest it not, for thy creativity is better used in solving problems
than in creating beautiful new impediments to understanding.

*(The One True Brace Style is the style of program layout demonstrated
in K & R).

9.  Thy external identifiers shall be unique in the first six characters,
though this harsh discipline be irksome and the years of its necessity
stretch before thee seemingly without end, lest thou tear thy hair out
and go mad on that fateful day when thou desirest to make thy program 
run on an old system.

10.  Thou shalt foreswear, renounce, and abjure the vile heresy which
claimeth that "All the world's a VAX", and have no commerce with
the benighted heathens who cling to this barbarous belief, that
the days of thy program may be long even though the days of thy
current machine be short.  (Programs should be written to be portable 
and with the assumption that the software will outlast the current
hardware).
 
JRB's supplementary rules:
 
1.  Every open(2) should have an equal and opposite
close(2).  Likewise, every fopen(3) should have a matching  
fclose(3), and every malloc(3) should have a matching free(3).
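
A tiny sketch of rule 1 (the function is hypothetical): every resource
acquired in the function is released on every path out of it.

#include <stdio.h>
#include <stdlib.h>

int copy_to_buffer(const char *path, size_t size)
{
    FILE *fp = fopen(path, "r");        /* fopen(3) ... */
    if (fp == NULL)
        return -1;

    char *buf = malloc(size);           /* malloc(3) ... */
    if (buf == NULL) {
        fclose(fp);                     /* ... still needs its fclose(3) */
        return -1;
    }

    fread(buf, 1, size, fp);
    /* ... use buf ... */

    free(buf);                          /* matching free(3) */
    fclose(fp);                         /* matching fclose(3) */
    return 0;
}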

2.  ALWAYS be careful about NULL pointers and arguments to function calls
that take strings.  Use assertions for dealing with NULL pointers. 
See assert(3) for details.
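
A quick sketch of rule 2 (the function name is made up): assert(3) turns a
NULL argument into an immediate, clearly located abort instead of a random
crash later.

#include <assert.h>
#include <string.h>

size_t name_length(const char *name)
{
    assert(name != NULL);   /* fails loudly, with file and line number */
    return strlen(name);
}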

3. Be particularly careful with routines that can
"accidentally" overwrite data or stack regions; avoid using these routines
if possible.  For example, use fgets() rather than gets(), and
strncpy(3) instead of strcpy(3).  Put limits in routines that you write.

4.  Be careful with kernel calls, especially with pointers to buffers.
The kernel blindly believes what you tell it and can easily overwrite
parts of your stack or data space.  If it does, you get a core dump.
Something like this can happen to you:
foo()
{
 int fd[1];  <---- too small: pipe(2) expects room for two descriptors
 int x;  <---- why is this here?  pipe() will clobber it
 pipe(fd);
}
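
For comparison, a corrected sketch of the same call: pipe(2) gets the
two-element array it expects, and its return code is checked.

#include <stdio.h>
#include <unistd.h>

int good(void)
{
    int fd[2];               /* pipe(2) fills in both descriptors */

    if (pipe(fd) == -1) {    /* always check the return code */
        perror("pipe");
        return -1;
    }
    /* ... use fd[0] (read end) and fd[1] (write end) ... */
    close(fd[0]);
    close(fd[1]);
    return 0;
}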

5.  never ignore information the system gives you; segmentation
violations are important, and give you a lot of information about what's
going wrong (even more if you use the debugger to look at a core
dump).  Don't rm -f core without making an attempt to determine
what happened.
 % gdb myproggoesboom core
 (gdb) bt

6.  Always check the error return code on system calls and library
calls.  Read the man pages often and carefully.  Write them
with care too.  If you write man pages, include examples.
(Dangerously different...).

7.  NEVER make assumptions like "This malloc could never fail."
Of course it could.  Similar assumptions include "This
file has to exist, so of course, the open can't fail."
(Check out the access(2) call.)  Well-written programs
have a lot of error-checking code.
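
A short sketch tying rules 6 and 7 together (the path and sizes are only
examples): the return codes of open(2) and malloc(3) are both checked
instead of assumed to succeed.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int read_config(void)
{
    int fd = open("/etc/myapp.conf", O_RDONLY);   /* example path */
    if (fd == -1) {              /* rule 6: the open CAN fail */
        perror("open /etc/myapp.conf");
        return -1;
    }

    char *buf = malloc(4096);
    if (buf == NULL) {           /* rule 7: this malloc CAN fail */
        close(fd);
        return -1;
    }

    ssize_t n = read(fd, buf, 4096);
    if (n == -1)
        perror("read");

    free(buf);
    close(fd);
    return (n == -1) ? -1 : 0;
}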

8. C programming style on UNIX is like this:
 
function foo
 weed out error possibility A
 weed out error possibility B
 do the function

Pascal is like this:
 if (not A) then
  if (not B) then
   do the function
  else
   handle error B
 else
  handle error A
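
In C that "weed out the errors first" shape usually comes out as early
returns, roughly like this (the function and names are illustrative):

#include <string.h>

int do_work(const char *path, char **out)
{
    if (path == NULL)          /* weed out error possibility A */
        return -1;
    if (out == NULL)           /* weed out error possibility B */
        return -1;

    *out = strdup(path);       /* do the function */
    return (*out == NULL) ? -1 : 0;
}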

Debugging Tools and Techniques

(Slides: http://thermalnoise.files.wordpress.com/2007/01/debugging.pdf)

Thus Spake The Master Programmer: "When you have learned to snatch the error code from the trap frame, it will be time for you to leave."
- The Tao Of Programming

Ananth Shrinivas
Solaris Engineering
Sun Microsystems

Approx & Non-Linear Agenda

➢ Common Problems and Common Tools
➢ The Problems in Theory
➢ Memory Management
➢ Profiling and Execution Tracing
➢ Core Dumps, Network Monitoring
➢ The Solutions in Practice
➢ GDB, Netcat, Wireshark (The Swiss Army Knives)
➢ strace and ltrace (Dynamic Execution Tracers)
➢ Valgrind and Friends (Emulators and Interposing Libraries)
➢ gprof and oprofile (Instrumentation and Sampling Profilers)
➢ The One True Tool: State of Art in Debugging

Common Problems

➢ Memory Management: Invalid Pointers, Buffer Overflows, Double Frees, Memory Leaks
➢ Tracing Execution and Flow Control: Profiling and Performance analysis, Code path verification, Code coverage, Debugging Logical errors
➢ Multi-threaded Programming: Race Conditions, Deadlocks, Lock Contention
➢ Advanced Problems: Core Dumps, Disassembly, Debugging your operating system, Compiler Bugs, Hardware Bugs, Debugger Bugs, Understanding foreign code!

Common Linux Debugging Tools

➢ Memory: valgrind, Insure++, Purify, memwatch
➢ Execution Tracing: strace, ltrace, gdb
➢ Process Monitoring: pmap, lsof, top, /proc
➢ Profiling: gprof, oprofile, CodeAnalyst, vTune
➢ Code Coverage: gcov
➢ Multithreaded Programming: helgrind, $BRAIN
➢ General Purpose Debuggers: gdb, dbx, DDD
➢ Static Code Analyzers: gcc, lint, splint, Purify
➢ Grokkers: cflow, cscope, ctags, lxr, opengrok

Underlined, Italicized = Proprietary Tools

A Program on Disk - Executable

Courtesy: http://www.linuxforums.org/misc/understanding_elf_using_readelf_and_objdump.html

Executable and Linkable Format
Relocatable files (gcc -c)
Shared objs (gcc -shared)
Executable files (ld)

readelf – Read elf headers, sections and symbol tables
Objdump – Disassemble elf objects and hack around
Neat Tools: nm, strings, od

A Program in Memory - Process

Text Segment – Machine Code. Shared, Read-Only.
Data Segment – Initialized global variables from executable
BSS – Has uninitialized global variables set to zero (Block started by symbol)
Stack – Collection of stack frames. Grows Downward
Heap – Dynamic memory for programs and libraries. Grows Upward

(The slide's layout diagram runs from Low Address to High Address.)

The GNU Debugger (GDB)

➢ Source/Instruction => Process/Core/Kernel (kgdb)
➢ Frequently used commands (TAB autocompletes)
➢ file/attach – load a binary file for execution
➢ kill/run – the loaded file or process
➢ list – list a file, funcs, lines, addr.
➢ break/clear – breakpoint at func, lines, addr.
➢ step/stepi – single step one source line / machine instruction
➢ next/nexti – step over subroutines
➢ cont – continue until next breakpoint or end
➢ disable/enable – breakpoint manipulation

The GNU Debugger (GDB)

➢ Frequently used commands (continued ....)
➢ bt – Print backtrace of all stack frames
➢ frame N – switch stack frame context
➢ display/print – variables, expressions
➢ ptype – print type of variable (stabs/ctf)
➢ info – shows a huge number of useful things
➢ Tools useful in conjunction with GDB
➢ pmap – Display process address layout
➢ elfsh – interactive shell for elfdump !
➢ biew – ncurses gui to explore elf objects

Preparing for a GDB session

➢ Don't strip – Cost of Disk << Cost of Engineering
➢ Never ever omit the frame pointer (-fomit-frame-pointer is evil)
➢ Add the enhanced symbol table (gcc -g)
➢ Disable optimizations when creating a debug executable (-O0)
➢ Add GNU specific extensions for a lot of extra debugging power (-ggdb3)
➢ .gdbinit – put redundant commands into this file


DDD – GDB for X

strace

➢ Powerful runtime tool to trace syscalls and signals.
➢ Restrict to system calls or classes using -e trace (!)= syscall|set|process|network|ipc|file|desc
➢ Attach to existing process using -p
➢ Follow children of fork() -f and vfork() -F
➢ Coarse profiling using -tt, -r, -c
➢ -s display up to only n characters
➢ Symbol name demangling using -C
➢ Instruction Pointer at the time of trace -i
➢ Log output to a file using -o and -ff

ltrace

➢ Runtime tool to trace library calls (How are library calls and syscalls different?)
➢ Aggregate syscalls by count -c
➢ Use ldd to find out static link-time library dependencies and -l, -L to filter library names.
➢ Indent call flow using -n
➢ Trace system calls too using -S !
➢ Most other options are strace syntax compatible
➢ For ltrace Internals see PTRACE(2)
➢ BUGGY ! ELF32 only ! dlopen() not traced !

Profiling Tools

➢ Time from Shell / gettimeofday() / clock()
➢ Instrumentation Profilers
➢ GPROF – Collection/Analysis of execution profile
➢ GCOV – Hotspot detection using code coverage
➢ Quantum Limitations – Heisenberg Principle
➢ Sampling Profilers
➢ oprofile – Kernel, CPU supported counters and event monitors – understand your CPU well.
➢ AMD CodeAnalyst and Intel vTune

Nifty Tools for Unlucky Days

➢ Networking Tools
➢ Wireshark – Brilliant protocol analyzer
➢ Netstat – A lot of useful statistics and views
➢ Netcat – TCP/IP Swiss Army Knife
➢ Nmap – Network Exploration / Port Scanner
➢ Filesystem Tools
➢ fuser – Identify processes using a file/socket
➢ lsof – List of open files. Command line hell
➢ watch (-d) – Repeatedly executes a command. Waits for output to change. Highlights the change.

MM :: Valgrind / Cachegrind

➢ The most advanced MM debugging tool
➢ Use of uninitialized memory
➢ Reading/writing memory after it has been freed
➢ Reading/writing off the end of malloc() areas
➢ Reading/writing to wrong addresses on the stack !
➢ Memory leaks - i.e. malloc() pointers lost forever
➢ Mismatched use of malloc/new[] vs. free/delete[]
➢ Overlapping pointers in memcpy() and friends
➢ Some misuses of the POSIX pthreads API
➢ Memory hog ! 25-75 times slower ! -O0 works best

The One True Tool

If you thought Valgrind was mind-boggling, wait until you see the state of the art in debugging.

DTrace (OpenSolaris, FreeBSD, MacOS)

Using vim as an IDE all in one

From Vim Tips Wiki
Tip 1439, created December 12, 2006 · complexity basic · author Johnny · version n/a
I've read a lot of tips about how to make Vim into an IDE-like editor. Most of them are really useful, and I want to sum them up in this tip and then add some of my own experience.
Here are some useful tips to read:
VimTip64: Always set your working directory to the file you're editing
VimTip58: Switching back and forth between ViM and Visual Studio .NET
VimTip1119: How to use Vim like an IDE
Here are some scripts I recommend:
Project 1.1.4 Organize/navigate projects of files (like IDE/buffer explorer).
TagList 4.2 Source code browser (supports C/C++, Java, Perl, Python, TCL, SQL, PHP, etc).
MiniBufExpl 6.3.2 Elegant buffer explorer; takes very little screen space.
ShowMarks-2.2 Visually shows the location of marks.
OmniCppComplete-0.4 C/C++ omni-completion with ctags database.
CRefVim-1.0.4 A C-reference manual especially designed for Vim.
exUtility-4.1.0 Global search,symbol search,tag track...(Like IDE/Source Insight).
Here are some programs you may need to download:
http://gnuwin32.sourceforge.net/
diffutils-2.8.7-1.exe
gawk-3.1.3-2.exe
id-utils-4.0-2.exe
http://ctags.sourceforge.net/
ctags.exe
Here are some scripts for your vimrc:
" --------------------
" ShowMarks
" --------------------
let showmarks_include = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
let g:showmarks_enable = 1
" For marks a-z
highlight ShowMarksHLl gui=bold guibg=LightBlue guifg=Blue
" For marks A-Z
highlight ShowMarksHLu gui=bold guibg=LightRed guifg=DarkRed
" For all other marks
highlight ShowMarksHLo gui=bold guibg=LightYellow guifg=DarkYellow
" For multiple marks on the same line.
highlight ShowMarksHLm gui=bold guibg=LightGreen guifg=DarkGreen
" --------------------
" Project
" --------------------
map :Project
map :Project:redraw/
nmap ToggleProject
let g:proj_window_width = 30
let g:proj_window_increment = 50
" --------------------
" exTagSelect
" --------------------
nnoremap :ExtsToggle
nnoremap ts :ExtsSelectToggle
nnoremap tt :ExtsStackToggle
map ] :ExtsGoDirectly
map [ :PopTagStack
let g:exTS_backto_editbuf = 0
let g:exTS_close_when_selected = 1
" --------------------
" exGlobalSearch
" --------------------
nnoremap :ExgsToggle
nnoremap gs :ExgsSelectToggle
nnoremap gq :ExgsQuickViewToggle
nnoremap gt :ExgsStackToggle
map :GS
map :GSW
let g:exGS_backto_editbuf = 0
let g:exGS_close_when_selected = 0
" --------------------
" exSymbolTable
" --------------------
nnoremap ss :ExslSelectToggle
nnoremap sq :ExslQuickViewToggle
nnoremap :ExslToggle
nnoremap :ExslQuickSearch/^
nnoremap sg :ExslGoDirectly
let g:exSL_SymbolSelectCmd = 'TS'
" --------------------
" exEnvironmentSetting
" --------------------
function g:exES_UpdateEnvironment()
if exists( 'g:exES_PWD' )
silent exec 'cd ' . g:exES_PWD
endif
if exists( 'g:exES_Tag' )
let &tags = &tags . ',' . g:exES_Tag
endif
if exists( 'g:exES_Project' )
silent exec 'Project ' . g:exES_Project
endif
endfunction
" --------------------
" TagList
" --------------------
" F4: Switch on/off TagList
nnoremap :TlistToggle
" TagListTagName - Used for tag names
highlight MyTagListTagName gui=bold guifg=Black guibg=Orange
" TagListTagScope - Used for tag scope
highlight MyTagListTagScope gui=NONE guifg=Blue
" TagListTitle - Used for tag titles
highlight MyTagListTitle gui=bold guifg=DarkRed guibg=LightGray
" TagListComment - Used for comments
highlight MyTagListComment guifg=DarkGreen
" TagListFileName - Used for filenames
highlight MyTagListFileName gui=bold guifg=Black guibg=LightBlue
"let Tlist_Ctags_Cmd = $VIM.'/vimfiles/ctags.exe' " location of ctags tool
let Tlist_Show_One_File = 1 " Displaying tags for only one file~
let Tlist_Exist_OnlyWindow = 1 " if you are the last, kill yourself
let Tlist_Use_Right_Window = 1 " split to the right side of the screen
let Tlist_Sort_Type = "order" " sort by order or name
let Tlist_Display_Prototype = 0 " do not show prototypes and not tags in the taglist window.
let Tlist_Compart_Format = 1 " Remove extra information and blank lines from the taglist window.
let Tlist_GainFocus_On_ToggleOpen = 1 " Jump to taglist window on open.
let Tlist_Display_Tag_Scope = 1 " Show tag scope next to the tag name.
let Tlist_Close_On_Select = 1 " Close the taglist window when a file or tag is selected.
let Tlist_Enable_Fold_Column = 0 " Don't Show the fold indicator column in the taglist window.
let Tlist_WinWidth = 40
" let Tlist_Ctags_Cmd = 'ctags --c++-kinds=+p --fields=+iaS --extra=+q --languages=c++'
" very slow, so I disable this
" let Tlist_Process_File_Always = 1 " To use the :TlistShowTag and the :TlistShowPrototype commands without the taglist window and the taglist menu, you should set this variable to 1.
":TlistShowPrototype [filename] [linenumber]
" --------------------
" MiniBufExpl
" --------------------
let g:miniBufExplTabWrap = 1 " make tabs show complete (no broken on two lines)
let g:miniBufExplModSelTarget = 1 " If you use other explorers like TagList you can (As of 6.2.8) set it at 1:
let g:miniBufExplUseSingleClick = 1 " If you would like to single click on tabs rather than double clicking on them to goto the selected buffer.
let g:miniBufExplMaxSize = 1 " setting this to 0 will mean the window gets as big as needed to fit all your buffers.
"let g:miniBufExplForceSyntaxEnable = 1 " There is a Vim bug that can cause buffers to show up without their highlighting. The following setting will cause MBE to
"let g:miniBufExplorerMoreThanOne = 1 " Setting this to 0 will cause the MBE window to be loaded even
"let g:miniBufExplMapCTabSwitchBufs = 1
"let g:miniBufExplMapWindowNavArrows = 1
"for buffers that have NOT CHANGED and are NOT VISIBLE.
highlight MBENormal guibg=LightGray guifg=DarkGray
" for buffers that HAVE CHANGED and are NOT VISIBLE
highlight MBEChanged guibg=Red guifg=DarkRed
" buffers that have NOT CHANGED and are VISIBLE
highlight MBEVisibleNormal term=bold cterm=bold gui=bold guibg=Gray guifg=Black
" buffers that have CHANGED and are VISIBLE
highlight MBEVisibleChanged term=bold cterm=bold gui=bold guibg=DarkRed guifg=Black
" --------------------
" OmniCppComplete
" --------------------
" set Ctrl+j in insert mode, like VS.Net
imap
" :inoremap pumvisible() ? "\" : "\u\"
" set completeopt as don't show menu and preview
set completeopt=menuone
" Popup menu hightLight Group
highlight Pmenu ctermbg=13 guibg=LightGray
highlight PmenuSel ctermbg=7 guibg=DarkBlue guifg=White
highlight PmenuSbar ctermbg=7 guibg=DarkGray
highlight PmenuThumb guibg=Black
" use global scope search
let OmniCpp_GlobalScopeSearch = 1
" 0 = namespaces disabled
" 1 = search namespaces in the current buffer
" 2 = search namespaces in the current buffer and in included files
let OmniCpp_NamespaceSearch = 1
" 0 = auto
" 1 = always show all members
let OmniCpp_DisplayMode = 1
" 0 = don't show scope in abbreviation
" 1 = show scope in abbreviation and remove the last column
let OmniCpp_ShowScopeInAbbr = 0
" This option allows to display the prototype of a function in the abbreviation part of the popup menu.
" 0 = don't display prototype in abbreviation
" 1 = display prototype in abbreviation
let OmniCpp_ShowPrototypeInAbbr = 1
" This option allows to show/hide the access information ('+', '#', '-') in the popup menu.
" 0 = hide access
" 1 = show access
let OmniCpp_ShowAccess = 1
" This option can be use if you don't want to parse using namespace declarations in included files and want to add namespaces that are always used in your project.
let OmniCpp_DefaultNamespaces = ["std"]
" Complete Behaviour
let OmniCpp_MayCompleteDot = 0
let OmniCpp_MayCompleteArrow = 0
let OmniCpp_MayCompleteScope = 0
" When 'completeopt' does not contain "longest", Vim automatically select the first entry of the popup menu. You can change this behaviour with the OmniCpp_SelectFirstItem option.
let OmniCpp_SelectFirstItem = 0
After setting all this up, you can really use Vim as an IDE-like editor.
I usually like to create a project with exUtility, using "gvim project_name.vimenvironment".
You can browse project files with the Project plugin.
You can run global searches and edit files with the exUtility plugin.
You can jump to tags and track code with the exUtility plugin.
You can analyze code with the taglist plugin.
You can choose buffers with the minibuffer plugin.
You can set and clear marks with the showmarks plugin.
Comments

I think cscope should also have been on this list, especially for people who are editing C files (as opposed to C++, which seems to be the main focus of this tip). It has a lot more features than ctags. A nice tutorial can be found at cscope.sourceforge.net.
For real-time source code analysis, this plugin might help as well: http://www.vim.org/scripts/script.php?script_id=2368

Kscope vs. SourceInsight


I had heard of Kscope before, but for efficiency's sake (no mouse needed) I had always traced code with vim + cscope. Recently I changed my mind and decided to stop torturing myself and start using the mouse :)
So the first step was to install Kscope. After using it for a little while I found it quite similar to SourceInsight on Windows: its search speed is on par with SourceInsight, which sold for about NT$6,000 (circa-2004 pricing), and it is operated in much the same way.

Below are screenshots of Kscope and SourceInsight:


Kscope


SourceInsight (*)


*
Source: http://weblogs.java.net/blog/staufferjames/archive/2007/07/littleknown_but.html

Posted by t@c0 at 4:04 PM

cscope is a console-mode, text-based graphical interface that lets programmers and software developers search C source code (there is limited support for other languages). It is often used on very large projects to find source code, functions, declarations, definitions and regular expressions given a text string.

cscope is a good equivalent on Linux. If you use KDE, there's a nice GUI for it called Kscope.

Monday, June 10, 2013

Regionally independent date time parsing


date.bat:
@echo off

:: If you want the date independently of the region day/month order, you can use "WMIC os GET LocalDateTime" as a source, since it's in ISO order:
for /F "usebackq tokens=1,2 delims==" %%i in (`wmic os get LocalDateTime /VALUE 2^>NUL`) do if '.%%i.'=='.LocalDateTime.' set ldt=%%j
:: %ldt:~offset,length% extracts substrings: year, month, day, hour, minute, second.milliseconds
set format1=%ldt:~0,4%-%ldt:~4,2%-%ldt:~6,2% %ldt:~8,2%:%ldt:~10,2%:%ldt:~12,6%
set format2=%ldt:~0,4%-%ldt:~4,2%-%ldt:~6,2%_%ldt:~8,2%%ldt:~10,2%%ldt:~12,6%

echo Local date is [%format1%]
echo Local date is [%format2%]
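
For example, with a hypothetical LocalDateTime value of 20130610143025.123456+480, the script would print:

Local date is [2013-06-10 14:30:25.123]
Local date is [2013-06-10_143025.123]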

Reference:
http://stackoverflow.com/questions/203090/how-to-get-current-datetime-on-windows-command-line-in-a-suitable-format-for-us

Thursday, June 6, 2013

Volume Shadow Copy Failed to create the storage area association

Problem:
When I tried to move the storage area from one disk to another disk, I saw the following error message:

Failed to create the storage area association.

Error 0x8004231d: The specified shadow copy storage association is in use and so can't be deleted.

Solution:

Run cmd as administrator

List existing volume shadow copies:
cmd> vssadmin list shadows

List volume shadow copy storage associations:
cmd> vssadmin list shadowstorage

cmd> vssadmin delete shadowstorage /for=D: /on=D:

Error: The specified shadow copy storage association is in use.

Note: VSSadmin is now replaced by Diskshadow on Windows Server 2008 and Windows Server 2012.

cmd> vssadmin delete shadows /For=D: /Oldest

Error: Snapshots were found, but they were outside of your allowed context. Try removing them with the backup application which created them.

Note: The VSSadmin command is now replaced by the Diskshadow command on Windows Server 2008 and Windows Server 2012.

Use diskshadow command to remove the shadow copies:
cmd> diskshadow
diskshadow> help

List all volume shadow copies on the computer:
diskshadow> list shadows all
Number of shadow copies listed: 1

To list the options of the delete shadows command:
diskshadow> delete shadows

Delete the oldest shadow copy of the given volume or shared folder:
diskshadow> delete shadows oldest d:
diskshadow> delete shadows oldest \\SERVER\SHARE

Delete all shadow copies of the given volume or shared folder:
diskshadow> delete shadows volume d:

pfSense Remotely Circumvent Firewall Lockout by Temporarily Changing the Firewall Rules


You could (very temporarily) disable firewall rules by typing:
# pfctl -d

Once you have regained the necessary access, turn the firewall back on by typing:
# pfctl -e

Alternately, the loaded ruleset is left in /tmp/rules.debug. You can edit that to fix your connectivity issue and reload those rules like so:
# pfctl -f /tmp/rules.debug

# less  /tmp/rules.debug | grep MyGatewayIP

After that, do whatever work you need to do in the WebGUI to make the fix permanent. (From billm in this forum post)

Note: It is recommended to make IP address and gateway changes through the WebGUI, so that the correct values get written to /tmp/rules.debug; otherwise the network may stop working (the old IP stays in effect because the new IP was never written to the rules file).

Flush all rules (nat, filter, queue, state, info, table) and reload them from /tmp/rules.debug
# pfctl -F all -f /tmp/rules.debug

Report on the currently loaded filter ruleset.
# pfctl -s rules

Report on the currently loaded nat ruleset.
# pfctl -s nat

Report on the currently running state table (very useful).
# pfctl -s state

If you do not want to disable pf, but you still need to get in, you can run the following shell command to add an "allow all" rule on the WAN:
# pfSsh.php playback enableallowallwan

Note: This is VERY DANGEROUS to keep around, so once you have regained access to the GUI with proper rules, be sure to delete this "allow all" rule.

Add firewall rule at the command line with easyrule
You can use the command line version of easyrule to add a firewall rule to let you back in.

# easyrule pass wan tcp x.x.x.x y.y.y.y 443

That would pass in from the remote IP x.x.x.x to your WAN IP, y.y.y.y on port 443. Adjust as needed.

Remotely Circumvent Firewall Lockout With SSH Tunneling
If you blocked access to the WebGUI remotely (which is smart to do, anyhow) but you still have access with SSH, then there is a relatively easy way to get in: SSH Tunneling.

If the WebGUI is on port 80, set your client to forward local port 80 (or 8080, or whatever) to remote port "localhost:80", then point your browser to http://localhost:80 (or whichever local port you chose.) If your WebGUI is on another port, use that instead. Obviously, if you are using https you will still need to use https to access the WebGUI this way.

Here is how to set up a port 80 tunnel in PuTTY:

Fill out the options as shown, then click add. Once you connect and enter your username/password, you can access the WebGUI using your redirected local port.

Wednesday, June 5, 2013

No matter what line of work you are in, if you read this article and truly understand it, it is as good as graduating from the Tsinghua University MBA.



The article is long, so read it slowly...

Quotes from a speech by Zhang Ruimin (張瑞敏), CEO of the Haier Group

Maturity has nothing to do with age. Whether a person is mature comes down to whether you can look at things from the other person's point of view, whether you can turn my world into your world. This society has plenty of adults who have still not outgrown childish behavior, arguing endlessly with others over the smallest things.

The first sign of immaturity: demanding an immediate payoff.

Such a person does not understand that only what is sown in spring can be harvested in autumn. Many people, whatever they are doing, want a return the moment they have put in the slightest effort. (Learning the piano, learning English, and so on: it feels hard at the start, they decide they can't do it, and they immediately want to quit.) Many people start a business, see no results at first, and think about giving up; some give up after a month, some after three months, some after half a year, some after a year. I don't understand why people give up so easily, but I do know that giving up is a habit, the typical habit of a loser. So you must have vision and look further ahead; vision is for looking at the future!

For anyone with a habit of giving up, here is a sentence you must take with you: "Winners never quit, and quitters never win." So why do so many people give up so easily? The famous American success writer Napoleon Hill put it this way:

The poor have two very typical mindsets:
1. They always say "no" to opportunity;
2. They always want to "get rich overnight."

Put any opportunity in front of them today and they will say "no." Suppose you run a very successful restaurant, and out of genuine goodwill you share your experience with your relatives and friends and urge them to open restaurants too. Can you guarantee that every one of them will do it? Some of them still won't.

So this is a very typical mindset of the poor; they will say, "It works for you, but it wouldn't work for me!" As for wanting to get rich overnight: mention any business to such a person and his first question is "Does it make money?" You say "Yes," and he immediately asks, "Is it easy?" You say "Yes," and then he asks, "Is it fast?" You say "Fast!" and he says, "Great, I'm in!" You see how naive that is!

Think about it: is there anything in this world that makes money, is easy, and is fast all at once? There isn't, and even if there were, it would never be our turn. So in life we must learn to give first. And why should you give? Because you are giving in pursuit of your dream. People live on hope and dreams; if a person has no dream and nothing to pursue, a whole lifetime loses its meaning!
Whatever you want to get out of life, you must first put in. If you want time, first put in time; if you want money, first put in money; if you want to keep a hobby, you must first sacrifice that hobby; if you want more time with your family, you must first spend less time with them.

But one thing is certain: what you put into this endeavor will be repaid many times over. It is like a seed: you plant it, then water it, fertilize it, weed it, and keep the insects off, and in the end you harvest tens or even hundreds of times what you sowed.

In life you must learn to give first. Don't be so eager for quick success, wanting a return right away; there is no free lunch, and you cannot succeed by taking it easy.

You must learn to give first!

The second sign of immaturity: lack of self-discipline.

Where does the lack of self-discipline show itself?

1. Refusing to change yourself:
You need to change your patterns of thinking and behaviour and break your bad habits. In truth there is little difference in ability between people; the difference lies in how they think. When something happens, ask a successful person and a failure about it and their answers will be different, sometimes the exact opposite.

We are not successful today because our way of thinking is not successful. A useful formula: plant a thought and you harvest an action; plant the action and you harvest a habit; plant the habit and you harvest a character; plant the character and it decides your destiny.

If you plant a seed of failure, failure is all you will reap; if you plant a seed of success, you will surely succeed.

Many people have plenty of bad habits, such as watching television, playing mahjong, drinking, and hanging around dance halls. They know these habits are bad, so why won't they change? Because many people would rather endure a bad way of living than endure the pain of changing.

2. Gossiping behind people's backs:
If you like to talk about others behind their backs, one day it will find its way back to them. As the old Chinese saying goes, whoever gossips about others' faults is a troublemaker himself.

3. Negativity and complaining:
Whom do you prefer in life: the people who go around scowling all day, complaining about this and that, or the people who are cheerful all day? If you are one of the negative complainers, you must fix that flaw in your character; otherwise you will find it hard to fit into this society and hard to work with anyone.

You should understand that life treats you the way you treat it, and people treat you the way you treat them. So do not be negative and do not complain. Stay positive, always, as the saying goes: winners never complain, and complainers never win.

The third sign of immaturity: being ruled by your emotions.

Whether a person succeeds depends on five factors:

learning to control your emotions
a healthy body
good interpersonal relationships
time management
financial management

If you want to succeed, you must learn to manage these five factors well. Why put emotions first and health second? Because however strong your body is, bad emotions will undermine it. These days it is said that success is 20% IQ and 80% EQ, so you must keep your emotions under control; they have an enormous influence on you. Do not fly into a rage at others over the smallest things; it does no good.

So what kind of attitude should you cultivate in life? The "three don'ts" and "three dos":
don't criticize, don't complain, don't blame;
encourage more, praise more, compliment more.

Do that and you will become someone whom people welcome. If you want your partners to become better, it is simple: keep encouraging and praising them.

What if they really do have faults, what then? Shouldn't you give them suggestions? You will notice something in everyday life: sometimes a suggestion is accepted, and sometimes it only makes people angry. How you give the suggestion matters most; use the "sandwich" approach: praise, suggestion, then praise again!

Think about how many people you praised today. Some people assume praise is just flattery, just boot-licking. Praise and flattery are different; genuine praise has four characteristics:

1. It is sincere.
2. It comes from the heart.
3. It is acceptable to everyone.
4. It is selfless.

If you praise with a strong ulterior motive, that is flattery. When you want to praise someone, say it out loud; when you want to criticize someone, bite your tongue!

The fourth sign of immaturity: refusing to learn, thinking you already know it all, and having no "back to zero" mindset.

People and animals actually have a great deal in common, and an animal's instinct for self-preservation is even stronger than ours (compare an infant with a piglet). The biggest difference between people and animals, however, is that people can learn and think. You must keep learning; never let your natural potential go to waste. Learn, and keep an empty-cup mindset. From whom should we learn? Learn directly from people who have already succeeded!

Always feed yourself positive material, and do not watch or listen to negative, toxic material; once you absorb poisonous ideas, they corrode your mind and your life. In this knowledge economy, learning is your only passport to the future. In an age of speed, change and crisis, only constant learning keeps you from being left behind by the times, so keep a learning, back-to-zero mindset and look for something to learn in everyone: "When three people walk together, one of them can be my teacher."

The fifth sign of immaturity: acting on other people's words instead of on your own convictions.

We say that belief is the starting point and persistence is the finish line. Many people do not act on conviction; they prefer to hear what others say and lack 100% confidence in the work they are doing. Believing and conviction are two different things: belief concerns what can be seen, conviction concerns what cannot.

Conviction is an attitude, yet many people act not on conviction but on hearsay. If you want to reach a mountain top, ask the people who have climbed to the summit, never the people who have never climbed at all.
This is not to say you should ignore other people's advice; take it as reference by all means, but remember that you are doing this to realize your own dream and your own worth. Other people will not care about your dream; only you care about your dream, and only you care whether you truly succeed. That is what matters most!

As long as your choice is the right one, never mind what anyone says. Check yourself against these five signs of immaturity and see which ones apply to you; correct them in the shortest time you can. As long as you believe you can overcome your own immaturity, you will gradually grow and mature, and you will get the life you want. You will realize your dream of freedom of time, financial freedom, and freedom of spirit!

Squid transparent SSL proxy on pfSense

Hi there,

I've got squid 2.7 set up and running as a transparent HTTP proxy on
a pfSense 2.1 snapshot from June 28th.

Now I'd like to set it up as an HTTPS transparent proxy as well.

In the proxy server's custom options box I've added :

https_port 127.0.0.1:3129 transparent \
cert=/etc/certs/pfsense.example.org.pem \
key=/etc/certs/pfsense.example.org.key

Then I've created a NAT (Port Forward) rule to redirect all HTTPS
(destination port) traffic over to 127.0.0.1:3129, and automatically
added an associated filter rule which allows such connections.

Now when I'm trying to access to https://www.gmail.com for example, I've
got the browser warning about the name mismatch wrt the local
certificate (we're fine with that), but then I've got this message in my
browser :

(92) Protocol error

Squid's access.log contains :

1343186054.441 256 10.10.10.100 TCP_MISS/502 1481 GET https://www.gmail.com/ - DIRECT/74.125.237.150 text/html

And Squid's cache.log contains :

2012/07/25 14:14:14| SSL unknown certificate error 20 in /C=US/ST=California/L=Mountain View/O=Google Inc/CN=mail.google.com
2012/07/25 14:14:14| fwdNegotiateSSL: Error negotiating SSL connection on FD 37: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)

Any idea what I'm doing wrong ?
===
> Any idea what I'm doing wrong ?

This is what you're doing wrong:
> Now I'd like to set it up as an HTTPS transparent proxy as well.

HTTPS traffic is encrypted, and squid is lacking the proper
keys/certificates to decrypt it.

In theory, you could set up squid with its own certificates, but that
will turn squid into a man-in-the-middle, i.e. all your clients will
complain that the certificate doesn't match the sites they're trying to
access.

IOW: Just don't do it.

I'd suggest looking into browser autoconfiguration using auto.pac /
wpad.dat files.

-Stefan
===
I know this is man in the middle, and I even wrote that we were OK with the browser message which clearly says there's something like a man in the middle attack going on.

Since I've added its own certificate to Squid, it isn't lacking them, and so it "*should*" work from what I've read on the net about this subject. But clearly I'm missing something because instead of having the traffic decrypted by Squid and then encrypted again by Squid for local clients, I've got a Protocol Error.

So my original question was not about it being OK to do it or not, but more about why it didn't work as expected.

Thanks for your feedback anyway, if I can't do otherwise I'll play with autoconfiguration scripts.

bye

--
Jerome Alet
===
> So my original question was not about it being OK to do it or not, but
more about why it didn't work as expected.

See here:

http://wiki.squid-cache.org/Features/SslBump

You need to allow for SSL certificate errors or ignore them. This could be
a threat, because Squid then decides on the validity of a certificate (say,
on a name mismatch) by itself, without the end user being informed.

Regards,
Nishant
===
> I decided to enable transparent proxy on my school firewall because I
> was getting a million requests a day to configure proxy settings on
> student laptops.

> But now that I turned on transparent proxy, students have discovered
> that they can get to banned sites (like facebook) via https.
> http://www.facebook.com is blocked but https://www.facebook.com still
> works.

> Can someone let me know how to block these? I understand I have to deny
> the 'connect method' but don't see where to do this. Can this only be
> done in command line?

You cannot transparently proxy SSL connections. You would have to deny
outbound access to port 443 and if they want SSL, they must configure
the proxy settings into their browser(s) either by hand or automatically
with something like WPAD.

Jim
===
If you don't want any www.facebook.com connections at all, you can use the DNS Forwarder to change its IP to something else...
===
> I can't block tcp 443 on a wholesale basis; we need it for lots of stuff. If I can do it for a single domain, I'm there.

The idea is to set up a non-transparent proxy for all traffic and block any traffic not using the proxy.
The whole purpose of https is to prevent a third party (in this case your firewall) from seeing anything above the minimum routing information (source and destination IP address).
I think WPAD is the way to go for this one.

(Where I went to high school, they somehow blocked certain https sites, but I think it was by IP and the subscription service they used for the block list actually listed all the IPs for facebook and other blocked sites.)
===
Web proxy caching is a way to store requested Internet objects (e.g. data like web pages) available via the HTTP, FTP, and Gopher protocols on a system closer to the requesting site. Web browsers can then use the local Squid cache as a proxy HTTP server, reducing access time as well as bandwidth consumption. This is often useful for Internet service providers to increase speed to their customers, and LANs that share an Internet connection. Because it is also a proxy (i.e. it behaves like a client on behalf of the real client), it can provide some anonymity and security. However, it also can introduce significant privacy concerns as it can log a lot of data including URLs requested, the exact date and time, the name and version of the requester's web browser and operating system, and the referrer.

A client program (e.g. browser) either has to specify explicitly the proxy server it wants to use (typical for ISP customers), or it could be using a proxy without any extra configuration: "transparent caching", in which case all outgoing HTTP requests are intercepted by Squid and all responses are cached. The latter is typically a corporate set-up (all clients are on the same LAN) and often introduces the privacy concerns mentioned above.

Reference:
http://lists.pfsense.org/pipermail/list/2012-July/002614.html
http://comments.gmane.org/gmane.comp.security.firewalls.pfsense.support/20768
http://en.wikipedia.org/wiki/Squid_%28software%29
http://en.wikipedia.org/wiki/Proxy_server#Transparent_proxy
http://wiki.squid-cache.org/Features/HTTPS

Tuesday, June 4, 2013

Monitor network bandwidth usage with NetFlow, flow-tools, MySQL on FreeBSD

nfdump
nfsen

ng_netflow - a NetGraph-based kernel module for FreeBSD.
flow-tools

pfflowd
softflowd

sFlow - Eric Chou: NFSen and NFDump are both good software, thanks for the introduction. As a side note, some products no longer support NetFlow v5, for example the Cisco ASA (after 8.0). I once wasted some time chasing a bug during an installation, only to find that the ASA needs a different variant altogether, NetFlow Security Event Logging (NSEL). In high-throughput or non-Cisco environments, sFlow can also be an option.

ipcad - is an IP accounting daemon. It uses bpf or pcap to access interfaces and gather IP statistics. Collected numbers are arranged into address-to-address flow pairs and can then be accessed via rsh in Cisco fashion, or exported via the NetFlow UDP protocol.

fprobe - a NetFlow probe - a libpcap-based tool that collects network traffic data and emits it as NetFlow flows towards the specified collector.

Wireshark - Wireshark is a free and open-source packet analyzer.
http://www.wireshark.org/

Adventnet Netflow Analyzer
http://www.manageengine.com/products/netflow

Using "NetFlow" requires:

  • Sensor: netflow export from your network device(s) - e.g. on Cisco IOS "ip flow-export destination x.x.x.x yyyy"
  • Collector: a netflow collector daemon/application to stick the exported flow info into a database
  • Analyzer/Cruncher/Reporter: an analysis tool to report on the netflow information collected.

RusDyr: I usually recommend ng_netflow + flow-tools. I use them on a regular basis in ISP environments.

http://www.gorlani.com/portal/articles/gathering-netflow-data-with-a-freebsd-server
http://forums.freebsd.org/showthread.php?t=31256

NetFlow Packet transport protocol

NetFlow records are traditionally exported using User Datagram Protocol (UDP) and collected using a NetFlow collector. The IP address of the NetFlow collector and the destination UDP port must be configured on the sending router. The standard value is UDP port 2055, but other values like 9555 or 9995 are often used.

For efficiency reasons, the router traditionally does not keep track of flow records already exported, so if a NetFlow packet is dropped due to network congestion or packet corruption, all the records it contained are lost forever. UDP does not inform the router of the loss, so the router cannot resend the packets. This can be a real problem, especially with NetFlow v8 or v9, which can aggregate a lot of packets or flows into a single record. The loss of a single UDP packet can have a huge impact on the statistics of some flows.

That is why some modern implementations of NetFlow use the Stream Control Transmission Protocol (SCTP) to export packets so as to provide some protection against packet loss, and make sure that NetFlow v9 templates are received before any related record is exported. Note that TCP would not be suitable for NetFlow because a strict ordering of packets would cause excessive buffering and delays.

The problem with SCTP is that it requires interaction between each NetFlow collector and each routers exporting NetFlow. There may be performance limitations if a router has to deal with many NetFlow collectors, and a NetFlow collector has to deal with lots of routers, especially when some of them are unavailable due to failure or maintenance.

SCTP may not be efficient if NetFlow must be exported toward several independent collectors, some of which may be test servers that can go down at any moment. UDP allows simple replication of NetFlow packets using Network taps or L2 or L3 Mirroring. Simple stateless equipment can also filter or change the destination address of NetFlow UDP packets if necessary. Since NetFlow export almost only use network backbone links, packet loss will often be negligible. If it happens, it will mostly be on the link between the network and the NetFlow collectors.

On Sensor machine, make sure netgraph is supported:
# ls /boot/kernel/netgraph*
# ls /boot/kernel/ng_ether*
# ls /boot/kernel/ng_one2many*

On Sensor machine:
# vim /boot/loader.conf
ng_ether_load="YES"
ng_one2many_load="YES"

Reboot the system so the kernel modules listed in /boot/loader.conf get loaded:

# sync;sync;reboot

or load them immediately without rebooting:
# kldload /boot/kernel/ng_ether.ko
# kldload /boot/kernel/ng_one2many.ko

Make sure the kernel modules are loaded:
# kldstat
Id Refs Address Size Name
1 7 0xffffffff80200000 1323388 kernel
2 1 0xffffffff81524000 45c8 ng_ether.ko
3 3 0xffffffff81529000 15330 netgraph.ko
4 1 0xffffffff8153f000 2bc0 ng_one2many.ko

On Sensor machine:
# vim /etc/ng_conf
mkpeer em0: netflow lower iface0
name em0:lower netflow
connect em0: netflow: upper out0
mkpeer netflow: ksocket export inet/dgram/udp
msg netflow:export connect inet/192.168.0.130:2055

Note: em0 is the network interface to listen to.
Note: 192.168.0.130:2055 is the collector's IP address and its port number.

On Sensor machine:
# /usr/sbin/ngctl -f /etc/ng_conf

# sockstat | grep 2055
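
You can also confirm that the netgraph nodes were created; this assumes the node name netflow from /etc/ng_conf above:
# ngctl list
# ngctl show netflow: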

Sensing the Sensor

It is also useful to determine that the sensor data is reaching your collector's network interface before installing a collector. A simple tcpdump invocation should be sufficient to let you see whether traffic is coming from your sensor's IP address, to the collector's IP address at the specified port.

Run tcpdump on Collector:
# tcpdump -nettti em0 udp and port 2055

Note: you have to wait for few minutes to see the Sensor sending the packets.

Install flow-tools on Collector with MySQL support:
# cd /usr/ports/net-mgmt/flow-tools/
# make config-recursive
# make install clean

MYSQL=on: MySQL database support

# mkdir /var/log/netflows
# chmod 755 /var/log/netflows
# chown root:wheel /var/log/netflows

Note: the configuration files of flow-tools can be found at /usr/local/etc/flow-tools.

Run flow-capture on Collector:
# /usr/local/bin/flow-capture -n 287 -S 5 -w /var/log/netflows/ 0/0/2055

Note: -S stat_interval logs a timestamped message every stat_interval minutes with counters such as the number of flows received, packets processed, and flows lost.
Note: -n rotations sets the number of file rotations per day. -n 287 yields 288 files per day (one every 5 minutes).
Note: 0/0/2055 is LocalIP/RemoteIP/port. Use 0 for any IP address.
Note: -w workdir sets the working directory to /var/log/netflows/.

Make sure flow-capture is running:
# ps auxww | grep -i flow-capture
# sockstat | grep flow-cap

Make sure the network interface of the Collector is in promiscuous mode:
# ifconfig em0 | grep -i promisc
em0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500

# grep -i promisc /var/log/messages
Jun 1 21:10:21 bsd-netflow kernel: em0: promiscuous mode enabled

If the interface is not in promiscuous mode:
# ifconfig em0 promisc
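
To keep promiscuous mode across reboots, one option (a sketch, assuming em0 is statically configured in /etc/rc.conf; substitute your own address) is to append promisc to the interface line:
# vim /etc/rc.conf
ifconfig_em0="inet 192.168.0.130 netmask 255.255.255.0 promisc"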

Log Files
When flow-capture is working correctly, data files will be stored in the specified directory, with data split into date folders, such as:

/var/netflow/sensorXY/YYYY/YYYY-MM/YYYY-MM-DD

The file naming convention for the incremental files is:
- tmp-v05.YYYY-MM-DD.HHMMSS+/-UTCTZ // temporary file
- ft-v05.YYYY-MM-DD.HHMMSS+/-UTCTZ // permanent file

flow-capture will generate a tmp-* file once it is running:
# ps auxww | grep -i flow-capture

# find /var/log/netflows -name 'tmp*'
/var/log/netflows/2013/2013-05/2013-05-31/tmp-v05.2013-05-31.224501-0700

Note: file begins with tmp-* is a temporary file, which will be moved to ft-* as a permanent file (every 5 minutes in our setting).
Note: v05 means Netflow version 5.
Note: -0700 means UTC (standard time) -7 hours.

Log messages
flow-capture logs its messages, errors to syslog's /var/log/messages which can be monitored.

Export the netflow data from binary to ASCII text file:
# flow-export -f2 -m0x303000 < ft-v05.2013-05-31.174501-0700 > test.txt

Export the netflow data from binary into MySQL with flow-tools:
Create a new database called "netflow", with following table:

CREATE TABLE `flows` (
  `FLOW_ID` bigint(32) unsigned NOT NULL AUTO_INCREMENT,
  `UNIX_SECS` int(32) unsigned NOT NULL DEFAULT '0',
  `UNIX_NSECS` int(32) unsigned NOT NULL DEFAULT '0',
  `SYSUPTIME` int(20) NOT NULL,
  `EXADDR` varchar(16) NOT NULL,
  `DPKTS` int(32) unsigned NOT NULL DEFAULT '0',
  `DOCTETS` int(32) unsigned NOT NULL DEFAULT '0',
  `FIRST` int(32) unsigned NOT NULL DEFAULT '0',
  `LAST` int(32) unsigned NOT NULL DEFAULT '0',
  `ENGINE_TYPE` int(10) NOT NULL,
  `ENGINE_ID` int(15) NOT NULL,
  `SRCADDR` varchar(16) NOT NULL DEFAULT '0',
  `DSTADDR` varchar(16) NOT NULL DEFAULT '0',
  `NEXTHOP` varchar(16) NOT NULL DEFAULT '0',
  `INPUT` int(16) unsigned NOT NULL DEFAULT '0',
  `OUTPUT` int(16) unsigned NOT NULL DEFAULT '0',
  `SRCPORT` int(16) unsigned NOT NULL DEFAULT '0',
  `DSTPORT` int(16) unsigned NOT NULL DEFAULT '0',
  `PROT` int(8) unsigned NOT NULL DEFAULT '0',
  `TOS` int(2) NOT NULL,
  `TCP_FLAGS` int(8) unsigned NOT NULL DEFAULT '0',
  `SRC_MASK` int(8) unsigned NOT NULL DEFAULT '0',
  `DST_MASK` int(8) unsigned NOT NULL DEFAULT '0',
  `SRC_AS` int(16) unsigned NOT NULL DEFAULT '0',
  `DST_AS` int(16) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`FLOW_ID`),
  KEY `SRCADDR` (`SRCADDR`),
  KEY `DSTADDR` (`DSTADDR`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
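
A quick way to create the database and a matching account from the shell (a sketch: it assumes the CREATE TABLE statement above is saved in flows.sql, and reuses the username/password placeholders from the rotate script below):
# mysql -u root -p -e "CREATE DATABASE netflow"
# mysql -u root -p -e "GRANT ALL ON netflow.* TO 'username'@'localhost' IDENTIFIED BY 'password'"
# mysql -u root -p netflow < flows.sql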

Create a "rotate program" that will actually enter in the information into mysql:
# cat /etc/myscript/flow-mysql-export.sh
#!/bin/sh
/usr/local/bin/flow-export -f3 -u "username:password:localhost:3306:netflow:flows" < /var/log/netflows/$1
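
Note: the rotate script has to be executable, otherwise flow-capture cannot run it:
# chmod +x /etc/myscript/flow-mysql-export.sh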

Kill the previous flow-capture process:
# ps auxww | grep -i flow-capture
root 13813 0.0 0.5 22588 9628 ?? Ss 5:39PM 0:00.65 /usr/local/bin/flow-capture -w /var/log/netflows -S 5 0/0/2055

# kill 13813

Run flow-capture with mysql-export enabled:
# /usr/local/bin/flow-capture -n 287 -S 5 -w /var/log/netflows -R /etc/myscript/flow-mysql-export.sh 0/0/2055

Run MySQL Query:
mysql> SELECT * FROM netflow.flows;

Note: IP protocol 1 is for ICMP, 6 is for TCP and 17 is for UDP. Refer to the List of IP protocol numbers.
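
Once rows are flowing in, summary queries can also be run from the shell. A hedged example (it assumes the same username/password placeholders as the rotate script) that lists the top 10 source addresses by bytes sent:
# mysql -u username -p netflow -e "SELECT SRCADDR, SUM(DOCTETS) AS bytes FROM flows GROUP BY SRCADDR ORDER BY bytes DESC LIMIT 10"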

Print the netflow data in human readable format:
# flow-print -p < /var/log/netflows/2013/2013-06/2013-06-01/ft-v05.2013-06-01.215000-0700 | less

Print the netflow data in human readable format:
# flow-cat /var/log/netflows/2013/2013-06/2013-06-01/ft-v05.2013-06-01.220000-0700 | flow-print -f 5 | less

Note: in the octets column, one octet equals 8 bits (one byte).

Print the netflow data with filter:
# flow-cat ft-v05.2013-06-04.161001-0700 | flow-filter -p 53 | flow-print -f 5 | less

Generate reports from the netflow data:
# flow-cat /var/log/netflows/2013/2013-06/2013-06-01/ft-v05.2013-06-01.214000-0700 | flow-report | less

Install FlowViewer:
# cd /usr/ports/net-mgmt/flowviewer
# make config-recursive
# make install

# cd /usr/local/www/flowviewer
# cp FlowViewer_Configuration.pm.dist FlowViewer_Configuration.pm

You can find additional information in the:
# less /usr/local/share/doc/flowviewer/README

Alternative tool to export the netflow data:
p5-Cflow is a perl module for analyzing raw flow files written by cflowd, a package used to collect Cisco NetFlow data.

# cd /usr/ports/net-mgmt/p5-Cflow/
# make install clean

# flowdumper -s ft-v05.2013-05-31.174151-0700
# flowdumper -v ft-v05.2013-05-31.174151-0700
# flowdumper -V ft-v05.2013-05-31.174151-0700

Install nfsen:
# cd /usr/ports/net-mgmt/nfsen
# make config-recursive

In the options dialog for nfdump-1.6.9, check NFTRACK (PortTracker support).

# make install

/usr/local/etc/nfsen.conf
/usr/local/var/nfsen
/usr/local/libexec/nfsen
/usr/local/www/nfsen

%sources = (
'mysrv' => { 'port' => '2055', 'col' => '#0000ff', 'type' => 'netflow' },
);

Note: mysrv needs to resolve to the IP address of the NetFlow source device (add a record to the /etc/hosts file).
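
For example (a sketch, assuming the sensor exports from 192.168.0.120; use your sensor's real address):
# echo "192.168.0.120 mysrv" >> /etc/hosts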

Note: check the directory and file permissions!

# cd /usr/local/www/apache22/data
# ln -s /usr/local/www/nfsen nfsen

# less /usr/ports/net-mgmt/nfsen/work/nfsen-1.3.6p1/contrib/PortTracker/INSTALL

# mkdir /usr/local/var/nfsen/portdb

# /usr/local/etc/rc.d/nfsen restart

Reference:
http://bbs.chinaunix.net/forum.php?mod=viewthread&tid=667760
http://en.wikipedia.org/wiki/NetFlow
http://lijian366.i.sohu.com/blog/view/170814918.htm
http://meefirst.blogspot.ca/2012/02/installing-nfsen-on-freebsd-9.html
http://www.gorlani.com/portal/articles/gathering-netflow-data-with-a-freebsd-server
http://www.kelvinism.com/2008/12/netflow-into-mysql-with-flow-tools_5439.html
http://www.netadmin.com.tw/article_content.aspx?sn=1111030003
http://www.nomoa.com/bsd/toolkit/monitoring/netflow/collector.html

network monitor bandwidth usage log tools:

Nagios
http://www.nagios.org/

Icinga (a fork of Nagios)
https://www.icinga.org/

Munin
http://munin-monitoring.org/

Monit
http://mmonit.com/monit/

MRTG
http://oss.oetiker.ch/mrtg/

PRTG Network Monitor - Windows GUI implementation of MRTG's functionality (limited freeware version available).
http://www.paessler.com/prtg

Spiceworks
http://www.spiceworks.com/

Darkstat - realtime network statistics. It also offers bandwidth graphs for an interface, as well as traffic to/from specific IP addresses.
http://unix4lyfe.org/darkstat/

RRDtool - Reimplementation of MRTG's graphing and logging features
http://oss.oetiker.ch/rrdtool/

Cacti - A similar tool using RRDtool
http://www.cacti.net/index.php

Observium - A heavily automated platform for network graphing using RRDtool
http://www.observium.org/wiki/Main_Page

Netflow - Netflow is another option for bandwidth usage analysis. Netflow is a standard means of traffic accounting supported by many routers and firewalls. You need a Netflow collector running on a host inside your network to collect the data. pfSense can export Netflow data to the collector using the pfflowd package, or softflowd.

ntop - If you need even more detail than that, you might need the ntop package. It will even track where connections were made by local PCs, and how much bandwidth was used on individual connections.

vnstat - is another bandwidth monitoring tool available to install as a package. See the Vnstat doc for more information.

sysmon - A network tool designed for high performance and accurate monitoring.
http://www.sysmon.org/
Building a lightweight network monitoring and alerting system with Sysmon (article in Chinese): http://www.netadmin.com.tw/article_content.aspx?sn=1103030012

pftop - is a small, curses-based utility for real-time display of active states and rule statistics for pf, the packet filter (for OpenBSD).

Note: once pftop is running, press the following keys: 7, Shift+R (capital R), s, 1.

7 switches to the speed view.
Shift+R sorts by RATE.
s then 1 sets the refresh interval to 1 second.
h shows help.
q quits.

bandwidthd - tracks usage of TCP/IP network subnets and builds HTML files with graphs to display network utilization. Charts are built by individual IP.

Under Console:
  • iptraf
  • trafshow
  • iftop
  • nload
  • ifstat
  • systat -ifstat 1

The fork of nagios to icinga is a good thing, much in the same way as quagga was a great fork of zebra.

Reference:
http://doc.pfsense.org/index.php/How_can_I_monitor_bandwidth_usage%3F
http://blog.ijun.org/2013/05/monitor-network-traffic-with-vnstat-on.html

segmentation fault core dump

A segmentation fault occurs when a program tries to access memory that the OS has not told it it can use. Memory is split into segments. If a program tries to read or write a memory address in a segment it has not been allocated, the OS sends a signal (SIGSEGV) to the process, telling it "naughty boy!", and by default the process falls over with this error message.

"core dumped" means the state of the program is written to a file called "core". This is helpful for debuggers which can read the core file and work out where the program crashed, the values in the variables, registers, what was on the stack and so on.

When you use scanf, you have to pass the memory address into which the input will be written. Here the value of the integer "age" was passed instead. At the point scanf gets it, age is probably 0 or some random number (it hasn't been assigned to, so officially its value is undefined). That random value is almost certainly not a memory address in a segment allocated to the program, hence the segmentation fault. The correction paulsm4 provided passes the address of the integer instead, i.e. something like scanf("%d", &age) rather than scanf("%d", age).

Addresses and pointers to variables are a tricky subject to start with. Don't worry - you'll get a lot of core dumps before you think you understand it, and then a whole lot more before you actually understand it.

Reference:

http://www.linuxquestions.org/questions/programming-9/segmentation-fault-core-dumped-what-508083/
http://stupefydeveloper.blogspot.ca/2008/10/gdb-examining-core-dumps.html
http://www.cprogramming.com/debugging/segfaults.html
http://stackoverflow.com/questions/1518711/c-programming-how-does-free-know-how-much-to-free
http://stackoverflow.com/questions/1963745/how-does-free-know-how-much-memory-to-deallocate
http://stackoverflow.com/questions/851958/where-do-malloc-free-store-allocated-sizes-and-addresses
http://stackoverflow.com/questions/1119134/how-do-malloc-and-free-work
http://stackoverflow.com/questions/3923784/how-does-the-free-function-work-in-c
http://www.memorymanagement.org/