## Tuesday, August 4, 2015

### git commands fundamentals

What I have learned as basic Git commands:
• git init
• git log
• git status
• git commit -m "commit message"
• git diff <hash>..<hash>
• git clean (deletes untracked files from the working directory!)
• .gitignore
• git log origin/master
• git branch -r
• git pull origin <branch>
• git commit -am "commit" (stages all tracked files, so no separate "git add" step is needed)
• git tag -a | git tag -s
• git config --global alias.lga "log --graph --oneline --all --decorate"
• git reflog (a second trash bin; by default, reflog entries for unreachable commits expire after 30 days)
• git stash | git stash list | git stash apply | git stash pop | git stash branch new_branch_for_stash (for half-finished work you neither want to commit nor want to lose)
• git checkout (switch to a branch, or restore the latest committed version of a file)
• git checkout master; git merge feature1 (merge feature1 into the master branch)
• git diff --cached (compare the repository with the staging area)
• git rebase (replay the current branch onto the branch it was created from; conflicts might occur)
• git cherry-pick (apply a single chosen commit onto the current branch)
• git push origin master:new_branch (push the local master branch to origin's new_branch branch; ":new_branch" can be omitted when the remote branch name matches the local one)
• git push origin :remote_branch_name (deletes the remote branch; do this with caution, because other people might be working on top of it)
• git log -p -1 <file or path> or git log -p <file or path>
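As a quick sanity check of a few of these commands, here is a throwaway-repo walkthrough; the repo path, file names, and commit messages are all invented for illustration:

```shell
# Sketch: exercising a few of the commands above in a throwaway repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"   # local identity so commit works anywhere
git config user.name  "demo"
echo "hello" > a.txt
git add a.txt
git commit -qm "first commit"
echo "wip" >> a.txt
git stash -q        # park half-finished work without committing it...
git stash pop -q    # ...and take it back unchanged
git commit -aqm "second commit"   # -a stages tracked files: no separate git add
git log --oneline                 # shows both commits
```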

## Monday, August 3, 2015

### linux diff command

A good article on interpreting the output of the diff command, reproduced from the GNU diffutils manual.

# diff Output Formats

diff has several mutually exclusive options for output format. The following sections describe each format, illustrating how diff reports the differences between two sample input files.

## Two Sample Input Files

Here are two sample files that we will use in numerous examples to illustrate the output of diff and how various options can change it.
This is the file `lao`:
```
The Way that can be told of is not the eternal Way;
The name that can be named is not the eternal name.
The Nameless is the origin of Heaven and Earth;
The Named is the mother of all things.
Therefore let there always be non-being,
so we may see their subtlety,
And let there always be being,
so we may see their outcome.
The two are the same,
But after they are produced,
they have different names.
```
This is the file `tzu`:
```
The Nameless is the origin of Heaven and Earth;
The named is the mother of all things.

Therefore let there always be non-being,
so we may see their subtlety,
And let there always be being,
so we may see their outcome.
The two are the same,
But after they are produced,
they have different names.
They both may be called deep and profound.
Deeper and more profound,
The door of all subtleties!
```
In this example, the first hunk contains just the first two lines of `lao`, the second hunk contains the fourth line of `lao` opposing the second and third lines of `tzu`, and the last hunk contains just the last three lines of `tzu`.

## Showing Differences Without Context

The "normal" diff output format shows each hunk of differences without any surrounding context. Sometimes such output is the clearest way to see how lines have changed, without the clutter of nearby unchanged lines (although you can get similar results with the context or unified formats by using 0 lines of context). However, this format is no longer widely used for sending out patches; for that purpose, the context format (see section Context Format) and the unified format (see section Unified Format) are superior. Normal format is the default for compatibility with older versions of diff and the Posix standard.
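To see the point about zero lines of context, compare normal output with `diff -U0` on two tiny throwaway files (the file names and contents below are invented):

```shell
# Normal format vs. a unified diff with zero lines of context.
printf 'a\nb\nc\n' > old.txt
printf 'a\nx\nc\n' > new.txt
diff old.txt new.txt > normal.out || true       # normal format: a lone "2c2" hunk
diff -U0 old.txt new.txt > unified0.out || true # same change, unified markers
cat normal.out unified0.out
```

Note the `|| true`: diff exits with status 1 whenever the files differ, which is not an error here.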

### Detailed Description of Normal Format

The normal output format consists of one or more hunks of differences; each hunk shows one area where the files differ. Normal format hunks look like this:
```
change-command
< from-file-line
< from-file-line...
---
> to-file-line
> to-file-line...
```
There are three types of change commands. Each consists of a line number or comma-separated range of lines in the first file, a single character indicating the kind of change to make, and a line number or comma-separated range of lines in the second file. All line numbers are the original line numbers in each file. The types of change commands are:
`lar`
Add the lines in range r of the second file after line l of the first file. For example, `8a12,15` means append lines 12--15 of file 2 after line 8 of file 1; or, if changing file 2 into file 1, delete lines 12--15 of file 2.
`fct`
Replace the lines in range f of the first file with lines in range t of the second file. This is like a combined add and delete, but more compact. For example, `5,7c8,10` means change lines 5--7 of file 1 to read as lines 8--10 of file 2; or, if changing file 2 into file 1, change lines 8--10 of file 2 to read as lines 5--7 of file 1.
`rdl`
Delete the lines in range r from the first file; line l is where they would have appeared in the second file had they not been deleted. For example, `5,7d3` means delete lines 5--7 of file 1; or, if changing file 2 into file 1, append lines 5--7 of file 1 after line 3 of file 2.

### An Example of Normal Format

Here is the output of the command `diff lao tzu` (see section Two Sample Input Files for the complete contents of the two files). Notice that it shows only the lines that are different between the two files.
```
1,2d0
< The Way that can be told of is not the eternal Way;
< The name that can be named is not the eternal name.
4c2,3
< The Named is the mother of all things.
---
> The named is the mother of all things.
>
11a11,13
> They both may be called deep and profound.
> Deeper and more profound,
> The door of all subtleties!
```

## Showing Differences in Their Context

Usually, when you are looking at the differences between files, you will also want to see the parts of the files near the lines that differ, to help you understand exactly what has changed. These nearby parts of the files are called the context.
GNU diff provides two output formats that show context around the differing lines: context format and unified format. It can optionally show in which function or section of the file the differing lines are found.
If you are distributing new versions of files to other people in the form of diff output, you should use one of the output formats that show context so that they can apply the diffs even if they have made small changes of their own to the files. patch can apply the diffs in this case by searching in the files for the lines of context around the differing lines; if those lines are actually a few lines away from where the diff says they are, patch can adjust the line numbers accordingly and still apply the diff correctly. See section Applying Imperfect Patches, for more information on using patch to apply imperfect diffs.

### Context Format

The context output format shows several lines of context around the lines that differ. It is the standard format for distributing updates to source code.
To select this output format, use the `-C lines`, `--context[=lines]`, or `-c` option. The argument lines that some of these options take is the number of lines of context to show. If you do not specify lines, it defaults to three. For proper operation, patch typically needs at least two lines of context.

#### Detailed Description of Context Format

The context output format starts with a two-line header, which looks like this:
```
*** from-file from-file-modification-time
--- to-file to-file-modification-time
```
You can change the header's content with the `-L label` or `--label=label` option; see section Showing Alternate File Names.
Next come one or more hunks of differences; each hunk shows one area where the files differ. Context format hunks look like this:
```
***************
*** from-file-line-range ****
from-file-line
from-file-line...
--- to-file-line-range ----
to-file-line
to-file-line...
```
The lines of context around the lines that differ start with two space characters. The lines that differ between the two files start with one of the following indicator characters, followed by a space character:
`!`
A line that is part of a group of one or more lines that changed between the two files. There is a corresponding group of lines marked with `!` in the part of this hunk for the other file.
`+`
An "inserted" line in the second file that corresponds to nothing in the first file.
`-`
A "deleted" line in the first file that corresponds to nothing in the second file.
If all of the changes in a hunk are insertions, the lines of from-file are omitted. If all of the changes are deletions, the lines of to-file are omitted.

#### An Example of Context Format

Here is the output of `diff -c lao tzu` (see section Two Sample Input Files for the complete contents of the two files). Notice that up to three lines that are not different are shown around each line that is different; they are the context lines. Also notice that the first two hunks have run together, because their contents overlap.
```
*** lao Sat Jan 26 23:30:39 1991
--- tzu Sat Jan 26 23:30:50 1991
***************
*** 1,7 ****
- The Way that can be told of is not the eternal Way;
- The name that can be named is not the eternal name.
  The Nameless is the origin of Heaven and Earth;
! The Named is the mother of all things.
  Therefore let there always be non-being,
  so we may see their subtlety,
  And let there always be being,
--- 1,6 ----
  The Nameless is the origin of Heaven and Earth;
! The named is the mother of all things.
!
  Therefore let there always be non-being,
  so we may see their subtlety,
  And let there always be being,
***************
*** 9,11 ****
--- 8,13 ----
  The two are the same,
  But after they are produced,
  they have different names.
+ They both may be called deep and profound.
+ Deeper and more profound,
+ The door of all subtleties!
```

#### An Example of Context Format with Less Context

Here is the output of `diff --context=1 lao tzu` (see section Two Sample Input Files for the complete contents of the two files). Notice that at most one context line is reported here.
```
*** lao Sat Jan 26 23:30:39 1991
--- tzu Sat Jan 26 23:30:50 1991
***************
*** 1,5 ****
- The Way that can be told of is not the eternal Way;
- The name that can be named is not the eternal name.
  The Nameless is the origin of Heaven and Earth;
! The Named is the mother of all things.
  Therefore let there always be non-being,
--- 1,4 ----
  The Nameless is the origin of Heaven and Earth;
! The named is the mother of all things.
!
  Therefore let there always be non-being,
***************
*** 11 ****
--- 10,13 ----
  they have different names.
+ They both may be called deep and profound.
+ Deeper and more profound,
+ The door of all subtleties!
```

### Unified Format

The unified output format is a variation on the context format that is more compact because it omits redundant context lines. To select this output format, use the `-U lines`, `--unified[=lines]`, or `-u` option. The argument lines is the number of lines of context to show. When it is not given, it defaults to three.
At present, only GNU diff can produce this format and only GNU patch can automatically apply diffs in this format. For proper operation, patch typically needs at least two lines of context.
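As a minimal round trip, assuming GNU diff and patch are both installed (the file names below are invented):

```shell
# Create a unified diff, then let patch apply it using the context lines.
printf 'one\ntwo\nthree\n' > old.txt
printf 'one\n2\nthree\n'  > new.txt
diff -u old.txt new.txt > change.patch || true  # diff exits 1 on differences
cp old.txt work.txt
patch -s work.txt < change.patch    # -s: silent on success
cmp -s work.txt new.txt && echo "patched copy matches new.txt"
```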

#### Detailed Description of Unified Format

The unified output format starts with a two-line header, which looks like this:
```
--- from-file from-file-modification-time
+++ to-file to-file-modification-time
```
You can change the header's content with the `-L label` or `--label=label` option; see section Showing Alternate File Names.
Next come one or more hunks of differences; each hunk shows one area where the files differ. Unified format hunks look like this:
```
@@ from-file-range to-file-range @@
line-from-either-file
line-from-either-file...
```
The lines common to both files begin with a space character. The lines that actually differ between the two files have one of the following indicator characters in the left column:
`+`
A line was added here to the first file.
`-`
A line was removed here from the first file.

#### An Example of Unified Format

Here is the output of the command `diff -u lao tzu` (see section Two Sample Input Files for the complete contents of the two files):
```
--- lao Sat Jan 26 23:30:39 1991
+++ tzu Sat Jan 26 23:30:50 1991
@@ -1,7 +1,6 @@
-The Way that can be told of is not the eternal Way;
-The name that can be named is not the eternal name.
 The Nameless is the origin of Heaven and Earth;
-The Named is the mother of all things.
+The named is the mother of all things.
+
 Therefore let there always be non-being,
 so we may see their subtlety,
 And let there always be being,
@@ -9,3 +8,6 @@
 The two are the same,
 But after they are produced,
 they have different names.
+They both may be called deep and profound.
+Deeper and more profound,
+The door of all subtleties!
```

### Showing Which Sections Differences Are in

Sometimes you might want to know which part of the files each change falls in. If the files are source code, this could mean which function was changed. If the files are documents, it could mean which chapter or appendix was changed. GNU diff can show this by displaying the nearest section heading line that precedes the differing lines. Which lines are "section headings" is determined by a regular expression.

#### Showing Lines That Match Regular Expressions

To show in which sections differences occur for files that are not source code for C or similar languages, use the `-F regexp` or `--show-function-line=regexp` option. diff considers lines that match the argument regexp to be the beginning of a section of the file. Here are suggested regular expressions for some common languages:
`^[A-Za-z_]`
C, C++, Prolog
`^(`
Lisp
`^@\(chapter\|appendix\|unnumbered\|chapheading\)`
Texinfo
This option does not automatically select an output format; in order to use it, you must select the context format (see section Context Format) or unified format (see section Unified Format). In other output formats it has no effect.
The `-F` and `--show-function-line` options find the nearest unchanged line that precedes each hunk of differences and matches the given regular expression. Then they add that line to the end of the line of asterisks in the context format, or to the `@@` line in unified format. If no matching line exists, they leave the output for that hunk unchanged. If that line is more than 40 characters long, they output only the first 40 characters. You can specify more than one regular expression for such lines; diff tries to match each line against each regular expression, starting with the last one given. This means that you can use `-p` and `-F` together, if you wish.

To show in which functions differences occur for C and similar languages, you can use the `-p` or `--show-c-function` option. This option automatically defaults to the context output format (see section Context Format), with the default number of lines of context. You can override that number with `-C lines` elsewhere in the command line. You can override both the format and the number with `-U lines` elsewhere in the command line.
The `-p` and `--show-c-function` options are equivalent to `-F'^[_a-zA-Z$]'` if the unified format is specified, otherwise `-c -F'^[_a-zA-Z$]'` (see section Showing Lines That Match Regular Expressions). GNU diff provides them for the sake of convenience.
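Here is a small sketch of `-p` in action on an invented C function; the hunk header picks up the nearest preceding line matching `^[_a-zA-Z$]`:

```shell
# -p (combined with -u for unified output) labels each hunk with the enclosing function.
cat > v1.c <<'EOF'
int add(int a, int b)
{
    return a + b;
}
EOF
sed 's/a + b/b + a/' v1.c > v2.c
diff -u -p v1.c v2.c || true   # the @@ line ends with: int add(int a, int b)
```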

### Showing Alternate File Names

If you are comparing two files that have meaningless or uninformative names, you might want diff to show alternate names in the header of the context and unified output formats. To do this, use the `-L label` or `--label=label` option. The first time you give this option, its argument replaces the name and date of the first file in the header; the second time, its argument replaces the name and date of the second file. If you give this option more than twice, diff reports an error. The `-L` option does not affect the file names in the pr header when the `-l` or `--paginate` option is used (see section Paginating diff Output).
Here are the first two lines of the output from `diff -C2 -Loriginal -Lmodified lao tzu`:
```
*** original
--- modified
```

## Showing Differences Side by Side

diff can produce a side by side difference listing of two files. The files are listed in two columns with a gutter between them. The gutter contains one of the following markers:
white space
The corresponding lines are in common. That is, either the lines are identical, or the difference is ignored because of one of the `--ignore` options (see section Suppressing Differences in Blank and Tab Spacing).
`|`
The corresponding lines differ, and they are either both complete or both incomplete.
`<`
The files differ and only the first file contains the line.
`>`
The files differ and only the second file contains the line.
`(`
Only the first file contains the line, but the difference is ignored.
`)`
Only the second file contains the line, but the difference is ignored.
`\`
The corresponding lines differ, and only the first line is incomplete.
`/`
The corresponding lines differ, and only the second line is incomplete.
Normally, an output line is incomplete if and only if the lines that it contains are incomplete; see section Incomplete Lines. However, when an output line represents two differing lines, one might be incomplete while the other is not. In this case, the output line is complete, but its gutter is marked `\` if the first line is incomplete, `/` if the second line is.
Side by side format is sometimes easiest to read, but it has limitations. It generates much wider output than usual, and truncates lines that are too long to fit. Also, it relies on lining up output more heavily than usual, so its output looks particularly bad if you use varying width fonts, nonstandard tab stops, or nonprinting characters.
You can use the sdiff command to interactively merge side by side differences. See section Interactive Merging with sdiff, for more information on merging files.

## Controlling Side by Side Format

The `-y` or `--side-by-side` option selects side by side format. Because side by side output lines contain two input lines, they are wider than usual. They are normally 130 columns, which can fit onto a traditional printer line. You can set the length of output lines with the `-W columns` or `--width=columns` option. The output line is split into two halves of equal length, separated by a small gutter to mark differences; the right half is aligned to a tab stop so that tabs line up. Input lines that are too long to fit in half of an output line are truncated for output.
The `--left-column` option prints only the left column of two common lines. The `--suppress-common-lines` option suppresses common lines entirely.

### An Example of Side by Side Format

Here is the output of the command `diff -y -W 72 lao tzu` (see section Two Sample Input Files for the complete contents of the two files).
```
The Way that can be told of is n   <
The name that can be named is no   <
The Nameless is the origin of He        The Nameless is the origin of He
The Named is the mother of all t   |    The named is the mother of all t
                                   >
Therefore let there always be no        Therefore let there always be no
so we may see their subtlety,           so we may see their subtlety,
And let there always be being,          And let there always be being,
so we may see their outcome.            so we may see their outcome.
The two are the same,                   The two are the same,
But after they are produced,            But after they are produced,
they have different names.              they have different names.
                                   >    They both may be called deep and
                                   >    Deeper and more profound,
                                   >    The door of all subtleties!
```

## Making Edit Scripts

Several output modes produce command scripts for editing from-file to produce to-file.

### ed Scripts

diff can produce commands that direct the ed text editor to change the first file into the second file. Long ago, this was the only output mode that was suitable for editing one file into another automatically; today, with patch, it is almost obsolete. Use the `-e` or `--ed` option to select this output format.
Like the normal format (see section Showing Differences Without Context), this output format does not show any context; unlike the normal format, it does not include the information necessary to apply the diff in reverse (to produce the first file if all you have is the second file and the diff).
If the file `d` contains the output of `diff -e old new`, then the command `(cat d && echo w) | ed - old` edits `old` to make it a copy of `new`. More generally, if `d1`, `d2`, ..., `dN` contain the outputs of `diff -e old new1`, `diff -e new1 new2`, ..., `diff -e newN-1 newN`, respectively, then the command `(cat d1 d2 ... dN && echo w) | ed - old` edits `old` to make it a copy of `newN`.
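To see the shape of an ed script, here is what diff emits for a one-line change between two tiny invented files:

```shell
# What diff -e emits for a one-line change (file names invented).
printf 'a\nb\nc\n' > old.txt
printf 'a\nx\nc\n' > new.txt
diff -e old.txt new.txt || true
# prints:
# 2c
# x
# .
```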

#### Detailed Description of ed Format

The ed output format consists of one or more hunks of differences. The changes closest to the ends of the files come first so that commands that change the number of lines do not affect how ed interprets line numbers in succeeding commands. ed format hunks look like this:
```
change-command
to-file-line
to-file-line...
.
```
Because ed uses a single period on a line to indicate the end of input, GNU diff protects lines of changes that contain a single period on a line by writing two periods instead, then writing a subsequent ed command to change the two periods into one. The ed format cannot represent an incomplete line, so if the second file ends in a changed incomplete line, diff reports an error and then pretends that a newline was appended.
There are three types of change commands. Each consists of a line number or comma-separated range of lines in the first file and a single character indicating the kind of change to make. All line numbers are the original line numbers in the file. The types of change commands are:
`la`
Add text from the second file after line l in the first file. For example, `8a` means to add the following lines after line 8 of file 1.
`rc`
Replace the lines in range r in the first file with the following lines. Like a combined add and delete, but more compact. For example, `5,7c` means change lines 5--7 of file 1 to read as the following text from file 2.
`rd`
Delete the lines in range r from the first file. For example, `5,7d` means delete lines 5--7 of file 1.

#### Example ed Script

Here is the output of `diff -e lao tzu` (see section Two Sample Input Files for the complete contents of the two files):
```
11a
They both may be called deep and profound.
Deeper and more profound,
The door of all subtleties!
.
4c
The named is the mother of all things.

.
1,2d
```

### Forward ed Scripts

diff can produce output that is like an ed script, but with hunks in forward (front to back) order. The format of the commands is also changed slightly: command characters precede the lines they modify, spaces separate line numbers in ranges, and no attempt is made to disambiguate hunk lines consisting of a single period. Like ed format, forward ed format cannot represent incomplete lines.
Forward ed format is not very useful, because neither ed nor patch can apply diffs in this format. It exists mainly for compatibility with older versions of diff. Use the `-f` or `--forward-ed` option to select it.

### RCS Scripts

The RCS output format is designed specifically for use by the Revision Control System, which is a set of free programs used for organizing different versions and systems of files. Use the `-n` or `--rcs` option to select this output format. It is like the forward ed format (see section Forward ed Scripts), but it can represent arbitrary changes to the contents of a file because it avoids the forward ed format's problems with lines consisting of a single period and with incomplete lines. Instead of ending text sections with a line consisting of a single period, each command specifies the number of lines it affects; a combination of the `a` and `d` commands is used instead of `c`. Also, if the second file ends in a changed incomplete line, then the output also ends in an incomplete line.
Here is the output of `diff -n lao tzu` (see section Two Sample Input Files for the complete contents of the two files):
```
d1 2
d4 1
a4 2
The named is the mother of all things.

a11 3
They both may be called deep and profound.
Deeper and more profound,
The door of all subtleties!
```

## Merging Files with If-then-else

You can use diff to merge two files of C source code. The output of diff in this format contains all the lines of both files. Lines common to both files are output just once; the differing parts are separated by the C preprocessor directives `#ifdef name` or `#ifndef name`, `#else`, and `#endif`. When compiling the output, you select which version to use by either defining or leaving undefined the macro name.
To merge two files, use diff with the `-D name` or `--ifdef=name` option. The argument name is the C preprocessor identifier to use in the `#ifdef` and `#ifndef` directives.
For example, if you change an instance of wait (&s) to waitpid (-1, &s, 0) and then merge the old and new files with the `--ifdef=HAVE_WAITPID` option, then the affected part of your code might look like this:
```c
do {
#ifndef HAVE_WAITPID
    if ((w = wait (&s)) < 0  &&  errno != EINTR)
#else /* HAVE_WAITPID */
    if ((w = waitpid (-1, &s, 0)) < 0  &&  errno != EINTR)
#endif /* HAVE_WAITPID */
        return w;
} while (w != child);
```
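The same merge can be tried on a pair of tiny invented files:

```shell
# Merge two one-line versions with --ifdef (file names and macro are invented).
printf 'w = wait (&s);\n'            > old.c
printf 'w = waitpid (-1, &s, 0);\n'  > new.c
diff -D HAVE_WAITPID old.c new.c || true
# prints the old line under #ifndef HAVE_WAITPID and the new one under #else
```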
You can specify formats for languages other than C by using line group formats and line formats, as described in the next sections.

### Line Group Formats

Line group formats let you specify formats suitable for many applications that allow if-then-else input, including programming languages and text formatting languages. A line group format specifies the output format for a contiguous group of similar lines.
For example, the following command compares the TeX files `old` and `new`, and outputs a merged file in which old regions are surrounded by `\begin{em}`-`\end{em}` lines, and new regions are surrounded by `\begin{bf}`-`\end{bf}` lines.
```shell
diff \
   --old-group-format='\begin{em}
%<\end{em}
' \
   --new-group-format='\begin{bf}
%>\end{bf}
' \
   old new
```
The following command is equivalent to the above example, but it is a little more verbose, because it spells out the default line group formats.
```shell
diff \
   --old-group-format='\begin{em}
%<\end{em}
' \
   --new-group-format='\begin{bf}
%>\end{bf}
' \
   --unchanged-group-format='%=' \
   --changed-group-format='\begin{em}
%<\end{em}
\begin{bf}
%>\end{bf}
' \
   old new
```
Here is a more advanced example, which outputs a diff listing with headers containing line numbers in a "plain English" style.
```shell
diff \
   --unchanged-group-format='' \
   --old-group-format='-------- %dn line%(n=1?:s) deleted at %df:
%<' \
   --new-group-format='-------- %dN line%(N=1?:s) added after %de:
%>' \
   --changed-group-format='-------- %dn line%(n=1?:s) changed at %df:
%<-------- to:
%>' \
   old new
```
To specify a line group format, use diff with one of the options listed below. You can specify up to four line group formats, one for each kind of line group. You should quote format, because it typically contains shell metacharacters.
`--old-group-format=format`
These line groups are hunks containing only lines from the first file. The default old group format is the same as the changed group format if it is specified; otherwise it is a format that outputs the line group as-is.
`--new-group-format=format`
These line groups are hunks containing only lines from the second file. The default new group format is the same as the changed group format if it is specified; otherwise it is a format that outputs the line group as-is.
`--changed-group-format=format`
These line groups are hunks containing lines from both files. The default changed group format is the concatenation of the old and new group formats.
`--unchanged-group-format=format`
These line groups contain lines common to both files. The default unchanged group format is a format that outputs the line group as-is.
In a line group format, ordinary characters represent themselves; conversion specifications start with `%` and have one of the following forms.
`%<`
stands for the lines from the first file, including the trailing newline. Each line is formatted according to the old line format (see section Line Formats).
`%>`
stands for the lines from the second file, including the trailing newline. Each line is formatted according to the new line format.
`%=`
stands for the lines common to both files, including the trailing newline. Each line is formatted according to the unchanged line format.
`%%`
stands for `%`.
`%c'C'`
where C is a single character, stands for C. C may not be a backslash or an apostrophe. For example, `%c':'` stands for a colon, even inside the then-part of an if-then-else format, which a colon would normally terminate.
`%c'\O'`
where O is a string of 1, 2, or 3 octal digits, stands for the character with octal code O. For example, `%c'\0'` stands for a null character.
`Fn`
where F is a printf conversion specification and n is one of the following letters, stands for n's value formatted with F.
`e`
The line number of the line just before the group in the old file.
`f`
The line number of the first line in the group in the old file; equals e + 1.
`l`
The line number of the last line in the group in the old file.
`m`
The line number of the line just after the group in the old file; equals l + 1.
`n`
The number of lines in the group in the old file; equals l - f + 1.
`E, F, L, M, N`
Likewise, for lines in the new file.
The printf conversion specification can be `%d`, `%o`, `%x`, or `%X`, specifying decimal, octal, lower case hexadecimal, or upper case hexadecimal output respectively. After the `%` the following options can appear in sequence: a `-` specifying left-justification; an integer specifying the minimum field width; and a period followed by an optional integer specifying the minimum number of digits. For example, `%5dN` prints the number of new lines in the group in a field of width 5 characters, using the printf format "%5d".
`(A=B?T:E)`
If A equals B then T else E. A and B are each either a decimal constant or a single letter interpreted as above. This format spec is equivalent to T if A's value equals B's; otherwise it is equivalent to E. For example, `%(N=0?no:%dN) line%(N=1?:s)` is equivalent to `no lines` if N (the number of lines in the group in the new file) is 0, to `1 line` if N is 1, and to `%dN lines` otherwise.
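These conversion specifications can be checked directly; here is the "plain English" changed-group format run on two tiny invented files:

```shell
# The "plain English" changed-group format, on two tiny files.
printf 'a\nb\nc\n' > old.txt
printf 'a\nx\nc\n' > new.txt
diff \
  --unchanged-group-format='' \
  --changed-group-format='-------- %dn line%(n=1?:s) changed at %df:
%<-------- to:
%>' \
  old.txt new.txt || true
# prints:
# -------- 1 line changed at 2:
# b
# -------- to:
# x
```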

### Line Formats

Line formats control how each line taken from an input file is output as part of a line group in if-then-else format.
For example, the following command outputs text with a one-column change indicator to the left of the text. The first column of output is `-` for deleted lines, `|` for added lines, and a space for unchanged lines. The formats contain newline characters where newlines are desired on output.
```shell
diff \
   --old-line-format='-%l
' \
   --new-line-format='|%l
' \
   --unchanged-line-format=' %l
' \
   old new
```
To specify a line format, use one of the following options. You should quote format, since it often contains shell metacharacters.
`--old-line-format=format`
formats lines just from the first file.
`--new-line-format=format`
formats lines just from the second file.
`--unchanged-line-format=format`
formats lines common to both files.
`--line-format=format`
formats all lines; in effect, it sets all three above options simultaneously.
In a line format, ordinary characters represent themselves; conversion specifications start with `%` and have one of the following forms.
`%l`
stands for the contents of the line, not counting its trailing newline (if any). This format ignores whether the line is incomplete; see section Incomplete Lines.
`%L`
stands for the contents of the line, including its trailing newline (if any). If a line is incomplete, this format preserves its incompleteness.
`%%`
stands for `%`.
`%c'C'`
where C is a single character, stands for C. C may not be a backslash or an apostrophe. For example, `%c':'` stands for a colon.
`%c'\O'`
where O is a string of 1, 2, or 3 octal digits, stands for the character with octal code O. For example, `%c'\0'` stands for a null character.
Fn'
where F is a printf conversion specification, stands for the line number formatted with F. For example, %.5dn' prints the line number using the printf format "%.5d". See section Line Group Formats, for more about printf conversion specifications.
The default line format is %l' followed by a newline character.
If the input contains tab characters and it is important that they line up on output, you should ensure that %l' or %L' in a line format is just after a tab stop (e.g. by preceding %l' or %L' with a tab character), or you should use the -t' or --expand-tabs' option.
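As a concrete illustration of the conversion specifications above, the following sketch (file names old.txt/new.txt are arbitrary) prints only the lines added in the second file, prefixed with their line numbers via `%dn':

```shell
printf 'alpha\nbeta\n'  > old.txt
printf 'alpha\ngamma\n' > new.txt
# Suppress unchanged and deleted lines; show added lines as "<line number>: <text>".
# diff exits with status 1 when the files differ, hence the trailing "|| true".
diff --unchanged-line-format='' \
     --old-line-format='' \
     --new-line-format='%dn: %l
' old.txt new.txt || true
# prints: 2: gamma
```

Note the literal newline inside the quoted format: `%l' does not include the line's trailing newline, so the format must supply one.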
Taken together, the line and line group formats let you specify many different formats. For example, the following command uses a format similar to diff's normal format. You can tailor this command to get fine control over diff's output.
diff \
--old-line-format='< %l
' \
--new-line-format='> %l
' \
--old-group-format='%df%(f=l?:,%dl)d%dE
%<' \
--new-group-format='%dea%dF%(F=L?:,%dL)
%>' \
--changed-group-format='%df%(f=l?:,%dl)c%dF%(F=L?:,%dL)
%<---
%>' \
--unchanged-group-format='' \
old new

### Detailed Description of If-then-else Format

For lines common to both files, diff uses the unchanged line group format. For each hunk of differences in the merged output format, if the hunk contains only lines from the first file, diff uses the old line group format; if the hunk contains only lines from the second file, diff uses the new line group format; otherwise, diff uses the changed line group format.
The old, new, and unchanged line formats specify the output format of lines from the first file, lines from the second file, and lines common to both files, respectively.
The option --ifdef=name' is equivalent to the following sequence of options using shell syntax:
--old-group-format='#ifndef name
%<#endif /* not name */
' \
--new-group-format='#ifdef name
%>#endif /* name */
' \
--unchanged-group-format='%=' \
--changed-group-format='#ifndef name
%<#else /* name */
%>#endif /* name */
'
You should carefully check the diff output for proper nesting. For example, when using the `-D name' or `--ifdef=name' option, you should check that if the differing lines contain any of the C preprocessor directives `#ifdef', `#ifndef', `#else', `#elif', or `#endif', they are nested properly and match. If they don't, you must make corrections manually. It is a good idea to carefully check the resulting code anyway to make sure that it really does what you want it to; depending on how the input files were produced, the output might contain duplicate or otherwise incorrect code.
The `patch -D name' option behaves just like the `diff -D name' option, except it operates on a file and a diff to produce a merged file; see section Options to patch.

### An Example of If-then-else Format

Here is the output of `diff -DTWO lao tzu' (see section Two Sample Input Files, for the complete contents of the two files):
#ifndef TWO
The Way that can be told of is not the eternal Way;
The name that can be named is not the eternal name.
#endif /* not TWO */
The Nameless is the origin of Heaven and Earth;
#ifndef TWO
The Named is the mother of all things.
#else /* TWO */
The named is the mother of all things.

#endif /* TWO */
Therefore let there always be non-being,
so we may see their subtlety,
And let there always be being,
so we may see their outcome.
The two are the same,
But after they are produced,
they have different names.
#ifdef TWO
They both may be called deep and profound.
Deeper and more profound,
The door of all subtleties!
#endif /* TWO */
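The same kind of output can be reproduced with any pair of files; here is a runnable sketch using two tiny files (names and contents are arbitrary):

```shell
printf 'alpha\nbeta\n'  > old.txt
printf 'alpha\ngamma\n' > new.txt
# diff exits with status 1 when the files differ, hence the trailing "|| true"
diff --ifdef=TWO old.txt new.txt || true
```

which prints the merged, preprocessor-guarded result:

alpha
#ifndef TWO
beta
#else /* TWO */
gamma
#endif /* TWO */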

## Wednesday, July 29, 2015

### emacs users ergonomic concern

Emacs users are heavy function/modifier key users. I originally followed the general advice to swap left Ctrl with Caps Lock, but my pinky finger really got sore after a few days. Now, thanks to an article by Xah Lee (http://ergoemacs.org/emacs/emacs_pinky.html), I realize that I need an ergonomic keyboard as well as to swap Ctrl with Alt.

It is annoying to switch modifier keys now that I am already used to the original positions, but for health's sake it ought to be good in the long run!

I am now looking for good keyboards and trying to get used to the Ctrl-Alt swapped layout.

Happy emacsing!

## Saturday, July 25, 2015

### Officially, I can call myself an Emacs beginner :)

I have studied Emacs for a couple of weeks and spent countless hours tinkering with it. So much fun to learn such an amazing text editing tool :D.

Until today, I hadn't found out that people rebind the Caps Lock key to Ctrl to ease the left hand muscles! I have used the default Ctrl key position all this time and didn't find it difficult after several days of trying.

Now I have to reshape my Ctrl key muscle memory, but surely it will take only a little time!

## Saturday, July 18, 2015

### General Information

• .NET managed code that talks to and uses external resources needs to implement the IDisposable interface
• Without IDisposable, the GC (Garbage Collector) merely tries its best to clean up memory when external resources such as a SqlConnection are in use; it is thus impossible to predict when the app will fail.
• GC is triggered when a memory threshold is reached.

### Managed resources v.s. unmanaged resources

Managed resources basically means "managed memory" that is managed by the garbage collector. When you no longer have any references to a managed object (which uses managed memory), the garbage collector will (eventually) release that memory for you.

Unmanaged resources are then everything that the garbage collector does not know about. For example:
Open files
Open network connections
Unmanaged memory
In XNA: vertex buffers, index buffers, textures, etc.

Normally you want to release those unmanaged resources before you lose all the references you have to the object managing them. You do this by calling Dispose on that object, or (in C#) using the using statement which will handle calling Dispose for you.

If you neglect to Dispose of your unmanaged resources correctly, the garbage collector will eventually handle it for you when the object containing that resource is garbage collected (this is "finalization"). But because the garbage collector doesn't know about the unmanaged resources, it can't tell how badly it needs to release them - so it's possible for your program to perform poorly or run out of resources entirely.

If you implement a class yourself that handles unmanaged resources, it is up to you to implement Dispose and Finalize correctly.

### Best Practices

• best practice #1: Dispose of IDisposable objects as soon as you can.
• Implement the IDisposable code pattern as:

private bool _disposed;
public void Dispose()
{
Dispose(true);
// Use SuppressFinalize in case a subclass
// of this type implements a finalizer.
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (!_disposed)
{
if (disposing)
{
// Clear all property values that may have been set
// when the class was instantiated
id = 0;
name = String.Empty;
pass = String.Empty;
}
// Indicate that the instance has been disposed.
_disposed = true;
}
}

• best practice #2: If you use IDisposable objects (e.g. SQLConnection _connection) as instance fields, implement IDisposable.
• best practice #3: allow Dispose() to be called multiple times and do not throw exceptions (last 3 lines in the code sample below)
•  public class DatabaseState : IDisposable
{
private SqlConnection _connection;
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (disposing && _connection != null)
{
_connection.Dispose();
_connection = null;
}
}
}
• best practice #4: implement IDisposable to support disposing resources in a class hierarchy.
• best practice #5: if you use unmanaged resources, declare a finalizer which cleans them up.
• best practice #6: Visual Studio's code analysis rule CA2000 (not enabled by default) can check for this issue.
• best practice #7: if your class implements an interface and uses IDisposable fields, extend the interface from IDisposable rather than only implementing it on the class, even though this conflicts somewhat with OO design.
• best practice #8: if you implement IDisposable, do not implement it explicitly.
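Best practice #5 above can be sketched as follows. This is a minimal sketch: the native handle acquisition/release calls are hypothetical placeholders, not a real API.

```csharp
using System;

// Sketch of the full dispose pattern with a finalizer.
public class NativeResourceHolder : IDisposable
{
    private IntPtr _handle = AcquireNativeHandle(); // hypothetical
    private bool _disposed;

    public void Dispose()
    {
        Dispose (true);
        GC.SuppressFinalize (this);   // no need to finalize once disposed
    }

    ~NativeResourceHolder()           // finalizer: last-chance cleanup
    {
        Dispose (false);              // never touch managed objects here
    }

    protected virtual void Dispose (bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // release managed (IDisposable) fields here
        }
        ReleaseNativeHandle (_handle); // hypothetical; always release unmanaged state
        _disposed = true;
    }

    static IntPtr AcquireNativeHandle() => IntPtr.Zero;  // stub for illustration
    static void ReleaseNativeHandle (IntPtr h) { }       // stub for illustration
}
```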

## Friday, July 17, 2015

### Generic Constraints

Generic constraints are used when T in a class needs specific characteristics/methods that the object type doesn't have (object offers only general-purpose members such as ToString and GetHashCode).
• the "where" keyword can be used to constrain the generic type, for example:
• public class SqlRepository<T> : IRepository<T> where T : class
• we can create a new interface with such a method and use the "where" keyword to constrain T. for example:
• public interface IEntity { bool IsValid(); } so that "where T : IEntity" allows calling IsValid() on values of type T.
• Note: usually we do not need to define our own interface; check first whether the .NET framework already has one!
• default(T) yields the default value of T: null if T is a reference type; zero-initialized if T is a value type
• where T : new() -> T has a public parameterless constructor so that T can be instantiated in code, for example:
• public T CreateNewT() { T t = new T(); return t; }
• constraints are preferably placed on the concrete class instead of on the interface
• Covariance vs. Contravariance
•  public interface IReadOnlyRepository<out T> : IDisposable
{
T FindById(int id);
IQueryable<T> FindAll();
}
public interface IWriteOnlyRepository<in T> : IDisposable
{
void Delete(T entity);
int Commit();
}
public interface IRepository<T> : IReadOnlyRepository<T>, IWriteOnlyRepository<T>
{
}
using (IRepository<Employee> employeeRepository
= new SqlRepository<Employee>(new EmployeeDb()))
{
DumpPeople(employeeRepository);
}

• Reflection in C# (for Generic part)  // ToDo

## Thursday, July 16, 2015

### Study HashSet and SortedSet generics and hide generics noise in business-logic code

Note. This lesson is learned from Scott Allen C# Generic videos in Pluralsight.

### 1) HashSet (how to remove object duplicates)

The HashSet type is supposed to identify duplicates and guarantee the uniqueness of stored objects, while the uniqueness/equality comparison is defined by your code.

HashSet has a constructor that accepts an IEqualityComparer implementation.

var departments = new SortedDictionary<string, HashSet<Employee>>();

### 2) SortedSet (how to sort objects)

//SortedSet is similar, needs to implement IComparer<> interface
new SortedSet<Employee>(new EmployeeComparer());

public class EmployeeComparer : IEqualityComparer<Employee>, IComparer<Employee>
{
// must implement Equals() and GetHashCode() for IEqualityComparer<Employee>,
// and Compare() for IComparer<Employee> - see the full listing below
}

### 3) Cleaner code - hide those fussy "new"/generic keywords by creating a new class to inherit from

namespace Employees
{
public class Employee
{
public string Name { get; set; }
}
public class EmployeeComparer : IEqualityComparer<Employee>, IComparer<Employee>
{
public bool Equals(Employee x, Employee y)
{
return string.Equals(x.Name, y.Name);
}
public int GetHashCode(Employee obj)
{
return obj.Name.GetHashCode();
}
public int Compare(Employee x, Employee y)
{
return string.Compare(x.Name, y.Name);
}
}
public class DepartmentCollection : SortedDictionary<string, HashSet<Employee>>
{
public DepartmentCollection Add(string departmentName, Employee employee)
{
if (!ContainsKey(departmentName)) {
Add(departmentName, new HashSet<Employee>(new EmployeeComparer()));
}
this[departmentName].Add(employee);
return this;
}
}
class Program
{
static void Main(string[] args)
{
var departments = new DepartmentCollection();
departments.Add("Sales", new Employee { Name = "Xi" })
.Add("Sales", new Employee { Name = "Xi" })
.Add("Sales", new Employee { Name = "Liu" });
departments.Add("Engineers", new Employee { Name = "Jin" })
.Add("Engineers", new Employee { Name = "Jin" })
.Add("Engineers", new Employee { Name = "Jin" });
foreach (var item in departments)
{
Console.WriteLine(item.Key);
foreach (var employee in item.Value)
{
Console.WriteLine("\t" + employee.Name);
}
}
}
}
}

## Monday, July 13, 2015

### [book read] How to Win Friends and Influence People

This is a mind blowing book for me!

In order to get the most out of this book:

• Develop a deep, driving desire to master the principles of human relations
• Read each chapter twice before going on to the next one
• Stop frequently to ask yourself how you can apply each suggestion
• Underscore each important idea
• Review this book each month :)
• Apply these principles at every opportunity!
• make a lovely game out of the learning
• Check up each week on the progress you are making. Ask yourself what mistakes you have made, what improvements, what lessons you have learned for the future
• Keep notes

Summary of the book

1. Don't criticize, condemn or complain
2. Give honest and sincere appreciation
3. Arouse in the other person an eager want

Six ways to make people like you
1. Become genuinely interested in other people
2. Smile
3. Remember that a person's name is to that person the sweetest and most important sound in any language
4. Be a good listener. Encourage others to talk about themselves
5. Talk in terms of the other person's interests
6. Make the other person feel important and do it sincerely

## Sunday, July 5, 2015

### C# in a Nutshell Chapter 15 - Streams and I/O

The .NET stream architecture centers on three concepts: backing stores, decorators, and adapters.

Backing store streams
These are hard-wired to a particular type of backing store, such as FileStream or NetworkStream

Decorator streams
These feed off another stream, transforming the data in some way, such as DeflateStream or CryptoStream

Both backing store and decorator streams deal exclusively in bytes. Although this is flexible and efficient, applications often work at higher levels such as text or XML. Adapters bridge this gap by wrapping a stream in a class with specialized methods typed to a particular format. For example, a text reader exposes a ReadLine method; an XML writer exposes a WriteAttributes method.

An adapter wraps a stream, just like a decorator. Unlike a decorator, however, an adapter is not itself a stream; it typically hides the byte-oriented methods completely.

To summarize, backing store streams provide the raw data; decorator streams provide transparent binary transformations such as encryption; adapters offer typed methods for dealing in higher-level types such as strings and XML. To compose a chain, you simply pass one object into another’s constructor.
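A minimal sketch of such a chain (file name and text are arbitrary): a FileStream backing store, wrapped by a GZipStream decorator, wrapped by StreamWriter/StreamReader adapters:

```csharp
using System;
using System.IO;
using System.IO.Compression;

class ChainDemo
{
    static void Main()
    {
        // Compose: FileStream (backing store) -> GZipStream (decorator) -> StreamWriter (adapter)
        using (var file = new FileStream ("log.gz", FileMode.Create))
        using (var gzip = new GZipStream (file, CompressionMode.Compress))
        using (var writer = new StreamWriter (gzip))
            writer.WriteLine ("compressed text");

        // Read it back through the mirror-image chain:
        using (var file = new FileStream ("log.gz", FileMode.Open))
        using (var gzip = new GZipStream (file, CompressionMode.Decompress))
        using (var reader = new StreamReader (gzip))
            Console.WriteLine (reader.ReadLine());
    }
}
```

Closing the outermost object disposes the whole chain, since each wrapper closes the stream it was constructed with.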

### Stream Class

using System;
using System.IO;
class Program
{
static void Main()
{
// Create a file called test.txt in the current directory:
using (Stream s = new FileStream ("test.txt", FileMode.Create))
{
Console.WriteLine (s.CanRead);  // True
Console.WriteLine (s.CanWrite); // True
Console.WriteLine (s.CanSeek);  // True

s.WriteByte (101);
s.WriteByte (102);
byte[] block = { 1, 2, 3, 4, 5 };
s.Write (block, 0, block.Length); // Write block of 5 bytes
Console.WriteLine (s.Length);     // 7
Console.WriteLine (s.Position);   // 7
s.Position = 0;                   // Move back to the start
Console.WriteLine (s.ReadByte()); // 101
Console.WriteLine (s.ReadByte()); // 102
// Read from the stream back into the block array:
Console.WriteLine (s.Read (block, 0, block.Length)); // 5
// Assuming the last Read returned 5, we'll be at
// the end of the file, so Read will now return 0:
Console.WriteLine (s.Read (block, 0, block.Length)); // 0
}
}
}

A stream may support reading, writing, or both. If CanWrite returns false , the stream is read-only; if CanRead returns false , the stream is write-only.

With Read , you can be certain you’ve reached the end of the stream only when the method returns 0 . So, if you have a 1,000 byte stream, the following code may fail to read it all into memory:

// Assuming s is a stream:
byte[] data = new byte [1000];
s.Read (data, 0, data.Length);

The Read method could read anywhere from 1 to 1,000 bytes, leaving the balance of the stream unread.

Here’s the correct way to read a 1,000-byte stream:

byte[] data = new byte [1000];
// bytesRead will always end up at 1000, unless the stream is itself smaller in length:
int bytesRead = 0;
int chunkSize = 1;
while (bytesRead < data.Length && chunkSize > 0)
bytesRead += chunkSize = s.Read (data, bytesRead, data.Length - bytesRead);

Fortunately, the BinaryReader type provides a simpler way to achieve the same result:

byte[] data = new BinaryReader (s).ReadBytes (1000);

If the stream is less than 1,000 bytes long, the byte array returned reflects the actual stream size. If the stream is seekable, you can read its entire contents by replacing 1000 with (int)s.Length .

### Seeking

A stream is seekable if CanSeek returns true . With a seekable stream (such as a file stream), you can query or modify its Length (by calling SetLength ), and at any time change the Position at which you’re reading or writing. The Position property is relative to the beginning of the stream; the Seek method, however, allows you to move relative to the current position or the end of the stream.

With a nonseekable stream (such as an encryption stream), the only way to determine its length is to read it right through. Furthermore, if you need to reread a previous section, you must close the stream and start afresh with a new one.

### Closing and Flush

Streams must be disposed after use to release underlying resources such as file and socket handles. A simple way to guarantee this is by instantiating streams within using blocks.

In general, streams follow standard disposal semantics:

• Dispose and Close are identical in function.
• Disposing or closing a stream repeatedly causes no error.
Closing a decorator stream closes both the decorator and its backing store stream. With a chain of decorators, closing the outermost decorator (at the head of the chain) closes the whole lot.

Some streams internally buffer data to and from the backing store to lessen round tripping and so improve performance (file streams are a good example of this). This means data you write to a stream may not hit the backing store immediately; it can be delayed as the buffer fills up. The Flush method forces any internally buffered data to be written immediately. Flush is called automatically when a stream is closed, so you never need to do the following: s.Flush(); s.Close();

### Timeouts

A stream supports read and write timeouts if CanTimeout returns true . Network streams support timeouts; file and memory streams do not. For streams that support timeouts, the ReadTimeout and WriteTimeout properties determine the desired timeout in milliseconds, where 0 means no timeout. The Read and Write methods indicate that a timeout has occurred by throwing an exception.

### FileStream

The simplest way to instantiate a FileStream is to use one of the following static methods on the File class:

FileStream fs2 = File.OpenWrite (@"c:\temp\writeme.tmp");  // Write-only
FileStream fs3 = File.Create (@"c:\temp\writeme.tmp"); // Read/write

OpenWrite and Create differ in behavior if the file already exists. Create truncates any existing content; OpenWrite leaves existing content intact with the stream positioned at zero. If you write fewer bytes than were previously in the file, OpenWrite leaves you with a mixture of old and new content.

Instantiating a FileStream directly is also possible. The following opens an existing file for read/write access without overwriting it:

var fs = new FileStream ("readwrite.tmp", FileMode.Open);

### File Class

The following static methods read an entire file into memory in one step:
• File.ReadAllLines (returns an array of strings)
• File.ReadAllBytes (returns a byte array)
The following static methods write an entire file in one step:
• File.WriteAllText
• File.WriteAllLines
• File.WriteAllBytes
• File.AppendAllText (great for appending to a log file)

There’s also a static method called File.ReadLines : this is like ReadAllLines except that it returns a lazily-evaluated IEnumerable<string> . This is more efficient because it doesn’t load the entire file into memory at once. LINQ is ideal for consuming the results: the following calculates the number of lines greater than 80 characters in length:
int longLines = File.ReadLines ("filePath").Count (l => l.Length > 80);

### MemoryStream

Closing and flushing a MemoryStream is optional. If you close a MemoryStream , you can no longer read or write to it, but you are still permitted to call ToArray to obtain the underlying data.
Flush does absolutely nothing on a memory stream.

### PipeStream

PipeStream was introduced in Framework 3.5. It provides a simple means by which one process can communicate with another through the Windows pipes protocol.
There are two kinds of pipe:

1. Anonymous pipe: Allows one-way communication between a parent and child process on the same computer.
2. Named pipe: Allows two-way communication between arbitrary processes on the same computer—or different computers across a Windows network.

PipeStream is an abstract class with four concrete subtypes. Two are used for anonymous pipes and the other two for named pipes:

1. AnonymousPipeServerStream and AnonymousPipeClientStream
2. NamedPipeServerStream and NamedPipeClientStream
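A minimal named-pipe sketch (the pipe name "testpipe" is arbitrary; server and client normally live in separate processes, here a background task stands in for the server):

```csharp
using System;
using System.IO.Pipes;
using System.Threading.Tasks;

class PipeDemo
{
    static void Main()
    {
        // Server: wait for a client, then send a single byte.
        Task server = Task.Run (() =>
        {
            using (var s = new NamedPipeServerStream ("testpipe"))
            {
                s.WaitForConnection();
                s.WriteByte (100);
            }
        });

        // Client: connect to the same pipe name and read the byte back.
        using (var client = new NamedPipeClientStream ("testpipe"))
        {
            client.Connect();
            Console.WriteLine (client.ReadByte()); // 100
        }
        server.Wait();
    }
}
```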

### BufferedStream

BufferedStream decorates, or wraps, another stream with buffering capability.
Buffering improves performance by reducing round trips to the backing store. Here’s how we wrap a FileStream in a 20 KB BufferedStream :

// Write 100K to a file:
File.WriteAllBytes ("myFile.bin", new byte [100000]);
using (FileStream fs = File.OpenRead ("myFile.bin"))
using (BufferedStream bs = new BufferedStream (fs, 20000))
{
bs.ReadByte();
Console.WriteLine (fs.Position); // 20000
}

In this example, the underlying stream advances 20,000 bytes after reading just 1 byte, thanks to the read-ahead buffering. We could call ReadByte another 19,999 times before the FileStream would be hit again.

Coupling a BufferedStream to a FileStream , as in this example, is of limited value because FileStream already has built-in buffering. Its only use might be in enlarging the buffer on an already constructed FileStream .
Closing a BufferedStream automatically closes the underlying backing store stream.

A Stream deals only in bytes; to read or write data types such as strings, integers, or XML elements, you must plug in an adapter.

TextReader and TextWriter are the abstract base classes for adapters that deal exclusively with characters and strings.

using (FileStream fs = File.Create ("test.txt"))
using (TextWriter writer = new StreamWriter (fs))
{
writer.WriteLine ("Line1");
writer.WriteLine ("Line2");
}
using (FileStream fs = File.OpenRead ("test.txt"))
using (TextReader reader = new StreamReader (fs))
while (reader.Peek() > -1)
Console.WriteLine (reader.ReadLine()); // Line1, then Line2

Because text adapters are so often coupled with files, the File class provides the static methods CreateText , AppendText , and OpenText to shortcut the process:

using (TextWriter writer = File.CreateText ("test.txt"))
{
writer.WriteLine ("Line1");
writer.WriteLine ("Line2");
}
using (TextWriter writer = File.AppendText ("test.txt"))
writer.WriteLine ("Line3");

This also illustrates how to test for the end of a file (viz. reader.Peek() ). Another option is to read until reader.ReadLine returns null.

You can also read and write other types such as integers, but because TextWriter invokes ToString on your type, you must parse a string when reading it back:

using (TextWriter w = File.CreateText ("data.txt"))
{
w.WriteLine (123);
// Writes "123"
w.WriteLine (true);
// Writes the word "true"
}
using (TextReader r = File.OpenText ("data.txt"))
{
int myInt = int.Parse (r.ReadLine());  // myInt == 123
bool yes = bool.Parse (r.ReadLine());  // yes == true
}

### Character encodings

TextReader and TextWriter are by themselves just abstract classes with no connection to a stream or backing store. The StreamReader and StreamWriter types, however, are connected to an underlying byte-oriented stream, so they must convert between characters and bytes. They do so through an Encoding class from the System.Text namespace, which you choose when constructing the StreamReader or StreamWriter . If you choose none, the default UTF-8 encoding is used.

The StringReader and StringWriter adapters don’t wrap a stream at all; instead, they use a string or StringBuilder as the underlying data source. This means no byte translation is required—in fact, the classes do nothing you couldn’t easily achieve with a string or StringBuilder coupled with an index variable. Their advantage, though, is that they share a base class with StreamReader / StreamWriter . For instance, suppose we have a string containing XML and want to parse it with an XmlReader .
The XmlReader.Create method accepts one of the following:

1. A URI
2. A Stream
3. A TextReader

So, how do we XML-parse our string? Because StringReader is a subclass of TextReader , we're in luck. We can instantiate and pass in a StringReader as follows:

XmlReader reader = XmlReader.Create (new StringReader (myString));

BinaryReader and BinaryWriter read and write native data types: bool , byte , char , decimal , float , double , short , int , long , sbyte , ushort , uint , and ulong , as well as strings and arrays of the primitive data types.

The following reads the entire contents of a seekable stream:

byte[] data = new BinaryReader (fs).ReadBytes ((int) fs.Length);

This is more convenient than reading directly from a stream, because it doesn't require a loop to ensure that all data has been read.

### Compression Streams

Two general-purpose compression streams are provided in the System.IO.Compression namespace: DeflateStream and GZipStream.

DeflateStream and GZipStream are decorators; they compress or decompress data from another stream that you supply in construction. In the following example, we compress and decompress a series of bytes, using a FileStream as the backing store:

using (Stream s = File.Create ("compressed.bin"))
using (Stream ds = new DeflateStream (s, CompressionMode.Compress))
for (byte i = 0; i < 100; i++)
ds.WriteByte (i);

using (Stream s = File.OpenRead ("compressed.bin"))
using (Stream ds = new DeflateStream (s, CompressionMode.Decompress))
for (byte i = 0; i < 100; i++)
Console.WriteLine (ds.ReadByte()); // Writes 0 to 99
Even with the smaller of the two algorithms, the compressed file is 241 bytes long: more than double the original! Compression works poorly with “dense,” nonrepetitive binary data.

In the next example, we compress and decompress a text stream composed of 1,000 words chosen randomly from a small sentence. This also demonstrates chaining a backing store stream, a decorator stream, and an adapter (as depicted at the start of the chapter in Figure 15-1), and the use of asynchronous methods:

string[] words = "The quick brown fox jumps over the lazy dog".Split();
Random rand = new Random();
using (Stream s = File.Create ("compressed.bin"))
using (Stream ds = new DeflateStream (s, CompressionMode.Compress))
using (TextWriter w = new StreamWriter (ds))
for (int i = 0; i < 1000; i++)
await w.WriteAsync (words [rand.Next (words.Length)] + " "); // (inside an async method)
Console.WriteLine (new FileInfo ("compressed.bin").Length);
// 1073
using (Stream s = File.OpenRead ("compressed.bin"))
using (Stream ds = new DeflateStream (s, CompressionMode.Decompress))
using (TextReader r = new StreamReader (ds))
Console.Write (await r.ReadToEndAsync()); // (inside an async method)

In this case, DeflateStream compresses efficiently to 1,073 bytes—slightly more than 1 byte per word.

Compressing in memory

Sometimes you need to compress entirely in memory. Here’s how to use a MemoryStream for this purpose:

byte[] data = new byte[1000];
// We can expect a good compression
// ratio from an empty array!
var ms = new MemoryStream();
using (Stream ds = new DeflateStream (ms, CompressionMode.Compress))
ds.Write (data, 0, data.Length);
byte[] compressed = ms.ToArray();
Console.WriteLine (compressed.Length);
// 113
// Decompress back to the data array:
ms = new MemoryStream (compressed);
using (Stream ds = new DeflateStream (ms, CompressionMode.Decompress))
for (int i = 0; i < 1000; i += ds.Read (data, i, 1000 - i));

The using statement around the DeflateStream closes it in a textbook fashion, flushing any unwritten buffers in the process. This also closes the MemoryStream it wraps —meaning we must then call ToArray to extract its data.

### Working with Zip Files

A new feature in Framework 4.5 is the ZipArchive and ZipFile classes.

ZipFile is a static helper class for ZipArchive.

ZipFile ’s CreateFromDirectory method adds all the files in a specified directory into a zip file:
ZipFile.CreateFromDirectory (@"d:\MyFolder", @"d:\compressed.zip");

whereas ExtractToDirectory does the opposite and extracts a zip file to a directory:
ZipFile.ExtractToDirectory (@"d:\compressed.zip", @"d:\MyFolder");
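ZipArchive itself gives finer-grained control; here is a sketch of listing an archive's entries (the path is illustrative):

```csharp
using System;
using System.IO.Compression;

class ZipDemo
{
    static void Main()
    {
        // Open an existing archive read-only and enumerate its entries.
        using (ZipArchive zip = ZipFile.OpenRead (@"d:\compressed.zip"))
            foreach (ZipArchiveEntry entry in zip.Entries)
                Console.WriteLine ("{0} ({1} bytes)", entry.FullName, entry.Length);
    }
}
```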

### File and Directory Operations

FileInfo offers an easier way to change a file’s read-only flag: set its IsReadOnly property.

Here are all the members of the FileAttributes enum that GetAttributes returns:

Archive, Compressed, Device, Directory, Encrypted, Hidden, Normal, NotContentIndexed, Offline, ReadOnly, ReparsePoint, SparseFile, System, Temporary

File security

The GetAccessControl and SetAccessControl methods allow you to query and change the operating system permissions assigned to users and roles via a FileSecurity object (namespace System.Security.AccessControl ). You can also pass a FileSecurity object to a FileStream ’s constructor to specify permissions when creating a new file.

In this example, we list a file’s existing permissions, and then assign execution permission to the “Users” group:

FileSecurity sec = File.GetAccessControl (@"d:\test.txt");
AuthorizationRuleCollection rules = sec.GetAccessRules (true, true,
typeof (NTAccount));
foreach (FileSystemAccessRule rule in rules)
{
Console.WriteLine (rule.AccessControlType); // Allow or Deny
Console.WriteLine (rule.FileSystemRights); // e.g., FullControl
Console.WriteLine (rule.IdentityReference.Value); // e.g., MyDomain/Joe
}
var sid = new SecurityIdentifier (WellKnownSidType.BuiltinUsersSid, null);
string usersAccount = sid.Translate (typeof (NTAccount)).ToString();
FileSystemAccessRule newRule = new FileSystemAccessRule
(usersAccount, FileSystemRights.ExecuteFile, AccessControlType.Allow);
sec.AddAccessRule (newRule);
File.SetAccessControl (@"d:\test.txt", sec);

### The Directory Class

The static Directory class provides a set of methods analogous to those in the File class—for checking whether a directory exists ( Exists ), moving a directory ( Move ), deleting a directory ( Delete ), getting/setting times of creation or last access, and getting/setting security permissions. Furthermore, Directory exposes the following static methods:

string GetCurrentDirectory ();
void SetCurrentDirectory (string path);
DirectoryInfo CreateDirectory (string path);
DirectoryInfo GetParent (string path);
string GetDirectoryRoot (string path);
string[] GetLogicalDrives();
// The following methods all return full paths:
string[] GetFiles (string path);
string[] GetDirectories (string path);
string[] GetFileSystemEntries (string path);
IEnumerable<string> EnumerateFiles (string path);
IEnumerable<string> EnumerateDirectories (string path);
IEnumerable<string> EnumerateFileSystemEntries (string path);

### FileInfo and DirectoryInfo

The static methods on File and Directory are convenient for executing a single file or directory operation. If you need to call a series of methods in a row, the FileInfo and DirectoryInfo classes provide an object model that makes the job easier.

FileInfo offers most of the File class’s static methods in instance form—with some additional properties such as Extension , Length , IsReadOnly , and Directory (which returns a DirectoryInfo object). For example:

FileInfo fi = new FileInfo (@"c:\temp\FileInfo.txt");
Console.WriteLine (fi.Exists); // false
using (TextWriter w = fi.CreateText())
w.Write ("Some text");
Console.WriteLine (fi.Exists);  // false (still)
fi.Refresh();
Console.WriteLine (fi.Exists);  // true

Console.WriteLine (fi.Name);           // FileInfo.txt
Console.WriteLine (fi.FullName);       // c:\temp\FileInfo.txt
Console.WriteLine (fi.DirectoryName);  // c:\temp
Console.WriteLine (fi.Directory.Name); // temp
Console.WriteLine (fi.Extension);      // .txt
Console.WriteLine (fi.Length);         // 9

fi.Encrypt();
fi.Attributes ^= FileAttributes.Hidden; //(toggle hidden flag)
Console.WriteLine (fi.CreationTime);

fi.MoveTo (@"c:\temp\FileInfoX.txt");
DirectoryInfo di = fi.Directory;
Console.WriteLine (di.Name);   // temp
Console.WriteLine (di.FullName);  // c:\temp
Console.WriteLine (di.Parent.FullName); // c:\
di.CreateSubdirectory ("SubFolder");

Here’s how to use DirectoryInfo to enumerate files and subdirectories:

DirectoryInfo di = new DirectoryInfo (@"e:\photos");
foreach (FileInfo fi in di.GetFiles ("*.jpg"))
Console.WriteLine (fi.Name);
foreach (DirectoryInfo subDir in di.GetDirectories())
Console.WriteLine (subDir.FullName);

## Saturday, July 4, 2015

### Conditional Attribute

The Conditional attribute instructs the compiler to ignore any calls to a particular class or method, if the specified symbol has not been defined.

static void Main()
{
  WriteLine();
}

[Conditional("TESTMODE")]
public static void WriteLine() { Console.WriteLine("HelloWorld"); }

In Visual Studio, define the symbol under project properties -> Build -> Conditional compilation symbols.

The Conditional attribute is ignored at runtime—it’s purely an instruction to the compiler.
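A self-contained sketch of the same idea, using a #define directive at the top of the file instead of the project-level symbol; commenting out the #define makes the compiler drop the call site entirely:

```csharp
#define TESTMODE   // comment this out and the call below is removed at compile time
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        WriteLine();   // compiled only when TESTMODE is defined
    }

    [Conditional ("TESTMODE")]
    public static void WriteLine() { Console.WriteLine ("HelloWorld"); }
}
```

Note that the target method is always compiled; it is the *calls* to it that are kept or dropped.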

### Code Contracts

public static bool AddIfNotPresent<T> (IList<T> list, T item)
{
  Contract.Requires (list != null);        // Precondition
  Contract.Ensures (list.Contains (item)); // Postcondition
  if (list.Contains (item)) return false;
  list.Add (item);
  return true;
}

The preconditions are defined by Contract.Requires and are verified when the method starts. The postcondition is defined by Contract.Ensures and is verified not where it appears in the code, but when the method exits. Preconditions and postconditions act like assertions and, in this case, detect the following errors:
• Passing in a null list
• A bug in the method whereby we forgot to add the item to the list
Preconditions and postconditions must appear at the start of the method.

### Windows Eventlog

To write to a Windows event log:
1. Choose one of the three event logs (usually Application).
2. Decide on a source name and create it if necessary.
3. Call EventLog.WriteEntry with the log name, source name, and message data.

The source name is an easily identifiable name for your application. You must register a source name before you use it—the CreateEventSource method performs this function. You can then call WriteEntry:

const string SourceName = "MyCompany.WidgetServer";

// CreateEventSource requires administrative permissions, so this would
// typically be done in application setup.
if (!EventLog.SourceExists (SourceName))
  EventLog.CreateEventSource (SourceName, "Application");

EventLog.WriteEntry (SourceName,
  "Service started; using configuration file=...",
  EventLogEntryType.Information);

### The Stopwatch Class

Stopwatch measures elapsed time with high precision. The Elapsed property returns the elapsed interval as a TimeSpan:

Stopwatch s = Stopwatch.StartNew();
System.IO.File.WriteAllText ("test.txt", new string ('*', 30000000));
Console.WriteLine (s.Elapsed); // 00:00:01.4322661

### LINQ

A query operator never alters the input sequence; instead, it returns a new sequence. This is consistent with the functional programming paradigm that inspired LINQ.

string[] s = { "Dirk", "Xi", "Auli", "Jouni" };
IEnumerable<string> items = s.Where (n => n.Length >= 4);

Sequence Operators
s.OrderBy()
s.Select()
s.Take()
s.Skip()
s.Reverse()
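Sequence operators return a new sequence and can be chained. A quick sketch combining several of them (the names array is the same one used in later examples):

```csharp
using System;
using System.Linq;

class SequenceDemo
{
    static void Main()
    {
        string[] names = { "Tom", "Dick", "Harry", "Mary", "Jay" };

        var result = names
            .OrderBy (n => n)           // Dick, Harry, Jay, Mary, Tom
            .Where (n => n.Length > 3)  // Dick, Harry, Mary
            .Select (n => n.ToUpper())  // DICK, HARRY, MARY
            .Take (2);                  // first two elements

        Console.WriteLine (string.Join (", ", result)); // DICK, HARRY
    }
}
```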

Element Operators
s.First()
s.Last()
s.ElementAt()

Aggregation Operators
s.Count();
s.Min();

Quantifiers (return a bool)
s.Contains()
s.Any()
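Unlike sequence operators, the element, aggregation, and quantifier operators execute immediately and return a single value rather than a sequence. For example:

```csharp
using System;
using System.Linq;

class ElementDemo
{
    static void Main()
    {
        int[] numbers = { 10, 9, 8, 7, 6 };

        Console.WriteLine (numbers.First());          // 10
        Console.WriteLine (numbers.Last());           // 6
        Console.WriteLine (numbers.ElementAt (1));    // 9
        Console.WriteLine (numbers.Count());          // 5
        Console.WriteLine (numbers.Min());            // 6
        Console.WriteLine (numbers.Contains (8));     // True
        Console.WriteLine (numbers.Any (n => n > 9)); // True
    }
}
```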

Other Operators

int[] seq1 = { 1, 2, 3 };
int[] seq2 = { 3, 4, 5 };
IEnumerable<int> concat = seq1.Concat (seq2); // { 1, 2, 3, 3, 4, 5 }
IEnumerable<int> union = seq1.Union (seq2); // { 1, 2, 3, 4, 5 }

Query Expression

Query expressions always start with a from clause and end with either a select or
group clause. The from clause declares a range variable (in this case, n), which you
can think of as traversing the input sequence—rather like foreach.

IEnumerable<string> query =
  from n in names
  where n.Contains ("a")  // Filter elements
  orderby n.Length        // Sort elements
  select n.ToUpper();     // Translate each element (project)

### The Into Keyword

The into keyword lets you “continue” a query after a projection and is a shortcut
for progressively querying.
The only place you can use into is after a select or group clause. into “restarts” a
query, allowing you to introduce fresh where, orderby, and select clauses.

IEnumerable<string> query =
  from n in names
  select n.Replace ("a", "").Replace ("e", "").Replace ("i", "")
          .Replace ("o", "").Replace ("u", "")
  into noVowel
    where noVowel.Length > 2
    orderby noVowel
    select noVowel;

Otherwise, we need to write

IEnumerable<string> query =
  from n in names
  select n.Replace ("a", "").Replace ("e", "").Replace ("i", "")
          .Replace ("o", "").Replace ("u", "");

query = from n in query where n.Length > 2 orderby n select n;

Scoping rules
All range variables are out of scope following an into keyword. The following will not compile:
var query =
  from n1 in names
  select n1.ToUpper()
  into n2                  // Only n2 is visible from here on.
  where n1.Contains ("x")  // Illegal: n1 is not in scope.
  select n2;
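The legal way to write this kind of query is to filter on n2, the only range variable in scope after into. A runnable sketch:

```csharp
using System;
using System.Linq;

class IntoScopeDemo
{
    static void Main()
    {
        string[] names = { "Tom", "Dick", "Harry" };

        var query =
            from n1 in names
            select n1.ToUpper()
            into n2                  // n1 is out of scope from here on
            where n2.Contains ("A")  // Legal: filter on n2 instead
            select n2;

        Console.WriteLine (string.Join (", ", query)); // HARRY
    }
}
```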

### Projection Strategies

Object Initializers

class TempProjectionItem
{
public string Original; // Original name
public string Vowelless; // Vowel-stripped name
}

string[] names = { "Tom", "Dick", "Harry", "Mary", "Jay" };

IEnumerable<TempProjectionItem> temp =
from n in names select new TempProjectionItem
{
Original = n,
Vowelless = n.Replace ("a", "").Replace ("e", "").Replace ("i", "")
.Replace ("o", "").Replace ("u", "")
};

Anonymous Types
We can eliminate the TempProjectionItem class in our previous
example with anonymous types:

var intermediate = from n in names
select new
{
Original = n,
Vowelless = n.Replace ("a", "").Replace ("e", "").Replace ("i", "")
.Replace ("o", "").Replace ("u", "")
};
IEnumerable<string> query = from item in intermediate
where item.Vowelless.Length > 2
select item.Original;

Let Keyword

string[] names = { "Tom", "Dick", "Harry", "Mary", "Jay" };
IEnumerable<string> query =
  from n in names
  let vowelless = n.Replace ("a", "").Replace ("e", "").Replace ("i", "")
                   .Replace ("o", "").Replace ("u", "")
  where vowelless.Length > 2
  orderby vowelless
  select n;  // Thanks to let, n is still in scope.

### Interpreted Queries

LINQ provides two parallel architectures: local queries for local object collections, and interpreted queries for remote data sources.

Interpreted queries operate over sequences that implement IQueryable<T>.
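A minimal illustration of the split: AsQueryable wraps a local collection in an IQueryable<T> provider, so the same query shape is built as an expression tree and resolved by Queryable rather than compiled directly into delegates by Enumerable. This is only a sketch; real interpreted queries target providers such as a database.

```csharp
using System;
using System.Linq;

class QueryableDemo
{
    static void Main()
    {
        string[] names = { "Tom", "Dick", "Harry" };

        // Local query: the lambda compiles to a delegate,
        // executed by the Enumerable operators.
        var local = names.Where (n => n.Length > 3);

        // Interpreted-style query: the lambda becomes an expression tree,
        // handed to the Queryable operators via the IQueryable provider.
        IQueryable<string> interpreted =
            names.AsQueryable().Where (n => n.Length > 3);

        Console.WriteLine (string.Join (",", local));       // Dick,Harry
        Console.WriteLine (string.Join (",", interpreted)); // Dick,Harry
    }
}
```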

Combined Interpreted and Local Queries