How To: Install/Setup SmartSVN in Ubuntu
----------------------------------------
This is a simple guide to installing and setting up SmartSVN on Ubuntu 7.04 (Feisty Fawn). You'll need Sun's Java 6 already installed first; I haven't tried it on GCJ, so I can't say whether it works there.
Download SmartSVN
-----------------
1. Please download the ‘Generic’ archive from the SmartSVN download page (currently the latest version is 3.0.1).
2. Extract the archive (right click on it, click on Extract Here)
3. [optionally] Move SmartSVN to your ‘Apps’ folder. I usually copy download-and-extract kind of apps to ~/Apps. You can place the application anywhere you like, perhaps in your home directory.
Add to Applications Menu
------------------------
1. Right click on the Applications menu
2. Click on Edit Menus
3. Select the Programming Category (the category is up to you but Programming seems right to me)
4. Click on the New Item button. The ‘Create Launcher’ window should appear:
1. Select type: “Application”
2. Enter Name: “SmartSVN”
3. Select Command: “[YourSmartSVNFolder]/bin/smartsvn.sh”
4. Click on the icon button. It should load the browse icons window
5. Click on browse and point it to “[YourSmartSVNFolder]/bin/”
6. The list will now display several SmartSVN icon previews.
7. Select smartsvn-32×32.png and click on OK to close the browse icon window
8. Click on OK
5. Click on Close.
Note: Replace [YourSmartSVNFolder] with where your SmartSVN folder is.
Now SmartSVN should be easily accessible via Applications > Programming > SmartSVN. This section of the guide is fairly generic and should work for other download-and-extract kinds of applications too.
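If you prefer the terminal to the menu editor, the same launcher can be created as a .desktop file. This is a minimal sketch; the ~/Apps/smartsvn path and the icon filename are assumptions, so adjust them to wherever you extracted the archive:

```shell
# Create a GNOME launcher for SmartSVN by hand.
# SMARTSVN_DIR and the icon name are assumptions; adjust to your install.
SMARTSVN_DIR="$HOME/Apps/smartsvn"
mkdir -p "$HOME/.local/share/applications"
cat > "$HOME/.local/share/applications/smartsvn.desktop" <<EOF
[Desktop Entry]
Type=Application
Name=SmartSVN
Exec=$SMARTSVN_DIR/bin/smartsvn.sh
Icon=$SMARTSVN_DIR/bin/smartsvn-32x32.png
Categories=Development;
EOF
```

After logging out and back in (or refreshing the menu), the entry should show up under the Development category.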
Setting Up SmartSVN
-------------------
First of all start SmartSVN via the Applications menu.
1. Edit > Preferences > External Tools
2. Enter File Pattern: “*”
3. Enter Command: “gnome-open”
4. Enter Arguments: “${filePath}”
5. Click on Insert
6. Click on Directory Command in the left pane (right under External Tools)
7. Check on “Enable Open Directory Command”
8. Enter Command: “gnome-open”
9. Enter Arguments: “${directoryPath}”
This will allow SmartSVN to open files and folders with the right applications. Double-clicking a folder will open a Nautilus window, and text/code files will load in gedit (or whatever they are associated with on your PC).
Please tell me if anything is confusing, missing or just plain wrong.
Have Fun!
Thursday, September 17, 2009
Monday, September 7, 2009
AMP setup
1. Download & Unpack
Download PHP from http://www.php.net/downloads.php; you should grab the newest 5.x.x Windows Binaries zip package that will work with Apache.
My file was named: php-5.2.9-1-Win32.zip
2. Unzip php. In my case, I unzipped to:
C:\php\
3. Rename C:\php\php.ini-dist to php.ini
4. Edit your php.ini
Open php.ini in a text editor, scroll about halfway through the file, and look for doc_root. Change it to point to whatever your Apache DocumentRoot is set to. In my case: doc_root = "C:\public_html"
Scroll down about 7 more lines and change extension_dir from extension_dir = "./" to the location of the ext directory after you unzipped PHP. In my case: extension_dir = "C:\php\ext"
If you are going to use your server for testing, I recommend (this is optional) changing the error reporting to give you information on any errors, notices, or bad coding. To enable this, search for error_reporting and change:
error_reporting = E_ALL & ~E_NOTICE & ~E_STRICT
to
error_reporting = E_ALL | E_NOTICE | E_STRICT
5. Editing Apache Conf File
Using Notepad open httpd.conf (should be start-menu shortcut "Apache HTTP Server 2.2 > Configure Apache Server > Edit the Apache httpd.conf Configuration File"). Either at the very beginning or end of the file add the following lines: (NOTE: be sure to change BOTH C:/php parts to the directory you installed your php to)
LoadModule php5_module "C:/php/php5apache2_2.dll"
AddType application/x-httpd-php .php
PHPIniDir "C:/php"
Note: If you installed the older Apache 2.0, instead of the above lines, you will need to use the lines listed on the bottom step of the Apache 2.0 tutorial.
6. [Optional] Editing Apache Conf File (part 2)
To get apache to automatically look for an index.php, search httpd.conf for DirectoryIndex (about line 212) and add the files you want apache to look for when a directory is loaded (if it doesn't find any of these files, it displays folder contents). Mine looks like:
DirectoryIndex index.php index.html default.html
7. Testing
Restart Apache if it is already running (if it doesn't start or you get errors, use your Apache "Test Configuration" shortcut in the Start Menu to see why).
To test your PHP simply create a test.php file in your Apache "DocumentRoot" folder (E:\public_html\ in my case). In your test.php file, type these 3 lines and then load the file in your browser like http://localhost/test.php (you should get a whole long list of php variables, settings, etc):
<?php
phpinfo();
?>
8. Documentation Suggestion
One weird thing I have noticed about PHP is that it does not come with documentation of any kind. If you are a php developer/programmer, I suggest you download the documentation. Downloads can be found on http://www.php.net/download-docs.php and I personally recommend the "English - Windows HTML Help" (chm) version as the search is so handy, although they all contain the same information.
Monday, August 24, 2009
Install Java on a UNIX Server
Install the JRE
Step 1:
------
Download the java package from Java Official website
http://www.java.com/en/download/linux_manual.jsp?locale=en&host=www.java.com:80
Step 2:
-------
Login as root, cd to the directory that contains the JRE you just downloaded, move the downloaded file jre-6u15-linux-i586.bin to /usr/local, and then cd to that directory:
# mv jre-6u15-linux-i586.bin /usr/local
# cd /usr/local
Step 3:
-------
Extract the contents of the JRE & follow the instructions:
# ./jre-6u15-linux-i586.bin
Step 4:
-------
Once you're finished, add jre1.6.0_15/bin to your PATH:
# export PATH=/usr/local/jre1.6.0_15/bin:$PATH
As your own user (not root), do the same thing:
$ export PATH=/usr/local/jre1.6.0_15/bin:$PATH
Step 5:
-------
Check the java version
# java -version
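Note that an exported PATH only lasts for the current shell session. To make it permanent, you can append the export to your shell's login profile; this is a sketch assuming the JRE was unpacked to /usr/local/jre1.6.0_15 as in the steps above:

```shell
# Persist the JRE path for future logins.
# /usr/local/jre1.6.0_15 is the path from the steps above; adjust if yours differs.
JRE_BIN=/usr/local/jre1.6.0_15/bin
echo "export PATH=$JRE_BIN:\$PATH" >> ~/.profile
# Apply it to the current shell and confirm the entry is on PATH:
. ~/.profile
echo "$PATH" | tr ':' '\n' | grep -x "$JRE_BIN"
```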
Monday, July 27, 2009
Word Wrap in Eclipse
You can enable word wrap in Eclipse. It will mess up the line numbers, line heights, etc., but they return to their normal state once you turn off Virtual Word Wrap.
How to install?
1. Open Eclipse
2. Help > Software Updates > Find and Install
3. Search for New Features to Install
4. New Remote Site
5. Enter the url - http://ahtik.com/eclipse-update/
6. Install and Enjoy
To turn on word wrap, right-click in the editor and select "Virtual Word Wrap".
Thursday, July 2, 2009
Ubuntu Add shortcut keys
1. Go to System->Preferences->Keyboard Shortcuts
2. Find the action you want, e.g. "Run a terminal", click on its shortcut, and press your chosen key combination, e.g. "Ctrl+F1"
Ubuntu Unix basic commands
## useradd : add the user
sudo useradd -d /home/silverstreet -m silverstreet
sudo passwd silverstreet
Enter new UNIX password : sil2852ver
Retype new UNIX password: sil2852ver
## SUDO : get root user privileges
sudo -i
## SEARCH string in files
find . -exec grep -q "Silver_Log::info" '{}' \; -print
## CHMOD in all sub folders
chmod -R 777 sources
## CLEAR file content
> filename
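As a side note, the find/grep pipeline above can be written more simply with GNU grep's recursive mode (available on Ubuntu); both print the names of files containing the string. A small self-contained demo (the demo/ directory and its contents are made up for illustration):

```shell
# grep -r recurses into subdirectories; -l prints only matching file names.
mkdir -p demo/sub
echo 'Silver_Log::info("x");' > demo/sub/a.php
echo 'nothing here' > demo/b.txt
grep -rl "Silver_Log::info" demo
# prints: demo/sub/a.php
```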
Install Chinese Language on Ubuntu
Filed under Ubuntu/Debian by didi
In order to enable Chinese input on Ubuntu, the Chinese language needs to be installed first.
1. System–>Administration–>Language Support
Click on "Install/Remove Languages"
2. Tick the Chinese language and click "Apply Changes".
3. You will be prompted for your password; once entered, it will continue downloading the required files.
4. Once the download finishes, it will proceed to installation.
5. Go to System–>Preferences–>SCIM Input Setup to configure Chinese input.
6. Once setup is finished, you can now start writing Chinese.
Friday, April 24, 2009
Complexity and Big-O Notation [interview question]
An important question is: How efficient is an algorithm or piece of code? Efficiency covers lots of resources, including:
CPU (time) usage
memory usage
disk usage
network usage
All are important but we will mostly talk about CPU time in 367. Other classes will discuss other resources (e.g., disk usage may be an important topic in a database class).
Be careful to differentiate between:
Performance: how much time/memory/disk/... is actually used when a program is run. This depends on the machine, compiler, etc. as well as the code.
Complexity: how do the resource requirements of a program or algorithm scale, i.e., what happens as the size of the problem being solved gets larger.
Complexity affects performance but not the other way around.
The time required by a method is proportional to the number of "basic operations" that it performs. Here are some examples of basic operations:
one arithmetic operation (e.g., +, *).
one assignment
one test (e.g., x == 0)
one read
one write (of a primitive type)
Some methods perform the same number of operations every time they are called. For example, the size method of the Sequence class always performs just one operation: return numItems; the number of operations is independent of the size of the sequence. We say that methods like this (that always perform a fixed number of basic operations) require constant time.
Other methods may perform different numbers of operations, depending on the value of a parameter or a field. For example, for many of the Sequence methods, the number of operations depends on the size of the sequence. We call the important factor (the parameters and/or fields whose values affect the number of operations performed) the problem size or the input size.
When we consider the complexity of a method, we don't really care about the exact number of operations that are performed; instead, we care about how the number of operations relates to the problem size. If the problem size doubles, does the number of operations stay the same? double? increase in some other way? For constant-time methods like the size method, doubling the problem size does not affect the number of operations (which stays the same).
Furthermore, we are usually interested in the worst case: what is the most operations that might be performed for a given problem size (other cases -- best case and average case -- are discussed below). For example, the addBefore method (for a sequence implemented using an array) has to move the current item and all of the items that come after the current item one place to the right in the array. In the worst case, all of the items in the array must be moved. Therefore, in the worst case, the time for addBefore is proportional to the number of items in the sequence, and we say that the worst-case time for addBefore is linear in the number of items in the sequence. For a linear-time method, if the problem size doubles, the number of operations also doubles.
Constant and linear times are not the only possibilities. For example, consider method createSeq:
Sequence createSeq( int N ) {
Sequence s = new Sequence();
for (int k=1; k<=N; k++) s.addBefore(new Integer(k));
return s;
}
Note that, for a given N, the for-loop above is equivalent to:
s.addBefore( new Integer(1) );
s.addBefore( new Integer(2) );
s.addBefore( new Integer(3) );
...
s.addBefore( new Integer(N) );
As discussed above, the number of operations for addBefore is proportional to the number of items in the sequence when addBefore is called. For the N calls shown above, the sequence lengths are: 0, 1, 2, ..., N-1. So what is the total time for all N calls? It is proportional to 0 + 1 + 2 + ... + N-1.
Recall that we don't care about the exact time, just how the time depends on the problem size. For method createSeq, the "problem size" is the value of N (because the number of operations will be different for different values of N). It is clear that the time for the N calls (and therefore the time for method createSeq) is not independent of N (so createSeq is not a constant-time method). Is it proportional to N (linear in N)? That would mean that doubling N would double the number of operations performed by createSeq. Here's a table showing the value of 0+1+2+...+(N-1) for some different values of N:
N 0+1+2+...+(N-1)
4 6
8 28
16 120
Clearly, the value of the sum does more than double when the value of N doubles, so createSeq is not linear in N. In the following graph, the bars represent the lengths of the sequence (0, 1, 2, ..., N-1) for each of the N calls.
The value of the sum (0+1+2+...+(N-1)) is the sum of the areas of the individual bars. You can see that the bars fill about half of the square. The whole square is an N-by-N square, so its area is N²; therefore, the sum of the areas of the bars is about N²/2. In other words, the time for method createSeq is proportional to the square of the problem size; if the problem size doubles, the number of operations will quadruple. We say that the worst-case time for createSeq is quadratic in the problem size.
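The sum also has the closed form N*(N-1)/2, which reproduces the table above and makes the quadrupling visible. A quick shell check (not part of the original notes):

```shell
# Total addBefore work in createSeq: 0+1+...+(N-1) = N*(N-1)/2.
# Doubling N roughly quadruples the total: 6 -> 28 -> 120.
for N in 4 8 16; do
  echo "N=$N total=$(( N * (N - 1) / 2 ))"
done
# prints: N=4 total=6 / N=8 total=28 / N=16 total=120
```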
Big-O Notation
We express complexity using big-O notation. For a problem of size N:
a constant-time method is "order 1": O(1)
a linear-time method is "order N": O(N)
a quadratic-time method is "order N squared": O(N²)
Note that the big-O expressions do not have constants or low-order terms. This is because, when N gets large enough, constants and low-order terms don't matter (a constant-time method will be faster than a linear-time method, which will be faster than a quadratic-time method). See below for an example.
Formal definition:
A function T(N) is O(F(N)) if for some constant c and for values of N greater than some value n0:
T(N) <= c * F(N)
The idea is that T(N) is the exact complexity of a method or algorithm as a function of the problem size N, and that F(N) is an upper-bound on that complexity (i.e., the actual time/space or whatever for a problem of size N will be no worse than F(N)). In practice, we want the smallest F(N) -- the least upper bound on the actual complexity.
For example, consider T(N) = 3 * N² + 5. We can show that T(N) is O(N²) by choosing c = 4 and n0 = 2. This is because for all values of N greater than 2:
3 * N² + 5 <= 4 * N²
T(N) is not O(N), because whatever constant c and value n0 you choose, I can always find a value of N greater than n0 so that 3 * N² + 5 is greater than c * N.
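The choice of constants can be checked mechanically; with c = 4 and n0 = 2, the inequality 3*N² + 5 <= 4*N² holds for every N tested (it is equivalent to N² >= 5). A shell sanity check over a finite range:

```shell
# Verify T(N) = 3*N^2 + 5 <= 4*N^2 for N = 3..100 (c = 4, n0 = 2).
ok=yes
N=3
while [ $N -le 100 ]; do
  [ $(( 3 * N * N + 5 )) -le $(( 4 * N * N )) ] || ok=no
  N=$(( N + 1 ))
done
echo "$ok"
# prints: yes
```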
How to Determine Complexities
In general, how can you determine the running time of a piece of code? The answer is that it depends on what kinds of statements are used.
Sequence of statements
statement 1;
statement 2;
...
statement k;
(Note: this is code that really is exactly k statements; this is not an unrolled loop like the N calls to addBefore shown above.) The total time is found by adding the times for all statements:
total time = time(statement 1) + time(statement 2) + ... + time(statement k)
If each statement is "simple" (only involves basic operations) then the time for each statement is constant and the total time is also constant: O(1). In the following examples, assume the statements are simple unless noted otherwise.
if-then-else statements
if (cond) {
sequence of statements 1
}
else {
sequence of statements 2
}
Here, either sequence 1 will execute, or sequence 2 will execute. Therefore, the worst-case time is the slower of the two possibilities: max(time(sequence 1), time(sequence 2)). For example, if sequence 1 is O(N) and sequence 2 is O(1), the worst-case time for the whole if-then-else statement would be O(N).
for loops
for (i = 0; i < N; i++) {
sequence of statements
}
The loop executes N times, so the sequence of statements also executes N times. Since we assume the statements are O(1), the total time for the for loop is N * O(1), which is O(N) overall.
Nested loops
for (i = 0; i < N; i++) {
for (j = 0; j < M; j++) {
sequence of statements
}
}
The outer loop executes N times. Every time the outer loop executes, the inner loop executes M times. As a result, the statements in the inner loop execute a total of N * M times. Thus, the complexity is O(N * M). In a common special case where the stopping condition of the inner loop is j < N instead of j < M (i.e., the inner loop also executes N times), the total complexity for the two loops is O(N²).
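A quick way to convince yourself of the N * M count is to increment a counter in the inner loop. A sketch with made-up sizes N = 5 and M = 7:

```shell
# Count how many times the inner body of a nested N x M loop runs.
N=5; M=7; count=0
i=0
while [ $i -lt $N ]; do
  j=0
  while [ $j -lt $M ]; do
    count=$(( count + 1 ))
    j=$(( j + 1 ))
  done
  i=$(( i + 1 ))
done
echo "$count"
# prints: 35 (= N * M)
```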
Statements with method calls:
When a statement involves a method call, the complexity of the statement includes the complexity of the method call. Assume that you know that method f takes constant time, and that method g takes time proportional to (linear in) the value of its parameter k. Then the statements below have the time complexities indicated.
f(k); // O(1)
g(k); // O(k)
When a loop is involved, the same rule applies. For example:
for (j = 0; j < N; j++) g(N);
has complexity O(N²). The loop executes N times, and each method call g(N) has complexity O(N).
Best-case and Average-case Complexity
Some methods may require different amounts of time on different calls, even when the problem size is the same for both calls. For example, we know that if addBefore is called with a sequence of length N, it may require time proportional to N (to move all of the items and/or to expand the array). This is what happens in the worst case. However, when the current item is the last item in the sequence, and the array is not full, addBefore will only have to move one item, so in that case its time is independent of the length of the sequence; i.e., constant time.
In general, we may want to consider the best and average time requirements of a method as well as its worst-case time requirements. Which is considered the most important will depend on several factors. For example, if a method is part of a time-critical system like one that controls an airplane, the worst-case times are probably the most important (if the plane is flying towards a mountain and the controlling program can't make the next course correction until it has performed a computation, then the best-case and average-case times for that computation are not relevant -- the computation needs to be guaranteed to be fast enough to finish before the plane hits the mountain).
On the other hand, if occasionally waiting a long time for an answer is merely inconvenient (as opposed to life-threatening), it may be better to use an algorithm with a slow worst-case time and a fast average-case time, rather than one with so-so times in both the average and worst cases.
For addBefore, for a sequence of length N, the worst-case time is O(N), the best-case time is O(1), and the average-case time (assuming that each item is equally likely to be the current item) is O(N), because on average, N/2 items will need to be moved.
Note that calculating the average-case time for a method can be tricky. You need to consider all possible values for the important factors, and whether they will be distributed evenly.
When do Constants Matter?
Recall that when we use big-O notation, we drop constants and low-order terms. This is because when the problem size gets sufficiently large, those terms don't matter. However, this means that two algorithms can have the same big-O time complexity, even though one is always faster than the other. For example, suppose algorithm 1 requires N² time, and algorithm 2 requires 10 * N² + N time. For both algorithms, the time is O(N²), but algorithm 1 will always be faster than algorithm 2. In this case, the constants and low-order terms do matter in terms of which algorithm is actually faster.
However, it is important to note that constants do not matter in terms of the question of how an algorithm "scales" (i.e., how does the algorithm's time change when the problem size doubles). Although an algorithm that requires N² time will always be faster than an algorithm that requires 10*N² time, for both algorithms, if the problem size doubles, the actual time will quadruple.
When two algorithms have different big-O time complexity, the constants and low-order terms only matter when the problem size is small. For example, even if there are large constants involved, a linear-time algorithm will always eventually be faster than a quadratic-time algorithm. This is illustrated in the following table, which shows the value of 100*N (a time that is linear in N) and the value of N²/100 (a time that is quadratic in N) for some values of N. For values of N less than 10^4, the quadratic time is smaller than the linear time. However, for all values of N greater than 10^4, the linear time is smaller.
N       100*N   N²/100
10^2    10^4    10^2
10^3    10^5    10^4
10^4    10^6    10^6
10^5    10^7    10^8
10^6    10^8    10^10
10^7    10^9    10^12
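The crossover can also be computed directly: 100*N equals N²/100 exactly at N = 10^4, with the quadratic time smaller before that point and the linear time smaller after it. A shell check (needs a shell with 64-bit arithmetic, e.g. bash):

```shell
# Compare linear (100*N) vs quadratic (N*N/100) cost around N = 10^4.
for N in 1000 10000 100000; do
  lin=$(( 100 * N ))
  quad=$(( N * N / 100 ))
  if [ $lin -lt $quad ]; then w=linear
  elif [ $lin -gt $quad ]; then w=quadratic
  else w=tie; fi
  echo "N=$N linear=$lin quadratic=$quad smaller=$w"
done
# at N=1000 the quadratic time is smaller, at N=10000 they tie,
# and at N=100000 the linear time is smaller
```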
Best-case and Average-case Complexity
CPU (time) usage
memory usage
disk usage
network usage
All are important but we will mostly talk about CPU time in 367. Other classes will discuss other resources (e.g., disk usage may be an important topic in a database class).
Be careful to differentiate between:
Performance: how much time/memory/disk/... is actually used when a program is run. This depends on the machine, compiler, etc. as well as the code.
Complexity: how do the resource requirements of a program or algorithm scale, i.e., what happens as the size of the problem being solved gets larger.
Complexity affects performance but not the other way around.
The time required by a method is proportional to the number of "basic operations" that it performs. Here are some examples of basic operations:
one arithmetic operation (e.g., +, *).
one assignment
one test (e.g., x == 0)
one read
one write (of a primitive type)
Some methods perform the same number of operations every time they are called. For example, the size method of the Sequence class always performs just one operation: return numItems; the number of operations is independent of the size of the sequence. We say that methods like this (that always perform a fixed number of basic operations) require constant time.
Other methods may perform different numbers of operations, depending on the value of a parameter or a field. For example, for many of the Sequence methods, the number of operations depends on the size of the sequence. We call the important factor (the parameters and/or fields whose values affect the number of operations performed) the problem size or the input size.
When we consider the complexity of a method, we don't really care about the exact number of operations that are performed; instead, we care about how the number of operations relates to the problem size. If the problem size doubles, does the number of operations stay the same? double? increase in some other way? For constant-time methods like the size method, doubling the problem size does not affect the number of operations (which stays the same).
Furthermore, we are usually interested in the worst case: what is the most operations that might be performed for a given problem size (other cases -- best case and average case -- are discussed below). For example, the addBefore method (for a sequence implemented using an array) has to move the current item and all of the items that come after the current item one place to the right in the array. In the worst case, all of the items in the array must be moved. Therefore, in the worst case, the time for addBefore is proportional to the number of items in the sequence, and we say that the worst-case time for addBefore is linear in the number of items in the sequence. For a linear-time method, if the problem size doubles, the number of operations also doubles.
Constant and linear times are not the only possibilities. For example, consider method createSeq:
Sequence createSeq( int N ) {
Sequence s = new Sequence();
for (int k=1; k<=N; k++) s.addBefore(new Integer(k));
return s;
}
Note that, for a given N, the for-loop above is equivalent to:
s.addBefore( new Integer(1) );
s.addBefore( new Integer(2) );
s.addBefore( new Integer(3) );
...
s.addBefore( new Integer(N) );
As discussed above, the number of operations for addBefore is proportional to the number of items in the sequence when addBefore is called. For the N calls shown above, the sequence lengths are: 0, 1, 2, ..., N-1. So what is the total time for all N calls? It is proportional to 0 + 1 + 2 + ... + N-1.
Recall that we don't care about the exact time, just how the time depends on the problem size. For method createSeq, the "problem size" is the value of N (because the number of operations will be different for different values of N). It is clear that the time for the N calls (and therefore the time for method createSeq) is not independent of N (so createSeq is not a constant-time method). Is it proportional to N (linear in N)? That would mean that doubling N would double the number of operations performed by createSeq. Here's a table showing the value of 0+1+2+...+(N-1) for some different values of N:
N 0+1+2+...+(N-1)
4 6
8 28
16 120
Clearly, the value of the sum does more than double when the value of N doubles, so createSeq is not linear in N. In the following graph, the bars represent the lengths of the sequence (0, 1, 2, ..., N-1) for each of the N calls.
The value of the sum (0+1+2+...+(N-1)) is the sum of the areas of the individual bars. You can see that the bars fill about half of the square. The whole square is an N-by-N square, so its area is N2; therefore, the sum of the areas of the bars is about N2/2. In other words, the time for method createSequence is proportional to the square of the problem size; if the problem size doubles, the number of operations will quadruple. We say that the worst-case time for createSeq is quadratic in the problem size.
Big-O Notation
We express complexity using big-O notation. For a problem of size N:
a constant-time method is "order 1": O(1)
a linear-time method is "order N": O(N)
a quadratic-time method is "order N squared": O(N2)
Note that the big-O expressions do not have constants or low-order terms. This is because, when N gets large enough, constants and low-order terms don't matter (a constant-time method will be faster than a linear-time method, which will be faster than a quadratic-time method). See below for an example.
Formal definition:
A function T(N) is O(F(N)) if for some constant c and for values of N greater than some value n0:
T(N) <= c * F(N)
The idea is that T(N) is the exact complexity of a method or algorithm as a function of the problem size N, and that F(N) is an upper-bound on that complexity (i.e., the actual time/space or whatever for a problem of size N will be no worse than F(N)). In practice, we want the smallest F(N) -- the least upper bound on the actual complexity.
For example, consider T(N) = 3 * N2 + 5. We can show that T(N) is O(N2) by choosing c = 4 and n0 = 2. This is because for all values of N greater than 2:
3 * N2 + 5 <= 4 * N2
T(N) is not O(N), because whatever constant c and value n0 you choose, I can always find a value of N greater than n0 so that 3 * N2 + 5 is greater than c * N.
How to Determine Complexities
In general, how can you determine the running time of a piece of code? The answer is that it depends on what kinds of statements are used.
Sequence of statements
statement 1;
statement 2;
...
statement k;
(Note: this is code that really is exactly k statements; this is not an unrolled loop like the N calls to addBefore shown above.) The total time is found by adding the times for all statements:
total time = time(statement 1) + time(statement 2) + ... + time(statement k)
If each statement is "simple" (only involves basic operations) then the time for each statement is constant and the total time is also constant: O(1). In the following examples, assume the statements are simple unless noted otherwise.
if-then-else statements
if (cond) {
sequence of statements 1
}
else {
sequence of statements 2
}
Here, either sequence 1 will execute, or sequence 2 will execute. Therefore, the worst-case time is the slower of the two possibilities: max(time(sequence 1), time(sequence 2)). For example, if sequence 1 is O(N) and sequence 2 is O(1) the worst-case time for the whole if-then-else statement would be O(N).
for loops
for (i = 0; i < N; i++) {
sequence of statements
}
The loop executes N times, so the sequence of statements also executes N times. Since we assume the statements are O(1), the total time for the for loop is N * O(1), which is O(N) overall.
Nested loops
for (i = 0; i < N; i++) {
for (j = 0; j < M; j++) {
sequence of statements
}
}
The outer loop executes N times. Every time the outer loop executes, the inner loop executes M times. As a result, the statements in the inner loop execute a total of N * M times. Thus, the complexity is O(N * M). In a common special case where the stopping condition of the inner loop is j < N instead of j < M (i.e., the inner loop also executes N times), the total complexity for the two loops is O(N²).
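The counting argument above can be verified directly by instrumenting the inner statement. This is an illustrative sketch (the names are mine, not from the notes):

```java
// Count how often the innermost statement runs in the nested loops
// above, confirming the N * M total (and N² when M = N).
public class NestedLoopCount {
    static long countInner(int n, int m) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                count++;  // stands in for the "sequence of statements"
            }
        }
        return count;
    }
    public static void main(String[] args) {
        System.out.println(countInner(10, 20));  // 200, i.e. N * M
        System.out.println(countInner(10, 10));  // 100, i.e. N² when M = N
    }
}
```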
Statements with method calls
When a statement involves a method call, the complexity of the statement includes the complexity of the method call. Assume that you know that method f takes constant time, and that method g takes time proportional to (linear in) the value of its parameter k. Then the statements below have the time complexities indicated.
f(k); // O(1)
g(k); // O(k)
When a loop is involved, the same rule applies. For example:
for (j = 0; j < N; j++) g(N);
has complexity O(N²). The loop executes N times and each method call g(N) is complexity O(N).
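This can be made concrete with an operation counter. In the sketch below, g is an assumed implementation that performs k basic operations (the notes only say g is linear in its parameter), and ops is an illustrative global counter:

```java
// A g(k) that does k basic operations, called N times in a loop.
// The total operation count comes out to N * N = N².
public class LoopWithCall {
    static long ops = 0;                 // illustrative operation counter
    static void g(int k) {
        for (int i = 0; i < k; i++) ops++;   // linear in k
    }
    public static void main(String[] args) {
        int n = 50;
        for (int j = 0; j < n; j++) g(n);    // the loop from the text
        System.out.println(ops);             // 50 * 50 = 2500
    }
}
```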
Some methods may require different amounts of time on different calls, even when the problem size is the same for both calls. For example, we know that if addBefore is called with a sequence of length N, it may require time proportional to N (to move all of the items and/or to expand the array). This is what happens in the worst case. However, when the current item is the last item in the sequence, and the array is not full, addBefore will only have to move one item, so in that case its time is independent of the length of the sequence; i.e., constant time.
In general, we may want to consider the best and average time requirements of a method as well as its worst-case time requirements. Which is considered the most important will depend on several factors. For example, if a method is part of a time-critical system like one that controls an airplane, the worst-case times are probably the most important (if the plane is flying towards a mountain and the controlling program can't make the next course correction until it has performed a computation, then the best-case and average-case times for that computation are not relevant -- the computation needs to be guaranteed to be fast enough to finish before the plane hits the mountain).
On the other hand, if occasionally waiting a long time for an answer is merely inconvenient (as opposed to life-threatening), it may be better to use an algorithm with a slow worst-case time and a fast average-case time, rather than one with so-so times in both the average and worst cases.
For addBefore, for a sequence of length N, the worst-case time is O(N), the best-case time is O(1), and the average-case time (assuming that each item is equally likely to be the current item) is O(N), because on average, N/2 items will need to be moved.
Note that calculating the average-case time for a method can be tricky. You need to consider all possible values for the important factors, and whether they will be distributed evenly.
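The best/worst/average behavior of addBefore can be sketched with a plain array insert. This is a simplification for illustration only (the real addBefore from the earlier Sequence example also manages a current item and array expansion); it just counts how many items an insert must shift:

```java
// Inserting before position pos in an array holding n items shifts
// (n - pos) items right. Worst case pos = 0 shifts all n items (O(N));
// best case pos = n shifts none (O(1)); averaging pos over 0..n gives
// about n/2 shifts, which is still O(N).
public class InsertShiftCount {
    static int shiftsForInsert(int n, int pos) {
        int[] a = new int[n + 1];
        int shifts = 0;
        for (int k = n; k > pos; k--) {  // move items right to make room
            a[k] = a[k - 1];
            shifts++;
        }
        a[pos] = 42;                      // the new item (arbitrary value)
        return shifts;
    }
    public static void main(String[] args) {
        System.out.println(shiftsForInsert(100, 0));    // worst case: 100
        System.out.println(shiftsForInsert(100, 100));  // best case: 0
        System.out.println(shiftsForInsert(100, 50));   // a middle case: 50
    }
}
```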
When do Constants Matter?
Recall that when we use big-O notation, we drop constants and low-order terms. This is because when the problem size gets sufficiently large, those terms don't matter. However, this means that two algorithms can have the same big-O time complexity, even though one is always faster than the other. For example, suppose algorithm 1 requires N² time, and algorithm 2 requires 10 * N² + N time. For both algorithms, the time is O(N²), but algorithm 1 will always be faster than algorithm 2. In this case, the constants and low-order terms do matter in terms of which algorithm is actually faster.
However, it is important to note that constants do not matter in terms of the question of how an algorithm "scales" (i.e., how does the algorithm's time change when the problem size doubles). Although an algorithm that requires N² time will always be faster than an algorithm that requires 10*N² time, for both algorithms, if the problem size doubles, the actual time will quadruple.
When two algorithms have different big-O time complexity, the constants and low-order terms only matter when the problem size is small. For example, even if there are large constants involved, a linear-time algorithm will always eventually be faster than a quadratic-time algorithm. This is illustrated in the following table, which shows the value of 100*N (a time that is linear in N) and the value of N²/100 (a time that is quadratic in N) for some values of N. For values of N less than 10⁴, the quadratic time is smaller than the linear time. However, for all values of N greater than 10⁴, the linear time is smaller.
N      100*N   N²/100
10²    10⁴     10²
10³    10⁵     10⁴
10⁴    10⁶     10⁶
10⁵    10⁷     10⁸
10⁶    10⁸     10¹⁰
10⁷    10⁹     10¹²
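The crossover in the table can be reproduced in a few lines (a sketch; the method names are mine):

```java
// Compare the linear time 100*N with the quadratic time N²/100
// around the crossover point N = 10⁴, matching the table above.
public class Crossover {
    static double linear(long n)    { return 100.0 * n; }
    static double quadratic(long n) { return (double) n * n / 100.0; }
    public static void main(String[] args) {
        for (long n = 100; n <= 10_000_000; n *= 10) {
            String smaller = quadratic(n) < linear(n) ? "quadratic"
                           : quadratic(n) > linear(n) ? "linear" : "tie";
            System.out.println("N=" + n + ": smaller time is " + smaller);
        }
    }
}
```

Below N = 10⁴ the quadratic algorithm wins; at N = 10⁴ the two are equal; beyond that the linear algorithm wins, and the gap only widens as N grows.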
Class not registered in IE7
Every time I opened IE7 and browsed certain pages, the browser showed a "Class not registered" JavaScript error. Open a command window and try re-registering the IE DLLs with a batch file like this:
regsvr32 c:\windows\system32\urlmon.dll
regsvr32 c:\windows\system32\actxprxy.dll
regsvr32 c:\windows\system32\shdocvw.dll
regsvr32 c:\windows\system32\mshtml.dll
regsvr32 c:\windows\system32\browseui.dll
regsvr32 c:\windows\system32\jscript.dll
regsvr32 c:\windows\system32\vbscript.dll
regsvr32 c:\windows\system32\oleaut32.dll
After re-registering, it works fine. Good luck.
Hardware specification - Motherboard model, CPU model
Sometimes we need to know the hardware specification of a PC without opening the case. There is a simple way to do it:
1. Download CPU-Z (cpuz.exe)
2. Unzip the file and run cpuz.exe
3. The program will show you the motherboard model and CPU model
Thunderbird Certificate Error
Every time you send out an email using Thunderbird, an error message tells you there is a certificate error. To remove this message, do the following:
Step 1: Download the Thunderbird add-on "remember_mismatched_domains-1.4.6-fx+tb+sm.xpi" from:
http://mirror.internode.on.net/pub/mozilla/addons/2131/
Step 2: In Mozilla Thunderbird, open Add-ons from the Tools menu.
Step 3: Click the Install button, and locate/select the file you downloaded (remember_mismatched_domains-1.4.6-fx+tb+sm.xpi) and click "OK".
Step 4: Restart Thunderbird and select "remember the certificate permanently"
Backup Thunderbird email
(1) Find the Thunderbird profile folder. Profiles are located in different places for different versions of Windows, and they may be assigned random folder names that make them difficult to recognize.
On Windows 2000/XP/2003/Vista machines the location for your Thunderbird profile is:
C:\Documents and Settings\<username>\Application Data\Thunderbird\Profiles\
On Windows 9x/Me PCs it can usually be found at:
C:\Windows\Application Data\Thunderbird\Profiles\
If you can't locate your profiles then check out this document for more information:
http://kb.mozillazine.org/Profile_Folder
(2) Copy the Thunderbird profile folder - done! To back these up, just copy the profiles to another location, drive, CD, or USB stick. It's as simple as that! You can do it manually, but you can also use Windows Scheduler to do the job automatically with a batch file (for experts).
Remove "welcome to Thunderbird" page
1. In Thunderbird, go to Tools > Options > General.
2. Uncheck the option for Thunderbird Start Page.
Force Thunderbird to send HTML
Step 1: Go to Tools > Options
Step 2: Make sure you are in the Composition tab and click on Send Options
Step 3: Change the default setting from "Ask me what to do" to "Send the message in both plain text and HTML"
Thunderbird : Potentially executable content
Error Message:
An error occurred while sending mail. The mail server responded: Potentially executable content. If you meant to send this file then please package it up as a zip and try again.
Forward Inline or As Attachment by Default
To change whether Mozilla Thunderbird inserts the forwarded message as an attachment or inline in the new email:
- Select Tools > Options... (or Thunderbird > Preferences...) from the menu.
- Go to the Composition category.
- Make sure you're on the General tab.
- Choose Inline or As Attachment under "Forward messages:".
- Close the preferences window.
Toshiba Notebook Driver download
If you need any driver from Toshiba, you may visit this URL
http://www.pc.toshiba-asia.com/support/downloads/index.jsp
e.g.: Satellite L310, Satellite L300
Remove znc.exe and alp.exe virus
znc.exe and alp.exe are Trojans that will steal information from your PC; they automatically launch each time you enter Windows.
There is no way to remove them except with antivirus software. I tried NOD32, AVG, McAfee, and Norton Antivirus; only Norton Antivirus was able to block the virus from spreading. Every time you plug in your thumb drive, the autorun file will duplicate the virus onto your PC. If your PC has Norton Antivirus installed, it will block it from spreading.