Efficiency of Queues
As with a stack, items can be inserted and removed from a queue in O(1) time.
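To make the O(1) claim concrete, here is a minimal sketch of a circular-array queue (the class and field names are assumed, not from any particular post): both insert() and remove() touch a fixed number of fields no matter how many items are stored.

```java
// Minimal circular-array queue sketch. Both insert and remove do a constant
// amount of work (one array access plus a couple of field updates), so each
// operation is O(1) regardless of how many items are in the queue.
public class CircularQueue {
    private final long[] items;
    private int front = 0, rear = -1, nItems = 0;

    public CircularQueue(int maxSize) { items = new long[maxSize]; }

    public void insert(long value) {              // O(1)
        if (rear == items.length - 1) rear = -1;  // wrap around
        items[++rear] = value;
        nItems++;
    }

    public long remove() {                        // O(1)
        long value = items[front++];
        if (front == items.length) front = 0;     // wrap around
        nItems--;
        return value;
    }

    public boolean isEmpty() { return nItems == 0; }
}
```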
Monday, October 31, 2011
Stack Code
The main() method in the StackApp class creates a stack that can hold 10 items, pushes 4 items onto the stack, and then displays all the items by popping them off the stack until it's empty. Here's the output:
80 60 40 20
Notice how the order of the data is reversed. Because the last item pushed is the first one popped, the 80 appears first in the output.
Efficiency of Stacks
Items can be both pushed and popped from the stack implemented in the StackX class in constant O(1) time. That is, the time is not dependent on how many items are in the stack and is therefore very quick. No comparisons or moves are necessary.
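A sketch in the spirit of StackX (the field names here are assumed): push and pop each touch one array slot and one counter, which is why both are O(1), and popping everything reproduces the reversed output shown above.

```java
// Array-backed stack sketch. push and pop each perform one array access and
// one counter update, so both run in constant O(1) time.
public class StackX {
    private final long[] items;
    private int top = -1;

    public StackX(int maxSize) { items = new long[maxSize]; }

    public void push(long value) { items[++top] = value; }  // O(1)
    public long pop()            { return items[top--]; }   // O(1)
    public boolean isEmpty()     { return top == -1; }
}
```

Pushing 20, 40, 60, 80 and then popping until the stack is empty yields the items in reverse order: 80 60 40 20.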
Sunday, October 30, 2011
Plain English explanation of Big O
Disclaimer, stolen from Jon Skeet's answer:
Quick note, this is almost certainly confusing Big O notation (which is an upper bound) with Theta notation (which is a two-side bound). In my experience this is actually typical of discussions in non-academic settings. Apologies for any confusion caused.
The simplest definition I can give for Big-O notation is this:
Big-O notation is a relative representation of the complexity of an algorithm.
There are some important and deliberately chosen words in that sentence:
relative: you can only compare apples to apples. You can't compare an algorithm to do arithmetic multiplication to an algorithm that sorts a list of integers. But two algorithms that do arithmetic operations (one multiplication, one addition) will tell you something meaningful;
representation: Big-O (in its simplest form) reduces the comparison between algorithms to a single variable. That variable is chosen based on observations or assumptions. For example, sorting algorithms are typically compared based on comparison operations (comparing two nodes to determine their relative ordering). This assumes that comparison is expensive. But what if comparison is cheap but swapping is expensive? It changes the comparison; and
complexity: if it takes me one second to sort 10,000 elements how long will it take me to sort one million? Complexity in this instance is a relative measure to something else.
Come back and reread the above when you've read the rest.
The best example of Big-O I can think of is doing arithmetic. Take two numbers (123456 and 789012). The basic arithmetic operations we learnt in school were:
addition;
subtraction;
multiplication; and
division.
Each of these is an operation or a problem. A method of solving these is called an algorithm.
Addition is the simplest. You line the numbers up (to the right) and add the digits in a column writing the last number of that addition in the result. The 'tens' part of that number is carried over to the next column.
Let's assume that the addition of these numbers is the most expensive operation in this algorithm. It stands to reason that to add these two numbers together we have to add together 6 digits (and possibly carry a 7th). If we add two 100 digit numbers together we have to do 100 additions. If we add two 10,000 digit numbers we have to do 10,000 additions.
See the pattern? The complexity (being the number of operations) is directly proportional to the number of digits n in the larger number. We call this O(n) or linear complexity.
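The column-by-column procedure translates directly into code; the loop runs once per digit column, so the number of single-digit additions grows linearly with n (a sketch over decimal strings):

```java
// Schoolbook addition of two non-negative decimal strings. The while loop
// executes once per digit column, so the work is O(n) in the length of the
// longer number: write the last digit of each column sum, carry the 'tens'.
public class SchoolbookAdd {
    public static String add(String a, String b) {
        StringBuilder out = new StringBuilder();
        int i = a.length() - 1, j = b.length() - 1, carry = 0;
        while (i >= 0 || j >= 0 || carry != 0) {
            int sum = carry;
            if (i >= 0) sum += a.charAt(i--) - '0';
            if (j >= 0) sum += b.charAt(j--) - '0';
            out.append((char) ('0' + sum % 10));  // last digit of the column sum
            carry = sum / 10;                     // 'tens' part carried over
        }
        return out.reverse().toString();
    }
}
```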
Subtraction is similar (except you may need to borrow instead of carry).
Multiplication is different. You line the numbers up, take the first digit in the bottom number and multiply it in turn against each digit in the top number and so on through each digit. So to multiply our two 6 digit numbers we must do 36 multiplications. We may need to do as many as 10 or 11 column adds to get the end result too.
If we have two 100-digit numbers we need to do 10,000 multiplications and 200 adds. For two one-million-digit numbers we need to do one trillion (10^12) multiplications and two million adds.
As the algorithm scales with n-squared, this is O(n^2) or quadratic complexity. This is a good time to introduce another important concept:
We only care about the most significant portion of complexity.
The astute may have realized that we could express the number of operations as n^2 + 2n. But as you saw from our example with two numbers of a million digits apiece, the second term (2n) becomes insignificant (accounting for 0.0002% of the total operations by that stage).
The Telephone Book
The next best example I can think of is the telephone book, normally called the White Pages or similar but it'll vary from country to country. But I'm talking about the one that lists people by surname and then initials or first name, possibly address and then telephone numbers.
Now if you were instructing a computer to look up the phone number for "John Smith", what would you do? Ignoring the fact that you could guess how far into the book the S's start (let's assume you can't), what would you do?
A typical implementation might be to open up to the middle, take the 500,000th name and compare it to "Smith". If it happens to be "Smith, John", we just got really lucky. Far more likely is that "John Smith" will be before or after that name. If it's after, we then divide the last half of the phone book in half and repeat. If it's before, we divide the first half of the phone book in half and repeat. And so on.
This is called a bisection search and is used every day in programming whether you realize it or not.
So if you want to find a name in a phone book of a million names you can actually find any name by doing this at most 21 or so times (I might be off by 1). In comparing search algorithms we decide that this comparison is our 'n'.
For a phone book of 3 names it takes 2 comparisons (at most).
For 7 it takes at most 3.
For 15 it takes 4.
...
For 1,000,000 it takes 21 or so.
That is staggeringly good isn't it?
In Big-O terms this is O(log n) or logarithmic complexity. Now the logarithm in question could be ln (base e), log10, log2, or some other base. It doesn't matter: it's still O(log n), just like O(2n^2) and O(100n^2) are still both O(n^2).
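The repeated halving can be sketched as a binary search over a sorted array; counting the probes reproduces the phone-book numbers above (names here are assumed for illustration):

```java
// Binary search over a sorted array, returning how many comparisons it took
// to find the key (each probe of the middle element counts as one comparison).
public class Bisection {
    public static int comparisonsToFind(long[] sorted, long key) {
        int lo = 0, hi = sorted.length - 1, comparisons = 0;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;            // open the "book" in the middle
            comparisons++;
            if (sorted[mid] == key) return comparisons;
            if (sorted[mid] < key) lo = mid + 1;  // look in the back half
            else hi = mid - 1;                    // look in the front half
        }
        return -1;                                // key not present
    }
}
```

For a 15-element array, every key is found in at most 4 comparisons, matching the table above.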
It's worthwhile at this point to explain that Big O can be used to determine three cases with an algorithm:
Best Case: In the telephone book search, the best case is that we find the name in one comparison. This is O(1) or constant complexity;
Expected Case: As discussed above this is O(log n); and
Worst Case: This is also O(log n).
Normally we don't care about the best case. We're interested in the expected and worst case. Sometimes one or the other of these will be more important.
Back to the telephone book.
What if you have a phone number and want to find a name? The police have a reverse phone book but such lookups are denied to the general public. Or are they? Technically you can reverse lookup a number in an ordinary phone book. How?
You start at the first name and compare the number. If it's a match, great, if not, you move on to the next. You have to do it this way because the phone book is unordered (by phone number anyway).
So to find a name:
Best Case: O(1);
Expected Case: O(n) (for 500,000); and
Worst Case: O(n) (for 1,000,000).
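The reverse lookup is a plain linear scan; a sketch (the pair layout is assumed), where the worst case examines every entry:

```java
// Linear scan over (name, number) pairs. Because the book is unordered by
// number, the worst case examines all n entries: O(n).
public class ReverseLookup {
    public static String nameFor(String[][] book, String number) {
        for (String[] entry : book) {   // entry[0] = name, entry[1] = number
            if (entry[1].equals(number)) return entry[0];
        }
        return null;                    // number not listed
    }
}
```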
The Travelling Salesman
This is quite a famous problem in computer science and deserves a mention. In this problem you have N towns. Each of those towns is linked to 1 or more other towns by a road of a certain distance. The Travelling Salesman problem is to find the shortest tour that visits every town.
Sounds simple? Think again.
If you have 3 towns A, B and C with roads between all pairs then you could go:
A -> B -> C
A -> C -> B
B -> C -> A
B -> A -> C
C -> A -> B
C -> B -> A
Well, actually there are fewer than that, because some of these are equivalent (A -> B -> C and C -> B -> A are equivalent, for example, because they use the same roads, just in reverse).
In actuality there are 3 possibilities.
Take this to 4 towns and you have (iirc) 12 possibilities. With 5 it's 60. 6 becomes 360.
This is a function of a mathematical operation called a factorial. Basically:
5! = 5 * 4 * 3 * 2 * 1 = 120
6! = 6 * 5 * 4 * 3 * 2 * 1 = 720
7! = 7 * 6 * 5 * 4 * 3 * 2 * 1 = 5040
...
25! = 25 * 24 * ... * 2 * 1 = 15,511,210,043,330,985,984,000,000
...
50! = 50 * 49 * ... * 2 * 1 = 3.04140932... × 10^64
So the Big-O of the Travelling Salesman problem is O(n!) or factorial or combinatorial complexity.
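Just to make the growth concrete, the factorials above can be computed with BigInteger, since even 21! already overflows a 64-bit long:

```java
import java.math.BigInteger;

// Computes n! exactly. A long overflows past 20!, so BigInteger is needed to
// reproduce values like 25! = 15,511,210,043,330,985,984,000,000.
public class Factorial {
    public static BigInteger of(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }
}
```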
By the time you get to 200 towns there isn't enough time left in the universe to solve the problem with traditional computers.
Something to think about.
Polynomial Time
Another point I wanted to make quick mention of is that any algorithm that has a complexity of O(na) is said to have polynomial complexity or is solvable in polynomial time.
Traditional computers can solve polynomial-time problems efficiently. Certain things are used in the world because of this. Public Key Cryptography is a prime example. It is computationally hard to find the two prime factors of a very large number. If it weren't, we couldn't use the public key systems we use.
Anyway, that's it for my (hopefully plain English) explanation of Big O (revised).
http://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o
Big 'O' notation and Java Constant time performance
Constant time performance for a collection, in a line, is "the same amount of time taken for a certain operation (method) regardless of the number of elements in the collection".
Big 'O' notation
The Big O notation is used to indicate the time taken by algorithms to run :-
For example:-
(N is the number of elements)
O(N):- The time taken is linearly dependent on the number of elements.
O(log N):- The time taken grows logarithmically with the number of elements.
O(1):- The time taken is constant, regardless of the number of elements.
Please read here to find out about the big O notation:-
http://en.wikipedia.org/wiki/Big_O_notation
In Java collections
Sets
1) The HashSet offers constant time performance for basic operations (add, remove, contains and size).
2) The TreeSet provides guaranteed log(n) time cost for the basic operations (add, remove and contains).
3) Like HashSet, the LinkedHashSet provides constant-time performance for the basic operations (add, contains and remove)
Lists
4) The ArrayList offers constant time performance for the size, isEmpty, get, set, iterator, and listIterator operations. The add operation runs in amortized constant time. All of the other operations run in linear time (roughly speaking).
Maps
5) The HashMap provides constant-time performance for the basic operations (get and put).
6) The TreeMap provides guaranteed log(n) time cost for the containsKey, get, put and remove operations.
7) Like HashMap, the LinkedHashMap provides constant-time performance for the basic operations (add, contains and remove), assuming the hash function disperses elements properly among the buckets.
If you need a List you could use the ArrayList.
If you need a Set use the HashSet or for a sorted set use the TreeSet or an insertion ordered set use the LinkedHashSet.
If you need a Map use the HashMap or for a key sorted map use the TreeMap or a key insertion ordered map use the LinkedHashMap.
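The ordering difference between the sorted and insertion-ordered Map flavours can be checked directly (a small sketch):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

// TreeMap iterates its keys in sorted order; LinkedHashMap iterates them in
// the order the keys were inserted.
public class MapOrderDemo {
    public static String keysOf(Map<String, Integer> m) {
        return String.join(",", m.keySet());
    }

    public static void main(String[] args) {
        Map<String, Integer> sorted = new TreeMap<>();
        Map<String, Integer> insertion = new LinkedHashMap<>();
        for (String k : new String[] {"banana", "apple", "cherry"}) {
            sorted.put(k, k.length());
            insertion.put(k, k.length());
        }
        System.out.println(keysOf(sorted));     // key-sorted order
        System.out.println(keysOf(insertion));  // insertion order
    }
}
```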
http://objectissues.blogspot.com/2006/11/big-o-notation-and-java-constant-time.html
Priority Queue Code
The insert() method checks whether there are any items; if not, it inserts one at
index 0. Otherwise, it starts at the top of the array and shifts existing items upward
until it finds the place where the new item should go. Then it inserts the item and
increments nItems. Note that if there’s any chance the priority queue is full, you
should check for this possibility with isFull() before using insert().
The front and rear fields aren’t necessary as they were in the Queue class because, as
we noted, front is always at nItems-1 and rear is always at 0.
The remove() method is simplicity itself: it decrements nItems and returns the item from the top of the array. The peekMin() method is similar, except it doesn't decrement nItems. The isEmpty() and isFull() methods check whether nItems is 0 or maxSize, respectively.
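The description above can be sketched as follows (a sketch, not the book's exact listing: items are kept in descending order so the minimum always sits at index nItems-1):

```java
// Array-based priority queue sketch. Items are stored in descending order,
// so the minimum is always at index nItems-1: insert shifts items upward to
// find the right slot (O(N)), remove just takes the top item (O(1)).
public class PriorityQ {
    private final long[] queArray;
    private int nItems = 0;

    public PriorityQ(int maxSize) { queArray = new long[maxSize]; }

    public void insert(long item) {                // O(N)
        if (nItems == 0) {
            queArray[nItems++] = item;             // empty: insert at index 0
        } else {
            int j;
            for (j = nItems - 1; j >= 0; j--) {    // start at the top
                if (item > queArray[j]) queArray[j + 1] = queArray[j]; // shift up
                else break;                        // found the right spot
            }
            queArray[j + 1] = item;
            nItems++;
        }
    }

    public long remove()     { return queArray[--nItems]; }     // O(1): minimum
    public long peekMin()    { return queArray[nItems - 1]; }
    public boolean isEmpty() { return nItems == 0; }
    public boolean isFull()  { return nItems == queArray.length; }
}
```

As the text advises, call isFull() before insert() if there is any chance the queue is full.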
Efficiency of Priority Queues
In the priority-queue implementation we show here, insertion runs in O(N) time, while deletion takes O(1) time.
Selection Sort Code
Invariant
In the selectionSort.java program, the data items with indices less than or equal to out are always sorted.
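A sketch of the algorithm, using the out/min naming convention of selectionSort.java:

```java
// Selection sort. After each pass, the items at indices <= out are in their
// final sorted positions (the invariant stated above). One swap per pass.
public class SelectionSort {
    public static void sort(long[] a) {
        for (int out = 0; out < a.length - 1; out++) {
            int min = out;
            for (int in = out + 1; in < a.length; in++) {
                if (a[in] < a[min]) min = in;   // remember the smallest so far
            }
            long tmp = a[out];                  // swap smallest into place
            a[out] = a[min];
            a[min] = tmp;
        }
    }
}
```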
Saturday, October 29, 2011
Insertion Sort Code
Invariants in Insertion Sort
At the end of each pass, following the insertion of the item from temp, the data items with smaller indices than outer are partially sorted.
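A sketch of the pass described above (using temp for the removed item, as in the text, and out for the outer index):

```java
// Insertion sort. Each pass removes the marked item into temp, shifts larger
// items one slot right, then inserts temp; items left of out stay partially
// sorted, which is the invariant stated above.
public class InsertionSort {
    public static void sort(long[] a) {
        for (int out = 1; out < a.length; out++) {
            long temp = a[out];                  // remove the marked item
            int in = out;
            while (in > 0 && a[in - 1] > temp) { // shift larger items right
                a[in] = a[in - 1];
                in--;
            }
            a[in] = temp;                        // insert the marked item
        }
    }
}
```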
Can you explain Java pre/post increment/decrement?
If you are only incrementing or decrementing a variable, there's no difference.
i++;
++i;
These are equivalent statements in java.
If, however, you are assigning a value to a variable such as:
array[pos++] = 5;
This means, set the array element at position pos to 5 then increment pos by 1.
array[++pos] = 5;
This means, increment pos by 1 then set the array element at that new pos value to 5.
So if 'pos' was 3, in the first statement we're setting array[3] = 5. In the second statement we're setting array[4] = 5. In either situation, the next line of code will see 'pos' as 4.
This basically means if you are using pre-increment such as ++i this is evaluated before the rest of that code line is. If using post-increment such as i++ this is evaluated after the rest of the code line is.
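The two assignment forms can be checked side by side (a small sketch; each method returns the resulting pos, array[3], and array[4]):

```java
// Post- vs pre-increment when the variable is used as an array index.
public class IncrementDemo {
    public static int[] postIncrement() {
        int[] array = new int[6];
        int pos = 3;
        array[pos++] = 5;      // sets array[3] to 5, THEN pos becomes 4
        return new int[] { pos, array[3], array[4] };
    }

    public static int[] preIncrement() {
        int[] array = new int[6];
        int pos = 3;
        array[++pos] = 5;      // pos becomes 4 FIRST, then sets array[4] to 5
        return new int[] { pos, array[3], array[4] };
    }
}
```

In both cases pos ends up as 4; only the element that received the 5 differs.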
http://answers.yahoo.com/question/index?qid=20100211211451AACctJj
In Java, what's the difference between public, default, protected, and private?
Modifier | Class | Package | Subclass | World
public | Y | Y | Y | Y
protected | Y | Y | Y | N
no modifier | Y | Y | N | N
private | Y | N | N | N
http://stackoverflow.com/questions/215497/in-java-whats-the-difference-between-public-default-protected-and-private
Tuesday, October 25, 2011
Depth First Search
Definitions
Adjacency
Two vertices are said to be adjacent to one another if they are connected by a single edge.
Paths
A path is a sequence of edges. There can be more than one path between two vertices.
Connected Graphs
A graph is said to be connected if there is at least one path from every vertex to every other vertex.
Representing a Graph in a Program
Vertices
In a very abstract graph program you could simply number the vertices 0 to N-1.
Edges
Two methods are commonly used for graphs: the adjacency matrix and the adjacency list.
The Adjacency Matrix
An adjacency matrix is a two-dimensional array in which the elements indicate whether an edge is present between two vertices. If a graph has N vertices, the adjacency matrix is an NxN array.
The Adjacency List
The list in adjacency list refers to a linked list. An adjacency list is an array of lists. Each individual list shows what vertices a given vertex is adjacent to.
Depth-First Search
Rule1
If possible, visit an adjacent unvisited vertex, mark it, and push it on the stack.
Rule2
If you can't follow Rule 1, then, if possible, pop a vertex off the stack.
Rule3
If you can't follow Rule 1 or Rule 2, you're done.
You might say that the depth-first search algorithm likes to get as far away from the starting point as quickly as possible and returns only when it reaches a dead end. If you use the term depth to mean the distance from the starting point, you can see where the name depth-first search comes from.
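The three rules translate almost line for line into stack-based code. Here is a sketch over an adjacency matrix (the names are assumed; the book's own listing differs in detail):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Depth-first search over an adjacency matrix, following the three rules:
// Rule 1: visit an adjacent unvisited vertex, mark it, push it.
// Rule 2: if Rule 1 fails, pop a vertex off the stack.
// Rule 3: if both fail (stack empty), we're done.
public class DepthFirst {
    public static String dfs(boolean[][] adj, int start) {
        int n = adj.length;
        boolean[] visited = new boolean[n];
        Deque<Integer> stack = new ArrayDeque<>();
        StringBuilder order = new StringBuilder();

        visited[start] = true;
        order.append(start);
        stack.push(start);

        while (!stack.isEmpty()) {              // Rule 3: empty stack => done
            int v = stack.peek();
            int next = -1;
            for (int w = 0; w < n; w++) {       // Rule 1: find unvisited neighbour
                if (adj[v][w] && !visited[w]) { next = w; break; }
            }
            if (next == -1) {
                stack.pop();                    // Rule 2: dead end, back up
            } else {
                visited[next] = true;
                order.append(' ').append(next);
                stack.push(next);
            }
        }
        return order.toString();                // vertices in visit order
    }
}
```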
Analogy
An analogy you might think about in relation to depth-first search is a maze. The maze - perhaps one of the people-size ones made of hedges, popular in England - consists of narrow passages (think of edges) and intersections where passages meet (vertices).
Depth-First Search and Game Simulations
Depth-first searches are often used in simulations of games (and game-like situations in the real world). In a typical game you can choose one of several possible actions. Each choice leads to further choices, each of which leads to further choices, and so on into an ever-expanding tree-shaped graph of possibilities. A choice point corresponds to a vertex, and the specific choice taken corresponds to an edge, which leads to another choice-point vertex.
Chapter 13 Graphs
http://www.amazon.com/Data-Structures-Algorithms-Java-2nd/dp/0672324539/ref=sr_1_1?s=books&ie=UTF8&qid=1319590014&sr=1-1
Monday, October 24, 2011
Binary Tree Code
Building tree with root value 25
=================================
Inserted 11 to left of node 25
Inserted 15 to right of node 11
Inserted 16 to right of node 15
Inserted 23 to right of node 16
Inserted 79 to right of node 25
preorder traversing
=================================
Traversed 25
Traversed 11
Traversed 15
Traversed 16
Traversed 23
Traversed 79
postorder traversing
=================================
Traversed 23
Traversed 16
Traversed 15
Traversed 11
Traversed 79
Traversed 25
inorder traversing
=================================
Traversed 11
Traversed 15
Traversed 16
Traversed 23
Traversed 25
Traversed 79
http://www.roseindia.net/java/java-get-example/java-binary-tree-code.shtml
Sunday, October 23, 2011
Simon Sinek: How great leaders inspire action
http://www.ted.com Simon Sinek presents a simple but powerful model for how leaders inspire action, starting with a golden circle and the question "Why?" His examples include Apple, Martin Luther King, and the Wright brothers -- and as a counterpoint Tivo, which (until a recent court victory that tripled its stock price) appeared to be struggling.
Seth Godin: Sliced bread and other marketing delights
http://www.ted.com In a world of too many options and too little time, our obvious choice is to just ignore the ordinary stuff. Marketing guru Seth Godin spells out why, when it comes to getting our attention, bad or bizarre ideas are more successful than boring ones.
Richard St. John: Secrets of success in 8 words, 3 minutes
http://www.ted.com Why do people succeed? Is it because they're smart, or are they just lucky? Analyst Richard St. John condenses years of interviews into an unmissable 3-minute presentation on the real secrets of success.
- PASSION
- WORK
- FOCUS
- PERSIST
- IDEAS
- GOOD
- PUSH
- SERVE
Sonatype Nexus Maven Repository Manager
https://repository.sonatype.org/index.html#welcome
Sonatype Nexus™ Professional Version
Copyright © 2008-2011 Sonatype, Inc.
All rights reserved. Includes the third-party code listed at http://www.sonatype.com/products/nexus/attributions.
Sonatype and Sonatype Nexus are trademarks of Sonatype, Inc. Apache Maven is a trademark of the Apache Foundation.
M2Eclipse is a trademark of the Eclipse Foundation. All other trademarks are the property of their respective owners.
To see the full license terms for this Software: Click Here
The product EULA is: http://www.sonatype.com/products/nexus/EULA
Saturday, October 22, 2011
Linked Lists 4: The Empty List (null)
number of data values | number of nodes |
3 | 3 |
2 | 2 |
1 | 1 |
0 | 0 |
A recursive definition of a linked list:
* null
* a node whose next refers to a linked list
Why Linked Lists?
* Linked lists are BEAUTIFUL.
* Linked lists are more efficient than arrays/ArrayLists in some situations
* Linked lists provide a foundation for understanding other node-based data structures (trees, graphs)
Linked Lists 3: Using Nodes
Node            Node            Node
data: "A"       data: "B"       data: "C"
next: ------->  next: ------->  next: null
SyntaxHighlighter not showing toolbar
I've been trying to figure this one out myself. I won't claim to be 100% correct here, but from what I can tell, this is the answer:
- Toolbar was changed in update from version 2 to version 3.
- Toolbar no longer includes the icons and whatnot.
- The default toolbar is now the simple '?'.
This pretty much sucks, if it's true. The pop-up toolbar w/ icons is one of the things that made me choose SH over the other options.
This is what I'm guessing from comparing the included CSS files in the latest package to the CSS available on sites that have a version with the "proper" toolbar enabled.
Linked Lists 2: The Node Class
Node class: What is the "next" of the last node? What does getNext() return for the last node? null
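A minimal Node class along the lines discussed (names assumed): the last node's next field, and therefore its getNext(), is null.

```java
// Singly linked list node. The last node's next is null, which is what
// getNext() returns for it, marking the end of the list.
public class Node {
    private final String data;
    private Node next;                 // null until linked to a successor

    public Node(String data) { this.data = data; }

    public String getData() { return data; }
    public Node getNext()   { return next; }
    public void setNext(Node next) { this.next = next; }
}
```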
How to map a composite key with Hibernate?
To map a composite key, you can use the EmbeddedId or the IdClass annotations. I know this question is not strictly about JPA but the rules defined by the specification also apply. So here they are:

2.1.4 Primary Keys and Entity Identity

...

A composite primary key must correspond to either a single persistent field or property or to a set of such fields or properties as described below. A primary key class must be defined to represent a composite primary key. Composite primary keys typically arise when mapping from legacy databases when the database key is comprised of several columns. The EmbeddedId and IdClass annotations are used to denote composite primary keys. See sections 9.1.14 and 9.1.15.

...

The following rules apply for composite primary keys:

- The primary key class must be public and must have a public no-arg constructor.
- If property-based access is used, the properties of the primary key class must be public or protected.
- The primary key class must be serializable.
- The primary key class must define equals and hashCode methods. The semantics of value equality for these methods must be consistent with the database equality for the database types to which the key is mapped.
- A composite primary key must either be represented and mapped as an embeddable class (see Section 9.1.14, "EmbeddedId Annotation") or must be represented and mapped to multiple fields or properties of the entity class (see Section 9.1.15, "IdClass Annotation").
- If the composite primary key class is mapped to multiple fields or properties of the entity class, the names of primary key fields or properties in the primary key class and those of the entity class must correspond and their types must be the same.
With an IdClass

The class for the composite primary key could look like (could be a static inner class):

public class TimePK implements Serializable {
    protected Integer levelStation;
    protected Integer confPathID;

    public TimePK() {}

    public TimePK(Integer levelStation, Integer confPathID) {
        this.levelStation = levelStation;
        this.confPathID = confPathID;
    }

    // equals, hashCode
}
And the entity:

@Entity
@IdClass(TimePK.class)
class Time implements Serializable {
    @Id
    private Integer levelStation;

    @Id
    private Integer confPathID;

    private String src;
    private String dst;
    private Integer distance;
    private Integer price;

    // getters, setters
}

The IdClass annotation maps multiple fields to the table PK.
With EmbeddedId

The class for the composite primary key could look like (could be a static inner class):

@Embeddable
public class TimePK implements Serializable {
    protected Integer levelStation;
    protected Integer confPathID;

    public TimePK() {}

    public TimePK(Integer levelStation, Integer confPathID) {
        this.levelStation = levelStation;
        this.confPathID = confPathID;
    }

    // equals, hashCode
}
And the entity:

@Entity
class Time implements Serializable {
    @EmbeddedId
    private TimePK timePK;

    private String src;
    private String dst;
    private Integer distance;
    private Integer price;

    //...
}

The @EmbeddedId annotation maps a PK class to the table PK.

Differences:

- From the physical model point of view, there are no differences.
- EmbeddedId somehow communicates more clearly that the key is a composite key and IMO makes sense when the combined PK is either a meaningful entity itself or is reused in your code.
- @IdClass is useful to specify that some combination of fields is unique but these do not have a special meaning.
- with IdClass: select t.levelStation from Time t
- with EmbeddedId: select t.timePK.levelStation from Time t
References
- JPA 1.0 specification
- Section 2.1.4 "Primary Keys and Entity Identity"
- Section 9.1.14 "EmbeddedId Annotation"
- Section 9.1.15 "IdClass Annotation"
http://stackoverflow.com/questions/3585034/how-to-map-a-composite-key-with-hibernate