Friday, June 28, 2013

What is the purpose of endorsing a check?

I actually had to go to the bank today and so I decided to ask.
The answer I was given is that a check is a legal document (a promise to pay). In order to get your money from the bank, you need to sign the check over to them. By endorsing the check you are attesting to the fact that you have transferred said document to them and they can draw on that account.

Thursday, June 20, 2013

How to determine the Application Server V6.1 Base and Network Deployment Java bit-depth on AIX, Windows, Linux

The output of "java -version" for an IBM JDK shows which bit-depth is installed. If a 64-bit Java is installed, the output will contain "64" to indicate this. Example on AIX:
    -bash-3.00$ ./java -version
    java version "1.5.0"
    Java(TM) 2 Runtime Environment, Standard Edition (build pap64devifx-20071025 (SR6b))
    IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 AIX ppc64-64 j9vmap6423-20071007 (JIT enabled)
    J9VM - 20071004_14218_BHdSMr
    JIT - 20070820_1846ifx1_r8
    GC - 200708_10)
    JCL - 20071025
    -bash-3.00$

This is valid for all IBM JDKs. Please see as well TechNote 7005002.
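
As a quick cross-check from code, a small sketch like the one below prints the system properties that carry the same information. Note that com.ibm.vm.bitmode is IBM-specific and sun.arch.data.model is Sun/Oracle-specific, so either property may be null depending on the JVM.

// BitDepthCheck.java - print JVM properties that indicate 32-bit vs 64-bit
public class BitDepthCheck {
    public static void main(String[] args) {
        // "com.ibm.vm.bitmode" is set by IBM JDKs ("32" or "64"),
        // "sun.arch.data.model" by Sun/Oracle JDKs, and "os.arch" is generic.
        System.out.println("com.ibm.vm.bitmode  = " + System.getProperty("com.ibm.vm.bitmode"));
        System.out.println("sun.arch.data.model = " + System.getProperty("sun.arch.data.model"));
        System.out.println("os.arch             = " + System.getProperty("os.arch"));
    }
}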

Starting and stopping the WebSphere Application Server node agent

Starting and stopping the WebSphere Application Server node agent is accomplished by using the startNode and stopNode commands.
Procedure
  1. AIX, Linux, Solaris: To start or stop the WebSphere Application Server node agent:
    1. Ensure that you are logged in as the non-root user ID created before installing WebSphere Commerce.
    2. Ensure that your database management system is started.
    3. Enter the following commands in a terminal window:
      su - non_root_user
      cd WC_profiledir/bin
    • To start the node agent, enter the following command: ./startNode.sh
    • To stop the node agent, enter the following command: ./stopNode.sh
  2. Windows: To start or stop the WebSphere Application Server node agent:
    1. Log on using a Windows user ID with Administrator authority.
    2. Start a command prompt session.
    3. Issue the following command: cd WC_profiledir/bin
    • To start the node agent, enter the following command: startNode
    • To stop the node agent, enter the following command: stopNode

Wednesday, June 19, 2013

WebSphere Application Server Tutorial and FAQ — WAS in 5 Minutes

If you're developing applications for WAS and you're new to it, this is what you need to know:
  • What is the default URL of the admin console? https://$hostname:9043/ibm/console.
  • What are the default ports? HTTP: 9080, HTTPS: 9443.
  • How to locate the logs. Logs can be found under $install_root/profiles/$profile_name/logs/$server_name. The default profile name is AppSrv01 and the default server name is server1. Example: /usr/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1. SystemOut.log is the file containing everything that was logged to standard out. Logs can also be viewed from the admin console by navigating to Troubleshooting/Logging and Tracing/server_name/Runtime.
  • How to start/stop a server. If you're dealing with a "Network Deployment" type of installation (multiple application servers running under the control of the "deployment manager"), you can start/stop a server from the console (Server/Server Types/WebSphere application servers). Otherwise you have to do it from the command line. Go to install_root/bin and run ./startServer.sh server_name, e.g., ./startServer.sh server1 (this assumes that your installation has only one profile defined; otherwise you may need to "cd" to the profile_name/bin directory). Make sure that you run all commands using the appropriate system account. To stop the server, run ./stopServer.sh server_name -username user_name -password password. user_name and password are the credentials of an admin account, typically the same one you use to log in to the console.
  • How to deploy an application. In the admin console, navigate to Applications/Application Types/WebSphere enterprise applications, click on "Install new application", select "Fast path", and accept all the defaults, except that on "step 2" you should make sure you target the correct servers (if you have multiple servers/clusters in your environment). Note that you can deploy a WAR file directly; you don't have to build an EAR. In this case, make sure that you set a context root on the "step 4" screen of the wizard.
  • How to change context root of a Web application. Go to Applications/Application Types/WebSphere enterprise applications/application_name/Context Root For Web Modules in the console. Re-start the application after the change.
  • How to change the order of classloaders. If you're getting a ClassNotFoundException when you're starting the app, changing the order of classloaders is the first thing you may want to try. Go to Applications/Application Types/WebSphere enterprise applications/application_name/Manage Modules/module_name and make the appropriate selection in the "Class loader order" drop-down (this assumes you're doing it for a WAR module).
  • How to enable dynamic class reloading. If you need to frequently update your deployed application (e.g., you use a local WAS installation for development), enabling dynamic reloading could be a huge time saver. Go to your application in the console, "Class loading and update detection", set "Override class reloading settings ..." and set polling interval to 2 seconds. See this post for more details on how to configure your development environment to support class reloading.
  • How to find a host name and a port of the server. Go to Server/Server Types/WebSphere application servers. You'll find the host name in the Host Name column. To find a port, click on your server, and expand Ports. WC_defaulthost is the HTTP port and WC_defaulthost_secure is the HTTPS port.
  • How to kill a JVM. If the normal "stop" routine failed to stop the server in a reasonable amount of time, you may need to kill it. In a "Network Deployment" environment, simply navigate to the list of servers, select the server and click "Terminate". A node agent will kill the JVM for you. To achieve the same from the command line (the only option if you're running standalone), cd to install_root/profiles/profile_name/logs/server_name, and kill the process ID contained in the file server_name.pid. On Unix, you can simply do kill -9 `cat server1.pid` (assuming server1 is your server name). Use Task Manager or taskkill /PID on Windows.
  • How to browse JMS messages. Go to Buses/Your bus name/Destinations/Your destination/Queue points/Your queue point/Runtime/Messages.
  • Where to find configuration files. WAS has many configuration files, most of them in XML/XMI format. The files are located under $install_root/profiles/$profile_name/config/cells/$cell_name.
http://myarch.com/websphere-application-server-for-developers-in-5-minutes-or-less

Tuesday, June 18, 2013

What is the difference between JDK and JRE?

JRE: Java Runtime Environment. It is basically the Java Virtual Machine that your Java programs run on. It also includes the browser plugins for applet execution.
JDK: It's the full featured Software Development Kit for Java, including JRE, and the compilers and tools (like JavaDoc, and Java Debugger) to create and compile programs.
Usually, when you only care about running Java programs on your browser or computer you will only install JRE. It's all you need. On the other hand, if you are planning to do some Java programming, you will also need JDK.
Sometimes, even though you are not planning to do any Java development on a computer, you still need the JDK installed. For example, if you are deploying a WebApp with JSP, you are technically just running Java programs inside the application server. Why would you need the JDK then? Because the application server will convert JSPs into servlets and use the JDK to compile those servlets. I am sure there might be more examples.
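
A small sketch that shows the practical difference (assuming Java 6 or later, where the javax.tools API is available): the JDK bundles a compiler, the JRE does not, so ToolProvider.getSystemJavaCompiler() returns null when running on a plain JRE.

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class JdkOrJre {
    public static void main(String[] args) {
        // The system compiler is only present when running on a JDK;
        // on a plain JRE this call returns null.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        System.out.println(compiler != null
                ? "Compiler found - running on a JDK"
                : "No compiler - running on a JRE");
    }
}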
 
 http://stackoverflow.com/questions/1906445/what-is-the-difference-between-jdk-and-jre

Monday, June 17, 2013

When to use synchronous and asynchronous messages in JMS?

You use synchronous messaging when the turnaround time is acceptably short enough (or important enough) that your clients can (or must) wait for the response to come back.
If the processing time required to produce the response is too long, or not important enough, or if the client would rather just check back for the response later, then it's possible to use asynchronous messaging.
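
A minimal sketch of the two consumption styles using the javax.jms API. The JNDI names jms/MyCF and jms/MyQueue are hypothetical, and the InitialContext is assumed to be configured by the container or a jndi.properties file.

import javax.jms.*;
import javax.naming.InitialContext;

public class JmsStyles {

    // Synchronous: the caller blocks (here up to 5 seconds) until a message arrives.
    static Message receiveSynchronously(Session session, Queue queue) throws JMSException {
        MessageConsumer consumer = session.createConsumer(queue);
        return consumer.receive(5000);
    }

    // Asynchronous: register a listener and let the JMS provider call back later.
    static void receiveAsynchronously(Session session, Queue queue) throws JMSException {
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(new MessageListener() {
            public void onMessage(Message msg) {
                // handle the response whenever it arrives
            }
        });
    }

    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyCF");   // hypothetical JNDI name
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");                     // hypothetical JNDI name
        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        conn.start();
        receiveSynchronously(session, queue);
    }
}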

Tuesday, June 11, 2013

Fully qualified path Vs. Canonical Path

"Fully-qualified path" is synonymous with "absolute path"

  • "Fully-qualified" and "absolute path" mean the same thing - a path that is not relative to an implied or specified context.
  • Every path is either a fully-qualified path or else it is a relative path
  • Every location on a file system has a multitude of paths that could be used to refer to it, including numerous fully-qualified paths:
    • C:\temp.txt
    • C:\Program Files\..\temp.txt
    • C:\Program Files\Microsoft\..\..\temp.txt
    • etc.
  • Conceptually speaking, one of those fully-qualified paths is the simplest, most straightforward way of specifying that resource - that's your canonical path.
For a file or directory, is there only one fully-qualified path, like the canonical path?
No, a fully-qualified path is any path which is not a relative path (not relative to the current directory of the implied or specified context). Multiple, but distinct, fully-qualified paths could refer to the same location on the filesystem. Reread the bullet points above, but substitute "fully-qualified" everywhere it says "absolute".
To be clear, some people will also use the term "relative path" to refer to a path with a "relative reference" (double dots ..) within it. For example, some might call C:\Program Files\Microsoft\..\temp.txt a "relative path" because of the double dots, but I would call it a fully-qualified path with a relative reference. Hopefully, it will be clear from the conversation what they mean when they say "relative path" (a path that is relative to a context, or a path with a relative reference in it).
Are both of them totally same concepts?
No, as indicated in the other SO question, there are lots ways to specify a fully-qualified path (absolute path) to a location, but only one of those fully-qualified paths is considered to be the canonical path to that location.
One more thing: does a UNC path count as a fully-qualified path too?
Yes, UNC paths are not relative paths; they are fully-qualified paths. - http://msdn.microsoft.com/en-us/library/aa365247(v=VS.85).aspx#fully_qualified_vs._relative_paths
Does a symbolic link or a hard link count as a fully-qualified path?
It's an independent concept. A path (regardless of whether it is relative or fully-qualified) leads to a location in the filesystem. The entity at that location could be one of many things: a normal file, a directory, a symbolic link, a hard link, a device, a named pipe, etc. A symbolic link or a hard link has meta-data that leads to the data you were actually looking for at that location.

Analogy Time

You can think of paths and links in the terms of directions to someone's house:
  • relative path is directions from your current location
  • fully-qualified path is directions from town-hall, regardless of where you are
    • In our strange little town of Unixville, everyone agrees and understands implicitly that "fully-qualified directions" always start at town-hall, strangely enough, a building that everyone calls "/".
    • The next town over (Windowsville) has multiple town halls (one for each part of town), called C:\, D:\, E:\, etc.
    • Different people might give you different directions (paths) to get to the same house, even if they all start from the same starting point (townhall) - some directions will be more direct than others.
  • canonical path is the fully-qualified directions that are the simplest, most straightforward means to get from town-hall to the desired house
  • symbolic link is like an empty lot with a note that gives directions to a forwarding address
    • the type of directions that led you here (whether they were relative directions, fully-qualified directions, or even the canonical fully-qualified directions) has no bearing on whether they lead to a house or an empty lot with a forwarding note
    • there's a strange case where one of the streets in your directions is actually a symbolic link (a detour? a portal?) - the analogy falls apart here if we look too closely at it, so let's just ignore it :-)
  • hard link is a house accessible from two or more different addresses.
    • Think of a house on the corner of Elm Street and Main Street. The post office mistakenly gave it two addresses : 10 Elm Str and 20 Main Str. No matter which address you go to, you end up at the same house.
    • In our strange little town, these hard-link houses can have multiple addresses and the addresses don't have to be anywhere near each other.
    • No matter which of its addresses you go to, it's the same house. It's not a copy, it's not a forwarding address. Just magically, once you go inside, you end up in the same house, regardless of which address you used to get there.
    • the directions that led you to the house (no matter which address was used or whether the directions were relative directions, fully-qualified directions, or even the canonical fully-qualified directions) have no bearing on whether the house at that address is a hard-link house or not
Addendum
Edit
I asked someone who maintains the Naming Files, Paths, and Namespaces page to clarify this, and he replied to me.
Is this also a fully-qualified path? C:\directory\..\directory\file.txt
I wonder what terms the maintainer of that page would use to differentiate between ..\file.txt and C:\directory\..\directory\file.txt, since he calls them both relative paths. I agree that double dots are a relative reference, but I wouldn't tag the whole path as relative because it has double dots in the middle of it. In his terminology, there doesn't seem to be a difference between fully-qualified and canonical. (Therein, I suppose, lies the source of your question.)
I come from a Unix and Java background, so perhaps that makes the difference. As I learned it:
  • relative/partially-qualified - location cannot be determined without the associated context providing information, e.g. the current working directory, the current drive, the drive's current directory, the shell PATH setting, the Java CLASSPATH setting, or the referencing URL.
  • absolute/fully-qualified - location is independent of the associated context, i.e. the location is the same regardless of the current working directory, the current drive, the drive's current directory, the shell PATH setting, the Java CLASSPATH setting, or the referencing URL.
  • canonical - the simplest fully-qualified, i.e. no double-dots
So
  • ..\file.txt - relative
  • C:\directory\..\directory\file.txt - fully-qualified
  • C:\directory\file.txt - fully-qualified and canonical
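
A quick way to see the distinction from Java (a sketch; the expected output in the comments assumes a Windows machine where that path exists):

import java.io.File;
import java.io.IOException;

public class PathKinds {
    public static void main(String[] args) throws IOException {
        File f = new File("C:\\directory\\..\\directory\\file.txt");
        System.out.println(f.isAbsolute());       // true  - fully-qualified, not relative
        System.out.println(f.getAbsolutePath());  // C:\directory\..\directory\file.txt
        System.out.println(f.getCanonicalPath()); // C:\directory\file.txt - the dots resolved
    }
}
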
That section of the MSDN page isn't clear on C:\directory\..\directory\file.txt: if C:\directory\..\directory\file.txt is considered relative and won't work with Windows APIs that say they need a fully-qualified (but not necessarily canonical?) path, I'd suggest that page needs to make that clearer.
Fully-qualified vs Relative
A file name is relative to the current directory if it does not begin with one of the following:
... * A disk designator with a backslash, for example "C:\" or "d:\". ...
Since C:\directory\..\directory\file.txt starts with a disk designator with a backslash, this path is fully-qualified, not relative.
A path is also said to be relative if it contains "double-dots"; that is, two periods together in one component of the path. This special specifier is used to denote the directory above the current directory, otherwise known as the "parent directory". Examples of this format are as follows:
  • "..\tmp.txt" specifies a file named tmp.txt located in the parent of the current directory.
  • "....\tmp.txt" specifies a file that is two directories above the current directory.
  • "..\tempdir\tmp.txt" specifies a file named tmp.txt located in a directory named tempdir that is a peer directory to the current directory.
I interpreted the phrase contains double dots to mean leading double dots. The examples show only leading double dots. The terminology "current directory" usually means process's current working directory or the drive's current directory, which has bearing only when talking about leading double dots. I can, however, see how the section could be interpreted the other way.
Regardless, everyone grows up different and context is king, so I guess everyone will need to be careful of the nuances when reading docs or discussing with engineers of different backgrounds on what they mean by "fully-qualified" vs "relative"

Saturday, June 8, 2013

XA and NonXA datasource

An XA transaction, in the most general terms, is a "global transaction" that may span multiple resources. A non-XA transaction always involves just one resource. 

An XA transaction involves a coordinating transaction manager, with one or more databases (or other resources, like JMS) all involved in a single global transaction. Non-XA transactions have no transaction coordinator, and a single resource is doing all its transaction work itself (this is sometimes called local transactions). 

XA transactions come from the X/Open group specification on distributed, global transactions. JTA includes the X/Open XA spec, in modified form. 

Most stuff in the world is non-XA - a Servlet or EJB or plain old JDBC in a Java application talking to a single database. XA gets involved when you want to work with multiple resources - 2 or more databases, a database and a JMS connection, all of those plus maybe a JCA resource - all in a single transaction. In this scenario, you'll have an app server like Websphere or Weblogic or JBoss acting as the Transaction Manager, and your various resources (Oracle, Sybase, IBM MQ JMS, SAP, whatever) acting as transaction resources. Your code can then update/delete/publish/whatever across the many resources. When you say "commit", the results are committed across all of the resources. When you say "rollback", _everything_ is rolled back across all resources. 

The Transaction Manager coordinates all of this through a protocol called Two Phase Commit (2PC). This protocol also has to be supported by the individual resources. 

In terms of datasources, an XA datasource is a data source that can participate in an XA global transaction. A non-XA datasource generally can't participate in a global transaction (sort of - some people implement what's called a "last participant" optimization that can let you do this for exactly one non-XA item). 
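
A rough sketch of what this looks like from application code running inside a container. The JNDI names jdbc/OrderDB and jdbc/InventoryDB are hypothetical XA datasources, and the container's transaction manager performs the 2PC behind the scenes.

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class XaTransferExample {
    public void transfer() throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource orders = (DataSource) ctx.lookup("jdbc/OrderDB");         // hypothetical XA datasource
        DataSource inventory = (DataSource) ctx.lookup("jdbc/InventoryDB");  // hypothetical XA datasource

        utx.begin();                                   // start the global transaction
        try (Connection c1 = orders.getConnection();
             Connection c2 = inventory.getConnection()) {
            c1.createStatement().executeUpdate("INSERT INTO orders VALUES (1, 'widget')");
            c2.createStatement().executeUpdate("UPDATE stock SET qty = qty - 1 WHERE item = 'widget'");
            utx.commit();                              // 2PC commit across both databases
        } catch (Exception e) {
            utx.rollback();                            // everything rolls back on both databases
            throw e;
        }
    }
}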

For more details - see the JTA pages on java.sun.com. Look at the XAResource and Xid interfaces in JTA. See the X/Open XA Distributed Transaction specification. Do a Google search on "Java JTA XA transaction". 

    -Mike

http://www.theserverside.com/discussions/thread.tss?thread_id=21385

Stateless Session Beans Managing JDBC Transactions

Hi Ryan,

Nice posting though. Sometimes people have a very specific way of asking questions that might take me an entire day and several postings in order to figure out the actual question :-)

Another option might be to use a stateless session bean and then use container managed transactions. For example, a session bean could have a doThis() method that calls three DAOs. The doThis() method would have a transaction attribute of Required, so that a new transaction is started if one doesn't exist but an existing one is used if it already exists. This way, each DAO can just get its own Connection using our custom connection factory, without having to rely on something that was passed in.

My question: Is this possible and will it work?

Absolutely; I'm pretty sure that most of the J2EE applications that use EJBs and the DAO pattern manage transactions using CMT (the container will provide you the best TransactionManager; no need to reinvent the wheel in this case).

My concern is that if each DAO within the transaction retrieves its own connection from the connection pool, will the container still be able to roll back the work done by two DAOs if, for example, an error occurs in the third DAO in a transaction? If not, what would I need to do to make it work?

It will certainly do so. Moreover, some containers will probably not return the connection to the pool before the transaction commits, so all your DAOs will use the same connection. Whatever the implementation, transaction integrity is guaranteed by the container.

I'm just not sure of how much "magic" the container can actually do behind-the-scenes.

The container will use the JTS/JTA API to implement the transaction management, which you'll probably end up doing yourself if you decide to implement a TransactionManager (another option would be to use JDBC transactions, which delegate all transaction management to the database level). Besides, it ensures transaction propagation and can also provide support for global (remote) transactions using the 2PC protocol.

Another question while I'm on the subject: What types of errors will cause a container managed transaction to rollback? If it must be some sort of system exception, what types of system exceptions will force the rollback? Can I subclass a certain type of Exception and throw that and still count on the transaction being rolled back?

The container will roll back the transaction only when a RuntimeException is thrown (the container logs the error, discards the bean instance and rolls back the transaction). In my opinion, however, good design practice should have business methods always throw application (checked) exceptions. Because this good practice and the EJB specs (regarding transaction rollback) are somewhat at odds, J2EE provides a way to overcome the issue: you can use the EJBContext.setRollbackOnly() method to mark the transaction for rollback and throw an application exception instead.
Regards.
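
Here is a sketch of the approach discussed above using EJB 3 annotations. The DAO classes and AppException are hypothetical placeholders; the container starts (or joins) a transaction around doThis() and rolls it back when setRollbackOnly() is called.

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

// Hypothetical application (checked) exception
class AppException extends Exception {
    public AppException(String msg, Throwable cause) { super(msg, cause); }
}

// Hypothetical DAOs; each obtains its own Connection internally
class OrderDao { void insertOrder() { /* JDBC work with its own Connection */ } }
class StockDao { void decrementStock() { /* JDBC work with its own Connection */ } }
class AuditDao { void writeAuditRecord() { /* JDBC work with its own Connection */ } }

@Stateless
public class OrderServiceBean {

    @Resource
    private SessionContext ctx;

    // Required: join the caller's transaction, or start a new one if none exists.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void doThis() throws AppException {
        try {
            new OrderDao().insertOrder();
            new StockDao().decrementStock();
            new AuditDao().writeAuditRecord();
        } catch (RuntimeException e) {
            // mark the container-managed transaction for rollback,
            // then report the failure as a checked application exception
            ctx.setRollbackOnly();
            throw new AppException("doThis failed", e);
        }
    }
}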

http://www.coderanch.com/t/316987/EJB-JEE/java/Stateless-Session-Beans-Managing-JDBC

What do you mean by precompiled statement? What is the difference between Statement and PreparedStatement?

A PreparedStatement is a precompiled statement.
It means the PreparedStatement compiles the SQL statement on its first run. Thus, if the statement is a PreparedStatement, you can run it multiple times without having to compile it again and again.
A PreparedStatement "compiles" and runs the SQL statement on the first run, and afterwards simply executes it (without compiling) - it saves a lot of time.

In other words, whenever you use a plain Statement, the SQL statement is compiled and then executed every time; with a PreparedStatement there is no need for repeated compilation, so it is faster than a Statement.
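
A small sketch of the difference in JDBC code. The connection URL, credentials and table are placeholders; the point is that the PreparedStatement is prepared once and then executed repeatedly with different parameters.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class PreparedVsStatement {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:db2://host:50000/SAMPLE", "user", "pass");

        // Statement: the SQL text is parsed/compiled by the database on every execution.
        Statement st = conn.createStatement();
        st.executeUpdate("INSERT INTO employees (id, name) VALUES (1, 'Alice')");

        // PreparedStatement: compiled once, then executed many times with different parameters.
        PreparedStatement ps = conn.prepareStatement("INSERT INTO employees (id, name) VALUES (?, ?)");
        for (int i = 2; i <= 100; i++) {
            ps.setInt(1, i);
            ps.setString(2, "Employee " + i);
            ps.executeUpdate();
        }

        ps.close();
        st.close();
        conn.close();
    }
}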

http://www.geekinterview.com/question_details/6145

Wednesday, June 5, 2013

What is the difference between application server and web server?

Most of the time, the terms Web Server and Application Server are used interchangeably.
Following are some of the key differences in features of Web Server and Application Server:
  • A Web Server is designed to serve HTTP content. An App Server can also serve HTTP content but is not limited to just HTTP; it can provide support for other protocols such as RMI/RPC.
  • A Web Server is mostly designed to serve static content, though most web servers have plugins to support scripting languages like Perl, PHP, ASP, JSP etc., through which these servers can generate dynamic HTTP content.
  • Most application servers have a Web Server as an integral part of them, which means an App Server can do whatever a Web Server is capable of. Additionally, App Servers have components and features to support application-level services such as connection pooling, object pooling, transaction support, messaging services etc.
  • As web servers are well suited for static content and app servers for dynamic content, most production environments have the web server acting as a reverse proxy to the app server. That means while servicing a page request, static content such as images/static HTML is served by the web server that interprets the request. Using some kind of filtering technique (mostly the extension of the requested resource), the web server identifies dynamic content requests and transparently forwards them to the app server.
An example of such a configuration is Apache HTTP Server and BEA WebLogic Server. Apache HTTP Server is the Web Server and BEA WebLogic is the Application Server.
In some cases the servers are tightly integrated, such as IIS and the .NET Runtime. IIS is a web server; when equipped with the .NET runtime environment, IIS is capable of providing application services.

Monday, June 3, 2013

The Java singleton pattern: thread safe

What's the Singleton pattern?

The singleton pattern is a design pattern that restricts the instantiation of a class to one object. This is useful when exactly one object is needed to coordinate actions across the system. (Singleton_pattern)

So how can we implement it?

A simple implementation:


public class Singleton {
        private static Singleton _instance = null;
 
        private Singleton() {   }
 
        public static Singleton getInstance() {
                if (_instance == null) {
                        _instance = new Singleton();
                }
                return _instance;
        }
}


Every time we invoke the Singleton.getInstance() method, we get the same _instance across the whole system. But this implementation has a problem: it is not thread safe.
Why?




So let me show you:
When TA (Thread A) enters getInstance(), _instance == null, so it executes _instance = new Singleton(). But if TB (Thread B) enters getInstance() at the same time, it also sees _instance == null (it is still null, because TA has not finished yet), so _instance gets created a second time.
The singleton pattern must therefore be constructed carefully in multi-threaded applications.

Here are the Java solutions that make it thread safe:

1. Eager initialization

public class Singleton {
    private static Singleton _instance = new Singleton();
 
    private Singleton() {}
 
    public static Singleton getInstance() {
        return _instance;
    }
}
 
 

Why is this thread safe?
Because _instance is a static field, and Java guarantees that static initialization of a class completes before the class is used by any thread.
Put simply, _instance is constructed at class-loading time, so when a caller invokes getInstance(), the instance is already there and is simply returned. No multi-threading problem.
But it has a cost: if creating the instance is not too expensive in terms of time/resources, this is fine; if it is, you may want to switch to lazy initialization.


2. Lazy initialization

Why is it called lazy? Because the instance is created only when it is needed, unlike eager initialization, where the instance is created at class-loading time even if getInstance() is never called.
Lazy initialization avoids paying that construction cost up front when creating the instance is expensive in terms of time/resources.

public class Singleton {
        private static Singleton _instance = null;
 
        private Singleton() {   }
 
        public static synchronized Singleton getInstance() {
                if (_instance == null) {
                        _instance = new Singleton();
                }
                return _instance;
        }
}

Because getInstance() is synchronized, only one thread can enter the method at a time. But this also has a problem: the thread-unsafe situation only concerns the first creation.
Once _instance != null, there is no need to lock at all, yet by synchronizing the whole method we acquire the lock on every call.
So double-checked locking comes in. It is a software design pattern used to reduce the overhead of acquiring a lock by first testing the locking criterion without actually acquiring the lock.

So you might redesign the method like this:


public class Singleton {
        private static Singleton _instance = null;
 
        private Singleton() {   }
 
        public static Singleton getInstance() {
              if (_instance == null) {
                    // lock on the class object: getInstance() is static, so there is no 'this'
                    synchronized (Singleton.class) {
                          if (_instance == null) {
                                _instance = new Singleton();
                          }
                    }
              }
              return _instance;
        }
}




Why the double check? Because by the time a thread acquires the lock, another thread may already have finished initializing the instance, so the condition has to be checked again inside the synchronized block.

But there is still a subtle problem.

1. Thread A enters the method, notices that _instance is not initialized, so it obtains the lock and begins to initialize _instance.

2. Due to the semantics of some programming languages, the code generated by the compiler is allowed to update the shared variable to point to a partially constructed object.
In other words, before A has finished performing the initialization, thread B may enter the method and see _instance != null (memory has been allocated and the reference published, but the constructor has not finished running, so the object only has its space, not its state). If thread B then uses the instance, the program may crash or misbehave.

The problem can be fixed with the volatile keyword, which ensures that multiple threads handle the singleton instance correctly:

public class Singleton {
        private static volatile Singleton _instance = null;
 
        private Singleton() {   }
 
        public static Singleton getInstance() {
              if (_instance == null) {
                    // lock on the class object: getInstance() is static, so there is no 'this'
                    synchronized (Singleton.class) {
                          if (_instance == null) {
                                _instance = new Singleton();
                          }
                    }
              }
              return _instance;
        }
}


3. Initialization-on-demand holder idiom
First, let us see the implementation:

public class Singleton {
        // Private constructor prevents instantiation from other classes
        private Singleton() { }
 
        /**
        * SingletonHolder is loaded on the first execution of Singleton.getInstance() 
        * or the first access to SingletonHolder.INSTANCE, not before.
        */
        private static class SingletonHolder { 
                public static final Singleton instance = new Singleton();
        }
 
        public static Singleton getInstance() {
                return SingletonHolder.instance;
        }
}

This may look familiar; it looks like eager initialization, so how does it avoid constructing the large object up front?
It takes advantage of the language's guarantees about class initialization: the JVM does not initialize the nested static class SingletonHolder at class-loading time, but only when it is first referenced, which happens inside getInstance().
Since the class initialization phase is guaranteed by the JLS to be sequential (non-concurrent), and it is that phase which writes the static variable instance, the JVM ensures that SingletonHolder.instance is initialized exactly once and published safely, so this is thread safe.


4. The Enum way

public enum Singleton {
        INSTANCE;
        public void execute (String arg) {
                //... perform operation here ...
        }
}

In the second edition of his book "Effective Java", Joshua Bloch claims that "a single-element enum type is the best way to implement a singleton" for any Java version that supports enums. An enum is very easy to implement and has none of the drawbacks regarding serializable objects that have to be worked around in the other approaches.
This approach implements the singleton by taking advantage of Java's guarantee that any enum value is instantiated only once in a Java program. Since Java enum values are globally accessible, so is the singleton. The drawback is that the enum type is somewhat inflexible; for example, it does not allow lazy initialization.
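
For completeness, a minimal usage sketch of the enum singleton defined above (SingletonDemo is just a hypothetical caller):

public class SingletonDemo {
    public static void main(String[] args) {
        // always the same, single instance
        Singleton.INSTANCE.execute("hello");
    }
}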

Reference links:


http://regrecall.blogspot.com/2012/05/java-singleton-pattern-thread-safe.html