Thursday, October 17, 2013

How to change user credentials in Subclipse?

Delete, or rename, the Eclipse '.keyring' file in Eclipse's configuration folder. This is where the Subclipse SVNKit connector caches your SVN credentials.
[ECLIPSE INSTALL]\configuration\org.eclipse.core.runtime.keyring
If, on the other hand, you're using the JavaHL connector -- or the SVN command line -- then your credentials are stored in the Subversion runtime config folder. Delete or rename the credential files there.
On Windows: %APPDATA%\Subversion\auth
On Linux and OSX: ~/.subversion/auth
Sorry about this pig of complexity for what should be a real version-control system. :-(

Wednesday, October 16, 2013

A brief introduction to the service integration bus

When Graham started this blog in September there was a definite idea that we would be talking about what is new in v7. It was launched at the same time v7 shipped, so to deny a link would be ludicrous, but it has recently occurred to me that it might be useful just to cover some of the basics. I am basing this on a large number of conversations I have had over the last month or so where I have had to explain the basics to people who had not quite got one of the concepts. So here I go:

What is a bus?
A bus is a number of things, which makes coming up with a single definition hard, but when we talk about a bus we mean all of the following:

  1. A namespace for destinations.
  2. A cloud in which destinations are defined and to which client applications connect.
  3. A set of interconnected application servers and/or clusters that co-operate to provide messaging function.
  4. A set of interconnected messaging engines that co-operate to provide messaging function.
While some of these might seem similar, they are in fact different, and that difference should become clearer later on (the four statements are not quite equivalent).

What is a destination?

A destination is a point of addressability within the bus. Messages are sent to and received from destinations. There are a number of different types of destination. These are:
  1. A queue. This provides point-to-point messaging capabilities. A message is delivered to exactly one connected client, and messages are broadly processed in first-in, first-out order (message priority can affect this order).
  2. A topic space. This provides publish/subscribe messaging capabilities. A message is delivered to all matching connected clients.
There are other types of destination, but they are less common, so I have skimmed over those. A short JMS sketch of the two common types follows.
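To make the two common destination types concrete, here is a minimal, hedged sketch in plain JMS. The JNDI names and the connection factory are made-up placeholders, not names from this post; your administrator would bind the real ones. A message sent to a queue goes to exactly one consumer, while a message published to a topic space goes to every matching subscriber.

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class DestinationTypesSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical JNDI names; in practice these are bound by your
            // administrator to a bus connection factory, a queue destination
            // and a topic space on the bus.
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/myBusConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/myQueue");           // point-to-point
            Topic topic = (Topic) ctx.lookup("jms/myTopicSpaceTopic"); // publish/subscribe

            Connection conn = cf.createConnection();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Sent to a queue: delivered to exactly one connected consumer.
            session.createProducer(queue).send(session.createTextMessage("order #42"));

            // Published to a topic: delivered to every matching subscriber.
            session.createProducer(topic).send(session.createTextMessage("price update"));

            conn.close();
        }
    }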

What is a messaging engine?

A bus is a logical entity and as such provides no value on its own. The "runtime" of the bus is provided by a set of messaging engines which co-operate to provide this runtime. Messaging engines provide two important functions. The first is that clients connect to messaging engines, and the second is that messages are managed by the messaging engine.
What is a bus member?
While messaging engines provide the runtime, they are not directly configurable (with one key exception I will cover later). Instead, servers and/or clusters are added to the bus as bus members. When a server or cluster is added to the bus, a single messaging engine is created. A server bus member can host at most one messaging engine per bus; a cluster bus member can host several, and this is the only case in which you can create additional messaging engines.

Destinations are then "assigned" to a bus member, at which point the messaging engines running on that bus member get something called a message point, which is where the messages are stored.

Multiple servers and clusters can be added to a single bus. This is an important point; some discussions I have had recently suggest it is a source of confusion. Two different servers or clusters can be added to the same bus. A bus can be as large as the cell in which it is defined, and it can be larger than a single cluster.

How does High Availability work?
A certain level of availability is provided just by adding multiple application servers as bus members: a client can connect to any running messaging engine. The problem is that if a messaging engine is not running, the message points it manages are not available. This does not provide an ideal HA story.

If you want HA, you use a cluster as a bus member instead. When you add a cluster as a bus member you get one messaging engine which can run on any application server in that cluster. If the server currently hosting the messaging engine fails, the messaging engine will be started on another server in the cluster. This failover behaviour can be configured using a policy.

How does Scalability work?
Scalability also utilizes application server clusters. If you configure multiple messaging engines in a cluster, each messaging engine in the cluster will have a message point for the destinations the cluster manages. We call this a partitioned destination, because each messaging engine only knows about a subset of the messages on the destination.

The upshot of all this is that the workload is shared by multiple servers.

And finally


So there we have it. The infocenter covers a lot of this in more detail; I have linked the titles to the appropriate parts of the infocenter for learning more about each topic.

If you have any questions feel free to ask in the comments.
Alasdair

Thursday, October 10, 2013

How to get rid of nerve pinch in shoulder?

The wide grip requires that your shoulder have full range to abduct and externally rotate. Limitation from tight muscles or from weak scapular stabilizers can cause an impingement at the shoulder joint, so it may or may not be a pinched nerve.
To find out what you actually have and to get a good exercise program to correct it, you should see your doctor (orthopedist) and/or a physical therapist (physio). Any info here will only be general information which might help, but may not address your complete problem.
If your main problem is poor shoulder positioning the following may help:
  • Tight Muscles: Muscles that limit this wide grip motion are tight pecs (esp. pec minor) and subscapularis.
  • Exercises to Stretch: Pec stretches such as the doorway stretch may help. Try this stretch with the arm at various heights along the door frame to find where in the range your muscle feels tight.
  • Exercises to Strengthen and Stretch: The wall pec stretch stretches the pecs but also contracts the rhomboid and trapezius scapular muscles to help improve the positioning of the shoulder blade. The position of the shoulder blade is important because if it is out of position (as with shoulders that are rounded forward) it can lead to impingement and pain.
If your problem is a pinched nerve, then see your doctor/physical therapist for treatment and an exercise regime to correct the imbalances and to improve your nerve mobility.

Wednesday, October 9, 2013

EGit Push tag broken

The Push Tag... wizard is much too complicated at the moment, yes. Try entering refs/tags/ as the target ref name.

http://stackoverflow.com/questions/18396182/egit-push-tag-broken

Java - How to change context root of a dynamic web project in eclipse

I'm sure you've moved on by now, but I thought I'd answer anyway.
Some of these answers give work-arounds. What actually must happen is that you clean and republish your project to "activate" the new URI. This is done by right-clicking your server (in the Servers view) and choosing Clean. Then you start (or restart) it. Most of the other answers here suggest things that in effect accomplish this.
The file that's changing is workspace/.metadata/.plugins/org.eclipse.wst.server.core/publish/publish.dat unless, that is, you've got more than one server in your workspace, in which case it will be publishN.dat on that same path.
Hope this helps somebody.

Not sure if this is proper etiquette or not -- I am editing this answer to give exact steps for Eclipse Indigo.
(1) In your project's Properties, choose "Web Project Settings".
(2) Change "Context root" to "app".
[Screenshot: Eclipse project properties, Web Project Settings]
(3) Choose Window > Show View > Servers.
(4) Stop the server by either clicking the red square box ("Stop the server" tooltip) or context-click on the server listing to choose "Stop".
(5) On the server you want to use, context-click to choose "Clean…".
(6) Click OK in this confirmation dialog box.
[Screenshot: dialog asking to update the server configuration to match the changed context root]
Now you can run your app with the new "app" context root in the URL.
Doing this outside of Eclipse, on your production server, is even easier: rename the WAR file. Export your Vaadin app as a WAR file (File > Export > Web > WAR file). Move the WAR file to your web server's servlet container, such as Tomcat. Rename your WAR file, in this case to "app.war". When you start the servlet container, most (such as Tomcat) will auto-deploy the app, which includes expanding the WAR file to a folder; in this case, we should see a folder named "app". You should be good to go. Test your URL. For a domain such as "example.com" this would be: http://www.example.com/app/
Thanks so much to Russ Bateman for posting the correct answer to this frustrating problem.
Vaadin toolkit programmers may need to rebuild their widget set if using visual add-ons.
--Basil Bourque

Monday, October 7, 2013

How to make a JMS Synchronous request

First, open (or create) the response queue. Then pass that Destination to the message's setJMSReplyTo method so the service responding to your request knows where to send the reply. Typically the service will copy the message ID to the correlation ID field, so when you send the request, take the message ID you get back and use it to select the reply on the reply queue. Of course, if you use a dynamic reply-to queue, even that isn't necessary: just listen for the next message on the queue.
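Here is a rough sketch of that pattern in plain JMS. The JNDI names are hypothetical placeholders, and the product's own SimpleRequestor sample mentioned below is the authoritative version; this only illustrates the reply-to and correlation-ID steps described above.

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class SyncRequestSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical JNDI names; substitute your own configuration.
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/connectionFactory");
            Queue requestQueue = (Queue) ctx.lookup("jms/requestQueue");

            Connection conn = cf.createConnection();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            conn.start();

            // A temporary queue makes a convenient dynamic reply-to destination.
            TemporaryQueue replyQueue = session.createTemporaryQueue();

            TextMessage request = session.createTextMessage("give me a quote");
            request.setJMSReplyTo(replyQueue);           // tell the service where to reply
            session.createProducer(requestQueue).send(request);

            // The responding service is expected to copy our message ID into
            // JMSCorrelationID, so select on it to pick out our reply.
            String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
            MessageConsumer consumer = session.createConsumer(replyQueue, selector);
            Message reply = consumer.receive(30000);     // wait up to 30 seconds

            System.out.println("Reply: " + reply);
            conn.close();
        }
    }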
There's sample code that shows all of this. If you installed to the default location, the sample code lives at "C:\Program Files (x86)\IBM\WebSphere MQ\tools\jms\samples\simple\SimpleRequestor.java" on a Windows box or /var/mqm/tools/jms/samples/simple/SimpleRequestor.java on a *nix box.
And on the off chance you are wondering "install what, exactly?" the WMQ client install is downloadable for free as SupportPac MQC71.

Difference between managed bean and backing bean

What is Managed Bean?
JavaBean objects managed by a JSF implementation are called managed beans. A managed-bean declaration describes how a bean is created and managed; it has nothing to do with the bean's functionality.
What is Backing Bean?
Backing beans are JavaBeans components associated with the UI components used in a page. Backing-bean management separates the definition of UI component objects from the objects that perform application-specific processing and hold data. The backing bean defines the properties and handling logic associated with the UI components used on the page. Each backing-bean property is bound to either a component instance or its value. A backing bean also defines a set of methods that perform functions for the component, such as validating the component's data, handling events that the component fires, and performing processing associated with navigation when the component activates.
What are the differences between a Backing Bean and Managed Bean?
Backing Beans are merely a convention, a subtype of JSF Managed Beans which have a very particular purpose. There is nothing special in a Backing Bean that makes it different from any other managed bean apart from its usage.
MB : Managed Bean ; BB : Backing Bean
1) BB: A backing bean is any bean that is referenced by a form.
MB: A managed bean is a backing bean that has been registered with JSF (in faces-config.xml) and is automatically created (and optionally initialized) by JSF when it is needed.
The advantage of managed beans is that the JSF framework will automatically create these beans and optionally initialize them with parameters you specify in faces-config.xml.
2) BB: Backing Beans should be defined only in the request scope
MB: The managed beans that are created by JSF can be stored within the request, session, or application scopes.
Backing Beans should be defined in the request scope, exist in a one-to-one relationship with a particular page and hold all of the page specific event handling code. In a real-world scenario, several pages may need to share the same backing bean behind the scenes. A backing bean not only contains view data, but also behavior related to that data.
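As a purely illustrative sketch (the class, property, and outcome names are hypothetical), a request-scoped backing bean registered as a managed bean could look like this. The JSF 2 annotations shown are the equivalent of a <managed-bean> entry in faces-config.xml.

    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.RequestScoped;

    // Registered with JSF via annotations (the JSF 2 equivalent of a
    // <managed-bean> entry in faces-config.xml), so it is also a managed bean.
    @ManagedBean(name = "loginBean")
    @RequestScoped
    public class LoginBean {

        // Properties bound to UI components on the page, e.g. #{loginBean.username}
        private String username;
        private String password;

        public String getUsername() { return username; }
        public void setUsername(String username) { this.username = username; }

        public String getPassword() { return password; }
        public void setPassword(String password) { this.password = password; }

        // Action method invoked by a command component on the page.
        public String login() {
            // application-specific processing would go here
            return "welcome"; // navigation outcome
        }
    }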

Thursday, October 3, 2013

Understanding JNDI

Conceptually, JNDI is like System.getProperties() on steroids.
System.getProperties() allows you to pass String parameters to your code from the command line. Similarly, JNDI allows you to configure arbitrary objects outside of your code (for example, in application server config files) and then use them in your code.
In other words, it's an implementation of the Service Locator pattern: your code obtains services configured by the environment from a central registry.
As usual with Service Locators, your code needs an entry point to reach the locator. InitialContext is that entry point: you create an InitialContext and then obtain the required services from JNDI with lookup().
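A minimal sketch of that entry point (the JNDI name is a made-up example; the real names come from your server's configuration):

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class JndiLookupSketch {
        public static void main(String[] args) throws NamingException {
            // The InitialContext is the entry point into the JNDI registry.
            Context ctx = new InitialContext();

            // Look up an object the environment has configured for us.
            // "java:comp/env/jdbc/myDataSource" is a hypothetical example name.
            Object configured = ctx.lookup("java:comp/env/jdbc/myDataSource");

            System.out.println("Looked up: " + configured);
        }
    }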

What is JNDI ? What is its basic use..? When it is used?

What is JNDI ?
It stands for Java Naming and Directory Interface. A Google search would have told you that and more.
What is its basic use?
JNDI allows distributed applications to look up services in an abstract, resource-independent way.
When it is used?
The most common use case is to set up a database connection pool on a Java EE application server. Any application that's deployed on that server can gain access to the connections they need using the JNDI name "java:comp/env/FooBarPool" without having to know the details about the connection.
This has several advantages:
  1. If you have a deployment sequence where apps move from devl->int->test->prod environments, you can use the same JNDI name in each environment and hide the actual database being used. Applications don't have to change as they migrate between environments.
  2. You can minimize the number of folks who need to know the credentials for accessing a production database. Only the Java EE app server needs to know if you use JNDI.
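As an illustrative sketch (only the "java:comp/env/FooBarPool" name comes from the answer above; the rest is boilerplate), the application code stays the same in every environment because all it knows is the JNDI name:

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class FooBarPoolSketch {
        public static void main(String[] args) throws Exception {
            // The app only knows the JNDI name; the server knows the real
            // database URL, credentials, and pool settings.
            InitialContext ctx = new InitialContext();
            DataSource pool = (DataSource) ctx.lookup("java:comp/env/FooBarPool");

            try (Connection conn = pool.getConnection()) {
                // use the connection...
                System.out.println("Got a connection from the pool: " + conn);
            }
        }
    }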

How does database indexing work?

Why is it needed?
When data is stored on disk based storage devices, it is stored as blocks of data. These blocks are accessed in their entirety, making them the atomic disk access operation. Disk blocks are structured in much the same way as linked lists; both contain a section for data, a pointer to the location of the next node (or block), and both need not be stored contiguously.
Because a table’s records can only be sorted on one field, searching on a field that isn’t sorted requires a Linear Search, which requires N/2 block accesses (on average), where N is the number of blocks that the table spans. If that field is a non-key field (i.e. doesn’t contain unique entries) then the entire table space must be searched, at N block accesses.
With a sorted field, a Binary Search may be used instead, which requires log2 N block accesses. Also, since the data is sorted, even on a non-key field the rest of the table doesn’t need to be searched for duplicate values once a higher value is found. The performance increase is therefore substantial.
What is indexing?
Indexing is a way of sorting a number of records on multiple fields. Creating an index on a field in a table creates another data structure which holds the field value, and pointer to the record it relates to. This index structure is then sorted, allowing Binary Searches to be performed on it.
The downside to indexing is that these indexes require additional space on the disk; since with the MyISAM engine a table’s indexes are stored together in one file, this file can quickly reach the size limits of the underlying file system if many fields within the same table are indexed.
How does it work?
Firstly, let’s outline a sample database table schema;
Field name       Data type      Size on disk
id (Primary key) Unsigned INT   4 bytes
firstName        Char(50)       50 bytes
lastName         Char(50)       50 bytes
emailAddress     Char(100)      100 bytes
Note: char was used in place of varchar to allow for an accurate size on disk value. This sample database contains five million rows, and is unindexed. The performance of several queries will now be analyzed. These are a query using the id (a sorted key field) and one using the firstName (a non-key unsorted field).
Example 1
Our sample database has r = 5,000,000 records of a fixed size giving a record length of R = 204 bytes, and they are stored in a table using the MyISAM engine with its default block size of B = 1,024 bytes. The blocking factor of the table would be bfr = (B/R) = 1024/204 = 5 records per disk block. The total number of blocks required to hold the table is N = (r/bfr) = 5000000/5 = 1,000,000 blocks.
A linear search on the id field would require an average of N/2 = 500,000 block accesses to find a value, given that the id field is a key field. But since the id field is also sorted, a binary search can be conducted, requiring an average of log2 1,000,000 ≈ 19.93, i.e. 20 block accesses. Instantly we can see this is a drastic improvement.
The firstName field, on the other hand, is neither sorted (so a binary search is impossible) nor unique, so the table will require searching right to the end: exactly N = 1,000,000 block accesses. It is this situation that indexing aims to correct.
Given that an index record contains only the indexed field and a pointer to the original record, it stands to reason that it will be smaller than the multi-field record that it points to. So the index itself requires fewer disk blocks than the original table, and therefore requires fewer block accesses to iterate through. The schema for an index on the firstName field is outlined below;
Field name       Data type      Size on disk
firstName        Char(50)       50 bytes
(record pointer) Special        4 bytes
Note: Pointers in MySQL are 2, 3, 4 or 5 bytes in length depending on the size of the table.
Example 2
Given our sample database of r = 5,000,000 records with an index record length of R = 54 bytes and using the default block size B = 1,024 bytes, the blocking factor of the index would be bfr = (B/R) = 1024/54 = 18 records per disk block. The total number of blocks required to hold the index is N = (r/bfr) = 5000000/18 = 277,778 blocks.
Now a search using the firstName field can utilise the index to increase performance. This allows for a binary search of the index with an average of log2 277,778 ≈ 18.08, i.e. 19 block accesses. Finding the address of the actual record requires a further block access to read it, bringing the total to 19 + 1 = 20 block accesses, a far cry from the 1,000,000 block accesses required to find a firstName match in the non-indexed table.
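If you want to replay the arithmetic from both examples, here is a small sketch; the figures come straight from the schemas above, and nothing else is assumed.

    public class IndexBlockMath {
        public static void main(String[] args) {
            long records = 5_000_000;      // r
            int blockSize = 1_024;         // B, default block size in bytes

            // Example 1: the full table, record length R = 204 bytes.
            int tableRecordLength = 204;
            long tableBfr = blockSize / tableRecordLength;          // 5 records per block
            long tableBlocks = records / tableBfr;                   // 1,000,000 blocks
            long linearSearch = tableBlocks / 2;                     // 500,000 accesses on average
            long binarySearch = (long) Math.ceil(Math.log(tableBlocks) / Math.log(2)); // 20

            // Example 2: the firstName index, record length R = 54 bytes.
            int indexRecordLength = 54;
            long indexBfr = blockSize / indexRecordLength;           // 18 records per block
            long indexBlocks = (long) Math.ceil((double) records / indexBfr); // 277,778 blocks
            long indexedLookup = (long) Math.ceil(Math.log(indexBlocks) / Math.log(2)) + 1; // 19 + 1 = 20

            System.out.printf("Table: %,d blocks, linear %,d, binary %,d accesses%n",
                    tableBlocks, linearSearch, binarySearch);
            System.out.printf("Index: %,d blocks, indexed lookup %,d accesses%n",
                    indexBlocks, indexedLookup);
        }
    }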
When should it be used?
Given that creating an index requires additional disk space (277,778 blocks extra in the above example), and that too many indexes can cause issues arising from the file system's size limits, careful thought must be given to selecting the correct fields to index.
Since indexes are only used to speed up the searching for a matching field within the records, it stands to reason that indexing fields used only for output would be simply a waste of disk space and processing time when doing an insert or delete operation, and thus should be avoided. Also given the nature of a binary search, the cardinality or uniqueness of the data is important. Indexing on a field with a cardinality of 2 would split the data in half, whereas a cardinality of 1,000 would return approximately 1,000 records. With such a low cardinality the effectiveness is reduced to a linear sort, and the query optimizer will avoid using the index if the cardinality is less than 30% of the record number, effectively making the index a waste of space.