Wednesday, September 26, 2012

Getting selected records

There is no such method, and 2.9 is no longer supported .... so,

var recIndices = myDT.getSelectedRows();
var recs = [];
for (var i = 0; i < recIndices.length; i++) {
    recs.push( myDT.getRecord(recIndices[i]) );
}

// leaves you with recs, an Array of Records ....

// if you want a recordset of only the selected ones ... follow up with,

var newRS = new YAHOO.widget.RecordSet();
newRS.addRecords(recs); 


http://yuilibrary.com/forum/viewtopic.php?p=34368

Sunday, September 23, 2012

JavaScript === vs == : Does it matter which “equal” operator I use?

The identity (===) operator behaves identically to the equality (==) operator except no type conversion is done, and the types must be the same to be considered equal.
The == operator will compare for equality after doing any necessary type conversions. The === operator will not do the conversion, so if two values are not the same type, === will simply return false. It's this case where === will be faster, and may return a different result than ==. In all other cases performance will be the same.
To quote Douglas Crockford's excellent JavaScript: The Good Parts,
JavaScript has two sets of equality operators: === and !==, and their evil twins == and !=. The good ones work the way you would expect. If the two operands are of the same type and have the same value, then === produces true and !== produces false. The evil twins do the right thing when the operands are of the same type, but if they are of different types, they attempt to coerce the values. The rules by which they do that are complicated and unmemorable. These are some of the interesting cases:
'' == '0'           // false
0 == ''             // true
0 == '0'            // true
false == 'false'    // false
false == '0'        // true
false == undefined  // false
false == null       // false
null == undefined   // true
' \t\r\n ' == 0     // true
The lack of transitivity is alarming. My advice is to never use the evil twins. Instead, always use === and !==. All of the comparisons just shown produce false with the === operator.

Update:

A good point was brought up by @Casebash in the comments and in @Phillipe Laybaert's answer concerning reference types. For reference types == and === act consistently with one another (except in a special case).
var a = [1,2,3];
var b = [1,2,3];
var c = { x: 1, y: 2 };
var d = { x: 1, y: 2 };
var e = "text";
var f = "te" + "xt";

a == b            // false
a === b           // false

c == d            // false
c === d           // false

e == f            // true
e === f           // true
The special case is when you compare a string literal with a string object created with the String constructor.
"abc" == new String("abc")    // true
"abc" === new String("abc")   // false
Here the == operator is checking the values of the two objects and returning true, but the === is seeing that they're not the same type and returning false. Which one is correct? That really depends on what you're trying to compare. My advice is to bypass the question entirely and just don't use the String constructor to create string objects.

JavaScript By Example

Callback functions make functions in JavaScript far more flexible than they would otherwise be. By passing a function into another function as a parameter, we make the receiving function more flexible: part of its processing is now determined by the function we pass to it.
In this example we have a generic processArray function that runs our callback function for every single entry in the array. Just what that processing will be is not defined in processArray itself but is instead determined by the function passed in as the second argument. Unless we need to call the callback function from elsewhere in our code, outside the function we are passing it to, we can also simplify our code by passing it as an anonymous function. For the purposes of the example, we pass in an anonymous function that multiplies each of the entries in the array by two. Should we want to change what is done with each entry in the array, we would just change the body of the function we are using as the second parameter.

var myary = [4, 8, 2, 7, 5];

var processArray = function(ary, callback) {
    for (var i = ary.length - 1; i >= 0; i--) {
        ary[i] = callback(ary[i]);
    }
    return ary;
};

myary = processArray(myary, function(a) {return a * 2;});
http://javascript.about.com/od/byexample/a/usingfunctions-callbackfunction-example.htm

Thursday, September 20, 2012

Can I manually select a JRE to start up iReport?

Edit the file /etc/ireport.conf.
There is a line to set a specific JRE to use (jdkhome="/path/to/jdk").
Uncomment it (removing the #) and set your favorite JDK. I assume you can use a JRE instead of a JDK, too.
Giulio

http://jasperforge.org/plugins/espforum/view.php?group_id=83&forumid=101&topicid=15

Wednesday, September 19, 2012

Tip: OVER and PARTITION BY

Here’s a quick summary of OVER and PARTITION BY (new in SQL 2005), for the uninitiated or forgetful…

OVER

OVER allows you to get aggregate information without using a GROUP BY. In other words, you can retrieve detail rows, and get aggregate data alongside it. For example, this query:
SELECT SUM(Cost) OVER () AS Cost
, OrderNum
FROM Orders
Will return something like this:
Cost  OrderNum
10.00 345
10.00 346
10.00 347
10.00 348
Quick translation:
  • SUM(cost) – get me the sum of the COST column
  • OVER – for the set of rows….
  • () – …that encompasses the entire result set.
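The query above can be tried outside SQL Server as well; here is a minimal sketch using Python's sqlite3 module (SQLite 3.25+ supports window functions), with hypothetical order data chosen so the costs total 10.00 as in the result shown above.

```python
import sqlite3

# Hypothetical Orders data: four orders whose costs total 10.00.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderNum INTEGER, Cost REAL)")
conn.executemany("INSERT INTO Orders VALUES (?, ?)",
                 [(345, 2.5), (346, 2.5), (347, 2.5), (348, 2.5)])

# SUM(Cost) OVER () -- the whole-result-set total appears on every detail row.
rows = conn.execute(
    "SELECT SUM(Cost) OVER () AS Cost, OrderNum FROM Orders ORDER BY OrderNum"
).fetchall()
print(rows)  # [(10.0, 345), (10.0, 346), (10.0, 347), (10.0, 348)]
```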

OVER(PARTITION BY)

OVER, as used in our previous example, exposes the entire result set to the aggregation: "Cost" was the sum of all [Cost] values in the result set. We can break up that result set into partitions with the use of PARTITION BY:
SELECT SUM(Cost) OVER (PARTITION BY CustomerNo) AS Cost
, OrderNum
, CustomerNo

FROM Orders
My partition is by CustomerNo – each “window” of a single customer’s orders will be treated separately from each other “window”….I’ll get the sum of cost for Customer 1, and then the sum for Customer 2:
Cost  OrderNum   CustomerNo
 8.00 345        1
 8.00 346        1
 8.00 347        1
 2.00 348        2
The translation here is:
  • SUM(cost) – get me the sum of the COST column
  • OVER – for the set of rows….
  • (PARTITION BY CustomerNo) – …that have the same CustomerNo.
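A sketch of the partitioned version, again using Python's sqlite3 with hypothetical costs chosen so customer 1's orders total 8.00 and customer 2's total 2.00, matching the table above.

```python
import sqlite3

# Hypothetical per-order costs: customer 1 totals 8.00, customer 2 totals 2.00.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderNum INTEGER, CustomerNo INTEGER, Cost REAL)")
conn.executemany("INSERT INTO Orders VALUES (?, ?, ?)",
                 [(345, 1, 3.0), (346, 1, 3.0), (347, 1, 2.0), (348, 2, 2.0)])

# Each detail row carries the total for its own customer's "window".
rows = conn.execute(
    """SELECT SUM(Cost) OVER (PARTITION BY CustomerNo) AS Cost,
              OrderNum, CustomerNo
       FROM Orders ORDER BY OrderNum"""
).fetchall()
print(rows)  # [(8.0, 345, 1), (8.0, 346, 1), (8.0, 347, 1), (2.0, 348, 2)]
```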

Further Reading: BOL: OVER Clause
June 2012 edit: We highly, highly recommend Itzik Ben-Gan’s brand new book Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions for an outstanding and thorough explanation of windowing functions (including OVER / PARTITION BY).
Enjoy, and happy days!
-Jen
http://www.MidnightDBA.com/Jen

http://www.midnightdba.com/Jen/2010/10/tip-over-and-partition-by/

How to calculate percentage with a SQL statement

  1. The most efficient (using over()).
    select Grade, count(*) * 100.0 / sum(count(*)) over()
    from MyTable
    group by Grade;
  2. Universal (any SQL version).
    select Rate, count(*) * 100.0 / (select count(*) from MyTable)
    from MyTable
    group by Rate;
    
  3. With CTE, the least efficient.
    with t(Rate, RateCount) as ( 
        select Rate, count(*) 
        from MyTable
        group by Rate)
    select Rate, RateCount * 100.0/(select sum(RateCount) from t)
    from t;
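Method 1 can be sketched with Python's sqlite3 (SQLite 3.25+ for window function support); the Grade values below are made up. The window total SUM(COUNT(*)) OVER () is computed across all the groups, so no second scan of the table is needed.

```python
import sqlite3

# Hypothetical grade data: two A's, one B, one C.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Grade TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?)",
                 [("A",), ("A",), ("B",), ("C",)])

# Each group's count divided by the window total over all groups.
rows = conn.execute(
    """SELECT Grade, COUNT(*) * 100.0 / SUM(COUNT(*)) OVER () AS Pct
       FROM MyTable GROUP BY Grade ORDER BY Grade"""
).fetchall()
print(rows)  # [('A', 50.0), ('B', 25.0), ('C', 25.0)]
```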
    
http://stackoverflow.com/questions/770579/how-to-calculate-percentage-with-a-sql-statement

Software Project - Time Estimation

I've tried many formal techniques, but what I always come back to is: Break the project down into a list of modules that you will actually have to write or modify. Look at the number of different screens, the number of reports, and any particularly complex logic that will have to be developed. Then guess how long it will take to write each of these individual pieces and add them up.
Yes, at the low level you're still just making up a number, but if you have any experience you should be able to come up with a fairly good estimate for "write a screen to input customer name and address type information and write it to the database". That's something manageable. Trying to estimate "write a payroll system" with no breakdown is much more difficult.
This is essentially the idea of function point analysis. If using the formulas of function point analysis helps you, go ahead. I find it easier to just wing it when I get to the lowest level.
Don't estimate based on the assumption that the first draft of the code will always work perfectly the first time. You know that never happens except for the most trivial programs, but people always seem to estimate based on that assumption. Build in time for finding and fixing bugs. Take it for granted that there will be some particularly nasty bugs that take a long time to figure out.
Include time for testing. And -- here's where I used to fall down all the time -- include time for fixing the bugs that are found during testing, and then for another round of testing after the fixes. I used to do estimates where I said "2 weeks for testing" or whatever, estimating how much time it would take the test group to get through the system, and then left it at that. That was dumb. Of course the test group will find problems, and you will have to fix them.
(Side note: I used to have a chief tester who would set a goal for each new release of our product that he would find 100 bugs. He saw this as a personal challenge. Sometimes he had to stretch the definition of a bug to make it, like counting a mis-spelled label on a screen as a "bug", but he almost always made it. That's the best kind of tester you can have. I've often had to explain to testers that the goal of testing is not to prove that there are no bugs, but to find the bugs that are surely there.)
Depending on just what portion of the project you're supposed to be estimating and how your organization works, you may also need to plan for "clarifications" to the requirements and more changes when the users see the program in operation. Yes yes, we always say that the requirements must be nailed down before we start coding and once the user signs off no changes will be allowed. I have never worked for an organization where it actually happened this way. No matter what policy is written on a piece of paper, in real life the users always find that even if you implemented exactly what they asked for, when they see it in practice it doesn't really work out, and so there will be further changes and rework.

http://stackoverflow.com/questions/1668731/software-project-time-estimation

How to estimate a programming task if you have no experience in it

The best answer you can give is to ask for time to knock up a quick prototype to allow you to give a more accurate estimate. Without some experience with a tool or a problem, any estimate you give is essentially meaningless.
As an aside, there is very rarely a problem with giving too long an estimate. Unanticipated problems occur, priorities change, and requirements are "updated". Even if you don't use all the time you asked for, you will have more testing time, or can release "early".
I've always been far too optimistic in my estimates, and it can put a lot of stress into your life, especially when you are a young programmer without the experience and self-confidence to tell bosses uncomfortable truths.

http://stackoverflow.com/questions/425044/how-to-estimate-a-programming-task-if-you-have-no-experience-in-it?lq=1

Wednesday, September 12, 2012

What's the point of a candidate key?

A key is called a candidate key, because while it could be used as a PK, it is not necessarily the PK.
There can be more than one candidate key for a given table, e.g., EmployeeID and SSN.
Often, rather than using a candidate key as the PK, a surrogate key is created instead. This is because decisions around what candidate key to use can be found to be erroneous later, which can cause a huge headache (literally).
Another reason is that a surrogate key can be created using an efficient data type for indexing purposes, which the candidate keys may not have (e.g., a UserImage).
A third reason is that many ORMs work only with a single-column PK, so candidate keys composed of more than one column (composite keys) are ruled out in that case.
Something that many developers do not realize is that selecting a surrogate key over a natural key may be a compromise in terms of data integrity. You may be losing some constraints on your data by selecting a surrogate key, and often a trigger is required to simulate the constraint if a surrogate key is chosen.
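That last point can be sketched in SQLite (the employee/ssn names are made up for illustration): a surrogate PK on its own would happily accept duplicate SSNs, so the natural candidate key is retained as a UNIQUE constraint to keep the integrity it provided.

```python
import sqlite3

# Hypothetical table: surrogate PK plus the natural candidate key (SSN)
# kept as a UNIQUE constraint so its integrity guarantee isn't lost.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employee ("
    "  id   INTEGER PRIMARY KEY,"   # surrogate key
    "  ssn  TEXT NOT NULL UNIQUE,"  # candidate key, retained as a constraint
    "  name TEXT)"
)
conn.execute("INSERT INTO employee (ssn, name) VALUES ('123-45-6789', 'Ann')")
try:
    conn.execute("INSERT INTO employee (ssn, name) VALUES ('123-45-6789', 'Bob')")
    dup_allowed = True
except sqlite3.IntegrityError:
    dup_allowed = False
print(dup_allowed)  # False: the constraint still does the candidate key's job
```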

http://stackoverflow.com/questions/3961455/whats-the-point-of-a-candidate-key

Default passwords of Oracle 11g?

It is possible to connect to the database without specifying a password. Once you've done that you can then reset the passwords. I'm assuming that you've installed the database on your machine; if not you'll first need to connect to the machine the database is running on.
  1. Ensure your user account is a member of the dba group. How you do this depends on what OS you are running.
  2. Enter sqlplus / as sysdba in a Command Prompt/shell/Terminal window as appropriate. This should log you in to the database as SYS.
  3. Once you're logged in, you can then enter
    alter user SYS identified by "newpassword";
    
    to reset the SYS password, and similarly for SYSTEM.
See also here.
(Note: I haven't tried any of this on Oracle 11g; I'm assuming they haven't changed things since Oracle 10g.)

http://stackoverflow.com/questions/740119/default-passwords-of-oracle-11g

Tuesday, September 11, 2012

This test checks whether the length of the environment variable “PATH” does not exceed the recommended length

OK, the above check causes problems when installing an Oracle 11g database on a machine which already has a lot of other software installed; at Seer Computing, our consultants' machines have Oracle 9i and 10g as well as 11g installed.
So what is the solution to the above problem? The way we get round it is as follows (instructions for a Windows XP machine).
1. When the failure occurs during installation of Oracle 11g, click Cancel; you will need to restart the installation from scratch.
2. Go to Start -> Control Panel -> System -> Advanced -> Environment Variables (at the bottom of the dialog).
3. Find the Path variable in the System Variables window and edit it: select all the values within it, paste these into a document, and save it. You might wish to see what's needed and what isn't, but that's a different problem!
4. Clear the Path variable and simply add a single directory such as c:\Seer.
5. Start the installation of Oracle 11g again and sit there and wait while it chunders through the processing.
6. Return to the Environment Variables dialog and paste the original values back after the new ones put in by the installer.

What is Oracle Schema?

Until today I was under the illusion that USER and SCHEMA are the same. But Oracle guru Andrew Clarke describes the difference between USER and SCHEMA on his blog. Thanks, Clarke, for your post.

With this difference understood, we can define what an Oracle schema is:

A schema represents the set of objects owned by the user of the same name. For example, in an organization with 100 employees, each employee should have a separate space where they manage their objects. It is analogous to an employee being allocated a new cabin where he can keep and organize his belongings.

In the same way, in Oracle a user must be created for each database user. An organization can keep its own rules for naming users, but it is better to always use a consistent naming convention in such cases. When a database user logs in to his space (using connect), the objects he creates become his schema.

What can a schema contain?
Just like a cabin where the employee sits can contain a PC, a desk phone, a cabinet for filing papers, etc., a schema can contain various objects like tables, views, indexes, etc. If you want to create any object, it must be created inside one of the available schemas.

How do I access my schema?
Access to a schema is guarded by a password, which the DBA assigns at first. You can choose to change the password.

http://askanantha.blogspot.com/2009/07/what-is-oracle-schema.html

What is Normalization?

Normalization is a method of breaking down complex table structures into simpler ones by using certain rules. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective of normalization is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships. With the help of this method we can reduce redundancy in a table, remove the problems of inconsistency, and reduce the amount of space used.

The Normal Forms

Normalization results in the formation of tables that fulfill certain specified rules and represent certain normal forms. The database community has developed a series of guidelines for ensuring that databases are normalized. The normal forms are used to ensure inconsistencies are not introduced into the database. Several normal forms have been identified. The most important and widely used normal forms are:
·         First Normal Form (1 NF)
·         Second Normal Form (2 NF)
·         Third Normal Form (3 NF)
·         Fourth Normal Form (4NF)
·         Fifth Normal Form (5NF)
·         Boyce-Codd Normal Form(BCNF)

First Normal Form (1 NF)

·         Eliminate duplicative columns from the same table.
·         Create separate tables for each group of related data and identify each row with a unique column or set of columns (the primary key).

Second Normal Form (2 NF)

·         Meet all the requirements of the first normal form.
·         Remove subsets of data that apply to multiple rows of a table and place them in separate tables.
·         Create relationships between these new tables and their predecessors through the use of foreign keys

Third Normal Form (3 NF)

·         Meet all the requirements of the second normal form.
·         Remove columns that are not dependent upon the primary key.

Fourth Normal Form (4 NF)

·         Meet all the requirements of the third normal form.
·         A relation is in 4NF if it has no multi-valued dependencies.
Remember, these normalization guidelines are cumulative. For a database to be in 2NF, it must first fulfill all the criteria of a 1NF database.

Fifth Normal Form (5 NF)

·         One advantage of fifth normal form is that certain redundancies can be eliminated.
·         Fifth normal form does not differ from fourth normal form unless there exists a symmetric constraint.
·         Fifth normal form deals with cases where information can be reconstructed from smaller pieces of information that can be maintained with less redundancy. 2 NF, 3 NF, and 4 NF normal forms also serve this purpose, but 5 NF normal forms generalize to cases not covered by the others.

Boyce-Codd Normal Form (BCNF)

·         BCNF is based on the concept of a determinant.
·         A determinant is any attribute (simple or composite) on which some other attribute is fully functionally dependent.
·         A relation is in BCNF if, and only if, every determinant is a candidate key.


http://mindstick.com/Articles/88d5a271-b369-4c0a-ae80-01e66df98b6a/?What%20is%20Normalization?

Monday, September 10, 2012

Foreign key on delete cascade tips

Answer: The choice between on delete restrict and on delete cascade depends on the design of your application.  You have three choices for managing deletes on Oracle foreign key constraints:
alter table sample1
   add foreign key (col1)
   references sample (col2)
   on delete no action;

alter table sample1
   add foreign key (col1)
   references sample (col2)
   on delete restrict;

alter table sample1
   add foreign key (col1)
   references sample (col2)
   on delete cascade;
When you create a foreign key constraint, Oracle defaults to "on delete restrict" behavior, ensuring that a parent row cannot be deleted while a child row still exists.
However, you can also implement on delete cascade to delete all child rows when a parent row is deleted.
"On delete cascade" and "on delete restrict" are used when a strict one-to-many relationship exists, such that any "orphan" row would violate the integrity of the data.
Also, see these important notes on foreign key indexing, especially important if you delete or update parent rows.
Many systems use "on delete cascade" when they have ad-hoc updates so that the end-user does not have to navigate the child table and delete dozens or hundreds of child entries.  Of course, using "on delete cascade" is dangerous because of possible mistakes and because issuing a single delete on a parent row might invoke thousands of deletes from the child table.
Obviously, if you are using "on delete cascade" and you do not create an index on the child parent key the deletion of a parent row would require a full-table scan of the child table, to find and delete the child rows.
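The cascade behavior can be sketched with Python's sqlite3 rather than Oracle (note that SQLite leaves foreign key enforcement off by default, hence the PRAGMA); the parent/child table names are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK enforcement by default
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE child ("
    "  id  INTEGER PRIMARY KEY,"
    "  pid INTEGER REFERENCES parent(id) ON DELETE CASCADE)"
)
conn.execute("INSERT INTO parent VALUES (1)")
conn.executemany("INSERT INTO child VALUES (?, 1)", [(10,), (11,), (12,)])

# A single delete on the parent row silently removes all three child rows.
conn.execute("DELETE FROM parent WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM child").fetchone()[0]
print(remaining)  # 0
```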

http://www.dba-oracle.com/t_foreign_key_on_delete_cascade.htm

Friday, September 7, 2012

Where to find practical well-designed database schema examples to learn from?

This is a great source of example database schemas.
I can also recommend Beginning Database Design, published by Apress. I own this book and can confirm that it is of high quality. The book looks at a number of real world scenarios and explains the impact a certain design decision could have on the way the database works and the quality of the data and its output.
Finally I would advise building some small databases (E.G. contact management, Task list etc). Start by specifying some basic requirements and create some tables and queries. You WILL make some mistakes which is the only way of learning.

http://stackoverflow.com/questions/9993842/where-to-find-practical-well-designed-database-schema-examples-to-learn-from

Thursday, September 6, 2012

What's the difference between VARCHAR and NVARCHAR data types and when do I use them?

VARCHAR and NVARCHAR data types are both character data types that are variable-length.  Below is the summary of the differences between these 2 data types:
                      VARCHAR(n)                 NVARCHAR(n)
Character Data Type   Non-Unicode Data           Unicode Data
Maximum Length        8,000                      4,000
Character Size        1 byte                     2 bytes
Storage Size          Actual Length (in bytes)   2 times Actual Length (in bytes)
You would use the NVARCHAR data type for columns that store characters from more than one character set, or when you will be using characters that require 2 bytes each, which are basically the Unicode characters such as Japanese Kanji or Korean Hangul.
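The size difference can be illustrated with Python's string encodings, using latin-1 as a stand-in for a single-byte VARCHAR code page and UTF-16LE for NVARCHAR's 2-byte storage.

```python
# A single-byte code page (latin-1 here, standing in for VARCHAR storage)
# uses one byte per character; UTF-16/UCS-2 (NVARCHAR) uses two.
ascii_text = "hello"
kanji_text = "日本語"

print(len(ascii_text.encode("latin-1")))    # 5 bytes, like VARCHAR(5)
print(len(ascii_text.encode("utf-16-le")))  # 10 bytes, like NVARCHAR(5)
print(len(kanji_text.encode("utf-16-le")))  # 6 bytes: 2 per character

# Kanji simply cannot be represented in a single-byte code page:
try:
    kanji_text.encode("latin-1")
    representable = True
except UnicodeEncodeError:
    representable = False
print(representable)  # False
```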

http://www.sql-server-helper.com/faq/data-types-p01.aspx

What does “%Type” mean in Oracle sql?

Oracle (and PostgreSQL) have:
  • %TYPE
  • %ROWTYPE

%TYPE

%TYPE is used to declare variables with relation to the data type of a column in an existing table:
DECLARE v_id ORDERS.ORDER_ID%TYPE;
The benefit here is that if the data type changes, the variable data type stays in sync.
Reference: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/fundamentals.htm#i6080

%ROWTYPE

This is used in cursors to declare a single variable to contain a single record from the resultset of a cursor or table without needing to specify individual variables (and their data types). Ex:
DECLARE
  CURSOR c1 IS
     SELECT last_name, salary, hire_date, job_id 
       FROM employees 
      WHERE employee_id = 120;

  -- declare record variable that represents a row fetched from the employees table
  employee_rec c1%ROWTYPE; 
BEGIN
 -- open the explicit cursor and use it to fetch data into employee_rec
 OPEN c1;
 FETCH c1 INTO employee_rec;
 DBMS_OUTPUT.PUT_LINE('Employee name: ' || employee_rec.last_name);
END;
http://stackoverflow.com/questions/3790658/what-does-type-mean-in-oracle-sql 

Wednesday, September 5, 2012

What is a data binding?

Binding generally refers to a mapping of one thing to another - i.e. a datasource to a presentation object. It can typically refer to binding data from a database, or similar source (XML file, web service etc) to a presentation control or element - think list or table in HTML, combo box or data grid in desktop software.
You generally have to bind the presentation element to the datasource, not the other way around. This would involve some kind of mapping - i.e. which fields from the datasource do you want to appear in the output.
For more information in a couple of environments see:
http://stackoverflow.com/questions/25878/what-is-a-data-binding

What do Clustered and Non clustered index actually mean?

With a clustered index the rows are stored physically on the disk in the same order as the index. There can therefore be only one clustered index.
With a non clustered index there is a second list that has pointers to the physical rows. You can have many non clustered indexes, although each new index will increase the time it takes to write new records.
It is generally faster to read from a clustered index if you want to get back all the columns. You do not have to go first to the index and then to the table.
Writing to a table with a clustered index can be slower, if there is a need to rearrange the data.

http://stackoverflow.com/questions/1251636/what-do-clustered-and-non-clustered-index-actually-mean

Tuesday, September 4, 2012

What is the difference between composition and aggregation?

Found here
Both aggregation and composition are special kinds of associations. Aggregation is used to represent ownership or a whole/part relationship, and composition is used to represent an even stronger form of ownership. With composition, we get coincident lifetime of part with the whole. The composite object has sole responsibility for the disposition of its parts in terms of creation and destruction.
Moreover, the multiplicity of the aggregate end may not exceed one; i.e., it is unshared. An object may be part of only one composite at a time. If the composite is destroyed, it must either destroy all its parts or else give responsibility for them to some other object. A composite object can be designed with the knowledge that no other object will destroy its parts.
Composition can be used to model by-value aggregation, which is semantically equivalent to an attribute. In fact, composition was originally called aggregation-by-value in an earlier UML draft, with “normal” aggregation being thought of as aggregation-by-reference. The definitions have changed slightly, but the general ideas still apply. The distinction between aggregation and composition is more of a design concept and is not usually relevant during analysis.
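A rough Python sketch of the distinction (class names made up): with composition the part is created inside the whole and shares its lifetime, while with aggregation the whole merely references parts that exist independently.

```python
# Composition: the Engine is created by the Car and lives and dies with it.
class Engine:
    def __init__(self, hp):
        self.hp = hp

class Car:
    def __init__(self):
        # part created with the whole; no outside object holds it
        self.engine = Engine(120)

# Aggregation: the Department only references Employees that exist
# independently of it and may be shared or outlive it.
class Department:
    def __init__(self, employees):
        self.employees = employees

staff = ["ann", "bob"]
dept = Department(staff)
del dept          # the whole is destroyed...
print(staff)      # ...but the parts live on: ['ann', 'bob']
```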

http://stackoverflow.com/questions/813048/what-is-the-difference-between-composition-and-aggregation

JasperReports: default value instead of 'null'

Supposing the field name is "value", in the "Text Field Expression", write:
($F{value} != null) ? $F{value} : "0.00"

http://stackoverflow.com/questions/2402237/jasperreports-default-value-instead-of-null