Surviving Changes to Columns

One of the most common causes of brittle code is a failure to program defensively against subsequent changes to the columns of the underlying data tables. These changes can take the form of adding columns, or changing the definition of existing columns, for example, their data type or size.

Of course, some changes are so serious that our code cannot survive them. For example, if a column that is required in a query is removed, then that is a breaking change that we can do nothing to protect against. However, in many other cases, we can develop code that is resilient to changes to the underlying columns. In this section, we'll examine a few examples, explaining how to make our code more robust in each case.

Qualifying column names

It takes a few extra keystrokes to qualify column names when writing our queries, but these keystrokes pay healthy dividends in terms of the resilience of the resulting code. Consider the example tables created in Listing 1, Shipments and ShipmentItems, populated with sample data.
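A minimal sketch of such a schema follows; the Barcode and ShipmentBarcode columns are the ones discussed in the text, while the remaining columns, data types, and sample rows are assumptions for illustration.

CREATE TABLE dbo.Shipments
    (
      Barcode VARCHAR(30) NOT NULL PRIMARY KEY,
      FromCity VARCHAR(30) NOT NULL,
      ToCity VARCHAR(30) NOT NULL
    ) ;
GO
INSERT INTO dbo.Shipments ( Barcode, FromCity, ToCity )
VALUES ( '123456', 'Chicago', 'Baltimore' ),
       ( '123654', 'Seattle', 'San Francisco' ) ;
GO
CREATE TABLE dbo.ShipmentItems
    (
      ShipmentBarcode VARCHAR(30) NOT NULL,
      Description VARCHAR(100) NOT NULL
    ) ;
GO
INSERT INTO dbo.ShipmentItems ( ShipmentBarcode, Description )
VALUES ( '123456', 'Some cool widget' ),
       ( '123456', 'Some cool stuff' ),
       ( '123654', 'An expensive thingy' ) ;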

Listing 1. The Shipments and ShipmentItems tables.

Against this schema, we develop the query shown in Listing 2, which, for every shipment, selects its Barcode and calculates the number of shipment items with a ShipmentBarcode matching the Barcode for that shipment.
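A sketch of such a query, with the column names deliberately left unqualified:

SELECT  Barcode,
        ( SELECT COUNT(*)
          FROM   dbo.ShipmentItems
          WHERE  ShipmentBarcode = Barcode
        ) AS NumItems
FROM    dbo.Shipments ;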

Listing 2. A correlated subquery that works correctly even though column names are not qualified.

The inner query is an example of a correlated subquery: its WHERE clause uses the Barcode column from the Shipments table in the outer query. Notice that the query works even though we failed to qualify the column names.

Yet the situation can change. Rather than just having a barcode to identify shipments, we start using barcodes to identify individual items in the shipment, so we need to add a Barcode column to the ShipmentItems table, as shown in Listing 3.
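Assuming the schema sketched above, the change might look like this:

ALTER TABLE dbo.ShipmentItems
    ADD Barcode VARCHAR(30) NULL ;
GO
-- rerunning the query from Listing 2: the inner Barcode now resolves
-- to ShipmentItems.Barcode, which is NULL in every row, so every
-- comparison is unknown and every count comes back as zero
SELECT  Barcode,
        ( SELECT COUNT(*)
          FROM   dbo.ShipmentItems
          WHERE  ShipmentBarcode = Barcode
        ) AS NumItems
FROM    dbo.Shipments ;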

Listing 3. The query works differently when a Barcode column is added to the ShipmentItems table.

We do not get any error messages; our query continues to work but silently changes its behavior. With the addition of the Barcode column to the ShipmentItems table, our query is interpreted quite differently. Now, for every shipment, it selects its barcode followed by the number of ShipmentItems whose Barcode value matches their ShipmentBarcode value. In other words, the correlated subquery becomes uncorrelated; the WHERE clause of the inner query no longer uses a value from the outer query.

It takes just a few moments to properly qualify all the column names in our query, and the improved query will continue to work correctly even after the addition of the Barcode column to our ShipmentItems table, as shown in Listing 4.
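Assuming the same schema, the qualified version of the query looks like this (the s and i aliases are illustrative):

SELECT  s.Barcode,
        ( SELECT COUNT(*)
          FROM   dbo.ShipmentItems AS i
          WHERE  i.ShipmentBarcode = s.Barcode
        ) AS NumItems
FROM    dbo.Shipments AS s ;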

Listing 4. Qualified column names lead to more robust code.

As I hope this example proves, qualifying column names improves the robustness of our queries. The same technique also ensures that you get an error, instead of incorrect results, when a column is removed or when a column name is misspelled. For example, consider the case of an uncorrelated subquery that becomes correlated because a column from a table in the subquery is removed (or misspelled in the query), but happens to match a column in the outer query. Many developers forget that the parser will look in the outer query if it fails to find a match in the inner query.

Handling changes in nullability: NOT IN versus NOT EXISTS

Queries with NOT IN have a well-known vulnerability: they do not work as an inexperienced database programmer might expect if the subquery contained in the NOT IN clause returns at least one NULL. This is easy to demonstrate. In Listing 5, we recreate our ShipmentItems table with a Barcode column that does not accept NULLs, and then insert some fresh data. We then execute a query that uses the NOT IN clause.
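A sketch of such a script follows; the specific barcodes and shipment numbers are invented sample values.

DROP TABLE dbo.ShipmentItems ;
GO
CREATE TABLE dbo.ShipmentItems
    (
      ShipmentBarcode VARCHAR(30) NOT NULL,
      Barcode VARCHAR(30) NOT NULL,
      Description VARCHAR(100) NOT NULL
    ) ;
GO
INSERT INTO dbo.ShipmentItems ( ShipmentBarcode, Barcode, Description )
VALUES ( '123456', '1010001', 'Some cool widget' ),
       ( '123456', '1010002', 'Some cool stuff' ) ;
GO
-- retrieve all items in shipment 123456 whose barcodes do not
-- occur among the items of shipment 123654; both rows are returned
SELECT  Barcode
FROM    dbo.ShipmentItems
WHERE   ShipmentBarcode = '123456'
        AND Barcode NOT IN ( SELECT Barcode
                             FROM   dbo.ShipmentItems
                             WHERE  ShipmentBarcode = '123654' ) ;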

Listing 5. Creating the new ShipmentItems table, populating it with some test data, and proving that the query using the NOT IN clause succeeds.

The query works as expected, and will continue to do so as long as the Barcode column disallows NULLs. However, let's see what happens when we change the nullability of that column, as shown in Listing 6.
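Assuming the sketch above, the change and its effect might look like this:

ALTER TABLE dbo.ShipmentItems
    ALTER COLUMN Barcode VARCHAR(30) NULL ;
GO
INSERT INTO dbo.ShipmentItems ( ShipmentBarcode, Barcode, Description )
VALUES ( '123654', NULL, 'An item with no barcode yet' ) ;
GO
-- the subquery now returns a NULL, so the NOT IN query
-- returns an empty result set
SELECT  Barcode
FROM    dbo.ShipmentItems
WHERE   ShipmentBarcode = '123456'
        AND Barcode NOT IN ( SELECT Barcode
                             FROM   dbo.ShipmentItems
                             WHERE  ShipmentBarcode = '123654' ) ;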

Listing 6. Now that the Barcode column accepts NULL, our NOT IN query no longer works as expected.

This can often seem like a very subtle bug; sometimes the query works as expected, but sometimes it does not. In fact, the behavior is completely consistent: every time the subquery inside the NOT IN clause returns at least one NULL, the query returns nothing. Listing 7 shows a much simpler script that demonstrates this behavior very clearly.
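A sketch of such a script, using constants rather than tables:

-- without a NULL in the list, NOT IN behaves as expected:
-- this returns one row
SELECT 'Returned' AS Result WHERE 1 NOT IN ( 2, 3 ) ;

-- with a NULL present, the condition can never evaluate to true:
-- this returns an empty result set
SELECT 'Returned' AS Result WHERE 1 NOT IN ( 2, 3, NULL ) ;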

Listing 7. NOT IN queries will work differently when there are NULLs in the subquery.

This behavior may seem counterintuitive, but it actually makes perfect sense. Let me explain why, in just two simple steps. Listing 8 shows two queries. The first one uses an IN clause; the second is logically equivalent to the first, but the IN clause has been expressed using OR predicates.
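A sketch of the two queries, using constants for brevity:

-- a query using an IN clause
SELECT 'Returned' AS Result WHERE 1 IN ( 1, 2 ) ;

-- the same condition expressed using OR predicates
SELECT 'Returned' AS Result WHERE ( 1 = 1 ) OR ( 1 = 2 ) ;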

Listing 8. A query with an IN clause, and a logically equivalent query using OR.

In the second step, we must consider the NOT IN version of our query, convert it to use OR predicates, and then apply De Morgan's law, which states that for the logical expressions P and Q:

NOT(P OR Q) = (NOT P) AND (NOT Q)

The result is shown in Listing 9.
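Sketched with the same constants as before:

-- the NOT IN version
SELECT 'Returned' AS Result WHERE 1 NOT IN ( 2, NULL ) ;

-- rewritten with NOT applied to OR predicates
SELECT 'Returned' AS Result WHERE NOT ( ( 1 = 2 ) OR ( 1 = NULL ) ) ;

-- after applying De Morgan's law: two AND-ed predicates
SELECT 'Returned' AS Result WHERE ( 1 <> 2 ) AND ( 1 <> NULL ) ;

-- all three queries return an empty result set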

Listing 9. Three equivalent queries: the first using NOT IN, the second using two OR predicates, and the third using two AND predicates.

Take note of the (1<>NULL) condition in the final query; by definition, the result of this condition is always unknown and, when combined with other conditions using AND, the whole expression can never evaluate to true. This is why no row can ever qualify a NOT IN condition if the subquery inside that NOT IN returns at least one NULL.

Whenever we write a query using the NOT IN clause, we implicitly assume that the subquery can never return a NULL. Usually, we do not need to rely on such assumptions, because it is very easy to develop correct queries without making them. For example, Listing 10 shows how to remove this assumption from our original SELECT query.
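Assuming the sketch above, the extra line is an IS NOT NULL filter in the subquery:

SELECT  Barcode
FROM    dbo.ShipmentItems
WHERE   ShipmentBarcode = '123456'
        AND Barcode NOT IN ( SELECT Barcode
                             FROM   dbo.ShipmentItems
                             WHERE  ShipmentBarcode = '123654'
                                    AND Barcode IS NOT NULL ) ;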

Listing 10. A query with a subquery that never returns any NULLs.

By adding just one short and simple line to our query, we have improved its robustness. Alternatively, we can develop a query that does not assume anything about the nullability of any columns, as shown in Listing 11.
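A sketch of the NOT EXISTS version, assuming the same tables:

SELECT  i.Barcode
FROM    dbo.ShipmentItems AS i
WHERE   i.ShipmentBarcode = '123456'
        AND NOT EXISTS ( SELECT *
                         FROM   dbo.ShipmentItems AS i1
                         WHERE  i1.ShipmentBarcode = '123654'
                                AND i1.Barcode = i.Barcode ) ;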

Listing 11. An equivalent query with NOT EXISTS.

This query will work in the same way, regardless of whether or not the Barcode column is nullable.

Handling changes to data types and sizes

We frequently develop stored procedures with parameters, the types and lengths of which must match the types and lengths of some underlying columns. By way of an example, Listing 12 creates a Codes table and populates it with test data. It then creates a SelectCode stored procedure with one parameter, @Code, the type and length of which must match the type and length of the Code column in the Codes table.
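A sketch of such a script follows; for simplicity it omits keys and constraints, and the sample codes are invented values.

CREATE TABLE dbo.Codes
    (
      Code VARCHAR(5) NOT NULL,
      Description VARCHAR(40) NOT NULL
    ) ;
GO
INSERT INTO dbo.Codes ( Code, Description )
VALUES ( '12345', 'Description for 12345' ),
       ( '34567', 'Description for 34567' ) ;
GO
CREATE PROCEDURE dbo.SelectCode
    @Code VARCHAR(5)
AS
    SELECT  Code,
            Description
    FROM    dbo.Codes
    WHERE   Code = @Code ;
GO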

Listing 12. The Codes table and SelectCode stored procedure.

When we execute the stored procedure, supplying an appropriate value for the @Code parameter, the result is as expected, and is shown in Listing 13.
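For example, with the sample data sketched above:

EXEC dbo.SelectCode @Code = '12345' ;
-- returns the single row for code 12345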

Listing 13. The SelectCode stored procedure works as expected.

Suppose, however, that we have to change the length of the Code column to accept longer values, as shown in Listing 14.
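Assuming the sketch above, the change might look like this:

ALTER TABLE dbo.Codes
    ALTER COLUMN Code VARCHAR(10) NOT NULL ;
GO
-- add a row whose code uses the full new length
INSERT INTO dbo.Codes ( Code, Description )
VALUES ( '1234567890', 'Description for 1234567890' ) ;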

Listing 14. Increasing the length of the Code column and adding a row with maximum Code length.

The unmodified stored procedure, however, still expects a VARCHAR(5) parameter, and it silently truncates any longer value, as shown in Listing 15.
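With the sample data sketched above:

EXEC dbo.SelectCode @Code = '1234567890' ;
-- @Code is still declared as VARCHAR(5), so the value is silently
-- truncated to '12345' and the row for code 12345 is returned instead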

Listing 15. The unchanged stored procedure retrieves the wrong row.

Such bugs are quite subtle, and may take considerable time to troubleshoot. How can we prevent such errors, other than by manually referring to a data dictionary every time we change column types? The best way, in my opinion, is to document the requirement that the types and lengths of our stored procedure parameters must match the types and lengths of the underlying columns, and then incorporate this requirement into a boundary case unit test to protect us from such errors.
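One possible sketch of such a check, assuming the schema above, compares the declared length of the @Code parameter with the length of the Code column via the catalog views sys.parameters and sys.columns; a unit test would fail whenever the two disagree.

SELECT  p.max_length AS ParameterLength,
        c.max_length AS ColumnLength
FROM    sys.parameters AS p
        CROSS JOIN sys.columns AS c
WHERE   p.object_id = OBJECT_ID(N'dbo.SelectCode')
        AND p.name = N'@Code'
        AND c.object_id = OBJECT_ID(N'dbo.Codes')
        AND c.name = N'Code' ;
-- a boundary case test would fail when ParameterLength <> ColumnLength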
