
Data Integrity Testing


Periodically examine the stored data in your system to detect and report any errors that have crept in, and fix them automatically where possible.







Pradyumn Sharma

July 18, 2017


The Story of My First Financial Accounting System

Back in the mid-1980s, I implemented my first Financial Accounting System for a client. It was not a very large application; I built it single-handedly, in about three months, using COBOL as the programming language and ISAM (Indexed Sequential Access Method) files for storing data.

A few weeks after going live, the client complained that the Trial Balance was not tallying.

In Financial Accounting, the sum of debits and the sum of credits have to be the same for every transaction. Logically, therefore, the Trial Balance, which prints the net debit and credit balances for each account head on a given date, should always have the same total for the debit and credit columns.


IDEs, debuggers, and data-browsing tools were almost non-existent in those days. It took me a few hours of work to find out that for some transactions, data was not correctly stored in the files; the debits and credits did not match for those transactions. It took some more time to find the cause of the bug and fix it. And even more work to correct the errors in the data in the files.

Unfortunately, the problem surfaced again a couple of months later. And again. And again. Sometimes it did not even seem like a programming error on my part. For some inexplicable reason, occasionally, a record would just not get written to a file. Maybe I did not handle an error in the code properly. Or maybe an insert just "fell off" without an error being reported by the ISAM file system. I don't remember clearly now. But I remember the frustration of seeing the beast rear its ugly head time and again.



Periodically Checking for Data Integrity

Finally, I decided to write a batch program that would read all the transaction records from the ISAM files and examine their contents for data integrity. More specifically, if for any transaction, the sum of debits and the sum of credits did not match, it would print an error report. I requested the client to run that batch program once every day and call me up if it reported any errors.
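
In today's terms, the core of that batch check is a single aggregation: group the posted lines by transaction and flag any group whose debits and credits disagree. Here is a minimal sketch in Python over SQLite; the journal_entries table, its columns, and the accounts.db file are assumptions for illustration (the original, of course, was COBOL over ISAM files).

```python
import sqlite3

def find_unbalanced_transactions(conn: sqlite3.Connection) -> list:
    """Report every transaction whose debits and credits do not match."""
    # Each row in journal_entries (hypothetical) is one debit ('D') or
    # credit ('C') line belonging to a transaction identified by txn_id.
    # Amounts are integers in the smallest currency unit, avoiding float drift.
    return conn.execute(
        """
        SELECT txn_id,
               SUM(CASE WHEN side = 'D' THEN amount ELSE 0 END) AS debits,
               SUM(CASE WHEN side = 'C' THEN amount ELSE 0 END) AS credits
        FROM journal_entries
        GROUP BY txn_id
        HAVING debits <> credits
        """
    ).fetchall()

# Run the check and print an error report, as the daily batch program did.
if __name__ == "__main__":
    with sqlite3.connect("accounts.db") as conn:  # hypothetical database file
        for txn_id, debits, credits in find_unbalanced_transactions(conn):
            print(f"Transaction {txn_id}: debits={debits}, credits={credits}")
```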

Based on these reports, I gradually found and fixed the bugs in my code. As the reports stopped showing even the occasional mismatches, and the Trial Balance (and other reports) continued to produce correct output, the client started running the batch program less and less frequently.



Making a Habit of Data Integrity Testing

When I worked on my next project (an Inventory Management System), I used dBase III Plus, a popular, entry-level relational database system back in the day. Programming was much easier with dBase III Plus than with COBOL, and database operations were much easier to handle than ISAM files. However, dBase III Plus had no referential integrity validation, no other constraints, no triggers, no stored procedures, no atomic transactions, and no error handling.

In the application that I was building, there were derived values in tables. A POHeader table would, for example, store the sum of values from the PODetails table. And things would go wrong sometimes, due to programming errors: a record would be added or modified in the PODetails table, but the sum would not be correctly updated in the POHeader table.

I remembered the lessons from the previous project and decided to implement a Data Integrity Testing program in this system too. In fact, this time my program not only checked and printed a list of such integrity problems in the data, but also sought the user's permission to correct those problems automatically wherever possible.
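
A check-and-repair step of that kind might look something like the sketch below, again in Python over SQLite. The POHeader and PODetails names come from the example above; the column names and the consent prompt are assumptions.

```python
import sqlite3

def check_po_totals(conn: sqlite3.Connection, allow_fix: bool = False) -> None:
    """List POHeader rows whose stored total disagrees with the sum of
    their PODetails lines; optionally repair them after asking the user."""
    mismatches = conn.execute(
        """
        SELECT h.po_id, h.total, COALESCE(SUM(d.value), 0) AS detail_sum
        FROM POHeader h
        LEFT JOIN PODetails d ON d.po_id = h.po_id
        GROUP BY h.po_id, h.total
        HAVING h.total <> COALESCE(SUM(d.value), 0)
        """
    ).fetchall()
    for po_id, total, detail_sum in mismatches:
        print(f"PO {po_id}: header total {total}, details sum to {detail_sum}")
    # Only touch the data if the caller allows it AND the user consents.
    if mismatches and allow_fix and input("Fix automatically? [y/N] ").lower() == "y":
        conn.executemany(
            "UPDATE POHeader SET total = ? WHERE po_id = ?",
            [(detail_sum, po_id) for po_id, _, detail_sum in mismatches],
        )
        conn.commit()
```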

I saw immense benefits from this approach, and it soon became a habit. In almost every application that I built, I implemented a Data Integrity Testing program as soon as possible. Looking back, I now realize this was a kind of test automation of a very rudimentary type.

Much later, when I built my first application using a robust relational database like Oracle, I kept up the habit. After all, not all validations and calculations of derived values can (or should) be handled by table constraints or triggers.



Lessons Learned and Recommendations

Data integrity checks remind me of similar routines in our lives: regular housekeeping, getting your vehicle a periodic service or check-up, or, in IT, the familiar computer virus scan. Even when your house is in order, your vehicle is running fine, and incoming files and connections are being scanned properly, we still perform these maintenance tasks to ensure that things keep running smoothly. Similarly, a discrepancy in data, if not caught in time, may cause numerous problems down the line, so it is best to look at the data regularly and address any inconsistencies that may have crept in.

Errors do creep into stored data: mostly due to programming errors, and sometimes (though rarely) due to bugs in the database platforms. Many of these errors remain unnoticed for a long time, and then sometimes they hit hard.

Guard yourself against such risks by implementing a Data Integrity Testing story at the earliest opportunity. Consider the following possibilities (a sketch of some of these checks follows the list):

  • Data within a record that may be internally inconsistent. For example, value = quantity * rate. If you store the value field in a record as well, is it really equal to "quantity * rate"?
  • Inconsistencies between parent and child (or header and details) tables. Let's say you have PurchaseOrders and PurchaseOrderLineItems tables in your application, with a 1-to-many relationship between them. Further, suppose that you store the sum of values of all the PurchaseOrderLineItems records for a PurchaseOrder in the corresponding PurchaseOrders record. Check whether the derived value in the PurchaseOrders record is equal to the sum of values from the PurchaseOrderLineItems records.
  • Inconsistencies across peer records in a table or across tables. For example, if you store the fact that "A paid 5 bitcoins to B" in one record, and that "B received 5 bitcoins from A" in another record, are these two records mutually consistent?
  • Inconsistencies in any other derived values. For example, if, in a financial accounting system, you have multiple tables for storing different types of transactions, and some table where you maintain the current balance for each account head, is the value of the current balance field correctly stored?
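
As an illustration, the intra-record and peer-record checks above might be expressed as queries like these. The table and column names (order_lines, transfers) are hypothetical, and SQLite stands in for whatever database you actually use.

```python
import sqlite3

def run_consistency_checks(conn: sqlite3.Connection) -> None:
    # 1. Intra-record check: the stored value must equal quantity * rate.
    bad_lines = conn.execute(
        "SELECT line_id FROM order_lines WHERE value <> quantity * rate"
    ).fetchall()

    # 2. Peer-record check: every 'paid' record needs a matching 'received'
    #    record with the parties swapped and the same amount.
    unmatched = conn.execute(
        """
        SELECT p.txn_id
        FROM transfers p
        WHERE p.direction = 'paid'
          AND NOT EXISTS (
              SELECT 1 FROM transfers r
              WHERE r.direction = 'received'
                AND r.txn_id = p.txn_id
                AND r.party = p.counterparty
                AND r.counterparty = p.party
                AND r.amount = p.amount)
        """
    ).fetchall()

    print(f"{len(bad_lines)} inconsistent order lines, "
          f"{len(unmatched)} transfers without a matching counterpart")
```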

Also consider:

  • An appropriate frequency for running the data integrity testing batch job. Run it too frequently and the performance of the system may be adversely affected; run it too infrequently and errors remain hidden for a long time. One idea is to begin with a reasonably high frequency, such as once a day. If this has no noticeable performance impact, you can continue to run it daily; otherwise, you can gradually reduce the frequency as confidence grows (because no errors have been found for a long period of time).
  • User experience concerns, or search and reporting needs, sometimes make it necessary to duplicate the same data across two databases or information stores. In this scenario, too, it is worthwhile to maintain a batch (or cron) job that validates data integrity. This is particularly important in microservices scenarios and where the 'API is the product'.
  • If it is possible to fix the errors automatically, ask the user to specify whether such errors should be fixed automatically or only after manual confirmation. Oftentimes you may find it valuable to inform the user what the fix will be and to provide a way to 'reverse' the fix.
  • Generating an automatic alert (email / dashboard) whenever any such errors are found. Of course, data security and privacy concerns must be dealt with appropriately: the alert may describe the integrity issue, but you may want to strip sensitive information from the email itself (see the sketch after this list).
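
A scheduled runner with a privacy-conscious alert might look something like the sketch below; the email addresses, the local SMTP relay, and the shape of error_counts are all assumptions.

```python
import smtplib
from email.message import EmailMessage

# Schedule a nightly run with cron, e.g.:
#   0 2 * * *  python3 run_integrity_checks.py

def alert_on_errors(error_counts: dict) -> None:
    """Email a summary alert. Only check names and counts are sent, so no
    sensitive row-level data leaves the system."""
    if not any(error_counts.values()):
        return  # nothing to report
    msg = EmailMessage()
    msg["Subject"] = "Data integrity check: issues found"
    msg["From"] = "integrity-checker@example.com"  # assumed addresses
    msg["To"] = "ops-team@example.com"
    msg.set_content("\n".join(
        f"{check}: {count} problem record(s)"
        for check, count in error_counts.items() if count
    ))
    with smtplib.SMTP("localhost") as smtp:  # assumed local mail relay
        smtp.send_message(msg)
```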


Closing Thoughts

As products and applications mature, as business needs and technology evolve, and as your project grows over time, bad data may prove to be the Achilles' heel of the operation, hampering scale, reporting, or even day-to-day operations. So, in the spirit of test-driven development (Test Early, Test Often), it is well worth testing data integrity against known business rules on a regular basis.