#1 skill for #agile and #traditional project team members – “Enough-ness Sense” #pmot #baot

I’ve had the good fortune of being on small, medium, and large traditional and agile teams. As I’ve worked on Agile teams recently, I’ve come to realize that both types of teams require one non-technical skill above all others (in my humble opinion). Interestingly enough, this skill is required on both traditional and agile projects, but in contrasting ways.

The one skill

What is the one skill I refer to? Enough-ness Sense. 🙂

What is Enough-ness Sense? It is the skill to know whether the work has been done to the appropriate level for the circumstance.

Funny enough, (har-har) this Enough-ness Sense plays contrasting roles on traditional projects versus agile projects. On traditional projects, the Enough-ness Sense is used to sense when we have done enough planning and analysis and should move on to developing and other doing activities. Without Enough-ness Sense, traditional projects sometimes get stuck in analysis-paralysis and an inability to move until all the intricate details are known. Professionals with Enough-ness Sense know the difference between an important factor that needs to be defined and unimportant factors that can be determined at a later time. This can be seen in how detailed the analysis and design documents are. Chances are if your documents are longer than 50 pages you don’t have enough Enough-ness Sense.

On Agile projects, the effect of Enough-ness Sense is the reverse. Here the Enough-ness Sense is used to ensure we don’t start work until we have enough of an overall solution and design. I have seen some Agile projects jump into development too early without a thought-out end-to-end design. These projects then usually suffer from large amounts of rework and scope increases later, as key elements were not considered and planned for. If you start coding without a User Story Map or some general design work, you probably don’t have enough Enough-ness Sense.

In both scenarios, this Enough-ness Sense is crucial. And it is not specific to one role. The Enough-ness Sense is used to determine whether the plan, requirements, code, and tests are at an appropriate level. It is a critical skill for a person to be able to look at the risk inherent in the situation and determine the appropriate level of thought required.

The negative side of Enough-ness Sense

So what happens if you don’t have enough Enough-ness Sense on your project team?

On a traditional project, this will result in waste. There will be too much planning, too much analysis, and too much design. Funny enough, it usually doesn’t result in too much coding as that is the last activity being done. But in some cases of gold-plating it can result in too much code.

On an agile project, this will result in rework. There will not have been enough planning, analysis, and design to envision a cohesive solution. As such, changes occur on the project not due to changes in client requirements but because the project team did not think something through. It is a fine line between embracing change and using it as an excuse not to think things through. Agile is about embracing client change, not a lack of thoroughness on the team.


Enough-ness Sense is critical for success on both traditional and agile projects. Unfortunately, I don’t know how much Enough-ness Sense is enough. (har-har)


How to create 10,000 Extract, Transform, and Load automated tests using 4 tables #agile #data

The thing I love about my chosen profession is the ability to learn new things and improve on lessons learned from past projects. Recently I was able to take on a problem that I have experienced on multiple past projects.

“How can we easily create automated tests for a Data Migration or Extract, Transform, and Load application?”

Recently I have been lucky to be on Agile projects where we were able to create a large number of automated tests. I was able to see the huge increase in quality that came with these automated tests. All of these projects were Web Applications developed in either C# or Java.

In my coding days, I was always either a Data Modeler or Database Programmer. I had been on three projects where I was the lead developer on the extract, transform, and load (ETL) application that was responsible for loading data into new enterprise databases. Sadly, we had absolutely no automated tests on any of these projects. As we developed our ETL application, we had to manually test the loads to ensure they were operating as intended. This became especially painful near the end of the project, when a small, insignificant change resulted in hours and hours of manual retesting. We soon discovered that we were more likely to make a mistake testing the changes than making the actual coding change. This situation was simply not sustainable.

After working with Agile teams and seeing how they were able to easily create a large number of automated tests, I hoped I would get the chance to try to create them on the next Extract, Transform, and Load project I was on.

The Opportunity

My most recent project allowed me to again be part of a team that recreated the corporate Data Warehouse and also needed to create an entirely new ETL application to load the Data Warehouse. The database technology that was selected was the Microsoft stack. We used SQL Server 2012 as our database engine and SSIS as our technology to create the ETL application. The Data Warehouse had more than 200 main tables that the ETL process needed to load.

One thing I wanted to ensure we did was to create a large number of automated tests for our ETL application. We investigated multiple existing frameworks, but none of them seemed to allow us to easily create the number of automated tests we wanted. All of them seemed to still require a large amount of test set-up, and the tests themselves did not adapt easily to changes in the database schema. This had always been a problem in the past as I tried to create automated tests for ETL applications.


In my time as a Data Modeler/DBA I became very good at writing queries that read the Data Dictionary tables of the database itself to generate SQL statements to then be executed. Then came the serendipitous moment: could we also read the Data Dictionary tables to easily generate automated tests for the ETL application?

The answer was yes, with a small number of custom tables that contained the column-to-column data mapping information, since this information was not stored in the Data Dictionary. The Data Dictionary objects we accessed in SQL Server were the views that are part of the INFORMATION_SCHEMA.
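To illustrate the core idea, here is a minimal sketch of a query that reads the INFORMATION_SCHEMA to generate other SQL statements, in this case a row-count statement for every base table. The schema and table names come from the Data Dictionary itself; everything else is illustrative rather than the project's actual code.

```sql
-- Illustrative sketch: generate a COUNT(*) statement for every base table
-- by reading the INFORMATION_SCHEMA. Each row of the result is itself an
-- executable SQL statement.
SELECT 'SELECT ''' + t.TABLE_SCHEMA + '.' + t.TABLE_NAME + ''' AS TableName, '
     + 'COUNT(*) AS RowCnt FROM ' + t.TABLE_SCHEMA + '.' + t.TABLE_NAME + ';'
       AS GeneratedStatement
FROM INFORMATION_SCHEMA.TABLES AS t
WHERE t.TABLE_TYPE = 'BASE TABLE';
```

The same pattern, joined against the custom mapping tables, is what lets the test generation adapt automatically when the schema changes.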

The Solution

Our solution contained the following elements:

  • tSQLt open source framework as the automated testing framework
  • Data Mapping Spreadsheet that defined the Column to Column mapping
  • 4 custom tables that contain information from the Data Mapping Spreadsheet
  • Stored Procedures that read the INFORMATION_SCHEMA and 4 custom tables to automatically generate the tests

4 Custom Tables
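The original table listing did not survive in this copy of the post. Based on the description above, a hypothetical sketch of what four such mapping tables could look like follows; the table and column names are my assumptions, not the project's actual schema.

```sql
-- Hypothetical sketch of the four custom tables loaded from the
-- Data Mapping Spreadsheet. All names are illustrative assumptions.
CREATE TABLE TableMapping (            -- target table -> source table(s)
    TargetTable   sysname NOT NULL,
    SourceTable   sysname NOT NULL
);

CREATE TABLE ColumnMapping (           -- direct column-to-column assignments
    TargetTable   sysname NOT NULL,
    TargetColumn  sysname NOT NULL,
    SourceTable   sysname NOT NULL,
    SourceColumn  sysname NOT NULL
);

CREATE TABLE ConstantMapping (         -- columns assigned a constant or NULL
    TargetTable   sysname NOT NULL,
    TargetColumn  sysname NOT NULL,
    ConstantValue nvarchar(255) NULL   -- NULL here means a Null mapping
);

CREATE TABLE TransformMapping (        -- columns derived via a transformation
    TargetTable         sysname NOT NULL,
    TargetColumn        sysname NOT NULL,
    TransformExpression nvarchar(max) NOT NULL
);
```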


ETL Automated Tests

With these four custom tables loaded from the Data Mapping Spreadsheet we created the following Stored Procedures to generate tests:

Table to Table – Every Table

  • TstTableCount: Compares record counts between source data and target data. This is done on a table-to-table basis, starting from the target table and comparing to the associated source table or tables.
  • TstTableColumnDistinct: Compares counts of distinct values in columns. This is a valuable technique that surfaces a variety of possible data errors without doing a full validation on all fields.
  • TstTableColumnNull: Generates a report of all columns whose contents are entirely null. This typically highlights situations where the column was not assigned in the ETL process.
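As a concrete illustration, a generated TstTableCount-style test for one table pair might look like the following, using the tSQLt convention of test procedures inside a test class. The class name and the staging.Customer/dw.DimCustomer pair are made up for the sketch.

```sql
-- Illustrative example of the kind of test TstTableCount could generate
-- for one (hypothetical) source/target table pair.
CREATE PROCEDURE ETLTests.[test DimCustomer row count matches Customer]
AS
BEGIN
    DECLARE @SourceCount int, @TargetCount int;
    SELECT @SourceCount = COUNT(*) FROM staging.Customer;
    SELECT @TargetCount = COUNT(*) FROM dw.DimCustomer;
    -- tSQLt fails the test if expected and actual differ
    EXEC tSQLt.AssertEquals @Expected = @SourceCount, @Actual = @TargetCount;
END;
```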

Column – Every Column

  • TstColumnDataMapping: Compares columns directly assigned from a source column on a field-by-field basis for 5-10 rows in the target table. More rows can be selected depending on the risk and complexity of the data transformation.
  • TstColumnConstantMapping: Compares columns assigned a constant on a field-by-field basis for 5-10 rows in the target table. More rows can be selected depending on the risk and complexity of the data transformation.
  • TstColumnNullMapping: Compares columns assigned a Null value on a field-by-field basis for 5-10 rows in the target table. More rows can be selected depending on the risk and complexity of the data transformation.
  • TstColumnTransformedMapping: Compares transformed columns on a field-by-field basis for 5-10 rows in the target table. More rows can be selected depending on the risk and complexity of the data transformation.
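A generated TstColumnDataMapping-style check for one directly mapped column might reduce to a query like this; table, column, and key names are again made up for the sketch.

```sql
-- Illustrative check for one directly mapped column: any rows returned
-- indicate a discrepancy between the source and target values.
SELECT TOP (10)
       s.CustomerId,
       s.CustomerName AS SourceValue,
       t.CustomerName AS TargetValue
FROM dw.DimCustomer AS t
JOIN staging.Customer AS s
  ON s.CustomerId = t.CustomerId
WHERE t.CustomerName <> s.CustomerName
   OR (t.CustomerName IS NULL AND s.CustomerName IS NOT NULL)
   OR (t.CustomerName IS NOT NULL AND s.CustomerName IS NULL);
```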

The Results

By creating these 4 custom tables and stored procedures, we are now able to generate tens of thousands of tests nightly. More importantly, these tests are flexible to schema changes, as they are generated by reading the INFORMATION_SCHEMA and the 4 custom tables. A large part of generating our tests is now data driven.