11 May 2018

One Database, Two Vendors, No Love Story or Maybe…


Introduction

Situation: An organization has several BI tools provisioned with data from the same data warehouse (DW), the BI infrastructure being supported by the same service provider (vendor). The organization wants to adopt a new BI technology, though this requires bringing another vendor into the picture. The data the new tool requires are already available in the DW, though the DW needs to be extended with logic and other components to support the tool. This means that two vendors will be active in the same DW, and more generally in the same environment.

Question(s): What is the best approach for making this work? What are the challenges of making it work, considering that two vendors are involved?


Preliminary

    When you ask IT people about this situation, many will tell you that it’s not a good idea, being circumspect about having two vendors within the same environment. Some will recall previous experiences in which things went really bad or escalated to some degree. They will even show their disagreement through body language or a raised voice. Even if they also had good experiences with two vendors supporting the same environment, the negative experiences will prevail. It’s the typical reaction to an idea when something that caused considerable trouble is recalled. This behavior is understandable, as humans generally tend to remember the issues they had rather than the successes. Problems leave deeper marks than success, especially when challenges are seen as burdens.

    Reacting defensively is a result of the “I’ve been burned once” syndrome. People react adversely and tend to avoid situations in which they were burned, instead of dealing with them, instead of recognizing the circumstances that led to the situation in the first place, and of recognizing opportunities for healing and rising above the challenges.


   Personally, at first glance, caution would make me advise as well against having two or more vendors playing within the same playground. I had my share of extreme cases in which something went wrong and the vendors started acting like kids. Parents (in general, people who work with children) know what I’m talking about: children don’t like to share their toys, and parents often find themselves in the position of mediating between them. When the toy gets broken it’s easy to blame the other kid for it, same as somebody else must put the toy back in its place, because that somebody played with it last. It’s a mix between “I’m in charge” and the blame game. Who needs that?

  At second glance, if parents made it work, why wouldn’t professionals succeed in making two vendors work together? Sure, parents have more practice in dealing with kids, face such situations on a daily basis, and there are fewer variables to think about… I have seen vendors sitting together until they came up with a solution, I’ve seen vendors open to communicate, putting the customer in first place, even if that meant leaving the ego behind. Where there’s a will there’s a way.


The Solution Space

    In IT there are seldom general recipes that always lead to success, and whether a solution works or not depends on a series of factors – environment, skills, communication, human behavior and quite often chance, the chance of doing the right thing at the right time. However, a recipe can be used as a starting point, eventually to define the best-case scenario, what will happen when everything goes well. At the opposite side there is the worst-case scenario, what will happen when everything goes south. These two opposite scenarios are in general the frame within which a solution can be defined.

    Within this frame one can add several other reference points or paths, made of the experience of people handling and experiencing similar situations – what worked, what didn’t, what could work, what the challenges are, and so on. In general, people’s experience and knowledge prove to be a good estimator in decision making, and even if relative, they provide some insight into the problem at hand.


    Let’s reconsider the parents put in the situation of dealing with children fighting over the same toy, though from the perspective of all the toys available to play with. There are several options available: the kids could take (supervised) turns playing with the toys, which could be a win-win situation if they are willing to cooperate. One can take the toys (temporarily) away, though this could lead to other issues. One can reaffirm who’s the owner of each toy, each kid being allowed to play only with his own toy. One could buy a second toy, and thus break the bank, even if this will not make the issue entirely go away. There are probably other solutions inventive parents might find.

    Similarly, in the above-stated problem, one option, and maybe the best, is having the vendors share ownership of the DW by finding a way to work together. Defining the ownership for each tool can alleviate some of the problems but not all, same as building a second DW. We can probably all agree that taking the tools away is not the right thing to do, and even if it’s a solution, it doesn’t serve the purpose.


Sharing Ownership

    Complex IT environments like that of a DW depend on the vendors’ capability of working together toward the same goal, even if different interests are in play. This presumes the parties’ willingness to relinquish some control and share responsibilities. Unfortunately, not all vendors are willing to do that. That’s the point where imaginary obstacles are built, and where effort needs to be put into eliminating such obstacles.

    When working together, often one of the parties must play the coordinator role. In theory, this role can be played by any of the vendors, and the roles can even change from case to case. Alternatively, the coordinator role can be taken by a person or a board from the customer’s side. In the case of a data warehouse it can be an IT professional, a Project Manager or a BI Competency Center (BICC). This allows coordinating the activities smoothly, as well as mediating the communication and the other types of challenges faced.


    How will ownership sharing work? Let’s suppose vendor A wants to change something in the infrastructure. The change is first formulated, shortly reviewed, and approved by both vendors and the customer, and will then be implemented and documented by vendor A as needed. Vendor B is involved in the process by validating the concept and reviewing the documentation, its involvement being kept to a minimum. There can still be some delays in the process, though the overhead is somewhat minimized. There will also be scenarios in which vendor B only needs to be informed that a change has occurred, or it is enough that the change was properly documented.

    This approach also involves a greater need for documentation, versioning and established processes, their role being to facilitate the communication and track the changes occurring in the environment.


Splitting Ownership

    Splitting the ownership involves setting clear boundaries and responsibilities within which each vendor can perform the work. One is thus forced to draw a line and say which components or activities belong to each vendor.

    The architecture of existing solutions sometimes makes it hard to split the ownership, when the architecture was not designed for it. A solution would be to redesign the whole architecture, though even then it might not be possible to draw a clear line where there are grey areas. One eventually needs to consider the advantages and disadvantages and decide which vendor the responsibility suits best.


    For example, in the context of a DW, security can be enforced via schemas within the same or different databases, though there are also objects (e.g. tables with basis data) used by multiple applications. One of the vendors (vendor A) will get the ownership of the objects, thus when vendor B needs a change to those tables, it must request the change from vendor A. Once the changes are done, vendor B needs to validate them, and if there are problems further communication occurs. Overall this approach will take more time than if vendor B had done the changes alone. However, it works, even if it comes with some challenges.

    There’s also the possibility of giving vendor B temporary permissions to do the changes, which will shorten the effort needed. Vendor A will still be in charge, will have to provide the documentation, and eventually do some validation as well.
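    As a minimal T-SQL sketch of how such a split could look, assuming hypothetical VendorA/VendorB roles and schemas (the names and the exact permissions are illustrative only, not a prescription):

-- hypothetical roles and schemas, one per vendor
CREATE ROLE VendorA_Role;
CREATE ROLE VendorB_Role;
GO
CREATE SCHEMA VendorA AUTHORIZATION VendorA_Role;
GO
CREATE SCHEMA VendorB AUTHORIZATION VendorB_Role;
GO
-- shared base objects stay under dbo (owned by vendor A); vendor B gets read access
GRANT SELECT ON SCHEMA::dbo TO VendorB_Role;
GO
-- temporary permission for vendor B to implement an agreed change, revoked afterwards
GRANT ALTER ON SCHEMA::dbo TO VendorB_Role;
-- ... the change is implemented, validated and documented ...
REVOKE ALTER ON SCHEMA::dbo FROM VendorB_Role;

    The schemas draw the line between the vendors’ areas, while the grants on the shared dbo objects reflect who owns them and who only consumes them.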


Separating Ownership

    Giving each vendor its own playground is a costly solution, though it can be the only solution in certain scenarios – for example, when an architecture is supposed to replace (in time) another, or when the existing architecture has certain limitations. In the context of a DW this involves duplicating the data loads, the data themselves, as well as the logic, eventually the processes, and so on.

    Pushing this just to solve a communication problem is the wrong thing to do. What happens if a third or a fourth vendor joins the party? Would a new environment be created for each vendor? Hopefully not…

    On the other side, there are also vendors that don’t want to relinquish ownership, and will play their cards not to do it. The overhead of dealing with such issues may, in extreme cases, surpass the costs of having a second environment. In the end the final decision belongs to the customer.


Hybrid Approach


    A hybrid between sharing and splitting ownership can prove to give the best of the two scenarios. It’s useful and even recommended to define the boundaries of work for each vendor, and then share ownership in the areas where there’s an intersection of concerns, the grey areas. For sensitive areas there could be some restrictions on cooperation.

    A hybrid solution can also involve splitting some parts of the architecture, though performance and security are mainly the driving factors.


Conclusion

   With this post I wanted to make the reader question some of the hot-headed decisions made when two or more vendors are involved in supporting an architecture. Even if the problem is put within the context of a DW, its occurrence extends far beyond this context. We are enablers and problem solvers. Instead of avoiding challenges we should rather make sure that we’re removing or minimizing the risks.

08 May 2018

Data Migration – Facts, Principles and Practices


Introduction

    Ask a person who is familiar with cars ‘how a car works‘ - you’ll get an answer even if it doesn’t entirely reflect the reality. Of course, the deeper one's knowledge about cars, the more elaborate or exact the answer. One doesn't have to be a mechanic to give an acceptable explanation, though in order to design, repair or build a car one needs extensive knowledge about a car’s inner workings.

    Keeping the proportions, the same statements are valid for the inner workings of Data Migrations (DM) – almost everybody in IT knows what a DM is, though to design or perform one you might need an expert.

    The good news about DMs is that their inner workings are less complex than those of cars. Basically, a DM requires some understanding of data architecture, data modelling and data manipulation, and some knowledge of business data and processes. A data architect, a database developer, a data modeler or any other data specialist can approach such an endeavor. In theory, with some guidance, also a person with knowledge about business data and processes can do the work. Even if DMs imply a certain complexity, they are not rocket science! In fact, there are tools that can be used to do most of the work, and there are some general principles and best practices about the architecture, planning and execution that can help in the process.

Principles and Best Practices

    It might be useful to explain the difference between principles and best practices, because they’ll more likely lead you to success if you understand and incorporate them in your solutions. Principles, as patterns of advice, are general or fundamental ideas, truths or values stated in a context-independent manner. Practices on the other side are specific actions or applications of these principles stated in a context-dependent way. The difference between them is relatively thin, and therefore they are easy to confuse, though by looking at their generality one can easily identify which is which.

    For example, in the ‘60s the “keep it simple, stupid” (aka KISS) principle became known, which states that a simple solution works better than a complex one, and therefore one should seek simplicity in design as a key goal. Even if kind of pejorative, it’s a much simpler restatement of Occam’s razor – do something in the simplest manner possible, because simpler is usually better. To apply it one must understand what simplicity means, and how it can be translated into designs. According to Hans Hofmann, “the ability to simplify means to eliminate the unnecessary so that the necessary may speak”, or in a quote attributed to Einstein: “everything should be made as simple as possible, but not simpler”. This is the range within which the best practices derived from KISS can be defined.

   There are multiple practices that allow reducing the complexity of DM solutions: start with a Proof-of-Concept (PoC), start small and build incrementally, use off-the-shelf software, use the best tool for the purpose, use incremental data loads, split big data files into smaller ones, and so on. As can be seen, all of them are direct actions that address specific aspects of the DM architecture or process.
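   To make one of these practices more tangible, here is a minimal T-SQL sketch of an incremental data load, assuming hypothetical staging and target tables (dbo.StagingOrders, dbo.Orders) and a small log table used to remember the last load – only new records are brought over, instead of reloading everything each time:

-- hypothetical tables; only records newer than the last load are inserted
DECLARE @LastLoad datetime = (SELECT MAX(LoadDate) FROM dbo.LoadLog);

INSERT INTO dbo.Orders (OrderId, CustomerId, Amount, ModifiedDate)
SELECT STG.OrderId, STG.CustomerId, STG.Amount, STG.ModifiedDate
FROM dbo.StagingOrders STG
     LEFT JOIN dbo.Orders TRG
       ON STG.OrderId = TRG.OrderId
WHERE TRG.OrderId IS NULL            -- not yet loaded
  AND STG.ModifiedDate > @LastLoad;  -- changed since the last load

-- remember the current load, so the next run picks up from here
INSERT INTO dbo.LoadLog (LoadDate) VALUES (GETDATE());

-- updated records would additionally require an UPDATE or a MERGE step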


Data Migration Truths

    When looking at principles and best practices they seem to be further rooted in some basic truths or facts common to most DMs. When considered together, they offer a broader view and understanding of what a DM is about.  Here are some of the most important facts about DMs:

DM as a project:

  • A DM is a subproject with specific characteristics
  • A DM is typically a one-time activity before Go live
  • A DM’s success is entirely dependent on an organization’s capability of running projects
  • Responsibilities are not always clear
  • Requirements change as the project progresses
  • Resources aren't available when needed
  • Parallel migrations require a common strategy
  • A successful DM can be used as recipe for further migrations
  • A DM's success is a matter of perception
  • The volume of work increases toward the end


DM Architecture:

  • A DM is more complex and messier than initially thought
  • A DM needs to be repeatable
  • A DM requires experts from various areas
  • There are several architectures to be considered
  • The migration approach is dependent on the future architecture
  • Management Systems have their own requirements
  • No matter how detailed the planning something is always forgotten
  • Knowledge of the source and target systems isn't always available
  • DMs are too big to be performed manually
  • Some tasks are easier to perform manually
  • Steps in the migration need to be rerun
  • It takes several iterations before arriving at the final solution
  • Several data regulations apply
  • Fall-back is always an alternative
  • IT supports the migration project/processes
  • Technologies are enablers and not guarantees for success
  • Tools address only a set of the needed functionality
  • Troubleshooting needs to be performed before, during and after migrations
  • Failure/nonconformities need to be documented
  • A DM is an opportunity to improve the quality of the data
  • A DM needs to be transparent for the business


DM implications for the Business:

  • A DM requires a downtime for the system involved
  • The business has several expectations/assumptions
  • Some expectations are considered as self-evident
  • The initial assumptions are almost always wrong
  • A DM's success/failure depends on business' perception
  • Business' knowledge about the data and processes is relative
  • The business is involved for the whole project’s duration
  • Business needs continuous communication
  • Data migration is mainly a business rather than a technical challenge
  • Business’ expertise in every data area is needed
  • DM and Data Quality (DQ) need to be part of a Data Management strategy
  • Old legacy system data have further value for the business
  • Reporting requirements come with their own data requirements


DM and Data Quality:

  • Not all required data are available
  • Data don't match the expectations
  • Quality of the data needs to be judged based on the target system
  • DQ is usually performed as a separate project with different timelines
  • Data don't have the same importance for the business
  • Improving DQ is a collective effort
  • Data cleaning needs to be done at the source (when possible)
  • Data cleaning is a business activity
  • The business is responsible for the data
  • Quality improvement is governed by the 80-20 rule
  • No organization is willing to pay for perfect data quality
  • If it can’t be counted, it isn’t visible


More to come, stay tuned…

24 February 2018

Misusing Views and Pseudo-Constants

   Views as virtual tables can be misused to replace tables in certain circumstances, by storing values within one or multiple rows, like in the examples below:

-- parameters for a BI solution
CREATE VIEW dbo.vLoV_Parameters
AS
SELECT Cast('ABC' as nvarchar(20)) AS DataAreaId
, Cast(GetDate() as Date) AS CurrentDate
, Cast(100 as int) AS BatchCount

GO

SELECT *
FROM dbo.vLoV_Parameters

GO

-- values for a dropdown
CREATE VIEW dbo.vLoV_DataAreas
AS
SELECT Cast('ABC' as nvarchar(20)) AS DataAreaId
, Cast('Company ABC' as nvarchar(50)) AS Description
UNION ALL
SELECT 'XYZ' DataAreaId
, 'Company XYZ'

GO

SELECT *
FROM dbo.vLoV_DataAreas

GO

    These solutions aren’t elegant, and typically not recommended because they go against one of the principles of good database design, namely “data belong in tables”, though they do the trick when needed. Personally, I used them only in a handful of cases, e.g. when it wasn’t allowed to create tables, when I needed to test something for a short period of time, or when there was some overhead in creating a table for 2-3 values. Because of their scarce use, I hadn’t given them too much thought, not until I discovered Jared Ko’s blog posting on pseudo-constants. He considers the values from the first view as pseudo-constants, and advocates for their use especially for easier dependency tracking, easier code refactoring, avoiding implicit data conversion and easier maintenance of values.


   All these are good reasons to consider them, therefore I tried to take the idea further to see if it survives a reality check. For this I took Dynamics AX as testing environment, as it makes extensive use of enumerations (aka base enums) to store lists of values needed all over the application. Behind each table there are one or more enumerations, the tables storing master data abounding in them. For exemplification let’s consider InventTrans, the table that stores the inventory transactions; the receipt and issue transactions are governed by three enumerations: StatusIssue, StatusReceipt and Direction.


-- Status Issue Enumeration
CREATE VIEW dbo.vLoV_StatusIssue
AS
SELECT cast(0 as int) AS None
, cast(1 as int) AS Sold
, cast(2 as int) AS Deducted
, cast(3 as int) AS Picked
, cast(4 as int) AS ReservPhysical
, cast(5 as int) AS ReservOrdered
, cast(6 as int) AS OnOrder
, cast(7 as int) AS QuotationIssue

GO

-- Status Receipt Enumeration
CREATE VIEW dbo.vLoV_StatusReceipt
AS
SELECT cast(0 as int) AS None
, cast(1 as int) AS Purchased
, cast(2 as int) AS Received
, cast(3 as int) AS Registered
, cast(4 as int) AS Arrived
, cast(5 as int) AS Ordered
, cast(6 as int) AS QuotationReceipt

GO

-- Inventory Direction Enumeration
CREATE VIEW dbo.vLoV_InventDirection
AS
SELECT cast(0 as int) AS None
, cast(1 as int) AS Receipt
, cast(2 as int) AS Issue


   To see these views at work let’s construct the InventTrans table on the fly:

-- creating an ad-hoc table 
SELECT *
INTO  dbo.InventTrans
FROM (VALUES (1, 1, 0, 2, -1, 'A0001')
, (2, 1, 0, 2, -10, 'A0002')
, (3, 2, 0, 2, -6, 'A0001')
, (4, 2, 0, 2, -3, 'A0002')
, (5, 3, 0, 2, -2, 'A0001')
, (6, 1, 0, 1, 1, 'A0001')
, (7, 0, 1, 1, 50, 'A0001')
, (8, 0, 2, 1, 100, 'A0002')
, (9, 0, 3, 1, 30, 'A0003')
, (10, 0, 3, 1, 20, 'A0004')
, (11, 0, 1, 2, 10, 'A0001')
) A(TransId, StatusIssue, StatusReceipt, Direction, Qty, ItemId)


    Here are two sets of examples using literals vs. pseudo-constants:


--example issued with literals
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
WHERE ITR.StatusIssue = 1
   AND ITR.Direction = 2

GO
--example issued with pseudo-constants
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
      JOIN dbo.vLoV_StatusIssue SI
        ON ITR.StatusIssue = SI.Sold
      JOIN dbo.vLoV_InventDirection ID
        ON ITR.Direction = ID.Issue

GO

--example receipt with literals
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
WHERE ITR.StatusReceipt= 1
   AND ITR.Direction = 1

GO

--example receipt with pseudo-constants
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
      JOIN dbo.vLoV_StatusReceipt SR
        ON ITR.StatusReceipt= SR.Purchased
      JOIN dbo.vLoV_InventDirection ID
        ON ITR.Direction = ID.Receipt


    As can be seen, the queries using pseudo-constants make the code somewhat more readable, though the gain is only relative, each enumeration implying an additional join. In addition, when further business tables are added to the logic (e.g. items, purchase or sales orders) it complicates the logic, making it more difficult to separate the essential from the nonessential. Imagine a translation of the following query:

-- complex query
  SELECT top 100 ITR.*
  FROM dbo.InventTrans ITR
              <several tables here>
  WHERE ((ITR.StatusReceipt<=3 AND ITR.Direction = 1)
    OR (ITR.StatusIssue<=3 AND ITR.Direction = 2))
    AND (<more constraints here>)

     The more difficult the constraints in the WHERE clause, the more improbable a translation of the literals into pseudo-constants becomes. Considering that an average query contains 5-10 tables, each of them with 1-3 enumerations, the queries would become impracticable using pseudo-constants, and their execution plans quite difficult to troubleshoot.

    The more I think about it, an enumeration data type as global variable in SQL Server (like the ones available in VB) would be more than welcome, especially because the values are used over and over again throughout the queries. Imagine, for example, the possibility of writing code as follows:

-- hypothetical query
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
WHERE ITR.StatusReceipt = @@StatusReceipt.Purchased
   AND ITR.Direction = @@InventDirection.Receipt


    From my point of view this would make the code more readable and easier to maintain. Instead, in order to make the code more readable, one is usually forced to add comments in the code. This works as well, though the code can become full of comments.

-- query with commented literals
SELECT top 100 ITR.*
FROM dbo.InventTrans ITR
WHERE ITR.StatusReceipt <= 3 -- Purchased, Received, Registered
   AND ITR.Direction = 1 -- Receipt


    In conclusion, pseudo-constants’ usefulness is only limited, and their usage goes against developers’ common sense; however, a data type in SQL Server with similar functionality would make the code more readable and easier to maintain.


PS: It is possible to simulate an enumeration data type in a table’s definition by using a CHECK constraint.
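For example, a minimal sketch (with a hypothetical test table) of how the Direction enumeration could be constrained this way:

-- hypothetical table; the CHECK constraint limits Direction to the enumeration's values
CREATE TABLE dbo.InventTransTest (
  TransId int NOT NULL
, Direction int NOT NULL
  CONSTRAINT CK_InventTransTest_Direction
  CHECK (Direction IN (0, 1, 2)) -- None, Receipt, Issue
, Qty numeric(12, 2) NOT NULL
)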

13 August 2017

Who Messed with My Data?


Introduction

“Errors, like straws, upon the surface flow;
He who would search for pearls must dive below.”
John Dryden

    A programmer’s life is full of things that stop working overnight. What’s beautiful about such experiences is that there is always a logical explanation for such “happenings”. There are two aspects - one is how to troubleshoot such problems, and the second is how to avoid such situations, typically through what we refer to as defensive programming. On one side avoiding issues makes one’s life simpler, while issues make it fuller.

   I can say that I had plenty of such challenges in my life, most of them self-created, mainly in the learning process, but also a good share of challenges created by others. Independently of the time spent on troubleshooting such issues, it’s the experience that counts, the little wins against the “dark” side of programming. In the following series of posts I will describe some of the issues I was confronted with, directly or indirectly, over time. In an ad-hoc characterization they can be split into syntax, logical, data, design and systemic errors.

Syntax Errors

“Watch your language young man!”

    Syntax in natural languages like English is the sequence in which words are put together, the order of words indicating the relationships existing between them. Based on the meaning the words carry and the relationships formed between them, we are able to interpret sentences. SQL, initially called SEQUEL (Structured English Query Language), is an English-like language designed to manipulate and retrieve data. Like natural languages, artificial languages such as SQL have their own set of (grammar) rules that, when violated, lead to runtime errors that interrupt code execution, or there can be cases when the code runs further, leading to inconsistencies in the data. Unlike natural languages, the interpreters of artificial languages are quite sensitive to syntax errors.

    Syntax errors are common to beginners, though a moment of inattention or misspelling can happen to anyone, no matter how versatile one’s coding is. Some are more frequent or have a bigger negative impact than others. Here are some of the typical types of syntax errors:
- missing brackets and quotes, especially in complex formulas;
- misspelled commands, table or column names;
- omitting table aliases or database names;
- missing objects or incorrectly referenced objects or other resources;
- incorrect statement order;
- relying on implicit conversion;
- incompatible data types;
- incorrect parameters’ order;
- missing or misplaced semicolons;
- usage of deprecated syntax.

   Typically, syntax errors are easy to track at runtime with minimal testing, as long as the query is static. Dynamic queries, on the other hand, sometimes require a larger number of combinations to be tested. The higher the number of attributes to be combined and the more complex the logic behind them, the more difficult it is to test all combinations. The more combinations left untested, the higher the probability that an error might lurk in the code. Dynamic queries can thus easily become (syntax) error generators.
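   As a minimal sketch (with a hypothetical dbo.Items table), a dynamic query built via string concatenation breaks as soon as a value contains a quote, while parameterizing it via sp_executesql keeps the syntax stable regardless of the values:

-- fragile: a quote inside the value breaks the concatenated statement
DECLARE @ItemName nvarchar(50) = N'O''Hara item'
DECLARE @sql nvarchar(max)

SET @sql = N'SELECT * FROM dbo.Items WHERE ItemName = ''' + @ItemName + N''''
--EXEC (@sql) -- fails (or worse, is exploitable) for values containing quotes

-- safer: the value is passed as a parameter, not concatenated into the syntax
SET @sql = N'SELECT * FROM dbo.Items WHERE ItemName = @ItemName'
EXEC sp_executesql @sql, N'@ItemName nvarchar(50)', @ItemName = @ItemName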

Logical Errors

“Students are often able to use algorithms to solve numerical problems
without completely understanding the underlying scientific concept.”
Eric Mazur

   One beautiful aspect of the human mind is that it needs only a rough understanding of how a tool works in order to make use of it up to an acceptable level. Therefore it often settles for the minimum of understanding that allows it to use a tool. Aspects like the limits of a tool, its contexts of applicability, how it can be used efficiently to get the job done, or the available alternatives, can all be ignored in the process. As the devil lies in the details, misunderstanding how a piece of technology works can prove to be our Achilles’ heel. For example, misunderstanding how sets and the different types of joins work, that lexical order differs from logical order and further from the order of execution, or when it is appropriate or inappropriate to use a certain technique or functionality, can lead us to poor choices.

   One of these poor choices is the method used to solve a problem. A mature programming language can sometimes offer two or more alternatives for solving a problem. Choosing an inadequate solution can lead to performance issues in time. This type of error can be rooted in a lack of understanding of the data, of how an application is used, or of how a piece of technology works.

“I suppose it is tempting, if the only tool you have is a hammer,
to treat everything as if it were a nail.”
Abraham Maslow

   Some of the errors derive from the differences between how programming languages work with data. There can be considerable differences between procedural, relational and vector languages. When jumping from one language to another, one can be tempted to apply the same old techniques to the new language. The solution might work, though it is (by far) not optimal.

    The capital mistake is to be the man of one tool and use it in all cases, even when not appropriate. For example, when one has learned to work with views, one attempts to apply them all over the code in order to reuse logic, creating chains of views which, even if they prove to be flexible, will sooner or later kick back through their complexity. The same can happen with stored procedures and other object types as well. A sign of mastery is when the developer adapts his tools to the purpose.

"For every complex problem there is an answer
that is clear, simple, and wrong."
Henry L. Mencken

   One can build elegant solutions but solve the wrong problem. Misunderstanding the problem at hand is a type of error that is sometimes quite difficult to identify. Typically, such errors can be found through thorough testing. Sometimes the unavailability of (quality) data can impede the process of testing, such errors being found late in the process.

   At the opposite side, one can attempt to solve the right problem but with logic flaws – wrong order of steps, wrong algorithm, wrong set of tools, or even missing facts/assumptions. A special type of logical error are the programmatic errors, which occur when SQL code encounters a logic or behavioral error during processing (e.g. an infinite loop, out-of-range input). [1]

Data Errors

“Data quality requires certain level of sophistication within a company
to even understand that it’s a problem.”
Colleen Graham

   Poor data quality is the source of all evil, or at least of some of the evil. Typically, a well-designed database makes use of a mix of techniques to reduce the chances of inconsistencies: appropriate data types and data granularity, explicit transactions, check constraints, default values, triggers or integrity constraints. Some of these techniques can be too restrictive, therefore in the design one has to provide a certain flexibility to the detriment of one of the above techniques, a fact that makes the design vulnerable to a range of issues: missing values, missing or duplicate records.
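   A minimal sketch (with hypothetical tables) of how a few of these techniques look side by side in a table definition:

-- hypothetical table combining several of the above techniques
CREATE TABLE dbo.SalesOrders (
  OrderId int NOT NULL PRIMARY KEY                         -- appropriate data type and uniqueness
, CustomerId int NOT NULL
  REFERENCES dbo.Customers(CustomerId)                     -- integrity constraint
, Quantity int NOT NULL
  CONSTRAINT CK_SalesOrders_Quantity CHECK (Quantity > 0)  -- check constraint
, CreatedOn datetime NOT NULL
  CONSTRAINT DF_SalesOrders_CreatedOn DEFAULT (GETDATE())  -- default value
)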

   No matter how well a database was designed, it is sometimes difficult to cope with users’ ingenuity – misuse of functionality, typically resulting in deviations from standard processes, that can invalidate an existing query. Changes to processes or the usage of new processes not addressed in existing queries or reports have similar effects.

  Another topic that has a considerable impact on queries’ correctness is the existence, or better said the inexistence, of master data policies and of a board to regulate the maintenance of master data. Without proper governance of master data one might end up with a big mess, with no way to bring some order into it without addressing the quality of data adequately.

Designed to Fail

“The weakest spot in a good defense is designed to fail.”
Mark Lawrence

   In IT one can often meet systems designed to fail, the occurrence of errors being just a question of time, kind of a ticking bomb. In such situations, a system is only as good as its weakest link(s). Issues can be traced back to the following aspects:
- systems used for what they were not designed to do – typically misusing a tool for a purpose for which another tool would be more appropriate (e.g. using Excel as a database, using SSIS for real-time processing, using a reporting tool for data entry);
- poorly performing systems - systems not adequately designed for the tasks they are supposed to handle (e.g. handling large volumes of data/transactions);
- systems not coping with users’ inventiveness or mistakes (e.g. not validating user input adequately or not confirming critical actions like the deletion of records);
- systems that are not configurable (e.g. usage of hardcoded values instead of parameters or configurable values);
- systems for which one of the design presumptions was invalidated by reality (e.g. input data don’t have the expected format, a certain resource always exists);
- systems not being able to handle changes in the environment (e.g. changing user settings for language, numeric or date values);
- systems succumbing to their own complexity (e.g. overgeneralization, wrong mix of technologies);
- fault-intolerant systems – systems not handling more or less unexpected errors or exceptions adequately (e.g. division by zero, handling of NULLs, network interruptions, out of memory), as sketched below.
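   For the last category, a minimal sketch (with a hypothetical import table) of handling a few such cases defensively within a query:

-- hypothetical table; NULLs, division by zero and bad input handled explicitly
SELECT ItemId
, IsNull(Qty, 0) AS Qty                              -- NULLs replaced explicitly
, Amount / NullIf(Qty, 0) AS UnitPrice               -- avoids a division-by-zero error
, Try_Convert(date, PostingDate) AS PostingDate      -- returns NULL instead of failing on bad input
FROM dbo.ImportedTransactions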

Systemic Errors

    Systemic errors can be found at the borders of the “impossible”, situations in which the errors defy common sense. Such errors are not determined by chance but are introduced by an inaccuracy inherent to the system/environment.

    A systemic error occurs when a SQL program encounters a deficiency or unexpected condition with a system resource (e.g. a program encountered insufficient space in tempdb to process a large query, database/transaction log running out of space). [1]

   Such errors are often difficult, but not impossible, to reproduce. The difficulty resides primarily in figuring out what happened, what caused the error. Once one has found the cause, with a little resourcefulness one can come up with an example to reproduce the error.

Conclusion

“To err is human; to try to prevent recurrence of error is science.“
(Anon)

    When one thinks about it, there are so many ways to fail. In the end, to err is human and nobody is exempted from making mistakes, no matter how good or wise. The quest of a (good) programmer is to limit the occurrence of errors, and to correct them early in the process, before they become a nightmare.

References:
[1] Transact-SQL Programming: Covers Microsoft SQL Server 6.5/7.0 and Sybase, by Kevin Kline, Lee Gould & Andrew Zanevsky, O’Reilly, ISBN 10: 1565924010, 1999

18 June 2017

Database Recovery on SQL Server 2017

Today I installed SQL Server 2017 CTP 2.1 on my Lab PC without any apparent problems. It was time to recreate some of the databases I used for testing. As I previously had an evaluation version of SQL Server 2016, it expired without my having a backup for one of the databases. I could have recreated the database from scripts and reloaded the data from various text files. This would have been a relatively laborious task (estimated time > 1 hour), though the chances were pretty high that everything would go smoothly. As the database is relatively small (about 2 GB) and the possible data loss was negligible, I thought it would be possible to recover the data from the database with minimal loss in less than half an hour. I knew this was possible, as I had been forced a few times in the past to recover data from damaged databases in SQL Server 2005, 2008 and 2012 environments, though being in a new environment I wasn’t sure how smoothly it would go and how long it would take.

 

Plan A - Create the database with  ATTACH_REBUILD_LOG option:

As the option seems to be available in SQL Server 2017, I attempted to create the database via the following script:

CREATE DATABASE <database_name> ON

(FILENAME='I:\Data\<database_name>.mdf')

FOR ATTACH_REBUILD_LOG

And as expected I ran into the first error:


Msg 5120, Level 16, State 101, Line 1

Unable to open the physical file "I:\Data\<database_name>.mdf". Operating system error 5: "5(Access is denied.)".

Msg 1802, Level 16, State 7, Line 1

CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

It looked like a permissions problem, though I wasn’t entirely sure which account was causing the problem. In the past I had problems with the Administrator account, so it was the first thing to try. Once I removed the permissions of the Administrator account on the folder containing the database and gave it full control permissions again, I tried to create the database anew using the above script, running into the next error:


File activation failure. The physical file name "D:\Logs\<database_name>_log.ldf" may be incorrect.

The log cannot be rebuilt because there were open transactions/users when the database was shutdown, no checkpoint occurred to the database, or the database was read-only. This error could occur if the transaction log file was manually deleted or lost due to a hardware or environment failure.

Msg 1813, Level 16, State 2, Line 1

Could not open new database '<database_name>'. CREATE DATABASE is aborted.

This approach seemed to lead nowhere, so it was time for Plan B.

 

Plan B - Recover the database into an empty database with the same name:

Step 1: Create a new database with the same name, stop the SQL Server, then copy the old file over the new file, and delete the new log file manually. Then restart the server.

After the restart the database will appear in Management Studio with the SUSPECT state.


Step 2:
Set the database in EMERGENCY mode:

ALTER DATABASE <database_name> SET EMERGENCY, SINGLE_USER


Step 3:
Rebuild the log file:

ALTER DATABASE <database_name> REBUILD LOG ON (Name='<database_name>_Log', FileName='D:\Logs\<database_name>.ldf')

The rebuild worked without problems.

Step 4: Set the database in MULTI_USER mode:

ALTER DATABASE <database_name> SET MULTI_USER


Step 5:
Perform a consistency check:

DBCC CHECKDB (<database_name>) WITH ALL_ERRORMSGS, NO_INFOMSGS


After 15 minutes of work the database was back online.


Warnings:

Always attempt to recover the data for production databases from the backup files! Use the above steps only if there is no other alternative!

The consistency check might return errors. In this case one might need to run CHECKDB with REPAIR_ALLOW_DATA_LOSS several times [2], until the database is repaired.

After recovery there can be problems with the user access. It might be necessary to delete the users from the recovered database and reassign their permissions!

 

Resources:

[1] In Recovery (2008) Creating, detaching, re-attaching, and fixing a SUSPECT database, by Paul S Randal [Online] Available from: https://www.sqlskills.com/blogs/paul/creating-detaching-re-attaching-and-fixing-a-suspect-database/ 

[2] In Recovery (2009) Misconceptions around database repair, by Paul S Randal [Online] Available from: https://www.sqlskills.com/blogs/paul/misconceptions-around-database-repair/

[3] Microsoft Blogs (2013) Recovering from Log File Corruption, by Glen Small [Online] Available from: https://blogs.msdn.microsoft.com/glsmall/2013/11/14/recovering-from-log-file-corruption/

28 February 2017

Data Load Optimization in Data Warehouses – A Success Story

Introduction

    This topic has been waiting in the queue for almost two years already - since I finished optimizing an already existing relational data warehouse within a SQL Server 2012 Enterprise Edition environment. Through various simple techniques I managed then to reduce the running time for the load process by more than 65%, from 9 to 3 hours. It’s a considerable performance gain, considering that I didn’t have to refactor any business logic implemented in queries.

    The ETL (Extract, Transform, Load) solution was making use of SSIS (SQL Server Integration Services) packages to load data sequentially from several sources into staging tables, and from staging further into base tables. Each package was responsible for deleting the data from the staging tables via TRUNCATE, extracting the data 1:1 from the source into the staging tables, then loading the data 1:1 from the staging tables into the base tables. It’s the simplest and a relatively effective ETL design, which I also used with small alterations for other data warehouse solutions. For months the data load worked smoothly, until data growth and eventually other problems increased the loading time from 5 to 9 hours.

Using TABLOCK Hint

    Using SSIS to bulk load data into SQL Server provides an optimum of performance and flexibility. Within a Data Flow, when the “Table Lock” property on the destination is checked, it implies that the inserted records are minimally logged, speeding up the load by a factor of two. The TABLOCK hint can also be used for other insert operations performed outside of SSIS packages. At least in this case, the movement of data from staging into base tables was performed in plain T-SQL, outside of SSIS packages. Further data processing also benefitted from this change. This optimization step alone provided a 30-40% performance gain.
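    A minimal sketch of such an insert in plain T-SQL, with hypothetical staging and base tables (whether the insert is actually minimally logged additionally depends on the recovery model and on the target table’s structure):

-- hypothetical tables; the TABLOCK hint enables a minimally logged insert
INSERT INTO dbo.FactSales WITH (TABLOCK)
       (SalesId, ItemId, Qty, Amount)
SELECT SalesId
, ItemId
, Qty
, Amount
FROM dbo.StagingSales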

Drop/Recreating the Indexes on Big Tables

    As the base tables each had several indexes, it proved beneficial to drop the indexes of the big tables (e.g. those with more than 1,000,000 records) before loading the data into the base tables, and recreate the indexes afterwards. This was done within SSIS, and provided an additional 20-30% performance gain over the previous step.
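    A minimal sketch of the pattern, with a hypothetical index and table (within SSIS the two statements would typically sit in Execute SQL Tasks before and after the Data Flow):

-- hypothetical index dropped before the load ...
DROP INDEX IX_FactSales_ItemId ON dbo.FactSales

-- ... the bulk load into dbo.FactSales runs here ...

-- ... and recreated afterwards
CREATE NONCLUSTERED INDEX IX_FactSales_ItemId
ON dbo.FactSales (ItemId)
INCLUDE (Qty, Amount)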

Consolidating the Indexes

    Adding missing indexes, removing or consolidating (overlapping) indexes are typical index maintenance tasks, apparently occasionally ignored. It doesn’t always bring much performance compared with the previous methods, though dropping and consolidating some indexes proved to be beneficial as less data had to be maintained. Data processing logic benefited from the creation of new indexes as well.

Running Packages in Parallel

As the packages were run sequentially (one package at a time), the data load was hardly taking advantage of the processing power available on the server. Even if queries could use parallelism, the benefit was minimal. Enabling packages to run in parallel added an additional performance gain; however, this reduced the availability of processing resources for other tasks. When the data load is performed overnight, this causes minimal overhead, however it should be avoided when the data are loaded during business hours.

Using Nonclustered Indexes

In my analysis I found out that many tables, especially the ones storing prepared data, were lacking a clustered index, even if further indexes were built on them. I remember that years back there was a (false) myth that fact and/or dimension tables don’t need clustered indexes in SQL Server. Of course clustered indexes have downsides (e.g. fragmentation, excessive key lookups), though their benefits exceed the downsides by far. Besides missing clustered indexes, there were cases in which the tables would have benefited from having a narrow clustered index instead of a wide multicolumn clustered index. Such cases were addressed as well, where applicable.

Removing the Staging Tables

    Given the fact that the source and target systems are in the same virtual environment, and the data are loaded 1:1 between the various layers, without further transformations and conversions, one could load the data directly into the base tables. After some tests I came to the conclusion that the load from source tables into the staging tables, and the load from staging tables into base tables (with the TABLOCK hint), were taking almost the same amount of time. This means that the base tables would be unavailable for the same amount of time if the data were loaded from the sources directly into the base tables. Therefore one could in theory remove the staging tables from the architecture. Frankly, one should think twice before doing such a change, as there can be further implications in time. Even if today the data are imported 1:1, in the future this could change.

Reducing the Data Volume

    Reducing the data volume was identified as a possible further technique to reduce the amount of time needed for the data load. A data warehouse is built based on a set of requirements and presumptions that change over time. It can happen, for example, that even if the reports need only 1-2 years’ worth of data, the data load considers a much bigger timeframe. Some systems can have up to 5-10 years’ worth of data. Loading all data without a specific requirement leads to a waste of resources and bigger load times. Limiting the transactional data to a given timeframe can make a considerable difference. Additionally, there are historical data that have the potential to be archived.

    There are also tables for which a weekly or monthly refresh would suffice. Some tables or even data sources can become obsolete, however they continue to be loaded into the data warehouse. Such cases occur seldom, though they occur. Also, some unused or redundant columns could have been removed from the packages.

Further Thoughts

    There are further techniques to optimize the data load within a data warehouse, like partitioning large tables, using columnstore indexes or optimizing the storage, however my target was to provide a sufficient performance gain with a minimum of effort and design changes. Therefore I stopped when I considered that the amount of effort was considerably higher than the performance gain.

Further Reading:
[1] TechNet (2009) The Data Loading Performance Guide, by Thomas Kejser, Peter Carlin & Stuart Ozer
https://technet.microsoft.com/en-us/library/dd425070(v=sql.100).aspx
[2] MSDN (2010) Best Practices for Data Warehousing with SQL Server 2008 R2, by Mark Whitehorn, Keith Burns & Eric N Hanson
https://msdn.microsoft.com/en-us/library/gg567302.aspx
[3] MSDN (2012) Whitepaper: Fast Track Data Warehouse Reference Guide for SQL Server 2012, by Eric Kraemer, Mike Bassett, Eric Lemoine & Dave Withers
https://msdn.microsoft.com/en-us/library/hh918452.aspx
[4] MSDN (2008) Best Practices for Data Warehousing with SQL Server 2008, by Mark Whitehorn & Keith Burns https://msdn.microsoft.com/library/cc719165.aspx
[5] TechNet (2005) Strategies for Partitioning Relational Data Warehouses in Microsoft SQL Server, by Gandhi Swaminathan
https://technet.microsoft.com/library/cc966457
[6] SQL Server Customer Advisory Team (2013) Top 10 Best Practices for Building a Large Scale Relational Data Warehouse
https://blogs.msdn.microsoft.com/sqlcat/2013/09/16/top-10-best-practices-for-building-a-large-scale-relational-data-warehouse/

04 February 2017

Killing Sessions - Killing ‘em Softly and other Snake Stories

Introduction

    There are many posts on the web advising succinctly how to resolve a blocking situation by terminating a session via the kill command, though few of them warn about its use and the several important aspects that need to be considered. The command is powerful and, using an old adage, “with power comes great responsibility”, a responsibility not felt when reading between the lines. The ease with which people treat the topic can be seen in questions like “is it possible to automate terminating sessions?” or in explicit recommendations of terminating sessions when dealing with blockings.

   A session is created when a client connects to an RDBMS (Relational Database Management System) like SQL Server, being nothing but an internal logical representation of the connection. It is used further on to perform work against the database(s) via (batches of) SQL statements. Along its lifetime, a session is uniquely identified by an SPID (Server Process ID) and addresses one SQL statement at a time. Therefore, when a problem with a session occurs, it can be traced back to a query, where the actual troubleshooting needs to be performed.

   Even if each session has a defined scope and memory space, and cannot interact with other sessions, sessions can block each other when attempting to use the same data resources. Thus, a blocking occurs when one session holds a lock on a specific resource and a second session attempts to acquire a conflicting lock type on the same resource. In other words, the first session blocks the second session from acquiring a resource. It’s like a fast-food drive-through, in which cars must line up in a queue to place an order. The second car can’t place an order until the first one has received its order – it is blocked from placing an order. The third car must wait for the second, and so on. Similarly, sessions wait in line for a resource, a fact that leads to a blocking chain, with a head (the head/lead blocking session) and a tail (the sessions that follow). It’s a FIFO (first in, first out) queue, and using a little imagination one can compare it metaphorically with a snake. Even if imperfect, the metaphor is appropriate for highlighting some important aspects that can be summed up as follows:

  • Snakes have their roles in the ecosystem
  • Not all snakes are dangerous
  • Grab the snake by its head
  • Killing ‘em Softly
  • Search for a snake’s nest
  • Snakes can kill you in sleep
  • Snake taming

   Warning: snakes, as well as blockings, need to be handled by a specialist, so don’t do it by yourself unless you know what you are doing!

Snakes have their roles in the ecosystem

    Snakes as middle-order predators have an important role in natural ecosystems, as they feed on prey species whose numbers would increase exponentially if not kept under control. Fortunately, natural ecosystems have mechanisms that tend to auto-regulate themselves. Artificially built ecosystems need such auto-regulation mechanisms as well. As a series of dynamic mechanisms and components that work together toward a purpose, SQL Server is an (artificial) ecosystem that tends to auto-regulate itself. When its environment is adequately sized to handle the volume of information or data it must process, the system will behave smoothly. As soon as it starts processing more data than it can handle, it starts misbehaving to the degree that one of its resources gets exhausted.

   Just because a blocking occurs doesn’t mean that it is a bad thing that needs to be terminated. Temporary blockings occur all the time, as an unavoidable characteristic of any RDBMS with lock-based concurrency like SQL Server. They are however easier to observe in systems with heavy workload and concurrent access. The more users in the system touch the same data, the higher the chances for a block to occur. A well-designed database and application architecture typically minimizes blockings’ occurrence and duration, making them almost unobservable. At the opposite extreme, poor database design combined with poor application design can make blockings a DBA’s nightmare. Persistent blockings can be a sign of poor database or application design, or a sign that one of the environment’s limits was reached. It’s a sign that something must be done. Restarting the SQL Server, terminating sessions or adding more resources have only a temporary effect. The opportunity lies typically in addressing poor database and application design issues, though this can become costlier with time.

Not all snakes are dangerous

    A snake’s size is the easiest characteristic for identifying whether a snake is dangerous or not. Big snakes inspire fear in any mortal. Similarly, “big” blockings (blockings consuming an important percentage of the available resources) are dangerous and have the potential of bringing the whole server down, eating its memory resources slowly until its life comes to a stop. It can be a slow as well as a fast death.

   Independently of their size, poisonous snakes are a danger to any living creature. By studying snakes’ characteristics, like the shape of the pupils and skin color patterns, folk wisdom devised simple general rules (with local applicability) for identifying whether snakes are poisonous or not. Thus, snakes with diamond-shaped pupils or with color patterns in which red touches yellow are likely/believed to be poisonous. By observing the behavior of blockings and learning about SQL Server’s internals one can with time understand the impact of each blocking on the server’s performance.

Grab the snake by its head

    Restraining a snake’s head assures that the snake is not able to bite, though it can be dangerous, as the snake might believe it is dealing with a predator that is trying to hurt it, and react accordingly. On the other side, troubleshooting blockings must start with the head, the blocking session, as it’s the one which created the blocking problem in the first place.

    In SQL Server, sp_who and its alternative sp_who2 provide a list of all sessions, with their status, SPID and a reference to the SPID of the session blocking them. They thus display all the blocking pairs. When one deals with a few blockings one can easily see whether the sessions form a blocking chain. Especially in environments under heavy load one can deal with a handful of blockings that make it difficult to identify all the formed blocking chains. Identifying blocking chains is necessary because identifying and terminating directly the head blocking session will often make the whole blocking chain disappear. The other sessions in the chain will thus perform their work undisturbed.
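    Alternatively, a minimal sketch based on the dynamic management views, listing the blocked sessions together with their blockers and the statements involved (the columns chosen are only illustrative):

-- blocked sessions with their blockers and the running statements
SELECT DER.session_id
, DER.blocking_session_id
, DER.wait_type
, DER.wait_time
, DEST.text AS sql_text
FROM sys.dm_exec_requests DER
     CROSS APPLY sys.dm_exec_sql_text(DER.sql_handle) DEST
WHERE DER.blocking_session_id <> 0

    A session that appears as a blocker but is not itself blocked is a candidate for the head of a chain.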

    Going and terminating each blocking session in pairs as displayed in sp_who is not recommended, as one terminates more sessions than needed, a fact that could have unexpected repercussions. As a rule, one should restore the system’s health by making minimal damage.

    In many cases terminating the head session will make the blocking chain disperse, however there are cases in which the head session is replaced by another session (e.g. when the sessions involve the same or similar queries). One will need to repeat the needed steps until all blocking chains dissolve.

Killing ‘em Softly 

   Killing a snake, no matter how blamable the act, is sometimes necessary. Therefore, it should be used as the ultimate approach, when there is no other alternative and when needed to save one’s own or others’ lives. Similarly, killing a session should be done only in extremis, when necessary; for example, when the server’s performance has degraded considerably, affecting other users, or when a session is hanging indefinitely.


    The KILL command is powerful, having the power of a hammer. The problem is that when you have a hammer, every session looks like a nail. Despite all the aplomb one has when using a tool like a hammer, one needs to be careful in dealing with blockings. A blocking not addressed correspondingly can kick back, and in special cases the bite can be deadly, for the system as well as for one’s job. Killing the beast is the easiest approach. Kill one beast and another one will take its territory. It’s one of the laws of nature, applicable also to database environments. The difference is that if one doesn’t address the primary cause that led to a blocking, the same type of snake will more likely appear repeatedly.


     Unfortunately, the kill command is not bulletproof for terminating a session; it may only wound the snake. As the documentation warns, there can be cases in which the method won’t have any effect on the blocking, the blocking continuing to roam around. So, it might be a good idea to check whether the session disappeared, and keep an eye on it until it has totally disappeared. Especially when dealing with a blocking chain it can happen that the head session is replaced by another session, which probably was waiting for the same resources as the previous head session. It may also happen that one deals with two or more blocking chains independent from each other. Such cases appear seldom but are possible.
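     A minimal sketch of doing that in T-SQL, assuming 53 is the SPID of the head blocking session (the value is just an example):

-- terminate the head blocking session (example SPID)
KILL 53

-- check whether the session is still rolling back and how far the rollback has progressed
KILL 53 WITH STATUSONLY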


     Killing the head session of a blocking without gathering some data provides fewer opportunities for learning, for understanding what’s happening in your system, for identifying what caused the blocking to occur. Therefore, before jumping to kill a session, collect the data you need for further troubleshooting.

Search for a snake’s nest 

   With the warning that, unless one deals with small snakes, it might not be advisable to search for a snake’s nest, the idea behind this heuristic is that where a snake appears, more likely there is also a nest not far away, where several other snakes might hide. Similarly, a query that causes permanent blockings might be the indication of code that generates a range of misbehaving queries. It can be the same code or different pieces of code. One can attempt to improve the performance of a query that leads to blockings by adding more resources to the server or by optimizing SQL Server’s internals, though one can’t compensate for poor programming. When possible, one needs to tackle the problem at the source, otherwise performance improvements are only temporary.

Snakes can kill you in sleep 

   When wandering into the wild, as well as when having snakes as pets, one must take all measures to assure that nobody’s health is endangered. The same principle should apply to databases as well, and the first line of defense resides in actively monitoring the blockings and addressing them timely as they occur. Being too confident that nothing happens and not taking the necessary precautions can prove to be a bad strategy when a problem occurs. In some situations, the damage might be acceptable in comparison with the effort and costs needed to build the monitoring infrastructure, though for critical systems it can come with considerable costs.

Snakes’ Taming 

   Having snakes as pets doesn’t seem like a good idea, and there are so many reasons why one shouldn’t do it (see PETA’s reasons)! On the other side, there are also people with uncommon hobbies, who not only limit themselves to having a snake as a pet, but try to tame them, to have them behave like pets. There are people who breed snakes to harness their venom for various purposes, an occupation that requires handling snakes closely. There are also people who brought their relationship with snakes to the level of art, snake charming having been a tradition since ancient Egypt in countries of Southeast Asia, the Middle East, and North Africa. Even if not all snakes are tameable, snake taming and charming is possible. In the process the tamer must deprogram or control the snake’s behavior, following a specific methodology in a safe environment.

    No matter how much one tries to avoid persistent blockings, one can learn from troubleshooting blockings, about their sources and behavior, as well as about one’s own limitations. One complex blocking can be a good example with which one can test one’s knowledge about SQL Server internals as well as about applications’ architecture. Each blocking provides a scenario from which one can learn something.

    When fighting with a blocking, it’s wise to do it within a safe environment, typically a test or development environment. Fighting with it in a production environment can cause unnecessary stress and damage. So, if you don’t have a safe environment in which to carry out the fight, then build one and try to keep the same essential characteristics as in the production environment!

   There will also be situations in which one must fight with a blocking in the production environment. Then, be careful not to damage the data or the environment, and take all the needed precautions!


Conclusion

    The comparison between snakes and blockings might not be perfect, though hopefully it will imprint in the reader’s mind the dangers of handling blockings inappropriately and increase awareness of the related topics.
