DB2 Storage Observations

By: Graham Murphy and Dean Compher

24 August 2011



When you are designing a new database in DB2 for Linux, UNIX, and Windows (DB2/LUW), one of the most important aspects of your design is the layout of your data.  It is important to get this right the first time, because changing layouts is time consuming and difficult.  There is a lot of good information on the web already, but I wanted to add some practical observations that Graham Murphy and I have made in recently implemented or re-designed systems.  This article focuses on OLTP (On-line Transaction Processing) and reporting systems.  Data warehouses and databases with heavy analytical uses are beyond the scope of this document.


Since most of the main concepts are already covered very well by the IBM Database Storage DB2/LUW Best Practices Guide, I highly recommend that you read it.  In this article I will cover some more detailed recommendations and some alternatives that may be helpful.



File Systems


One of the first items that you need to consider is the number and types of file systems that you need.  The reason that you should work on this first is that it typically takes a while to get storage allocated, especially in large organizations where there is a separate storage management group.  Formal requests need to be well considered by the omnipotent storage team before they can condescend to bestow disk space on the unwashed masses.  Most organizations now get an allocation from the central SAN system.  Storage is usually presented to servers in an object called a Logical Unit (LUN).


Before making that request it is a good idea to have a discussion with the storage team about how data is stored and allocated.  In some organizations standard-sized LUNs are used and in others custom-sized LUNs can be ordered.  Here are recommendations for the different types of SAN storage:


Data on All Disks

In newer disk subsystems there seems to be a trend toward spreading the storage for LUNs across all disks in the physical disk device.  An example is IBM’s XIV storage, but other manufacturers are doing this too.  This is the simplest case for you.  If you are getting your LUNs from this type of system and you can get custom sizes, then request 5 LUNs for your database data.  There is nothing really magical about this number, but it strikes a nice balance between ease of administration and spreading data.  If you want a few more that is fine, but don’t go lower.  If your organization issues storage in fixed sizes, then order enough LUNs for the amount of space you need.  Finally, when the LUNs are presented to the operating system, it is a good practice to create one file system on each LUN.


Data on Individual Arrays

On most other types of storage, LUNs are allocated from individual RAID arrays.  There is a very good discussion of how to arrange this storage in the IBM Database Storage DB2/LUW Best Practices Guide so I will not repeat it here.  If possible, you should get one LUN from each RAID array and create one file system per LUN.



In many organizations the DBAs and others will be deemed unworthy of knowing what is behind the curtain of the SAN and will not be told.  In this case you just have to ask for enough LUNs to meet your needs and hope for the best.  The good news is that this often does provide adequate performance for many small and medium sized systems.  Again you should create one file system per LUN.


The IBM Database Storage DB2/LUW Best Practices Guide goes in depth about the types of RAID arrays to create for DB2 and I highly recommend that you read it.  One important thing that I did not see there is how to create your file systems from LUNs.  It is good if you can create one file system per LUN, but sometimes this is not practical for various reasons.  If you find yourself in this situation do not despair.  Just remember that when you create the file system, you must stripe it across the LUNs and NOT concatenate them.  If you concatenate the LUNs, then as data is added it is placed in only one LUN until that LUN is filled, and only then does it move on to each subsequent LUN.  This is very bad because it places the newest, and probably hottest, data onto one or a few LUNs, creating a hotspot.
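As one illustration, on Linux the striping can be done with LVM when the file system must span several LUNs.  The device names, sizes, and file system type below are hypothetical; the important part is the -i (stripe count) option, which spreads the logical volume across all of the LUNs instead of concatenating them:

    # Build one volume group from four LUNs, then create a logical
    # volume striped across all four (-i 4) with a 64 KB stripe (-I 64)
    vgcreate datavg /dev/mapper/lun1 /dev/mapper/lun2 /dev/mapper/lun3 /dev/mapper/lun4
    lvcreate -i 4 -I 64 -L 200G -n db2datalv datavg
    mkfs -t ext4 /dev/datavg/db2datalv

Omitting -i would give a linear (concatenated) volume, which is exactly the hotspot situation described above.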





One of the things that I’ve been hearing lately is that it is OK to put all of your tables into one or a very few tablespaces.  This is simply NOT TRUE if you need high performance.  A good rule of thumb is to put any table with more than about 5-10 MB of data into its own tablespace.  Further, it is a good idea to put the indexes for each of these tables into an individual tablespace.  That is, you would put all of the indexes for a larger table into a tablespace created solely for that table’s indexes.  You can place all of the smaller tables into one tablespace, and the indexes for all of those tables into another.  Graham and I recently worked with a customer who was having performance problems with an OLTP database that had all of its tables in a single tablespace.  Once he broke the larger tables and their indexes out into their own tablespaces, performance improved dramatically.  When he was done this system had well over 100 tablespaces.
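For a larger table, this layout looks something like the following.  The tablespace and table names and the columns are made up for illustration; the IN and INDEX IN clauses are standard DB2 DDL:

    -- One tablespace for the table's data, one for its indexes
    CREATE LARGE TABLESPACE ts_orders_dat PAGESIZE 4K;
    CREATE LARGE TABLESPACE ts_orders_idx PAGESIZE 4K;

    CREATE TABLE orders
      (order_id  INTEGER NOT NULL PRIMARY KEY,
       cust_id   INTEGER,
       amount    DECIMAL(12,2))
      IN ts_orders_dat INDEX IN ts_orders_idx;

Every index subsequently created on the orders table will be placed in ts_orders_idx automatically.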


For almost all production data and index tablespaces you should use Large (not Regular) DMS tablespaces.  Regular tablespaces may go away in future releases.  Remember that with DMS and Automatic Storage you can now specify a starting size and let the tablespace automatically extend as needed.


If you have Large OBject (LOB) data in your database you should design your tablespaces in one of two ways.  If your LOBs are small enough to fit onto the data page with the other data and are frequently accessed, then you should store the LOBs inline.  That means that the LOB column is just part of the row in the data page, like all of the other columns.  This saves I/O when accessing the LOB data.  If the LOBs are large, then they should be put into their own tablespaces using the “LONG IN” clause of the CREATE TABLE command.
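Both approaches can be sketched in DDL.  The names and sizes here are hypothetical; INLINE LENGTH keeps small LOB values on the data page (available in DB2 9.7 and later), while LONG IN directs large LOBs to their own tablespace:

    -- Small, frequently read LOBs kept inline on the data page
    CREATE TABLE notes
      (note_id INTEGER NOT NULL,
       note    CLOB(1M) INLINE LENGTH 1000);

    -- Large LOBs placed in a dedicated tablespace
    CREATE LARGE TABLESPACE ts_doc_dat;
    CREATE LARGE TABLESPACE ts_doc_lob;
    CREATE TABLE documents
      (doc_id INTEGER NOT NULL,
       doc    BLOB(100M))
      IN ts_doc_dat LONG IN ts_doc_lob;

With INLINE LENGTH, only values that fit within the stated length are stored inline; larger values fall back to regular LOB storage.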



Putting the Tablespaces on File Systems


You should create each tablespace across all data file systems on your server.  That is, each tablespace should have one container (directory) on each data file system.  Avoid putting tablespaces in your backup and transaction log file systems.  I am aware of two recently redesigned systems that used SAN storage that stripes each LUN over all disks in the storage unit.  For both of these databases, five data LUNs were created with one file system placed on each LUN.  Both systems have many tablespaces, every tablespace is striped across all 5 data file systems, and both perform well.  Both are high-volume OLTP systems with significant reports being created from them too.
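For a DMS tablespace that is not using automatic storage, this means listing one container per file system in the USING clause.  The paths and sizes (in pages) below are hypothetical; DB2 stripes extents round-robin across the listed containers:

    CREATE LARGE TABLESPACE ts_cust
      MANAGED BY DATABASE
      USING (FILE '/db2fs1/mydb/ts_cust.c0' 100000,
             FILE '/db2fs2/mydb/ts_cust.c0' 100000,
             FILE '/db2fs3/mydb/ts_cust.c0' 100000,
             FILE '/db2fs4/mydb/ts_cust.c0' 100000,
             FILE '/db2fs5/mydb/ts_cust.c0' 100000)
      AUTORESIZE YES;

Keeping the containers the same size is important, since DB2 stripes evenly across them.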


Striping all tablespaces across all file systems can be made easier with Automatic Storage.  With automatic storage you define the available file systems to the database, and DB2 then takes care of placing each tablespace across those file systems as it is created.
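A sketch of that approach, with example paths, looks like this:

    -- Define the storage paths once, at database creation time
    CREATE DATABASE mydb AUTOMATIC STORAGE YES
      ON /db2fs1, /db2fs2, /db2fs3, /db2fs4, /db2fs5;

    -- Each new tablespace is then automatically striped
    -- across all five storage paths
    CREATE LARGE TABLESPACE ts_sales
      MANAGED BY AUTOMATIC STORAGE;

No USING clause is needed; DB2 creates and extends the containers on each path for you.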



Page Size


For OLTP databases use the 4K page size.  End of discussion!  When using “LARGE” tablespaces, as should always be done these days, 4K tablespaces can grow up to 2 terabytes.  For strictly reporting or Operational Data Store databases, 8K or 16K pages might be more appropriate so that you get more rows per page.  Compression may also improve the performance of reporting databases.



Extent and Prefetch Sizes


The IBM Database Storage DB2/LUW Best Practices Guide has a good description of extent size and provides a well accepted formula for calculating it.  However, there is an interesting alternative that is gaining acceptance in some quarters for high-volume OLTP databases.  This alternative says to use a small extent size of 2 pages.  If you need very high performance in your OLTP database, then you may want to experiment with the traditional vs. small extent size and see what performs better for your workload.  I would lean more towards the traditional calculation for reporting and ODS workloads.
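The two candidate layouts differ only in the EXTENTSIZE clause, so they are easy to compare side by side in a test environment.  The names and the traditional extent value below are made up for illustration:

    -- Traditional: extent size from the Best Practices
    -- formula (32 pages is just an example result)
    CREATE LARGE TABLESPACE ts_trad EXTENTSIZE 32;

    -- Alternative for high-volume OLTP: a small 2-page extent
    CREATE LARGE TABLESPACE ts_small EXTENTSIZE 2;

Note that EXTENTSIZE cannot be changed after the tablespace is created, which is another reason to settle this with a test before going to production.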


Again, the IBM Database Storage DB2/LUW Best Practices Guide has a good description of prefetch size and provides a well accepted formula for calculating it.  Graham has provided me with a formula that can give better performance in some cases.  This alternative formula is:


PREFETCH = Nbr_File_Systems * Extent_Size * Nbr_Channels_to_Disk_System
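For example, with 5 data file systems, a 32-page extent size, and 2 channels to the disk system (these numbers are illustrative), the formula gives:

    PREFETCH = 5 * 32 * 2 = 320 pages

which you would then set with the PREFETCHSIZE 320 clause when creating or altering the tablespace.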







I hope that you found these tips for DB2 storage useful.  Alternatives to traditional practices are good in some cases, but you should make sure to test them to verify that they are better for your specific situation.   Please “like” my Facebook db2Dean page at facebook.com/db2dean and add any feedback about what you liked or what I can improve.   I can only make things better with your help!


