DB2 Cost Saving Tips

20 February 2009

 

More than ever, companies are looking for ways to save money in all aspects of their business, including IT infrastructure.  There are several DB2 tools that can help you do exactly this, with some of them showing results right away.  Just imagine what the boss will think of you when you show up in her office with a plan to reduce your database costs!  And don’t worry about giving me any credit.  I’m just here to serve.  Actually, I’m hoping that you will decide this stuff is so great that you’ll switch from our competitors’ databases, which puts money in my pocket!  I’d rather have money than approbation any day!  Well, enough with my fantasies -- let’s talk turkey.  I summarize a number of these features below.  Also, we’ve got an event in the Salt Lake City IBM office on March 4 where we’ll jump into a few of these features in more depth.  Click here for more details. 

 

Self Tuning Memory Manager (Free)

The DB2 Self Tuning Memory Manager (STMM) can allow you to run more databases on a single Linux, UNIX or Windows (LUW) server, especially if different databases hit their peak loads at different times of the day.  If you configure STMM to do so, each database will periodically poll the operating system.  Whenever a database sees that the OS needs more memory, and it determines that it can give up some memory and still operate, it will do so.  If the load on a given database is increasing, it will check whether the OS has any memory to spare and take some if it does.  This polling does not give or take large areas of memory at once; rather, it moves small amounts more frequently, reevaluating the situation constantly.  So as workloads decrease or increase, DB2 gives back or takes OS memory as needed, and you do not need to set up some complex web of interrelations between databases.  This feature of STMM can be turned on for some databases and not others, as you choose.  It is available on all supported operating systems as of DB2 v9.5 and on selected operating systems as of v9.1.  In addition to giving and receiving memory, STMM also regularly reallocates memory among the various heaps and buffers within each database to provide optimal performance for the current workload.  You can read more about STMM in Chapter 3 of the Data Servers, Databases, and Database Objects Guide.
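
If you want to give it a try, turning STMM on is only a handful of configuration commands.  Here is a minimal sketch; the database name SALES is made up, and the DATABASE_MEMORY AUTOMATIC setting (the part that lets DB2 give memory back to the OS and take it again) is only honored on the supported platforms mentioned above.

    -- Enable self-tuning memory for the SALES database (names are illustrative)
    UPDATE DB CFG FOR sales USING SELF_TUNING_MEM ON
    -- Where the platform supports it, let total database memory grow and shrink with the OS
    UPDATE DB CFG FOR sales USING DATABASE_MEMORY AUTOMATIC
    -- Put individual consumers under STMM control, for example the default buffer pool
    ALTER BUFFERPOOL IBMDEFAULTBP SIZE AUTOMATIC

You can put other heaps, such as the sort heap and lock list, under STMM control the same way, and leave anything you want to manage by hand at a fixed size.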

 

Data Studio pureQuery Runtime (Fee)

Data Studio pureQuery Runtime (pureQuery) can significantly reduce the query workload on a database server, for z/OS as well as Linux, UNIX and Windows (LUW) servers.  Although it works for both platforms, this tool can show payback almost immediately for shops that run applications on LUW servers connecting to DB2 for z/OS as the database server, because the reduced workload quickly shows up as reduced monthly usage charges on the mainframe.  One feature of pureQuery lets the tool intercept the dynamic SQL calls coming from your LUW application server and convert them to static SQL calls that are passed to the database.  Static SQL is faster and puts significantly less load on the database server.  The first step in using this feature is to run pureQuery with the application in a mode where it captures a copy of the SQL being used to query the database.  Once you believe that most or all of the SQL typically run from that server has been executed, you tell pureQuery to bind those SQL statements to the database.  From then on, pureQuery watches the SQL being sent to the database server, and when a statement that has been bound is issued, it substitutes the static SQL call instead.  This feature can be used on purchased applications as well as in-house applications.  Queries it has never seen can simply run dynamically as normal, depending on the settings you choose.  pureQuery has other features that decrease costs, including helping Java and .NET developers create more efficient SQL. 
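
To make the capture-then-bind flow a little more concrete, here is a rough outline of the client optimization settings involved.  Treat it strictly as a sketch: the property names and file path are approximate and vary by release, so confirm the exact syntax in your pureQuery documentation before using it.

    # Phase 1 (assumed property names): run the application with capture turned on
    # so pureQuery records the dynamic SQL it sees into a capture file
    pdq.captureMode=ON
    pdq.executionMode=DYNAMIC
    pdq.pureQueryXml=/opt/myapp/capture.pdqxml

    # Phase 2: bind the captured SQL into static packages on the database
    # using the pureQuery bind tooling (done outside this file)

    # Phase 3: switch the application over to static execution
    pdq.captureMode=OFF
    pdq.executionMode=STATIC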

 

Data Studio Administrator (Fee)

The primary thing that Data Studio Administrator (DSA) does for you is generate scripts that preserve all database objects and data when you make a structural change to a DB2 database.  When one simple change to a table requires the table to be dropped and recreated, you may have a lot of work to do: script the data export and load, and capture and script the grants, dependent views, referential integrity, stored procedures, triggers, and so on.  All this for just one table is a lot, and the drudgery grows quickly when you begin a project that requires several tables to be changed.  You can spend days scripting, all the time knowing that just one minor mistake can cause lost data or the cancellation of the production change on implementation night.  DSA can automate all of this for you.  For changes that you want to design and implement yourself, you can reverse engineer your database into DSA, make the changes graphically in its physical modeling tool, and have DSA generate a script to implement them all.  DSA can also compare two databases and generate a script to make one look like the other while preserving all data.  For example, let’s say your development project has been going for months, with large and small changes being made to the test database the whole time.  Forgetting to make just one of those changes in production can cause the whole implementation to fail.  With DSA you can simply compare the two databases just before going live and have it generate all of the scripts needed to implement the changes in production.  It has several other features that let you save time and make changes with fewer errors. 
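
For a sense of what DSA is saving you from, here is a stripped-down sketch of the kind of script you would otherwise write by hand for a single table change that forces a drop and recreate.  The schema, table, and group names are invented, and a real script would also have to recreate any dependent views, triggers, and stored procedures.

    -- Preserve the data before the destructive change
    EXPORT TO orders.ixf OF IXF SELECT * FROM sales.orders;
    DROP TABLE sales.orders;
    -- Recreate the table with the new definition
    CREATE TABLE sales.orders (
        order_id INTEGER NOT NULL PRIMARY KEY,
        cust_id  INTEGER NOT NULL,
        note     VARCHAR(500)
    );
    -- Reload the data, then put back everything the DROP removed
    IMPORT FROM orders.ixf OF IXF INSERT INTO sales.orders;
    GRANT SELECT, INSERT, UPDATE, DELETE ON sales.orders TO GROUP appusers;
    ALTER TABLE sales.orders ADD CONSTRAINT fk_cust
        FOREIGN KEY (cust_id) REFERENCES sales.customers (cust_id);

Multiply that by every dependent object and every table in the project and you can see why having DSA generate and sequence the scripts pays off.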

 

DB2 Compression (Fee)

Our DB2 Storage Optimization Feature (Deep Compression, or simply Compression) lets you save significant disk space and improve performance at the same time.  This is one of those areas where our technology really outshines the competition, both in the amount of compression you can achieve and in the improved performance.  Now you are asking, “But db2Dean, I understand how compression saves disk space, but how can it improve performance?”  Well, it generally only improves performance on systems that are not CPU bound.  In essence you trade a few CPU cycles for much better I/O performance and memory utilization.  For reporting and data warehouse databases, where you have to scan through lots and lots of rows, performance improves because you get many more rows into memory with each physical disk read.  DB2 does compression at the row level, which means that rows are only uncompressed when they actually need to be examined; all of the other rows on the page remain in their compressed format.  For Online Transaction Processing (OLTP) databases this means you can keep significantly more rows in the same size buffer pool, dramatically increasing the probability that each transaction will find the rows it needs already in memory.  Since log records are compressed, there is also less I/O to the transaction logs.
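
Turning compression on for an existing table is basically a two-step job: flag the table, then build its compression dictionary with a reorg.  The table name below is made up, and before you commit to anything you can estimate the expected savings first (for example with the INSPECT command’s row compression estimate option), which is worth doing before you plan your storage.

    -- Mark the table as compressed (illustrative table name)
    ALTER TABLE sales.order_history COMPRESS YES;
    -- Build the compression dictionary and compress the existing rows
    REORG TABLE sales.order_history RESETDICTIONARY;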

 

Optim Data Growth Solution (Fee)

Optim Data Growth Solution is primarily an excellent process for archiving old data from relational databases, based on rules that you establish for what should be archived.  Optim also lets you access the archived data through ODBC and JDBC applications, and even through federation technology that makes the data appear to still be in your database.  Generally your production databases sit on your most expensive tier-1 storage, so the most obvious way archiving saves cost is by putting the archived data on less expensive storage, with the oldest files on the cheapest offline storage if you wish.  It goes further than saving space on tier-1 storage, though: Optim also compresses the data, making the archives consume even less space.  Finally, you get paid back with a better-performing production database, because only the data needed for day-to-day operations remains there.  This technology works for databases on z/OS, i5/OS and LUW servers, and it can archive from most relational databases, both IBM and non-IBM.

 

Optim Test Data Management (Fee)

Optim Test Data Management (TDM) saves you money in two ways: time and disk space.  It helps you define and extract appropriate subsets of test data from your production databases into your various test and development databases.  Many organizations are stuck making several full copies of their production databases into their other environments because writing an extract across several related databases is too complex and takes too long, and each new copy in non-production systems wastes more and more space.  TDM makes the process of defining a subset across multiple databases relatively easy and fast, so you can copy just what you need for particular test scenarios.  This can save a tremendous amount of space, especially if you have many test environments.  TDM also saves time by reducing the effort it takes to set up test databases, especially if you refresh them frequently.  Once you have defined the relationships of tables within and across databases, and the criteria for extracting those sets of tables, you can store that definition in TDM.  You can then tell TDM to refresh test every Tuesday and development every Friday and have it do that automatically, or pull copies as needed.  It has several other features, such as letting you compare databases and modify data between extract and load.  Finally, TDM has an optional data masking feature that lets you obfuscate any sensitive data while keeping it in context. 
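
To see why hand-rolled subsetting gets out of hand, here is what just three related tables look like if you try to pull a referentially intact one-percent customer sample yourself; the table names and sampling rule are invented.  TDM lets you define this kind of extract once, across databases, and rerun it on a schedule.

    -- Pull roughly 1 customer in 100, plus only the rows that reference them
    EXPORT TO customers.ixf OF IXF
        SELECT * FROM sales.customers WHERE MOD(cust_id, 100) = 0;
    EXPORT TO orders.ixf OF IXF
        SELECT * FROM sales.orders WHERE MOD(cust_id, 100) = 0;
    EXPORT TO order_lines.ixf OF IXF
        SELECT l.* FROM sales.order_lines l, sales.orders o
        WHERE l.order_id = o.order_id AND MOD(o.cust_id, 100) = 0;

Now picture doing that for hundreds of tables across several databases, keeping every foreign key consistent, and you can see where the time savings come from.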

 

Workload Manager (Fee)

Workload Manager (WLM) was introduced in DB2 v9.5 and allows you to create resource classes and assign particular workloads to those classes.  Each class is given certain hardware resources, such as an amount of processing power.  As each query arrives at the database it is assigned to a workload; some workloads get classes with more resources, and others have to make do with less.  This can help you delay the purchase of additional database server capacity by ensuring that the most important workloads get adequate resources while everything else gets done with whatever system resources remain.  WLM was created so that less important workloads never interfere with the resources of the more important ones, which lets you optimize for your business priorities when there are not enough resources to get everything done. 
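
As a rough illustration of the DDL involved, the sketch below puts an order entry application and a nightly batch job into separate service classes so the batch work cannot starve the transactions.  The class, workload, and application names are invented, and the exact resource controls you can attach to a service class depend on your DB2 version and platform.

    -- Two service classes with different prefetch priorities (names are illustrative)
    CREATE SERVICE CLASS order_entry PREFETCH PRIORITY HIGH;
    CREATE SERVICE CLASS nightly_batch PREFETCH PRIORITY LOW;
    -- Route incoming work to the classes based on the connecting application
    CREATE WORKLOAD wl_orders APPLNAME ('ordentry') SERVICE CLASS order_entry;
    CREATE WORKLOAD wl_batch  APPLNAME ('batchldr') SERVICE CLASS nightly_batch;
    GRANT USAGE ON WORKLOAD wl_orders TO PUBLIC;
    GRANT USAGE ON WORKLOAD wl_batch  TO PUBLIC;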

 

***

 

These are just some of the major features that can really help you reduce the cost of your database systems.  There are several others that combine to make DB2 a great decision from a total cost of ownership perspective.  If any of these sound interesting to you, please attend our March 4 event in Salt Lake City.  While you are there, we can talk about these features and any other questions that you have.

 
