db2Dean’s 2010 IOD Highlights

23 December 2010

 

 

I had another great Information on Demand (IOD) Conference this year.  Alas, Marilyn Monroe stood me up this year, but we'll always have IOD 2009.  Be that as it may, I again learned lots of great information at the conference that I'll share here.  There was much more than I can show here in detail, but I have included several useful items.  Many of the presentations are available now.  If the one that you wanted was not there before, it is possibly on the conference web site now.  If you would like to have someone bring any of these presentations to you, just let me know.  I look forward to seeing you and Marilyn again at next year's conference!  Also, please add anything else you thought was really interesting to the “Message Board” section of my db2Dean Community page. 

 

Database Trends

I attended a very interesting session on trends in the database world and learned several facts, including:

       Custom and packaged OLTP (online transaction processing) databases comprise about 70% of the database market. 

       There are three main OLTP deployment models and organizations tend to use one or more of them:

    Large scale deployment of small databases that do not have high availability requirements.

    Mission critical transaction systems that have continuous availability and disaster recovery requirements.

    Larger mixed workload systems that do a combination of transactional, batch and reporting functions.  These also frequently need continuous availability and disaster recovery. 

       Instrumented devices are adding more and more transactions and data volume to OLTP systems and providing more useful information.  I have a personal example from my little city of Bluffdale, UT.  They have electronic water meters and I recently got a letter stating that I might have a leak in my house.  They determined this not because of the amount of water that I was using, but because of the pattern of use over several 15 minute increments that was observed by their system. 

       Many enterprises are experiencing demands to move up from reporting databases to greater levels of Business Intelligence and analytics. 

       BI is a top priority for more and more CIOs.

       Warehouses are becoming more critical, and real-time insight is also becoming more important.

       Solid State Drives (SSDs) are becoming more widely used for targeted database file systems such as temporary tablespaces or log files.  They are still generally too expensive for entire databases.

       Cloud computing adoption is growing, with the primary use at this point being test, development and QA databases.

       IBM has adopted a “Workload Optimized” strategy for using the right database architecture for the right type of workload such as OLTP or data warehousing to provide the best cost/performance characteristics for the type of processing being done.  This is in contrast to most of our competitors who say that one size fits all.

 

DB2 Connect

DB2 Connect is a great tool, and I am always learning lots of good stuff about it.  Last year there were a tremendous number of new features, and I wrote about many of them in my 2009 IOD Trip Report.  This year I got even more information.  Some updates for this year include a centralized licensing feature of DB2 Connect Unlimited Edition for z/OS that allows you to add the license once on the z/OS system to enable all of the servers and workstations that access DB2 for z/OS directly, without applying the license to each machine.  DB2 Connect v9.7 provides full support for new DB2 for z/OS v10 features, including load balancing through WLM.  Like DB2 9.7, Optim Development Studio now comes free with DB2 Connect v9.7 fp3 and later.  You can use this tool to develop SQL and stored procedures on any platform of DB2. 

 

DB2 V9.7

I learned a number of things about DB2 v9.7, some of which are new and some of which have been around for a while. 

       Starting with Fix Pack 3a, you can now download Optim Development Studio for free if you own any edition of DB2/LUW or DB2 Connect with current maintenance.  It has lots of great features for administering, querying and developing applications in your database. 

       Since the 2009 conference the DB2 HADR Reads on Standby feature has been released that allows you to use the HADR secondary database as a reporting or query database with some restrictions. 
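As a sketch of how you turn this on: Reads on Standby is enabled with the DB2_HADR_ROS registry variable on the standby instance (the database name below is illustrative).

```shell
# Run as the instance owner on the HADR standby server.
db2set DB2_HADR_ROS=ON     # enable the Reads on Standby feature
db2stop
db2start                   # restart so the registry variable takes effect
db2 "ACTIVATE DATABASE sample"
# Read-only queries can now run on the standby, subject to the
# restrictions, e.g. queries effectively run with uncommitted-read
# (UR) isolation and no writes are allowed.
```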

       Use the DB2/LUW Connection Concentrator to save database server resources if you have lots of transient connections.  Enable it by setting the DBM CFG parameter MAX_CONNECTIONS higher than MAX_COORDAGENTS. 
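For example, enabling the concentrator might look like this (the values are illustrative; pick numbers that fit your connection volume):

```shell
# Concentrator mode turns on when MAX_CONNECTIONS exceeds MAX_COORDAGENTS,
# letting many client connections share a smaller pool of coordinator agents.
db2 "UPDATE DBM CFG USING MAX_COORDAGENTS 200"
db2 "UPDATE DBM CFG USING MAX_CONNECTIONS 1000"
db2stop
db2start    # recycle the instance so the new settings take effect
```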

       Since at least DB2 v9.5, DB2 has used a threaded model, so when you look at the DB2 processes on Linux or UNIX you only see the db2sysc process.  Using the command “db2pd -edus” you can see all of the threads in the process, such as the logger, agents and prefetchers.

       Setting the DB2_PARALLEL_IO registry variable is still very important to tell DB2 that there are multiple disks under each container.
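For example, if each container sits on a RAID array (the disk count of 6 below is purely illustrative):

```shell
# Tell DB2 to issue parallel prefetch requests per container, assuming
# 6 physical disks under every container in every tablespace ("*").
db2set DB2_PARALLEL_IO=*:6
db2stop
db2start    # registry variables are read at instance start
```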

       DB2/LUW v9.7 introduces Reclaimable Storage, which makes it much easier to reduce the size of tablespaces.  This feature will actually move extents around and lower the high water mark for you so that unused storage can be reclaimed.  You use the “ALTER TABLESPACE <tsname> REDUCE” statement to do this.  The tablespace must have been created in DB2 v9.7 and must be DMS or Automatic Storage to allow this feature.  However, if your tablespace was created in an older version, you can move the tables to a v9.7 tablespace using the ADMIN_MOVE_TABLE procedure or similar methods.
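A sketch of the whole sequence, with made-up tablespace and table names:

```shell
# Create a v9.7 automatic-storage tablespace, move the table into it
# online, then shrink the tablespace by lowering its high water mark.
db2 "CREATE TABLESPACE ts_new"
db2 "CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA','MYTABLE','TS_NEW','TS_NEW','TS_NEW','','','','','','MOVE')"
db2 "ALTER TABLESPACE ts_new REDUCE MAX"
```

REDUCE MAX applies to automatic storage tablespaces; for DMS you specify an amount to reduce instead.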

       If many rows repeat the same values in certain columns, you may want to consider using Value Compression on those tables to save space and improve performance. 
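As a sketch on a hypothetical table, activating it looks like this:

```shell
# Turn on value compression for the table, mark a column so that its
# system default value (e.g. 0 or blank) is stored compactly, and then
# rebuild existing rows into the new row format.
db2 "ALTER TABLE myschema.orders ACTIVATE VALUE COMPRESSION"
db2 "ALTER TABLE myschema.orders ALTER COLUMN status COMPRESS SYSTEM DEFAULT"
db2 "REORG TABLE myschema.orders"
```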

       If you are considering purchasing the DB2 Deep Compression feature, you can easily run the ADMIN_GET_TAB_COMPRESS_INFO table function to see what you would save.  It is built into DB2 9.5 and 9.7 databases ('ESTIMATE' predicts savings for an uncompressed table, while 'REPORT' reports actual savings; the schema and table names below are placeholders):

SELECT * FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('MYSCHEMA', 'MYTABLE', 'ESTIMATE')) AS t

SELECT * FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('MYSCHEMA', 'MYTABLE', 'REPORT')) AS t

 

DB2 on Linux

Don't forget that IBM has a site where we show you which Linux distributions have been tested and validated for DB2.  You should check it before deciding which distribution to use for your new DB2 database.  Also, if you plan to run DB2 in a virtualized Linux environment, then you should definitely view the Matrix of Supported Virtualization technologies.  A great place to evaluate and test DB2 on Linux is DB2 in the Cloud, where you can quickly and cheaply create and use a DB2 on Linux instance.  This session highlighted many other best practices for running DB2 on Linux, including the fact that you should still have at least twice as much swap space as physical RAM.  This is especially true if you have more than one database or more than one instance on a server.  The Information Center has all of the server and database configuration settings that you should use when installing DB2 on Linux. 

 

Workload Manager (WLM)

I got some great updates at the WLM best practices session.  Just as a refresher, WLM is an add-on tool to DB2 that allows you to reserve sets of server resources for certain categories of queries called “workloads”.  Categories include things like user id, application name, application server tags, and estimated cost.  Resources include CPU, memory and I/O.  One interesting observation is that most organizations who use WLM typically use two categories – one for the important work and one for everything else.  They assign the important work to a resource group with lots of processing power and everything else to a smaller resource group.  When you do it this way, WLM is easy and fast to configure.  However, one of the main impediments to implementation is the perceived complexity of configuration.  There are tons of things that you can do to slice and dice the workloads and resource categories, so when you are learning them they can seem overwhelming, but just remember that most people only need two categories to ensure that the important work gets done.  This session also discussed how you can use Optim Performance Manager (OPM) to store and report the information that WLM collects, and that you can use OPM to configure WLM.  Out of the box, WLM only comes with command line configuration commands, but OPM puts a nice GUI interface on it. 
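Under that two-category approach, a minimal configuration might look like this (the service class names, application name, and priority values are all illustrative):

```shell
# One service class for priority work, one for everything else.  On UNIX,
# a negative agent priority means the work runs at a more favorable nice value.
db2 "CREATE SERVICE CLASS important_work AGENT PRIORITY -10"
db2 "CREATE SERVICE CLASS everything_else AGENT PRIORITY 10"
# Route a known application to the priority class; connections that match
# no workload fall into the default user service class.
db2 "CREATE WORKLOAD priority_apps APPLNAME('ORDERENTRY') SERVICE CLASS important_work"
db2 "GRANT USAGE ON WORKLOAD priority_apps TO PUBLIC"
```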

 

 

 

***

 

I was exposed to much more information than what is in this short list, but as usual attending this conference was like drinking from a fire hose.  I need to continue to review the presentations to absorb more.  This list just contains some of the items that I found particularly useful.  Again, please add anything else you thought was really interesting to the “Message Board” section of my db2Dean Community page.  Also, in case you were wondering, I passed the tests to upgrade my certification level to “IBM Certified Database Administrator - DB2 9.7 for LUW”.  One of the benefits of the conference is that you can take the certification tests for free and just show up when it is convenient. 

 
