Thursday, February 14, 2013

Oracle database on ZFS - reduce costs through deduplication

I recently had a customer who wanted to reduce storage costs.  They had a 13TB PROD database, and needed 40 copies of the database to support DEV and TEST activities, totalling over 500TB.

We did some diagnostics on the databases, and worked out that only about 10% of data blocks in a clone were ever updated, so deduplicating at the filesystem or storage block level could yield roughly a 90% saving.  The client was a large, conservative bank, so the preference was to use existing products from large vendors.  We evaluated products from Delphix, NetApp, and Oracle, and ended up choosing the NetApp product due to a combination of functionality, cost, and in-house skills.
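As a rough sanity check on those numbers (using the 13TB database size, 40 copies, and the ~10% change rate quoted above):

```shell
#!/bin/sh
# Rough sizing sketch using the figures from the text above.
PROD_TB=13
COPIES=40
FULL_TB=$((PROD_TB * COPIES))            # full physical copies: 40 x 13TB
CHANGED_TB=$(( (FULL_TB * 10) / 100 ))   # only ~10% of blocks ever diverge
DEDUP_TB=$((PROD_TB + CHANGED_TB))       # one full copy plus the deltas
echo "full copies: ${FULL_TB}TB, deduplicated: ${DEDUP_TB}TB"
```

That is how 500TB+ of full copies collapses to well under 100TB when unchanged blocks are shared.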

Since then I have been wondering if we could have done it cheaper, so I have done a quick evaluation of the ZFS filesystem, which supports block-level deduplication as well as copy-on-write snapshots and clones.

ZFS is a "copy on write" filesystem which is included in the Solaris operating system, and has been ported to other operating systems.

The following test was done using Oracle VirtualBox running Oracle Solaris 11 x86.

Create the "dbpool" disk pool on a storage device by running "zpool create":

root@sol11:~# zpool create dbpool /dev/dsk/c7t2d0p0

Configure dbpool/data with an 8K recordsize, which matches the database's 8K block size, set logbias to throughput, and set the mount point to /u01/data:

root@sol11:~# zfs create -o recordsize=8k -o mountpoint=/u01/data dbpool/data
root@sol11:~# zfs set logbias=throughput dbpool/data 

Now confirm the settings using "zfs get":

root@sol11:~# zfs get primarycache,recordsize,logbias dbpool/data
NAME         PROPERTY      VALUE       SOURCE
dbpool/data  primarycache  all         default
dbpool/data  recordsize    8K          local
dbpool/data  logbias       throughput  local
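One caveat worth noting: recordsize only applies to blocks written after the property is set, so it must be configured before the datafiles are created. It is also common practice to put redo logs on their own dataset with the larger default recordsize, since redo is written sequentially. A sketch (the dataset name and mount point are illustrative, not part of the test above; the echo prefix makes it a dry run — remove RUN on a real Solaris/ZFS host):

```shell
#!/bin/sh
# Dry-run sketch: a separate dataset for redo logs, left at the 128K default
# recordsize. Dataset and mount point names are illustrative.
RUN=echo   # remove this prefix to actually run the commands on a ZFS host
$RUN zfs create -o recordsize=128k -o mountpoint=/u01/redo dbpool/redo
$RUN zfs set logbias=throughput dbpool/redo
```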

Then create an 11g database called "test" in /u01/data:

oracle@sol11:/u01/data/test$ pwd
/u01/data/test
oracle@sol11:/u01/data/test$ ls -l
total 2720998
-rw-r----- 1 oracle oinstall 9748480 Feb 14 23:55 control01.ctl
-rw-r----- 1 oracle oinstall 9748480 Feb 14 23:55 control02.ctl
-rw-r----- 1 oracle oinstall 52429312 Feb 14 23:25 redo01.log
-rw-r----- 1 oracle oinstall 52429312 Feb 14 23:52 redo02.log
-rw-r----- 1 oracle oinstall 52429312 Feb 14 23:25 redo03.log
-rw-r----- 1 oracle oinstall 461381632 Feb 14 23:46 sysaux01.dbf
-rw-r----- 1 oracle oinstall 702554112 Feb 14 23:51 system01.dbf
-rw-r----- 1 oracle oinstall 20979712 Feb 14 23:25 temp01.dbf
-rw-r----- 1 oracle oinstall 31465472 Feb 14 23:51 undotbs01.dbf
-rw-r----- 1 oracle oinstall 5251072 Feb 14 23:25 users01.dbf

oracle@sol11:/u01/data/test$ du -k .
1360500 .

oracle@sol11:/u01/data/test$ df -k .

Filesystem   1024-blocks   Used   Available Capacity  Mounted on
dbpool/data   10257408     1360529     8896686    14% /u01/data

We can see that database test is using about 1.3GB of disk in /u01/data

The "zfs list" command confirms space used is 1.3GB.

oracle@sol11:~$ zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
dbpool       1.30G  8.48G    31K  /dbpool
dbpool/data  1.30G  8.48G  1.30G  /u01/data

Shut down database test and run "zfs snapshot" to create a snapshot of dbpool/data called "backup":

root@sol11:/u01/data# zfs snapshot dbpool/data@backup
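A couple of housekeeping commands are useful at this point to see the snapshot and what space it pins (shown as a dry run via the echo prefix; remove RUN on a real ZFS host):

```shell
#!/bin/sh
# Dry-run sketch: inspect snapshots and their space accounting.
RUN=echo   # remove this prefix to actually run on a ZFS host
$RUN zfs list -t snapshot -r dbpool                # list snapshots under dbpool
$RUN zfs get used,referenced dbpool/data@backup    # space the snapshot pins
```

A freshly taken snapshot uses almost no space; its "used" figure grows only as the live filesystem diverges from it.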

The contents of the snapshot can be viewed in the ".zfs" directory.

root@sol11:/u01/data/.zfs# find .

Clone the backup by running "zfs clone" to create "clone1":

root@sol11:/u01/data# zfs clone dbpool/data@backup dbpool/clone1

Mount clone1 at /u01/clone1 using "zfs set mountpoint":

root@sol11:/u01/data# zfs set mountpoint=/u01/clone1 dbpool/clone1
root@sol11:/u01/data/.zfs# cd /u01/clone1/test

root@sol11:/u01/clone1/test# ls
control01.ctl  redo01.log     redo03.log     system01.dbf   undotbs01.dbf
control02.ctl  redo02.log     sysaux01.dbf   temp01.dbf     users01.dbf

Using "zfs list" we can see that the clone only uses 21K (for 1.3GB of files)

root@sol11:/u01/clone1/test# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
dbpool         1.30G  8.48G    31K  /dbpool
dbpool/data    1.30G  8.48G  1.30G  /u01/data
dbpool/clone1    21K  8.48G  1.30G  /u01/clone1
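One thing to keep in mind for later cleanup: a clone keeps its origin snapshot alive, so the snapshot cannot be destroyed while the clone exists. A teardown sketch (dry run via the echo prefix; remove RUN on a real ZFS host):

```shell
#!/bin/sh
# Dry-run sketch: teardown order matters because the clone depends on the snapshot.
RUN=echo   # remove this prefix to actually run on a ZFS host
$RUN zfs destroy dbpool/clone1          # drop the clone first...
$RUN zfs destroy dbpool/data@backup     # ...then the snapshot it depended on
# Alternatively, "zfs promote dbpool/clone1" reverses the dependency, making the
# clone the parent so the original dataset can be destroyed instead.
```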

Now we can prepare initclone1.ora, create the audit file directory, and recreate the controlfile for the clone database:
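The recrctl_clone1.sql script isn't shown here, but a script of that kind typically looks something like the following. Everything below — the pfile path, log group sizes, and datafile list — is an illustrative guess based on the file listing earlier, not the actual script (dry run via the echo prefix; on a real host, remove RUN so the heredoc is fed to sqlplus):

```shell
#!/bin/sh
# Dry-run sketch of recreating the controlfile for the clone under its new paths.
# All paths and sizes are illustrative; temp01.dbf is a tempfile, so it is added
# back after the database is opened rather than listed under DATAFILE.
RUN=echo   # remove this prefix to actually run sqlplus on a real host
$RUN sqlplus / as sysdba <<'EOF'
STARTUP NOMOUNT PFILE='/u01/app/oracle/dbs/initclone1.ora';
CREATE CONTROLFILE SET DATABASE clone1 RESETLOGS NOARCHIVELOG
  LOGFILE GROUP 1 ('/u01/clone1/test/redo01.log') SIZE 50M REUSE,
          GROUP 2 ('/u01/clone1/test/redo02.log') SIZE 50M REUSE,
          GROUP 3 ('/u01/clone1/test/redo03.log') SIZE 50M REUSE
  DATAFILE '/u01/clone1/test/system01.dbf',
           '/u01/clone1/test/sysaux01.dbf',
           '/u01/clone1/test/undotbs01.dbf',
           '/u01/clone1/test/users01.dbf';
ALTER DATABASE OPEN RESETLOGS;
ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/clone1/test/temp01.dbf' REUSE;
EOF
```

SET DATABASE renames the clone to "clone1" so it can run alongside "test", and RESETLOGS gives it a fresh incarnation.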

oracle@sol11:~$ export ORACLE_SID=clone1
oracle@sol11:~$ sqlplus / as sysdba

SQL*Plus: Release Production on Fri Feb 15 00:35:13 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> @recrctl_clone1.sql

ORACLE instance started.

Total System Global Area 1043886080 bytes

Fixed Size                  2217624 bytes
Variable Size             603982184 bytes
Database Buffers          432013312 bytes
Redo Buffers                5672960 bytes
Control file created.

Database altered.
SQL> select name from v$database;


SQL> select name from v$datafile;



We now have databases test and clone1 running, sharing all unchanged blocks at the filesystem level.

oracle@sol11:~$ ps -ef | grep pmon
  oracle  2800     1   0 00:33:28 ?           0:00 ora_pmon_test
  oracle  2920     1   0 00:35:18 ?           0:00 ora_pmon_clone1

After the "open resetlogs", the clone is using only 120MB (for an additional 1300MB database):

oracle@sol11:~$ zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
dbpool         1.42G  8.36G    31K  /dbpool
dbpool/data    1.30G  8.36G  1.30G  /u01/data
dbpool/clone1   120M  8.36G  1.30G  /u01/clone1

oracle@sol11:~$ df -k /u01/clone1

Filesystem  1024-blocks    Used Available Capacity  Mounted on
dbpool/clone1 10257408     1361036  8770425    14%    /u01/clone1
oracle@sol11:~$ df -k /u01/data
Filesystem   1024-blocks   Used   Available Capacity  Mounted on
dbpool/data  10257408      1360530  8770425    14%    /u01/data
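Scaling this up towards the 40 DEV/TEST copies from the original problem is just a loop over "zfs clone". A minimal sketch (clone names and mount points are illustrative; the echo prefix makes it a dry run — remove RUN on a real ZFS host, and each clone would still need its own pfile and controlfile recreation as above):

```shell
#!/bin/sh
# Dry-run sketch: stamp out N clones from the "backup" snapshot.
# Names and mount points are illustrative.
RUN=echo   # remove this prefix to actually run on a ZFS host
N=3
i=1
while [ "$i" -le "$N" ]; do
  $RUN zfs clone dbpool/data@backup "dbpool/clone$i"
  $RUN zfs set "mountpoint=/u01/clone$i" "dbpool/clone$i"
  i=$((i + 1))
done
```

Each clone starts at a few KB of overhead and grows only with the blocks that database actually changes, which is exactly the ~10% divergence the original sizing exercise was built on.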