10G Bigfile Tablespaces

Oracle 10G introduced a new type of tablespace called the Bigfile tablespace. When I read the 10G New Features manual and found out that you could create a terabyte-sized datafile using the Bigfile option, I immediately wanted to find an environment with enough free disk to create my first terabyte datafile. That was over a year ago, and I still haven't found a server with a free terabyte of disk that I can use for testing. Even though I haven't made it into the terabyte datafile club yet, I thought I would discuss Bigfile tablespaces in this blog.

Administrators now have the option of choosing between Bigfile and Smallfile tablespaces, and an Oracle database environment can contain a combination of the two. Oracle uses the term "Smallfile" to designate the tablespaces we have been using since Scott's tiger was a cub. Let's take a look at Bigfile tablespaces and some of the benefits they offer.

Readers of this entry also get the benefit of the opinions of an Oracle expert I think highly of. Howard J. Rogers, author of the Dizwell Informatics site, has made some additional points that are important to consider when contemplating the use of Bigfile tablespaces, and I have added them to this blog after the fact. You will find his helpful information woven throughout this entry.

Oracle manuals state that a single Bigfile tablespace file can be up to 128 terabytes for a 32K block tablespace and 32 terabytes for an 8K block tablespace. Now THAT is a big data or temp file!

A key point to remember is that a Bigfile tablespace can only contain one file. A traditional Smallfile tablespace can contain up to 1,022 files. With Smallfile tablespaces, administrators have the option of expanding an existing tablespace file or adding a new file to accommodate application data growth.
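As a quick sketch of those two Smallfile options (the tablespace name, file paths, and sizes are illustrative):

-- Grow an existing datafile in a Smallfile tablespace:
ALTER DATABASE DATAFILE '/u01/oradata/orcl10g/users01.dbf' RESIZE 2G;

-- Or add another datafile to the same tablespace:
ALTER TABLESPACE users ADD DATAFILE '/u02/oradata/orcl10g/users02.dbf' SIZE 1G;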

Since Bigfile tablespaces cannot contain multiple files, administrators must ensure that there is enough free disk available to allow the single file to grow to its capacity when mature. As a result, Oracle recommends that Bigfile tablespaces be used with Automatic Storage Management (ASM) or logical volume managers that support RAID and dynamically extensible logical volumes.
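As a sketch of the ASM case (the +DATA disk group name and the sizes are illustrative), placing a Bigfile tablespace on ASM is simply a matter of pointing its single datafile at a disk group:

-- ASM handles file naming and striping; the disk group can grow as disks are added.
CREATE BIGFILE TABLESPACE bigtbs_asm
DATAFILE '+DATA' SIZE 100G
AUTOEXTEND ON NEXT 10G MAXSIZE 500G;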

Rogers adds that ASM is not only recommended, but a requirement because of inode issues: "If you create a truly large bigfile tablespace on a traditional file system, you will suffer from horrendous inode locking issues. That is why ASM is not really an optional extra with these things."

For those of you who would like to know what inodes are, the Wikipedia entry on inodes is a good starting point.

Rogers also adds that if you use a single Bigfile temporary tablespace, you could be negatively impacting database performance for sort operations. He states, "the number of tempfiles is one factor that affects the ability of the optimiser to parallelise a sort."

Rogers' blog entry provides this additional information to reinforce his point: "But parallel operations in 9i derive a lot of their performance benefits from the fact that their slaves can sort to different tempfiles within the one temporary tablespace, thereby distributing I/O (potentially across different hardware devices) and not blocking each other at the O/S-inode level. If, therefore, in 10g we are going to prevent you from having multiple tempfiles, because you've chosen to implement bigfiles, then your parallel query performance is going to be stuffed to buggery, putting it bluntly. Hence the temporary tablespace group: it restores the 'multiple tempfile' capability of parallel query slaves by allowing them to write to multiple different temporary tablespaces. But if you have a single temporary tablespace comprised of multiple tempfiles (because you chose to go smallfile), then you already have precisely that capability, and tablespace groups provide you with nothing extra, except a management headache arising from the additional level of redundant complexity you've added to your database."
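To make Rogers' point concrete, here is a minimal sketch (the tablespace, tempfile, and group names are illustrative) of how a temporary tablespace group restores the multiple-tempfile capability when each temporary tablespace is a Bigfile:

-- Each Bigfile temporary tablespace has exactly one tempfile.
CREATE BIGFILE TEMPORARY TABLESPACE temp1
TEMPFILE '/u01/oradata/orcl10g/temp01.dbf' SIZE 10G
TABLESPACE GROUP temp_grp;

CREATE BIGFILE TEMPORARY TABLESPACE temp2
TEMPFILE '/u02/oradata/orcl10g/temp02.dbf' SIZE 10G
TABLESPACE GROUP temp_grp;

-- Sorts assigned to the group can be spread across both tempfiles.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;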

So before you even consider creating one of these things, make sure that your environment is able to support its expansion. If not, you could have a tablespace that is not able to accommodate the normal growth of the application data it contains. In addition, heed Rogers' recommendation to use ASM for large Bigfile tablespaces.

An interesting benefit of using a Bigfile tablespace is the ability to run file alteration operations without specifying a file name. The attached file shows a Bigfile tablespace creation and alteration.

You'll notice that when I increased the size of the file from 500 MB to 1 GB, and again from 1 GB to 2 GB, I did not use the datafile specification. I only used the tablespace name to alter the size of the datafile. Since a Bigfile tablespace can only contain one datafile, the file specification is not required.
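In case the attachment is unavailable, here is a minimal sketch of the commands it described (the tablespace name and file path are illustrative):

CREATE BIGFILE TABLESPACE bigdemo
DATAFILE '/u01/oradata/orcl10g/bigdemo01.dbf' SIZE 500M;

-- No DATAFILE clause is needed; the tablespace has exactly one file.
ALTER TABLESPACE bigdemo RESIZE 1G;
ALTER TABLESPACE bigdemo RESIZE 2G;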

Bigfile data tablespaces must be created as locally managed with automatic segment space management. These are the default specifications and don't have to be included in the CREATE BIGFILE TABLESPACE DDL statement. Oracle will return an error if either EXTENT MANAGEMENT DICTIONARY or SEGMENT SPACE MANAGEMENT MANUAL is specified. Administrators can, however, create locally managed undo and temporary Bigfile tablespaces, both of which use manual segment space management.
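For example, a locally managed Bigfile undo tablespace can be created like this (the name, path, and size are illustrative):

CREATE BIGFILE UNDO TABLESPACE big_undo
DATAFILE '/u01/oradata/orcl10g/big_undo01.dbf' SIZE 10G;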

The DDL statement below creates a Bigfile tablespace called bigtbs that is 80 GB in size:

CREATE BIGFILE TABLESPACE bigtbs
DATAFILE '/u01/oradata/orcl10g/bigtbs01.dbf' SIZE 80G;

Administrators can express the SIZE value in kilobytes (K), megabytes (M), gigabytes (G), or terabytes (T).

Oracle 10G allows administrators to set the default tablespace type during database creation, using the "SET DEFAULT BIGFILE/SMALLFILE TABLESPACE" specification in the database create statement; SMALLFILE is the default. All tablespaces created without an explicit BIGFILE or SMALLFILE option will default to the database-level specification, which can be overridden by specifying either BIGFILE or SMALLFILE during tablespace creation.
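As a sketch (the database name, paths, and sizes are illustrative, and most of the usual CREATE DATABASE clauses are omitted for brevity):

CREATE DATABASE orcl10g
SET DEFAULT BIGFILE TABLESPACE
DEFAULT TEMPORARY TABLESPACE temp
TEMPFILE '/u01/oradata/orcl10g/temp01.dbf' SIZE 1G
UNDO TABLESPACE undotbs1
DATAFILE '/u01/oradata/orcl10g/undotbs01.dbf' SIZE 1G;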

DBA_TABLESPACES, USER_TABLESPACES, and V$TABLESPACE contain a new column called BIGFILE that allows administrators to determine whether a tablespace is a Smallfile or Bigfile tablespace.
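A quick way to check, as a sketch:

-- The BIGFILE column contains YES or NO for each tablespace.
SELECT tablespace_name, bigfile
FROM dba_tablespaces;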

The last thing I would like you to consider when deciding whether you should be using Bigfile tablespaces is the time required to recover one. The larger the file, the longer it will take to recover. The smaller the file, the faster the recovery and the fewer tapes you have to use during that recovery.

The fewer tapes I have to rely upon during a recovery, the better. I personally have a tape phobia. I have had my fair share of recoveries go haywire because of bad blocks on a tape (must be my magnetic personality). I especially liked the advice our O/S admin gave my team during our last recovery that was affected by tape I/O problems: "tell the operator to yank it out of the tape reader, shake it around a bit and put it back in. He'll know what to do." It worked. It's hard to believe we rely upon these things for data recovery. Disk backups, here I come.

Rogers also adds that Oracle's recovery utilities provide additional features to reduce the time to recover: "Sure, backing up and restoring a 128TB datafile is not something to look forward to, but they invented block level media recovery in 9i for a reason (and bigfiles are it). Similarly, they've invented the ability to use incremental backups to 'refresh' rarely-taken image copies in 10g for a reason, and 128TB datafiles are similarly it. By all means, 'handle with care', but when you do, use the entire 10g technology stack to address some of the issues they bring."
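As a sketch of the two features Rogers mentions, run from the RMAN command line (the datafile number, block numbers, and tag are illustrative):

# Block level media recovery: repair a few corrupt blocks
# without restoring the entire datafile.
BLOCKRECOVER DATAFILE 7 BLOCK 1234, 5678;

# Roll a rarely-taken image copy forward using incremental backups.
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'bigfile_copy' DATABASE;
RECOVER COPY OF DATABASE WITH TAG 'bigfile_copy';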

Summary
Bigfile tablespaces will have a future in many Oracle environments, but they should be implemented only with careful forethought. If any of you have a spooled output showing a terabyte tablespace create, feel free to send it to me! Additional comments are always welcome!

Thanks for Reading,

Chris Foot (with much help from Howard J. Rogers)
Oracle Ace


Monday, July 17, 2006  |  Permalink |  Comments (3)

Three things...

Posted by dizwell at 2006-07-17 01:15 PM
One, don't forget your inodes. If you create a truly large bigfile tablespace on a traditional file system, you will suffer from horrendous inode locking issues. That is why ASM is not really an optional extra with these things.

Two, don't forget that if TEMP is a bigfile tablespace, you are in deep trouble, as the number of tempfiles is one factor that affects the ability of the optimiser to parallelise a sort. If TEMP only has one tempfile, then you have serial sorts. That is why temporary tablespace groups were invented and why they too are not really an optional extra if you decide to use bigfile technology.

Third, whilst your comments about backup issues are certainly valid, they could perhaps mis-direct. Sure, backing up and restoring a 128TB datafile is not something to look forward to, but they invented block level media recovery in 9i for a reason (and bigfiles are it). Similarly, they've invented the ability to use incremental backups to 'refresh' rarely-taken image copies in 10g for a reason, and 128TB datafiles are similarly it.

By all means, 'handle with care', but when you do, use the entire 10g technology stack to address some of the issues they bring.

All very good points! Blog updated.

Posted by cfoot at 2006-07-17 05:45 PM
Most of my experiences with large databases are with traditional Smallfile tablespaces. Our warehouses were entirely wrapped around dates, which certainly allows for ease of partitioning. Although we had terabyte tablespaces, the daily date range partitioning on all tables kept tablespace file sizes within reasonable limits. We certainly had a lot of files, but it worked well. The performance we received using heavy parallelism, many CPUs and EMC disk was excellent.

If you don't mind, I have edited my blog and referenced your blog where you make these points. I would appreciate it if you would take another peek at my blog entry.

I especially liked the points in your blog about how temporary tablespace groups improve the performance of applications that connect all of their users to the database with a single account. I would highly recommend that readers follow the link I have provided in my blog to Rogers' discussion on Bigfile tablespaces and temporary tablespace groups.

I think it is important to reference your points directly in the blog itself. Sometimes comments get bypassed. If you have any concerns about suddenly becoming a partner in this blog entry, please send me an e-mail at cfoot@remotedbaexperts.com!

Bigfile inode scare

Posted by kevinclosson at 2006-10-30 03:18 PM

Great topic, as I feel bigfile tablespaces are a great feature. I have to point out that the bit about inode locking is a complete red herring. I quote:

"If you create a truly large bigfile tablespace on a traditional file system, you will suffer from horrendous inode locking issues."

Folks, inode locks are only an issue when file metadata changes. If you choose a filesystem that supports direct I/O, one benefit you get is elimination of atime/mtime updates. With mtime/atime updates removed from the direct I/O codepath, the only metadata changes left are structural changes to the file--not the file contents. That is, if you create, remove, truncate or extend a file (even with direct I/O), then, yes of course, inode locks must be checked out. In the case of a cluster filesystem, that extends to a cluster-inode lock. Now, if your cluster filesystem is some rudimentary central lock approach, as opposed to a symmetric DLM approach, then there are issues at that level--but only when the file changes (not the file contents).

The point being that unless you are extending a BIGFILE tablespace on a freakishly frequent basis, the inode thing is a red herring.

I have a ton of information about things like this on my blog. I invite you to visit: kevinclosson.wordpress.com





