DBCC CHECKALLOC - Check consistency of disk allocation.
DBCC CHECKCATALOG - Check catalog consistency
DBCC CHECKCONSTRAINTS - Check integrity of table constraints.
DBCC CHECKDB - Check allocation, and integrity of all objects.
DBCC CHECKFILEGROUP - Check all tables and indexed views in a filegroup.
DBCC CHECKIDENT - Check identity value for a table.
DBCC CHECKTABLE - Check integrity of a table or indexed view.
DBCC CLEANTABLE - Reclaim space from dropped variable-length columns.
DBCC dllname - Unload a DLL from memory.
DBCC DROPCLEANBUFFERS - Remove all clean buffers from the buffer pool.
DBCC FREE... CACHE - Remove items from cache.
DBCC HELP - Help for DBCC commands.
DBCC INPUTBUFFER - Display last statement sent from a client to a database instance.
DBCC OPENTRAN - Display information about the oldest active transaction.
DBCC OUTPUTBUFFER - Display the current output buffer returned to a client.
DBCC PROCCACHE - Display information about the procedure cache
DBCC SHOW_STATISTICS - Display the current distribution statistics
DBCC SHRINKDATABASE - Shrink the size of the database data and log files.
DBCC SHRINKFILE - Shrink or empty a database data or log file.
DBCC SQLPERF - Display transaction-log space statistics. Reset wait and latch statistics.
DBCC TRACE... - Enable or Disable trace flags
DBCC UPDATEUSAGE - Report and correct page and row count inaccuracies in catalog views
DBCC USEROPTIONS - Return the SET options currently active
DBCC deprecated commands
DBCC Undocumented commands
1. DBCC CHECKALLOC - Check the consistency of disk space allocation.
Syntax
DBCC CHECKALLOC [ ( 'database' | database_id | 0
[ , NOINDEX | , { REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD } ]
[WITH
{ [ ALL_ERRORMSGS ]
[ , NO_INFOMSGS ]
[ , TABLOCK ]
[ , ESTIMATEONLY ]
}
]
]
Key:
REPAIR_FAST | REPAIR_REBUILD | NOINDEX - deprecated options
REPAIR_ALLOW_DATA_LOSS - Use only as a last resort
- back up the database before you run this option. To find the repair level to use, run DBCC CHECKDB without a repair option. The best and easiest way to repair errors is to restore from a backup.
Example
-- Check disk space allocation for the current database.
DBCC CHECKALLOC;
GO
"Millions of individuals making their own decisions in the market-place will always allocate resources better than any centralized government planning process" - Ronald W. Reagan
Related
DBCC CHECKDB - Check allocation, and integrity of all objects.
Equivalent Oracle commands:
ALTER TABLE MODIFY CONSTRAINT VALIDATE
DBMS_REPAIR
2. DBCC CHECKCATALOG - Check for catalog consistency
Syntax
DBCC CHECKCATALOG
[('database_name' | database_id | 0 )]
[WITH NO_INFOMSGS]
Key:
WITH NO_INFOMSGS - Suppresses all informational messages.
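A minimal usage sketch of the syntax above (run against the current database):

```sql
-- Check catalog consistency, suppressing informational messages
DBCC CHECKCATALOG WITH NO_INFOMSGS;
GO
```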
"The quality of a department is inversely proportional to the number of courses it lists in its catalogue" - Hildebrand's Law
Related
DBCC CHECKTABLE - Check integrity of a table or indexed view.
3. DBCC CHECKCONSTRAINTS - Check the integrity of table constraints.
Syntax
DBCC CHECKCONSTRAINTS
[('table' | table_id | 'constraint' | constraint_id) ]
[WITH
[ { ALL_CONSTRAINTS | ALL_ERRORMSGS } ]
[ , ] [ NO_INFOMSGS ]
]
Key:
ALL_CONSTRAINTS - Check disabled constraints in addition to enabled constraints
has no effect when a constraint name is specified.
ALL_ERRORMSGS - Return all rows that violate constraints in the table being checked.
default = first 200 rows.
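A usage sketch of the options above ('MySchema.MyTable' is a placeholder name):

```sql
-- Check disabled as well as enabled constraints on one table
DBCC CHECKCONSTRAINTS ('MySchema.MyTable') WITH ALL_CONSTRAINTS;
GO
```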
"The practice of both mathematics and art requires a blend of discipline and vision--a delicate balancing of constraint and freedom" - Ivars Peterson
Related
DBCC CHECKTABLE - Check integrity of a table or indexed view.
Equivalent Oracle commands:
ALTER TABLE MODIFY CONSTRAINT VALIDATE
4. DBCC CHECKDB - Check the allocation, and integrity of all objects in a database.
Syntax
DBCC CHECKDB [ ( 'database' | database_id | 0
[ , NOINDEX | , { REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD } ] )
[WITH
{ [ ALL_ERRORMSGS ]
[ , NO_INFOMSGS ]
[ , TABLOCK ]
[ , ESTIMATEONLY ]
[ , { PHYSICAL_ONLY | DATA_PURITY } ]
}
]
]
Key:
NOINDEX - Skip intensive checks of nonclustered indexes for user tables
REPAIR_ALLOW_DATA_LOSS - Try to repair all reported errors.
REPAIR_REBUILD - Perform all repairs that can be performed without risk of data loss.
ALL_ERRORMSGS - Return all reported errors per object, default = first 200 errors.
TABLOCK - Obtain locks instead of using an internal database snapshot.
this limits the checks that are performed.
ESTIMATEONLY - Display the estimated amount of tempdb space that would be required.
PHYSICAL_ONLY - Limits the checking to the integrity of the physical structure
DATA_PURITY - Check the database for column values that are not valid or out-of-range.
Example
-- Check the current database.
DBCC CHECKDB;
GO
5. DBCC CHECKFILEGROUP - Check the allocation and structural integrity of all tables and indexed views in a filegroup.
Syntax
DBCC CHECKFILEGROUP
[( 'filegroup' | filegroup_id | 0
[, NOINDEX ]
)]
[WITH
[ {ALL_ERRORMSGS | NO_INFOMSGS } ]
[, TABLOCK ] [ , ESTIMATEONLY ] [ , PHYSICAL_ONLY ]
]
Key:
filegroup_name - The name of the filegroup to be checked.
default (or if 0 is specified) = the primary filegroup.
NOINDEX - Skip intensive checks of nonclustered indexes.
ALL_ERRORMSGS - Return all reported errors per object, default = first 200 errors.
TABLOCK - Obtain locks instead of using an internal database snapshot.
ESTIMATEONLY - Display the estimated amount of tempdb space that would be required.
PHYSICAL_ONLY - Limit checking to the integrity of the physical structure of the page,
record headers and the physical structure of B-trees.
Examples
-- Check the primary filegroup in 'MyDatabase'
USE MyDatabase;
GO
DBCC CHECKFILEGROUP;
GO
"One of the most important ways to manifest integrity is to be loyal to those who are not present. In doing so, we build the trust of those who
are present" - Stephen Covey
Related
DBCC CHECKTABLE - Check integrity of a table or indexed view.
6. DBCC CHECKIDENT - Check and/or reseed the current identity value for a table.
Syntax
DBCC CHECKIDENT ( 'table' [ , { NORESEED | { RESEED [ , new_reseed_value ] } } ] ) [WITH NO_INFOMSGS]
Key:
NORESEED - The current identity value should not be changed.
RESEED - Change the identity value.
new_reseed_value - The new seed value to be used for the identity column.
WITH NO_INFOMSGS - Suppresses all information messages.
Example
-- Reset the current identity value
USE MyDatabase;
GO
DBCC CHECKIDENT ('MySchema.MyTable', RESEED, 5000);
GO
I finally got it all together and now I forgot where I put it!
Related commands
DBCC CHECKCONSTRAINTS - Check integrity of table constraints.
Equivalent Oracle command:
SELECT Sequence_for_my_Table.currval into CurrIdentity from dual;
7. DBCC CHECKTABLE - Check the integrity of a table or indexed view.
Syntax
DBCC CHECKTABLE ( 'table' | 'view'
[ , { NOINDEX | index_id }
| , { REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD } ] )
[WITH
{ [ ALL_ERRORMSGS ]
[ , EXTENDED_LOGICAL_CHECKS ]
[ , NO_INFOMSGS ]
[ , TABLOCK ]
[ , ESTIMATEONLY ]
[ , { PHYSICAL_ONLY | DATA_PURITY } ]
}
]
Key:
NOINDEX - Skip intensive checks of nonclustered indexes.
REPAIR_ALLOW_DATA_LOSS - Try to repair all reported errors.
REPAIR_REBUILD - Perform all repairs that can be performed without risk of data loss.
REPAIR_FAST - deprecated option
ALL_ERRORMSGS - Return all reported errors per object, default = first 200 errors.
TABLOCK - Obtain locks instead of using an internal database snapshot.
ESTIMATEONLY - Display the estimated amount of tempdb space that would be required.
PHYSICAL_ONLY - Limits the checking to the integrity of the physical structure
EXTENDED_LOGICAL_CHECKS - If the compatibility level is 100 (SQL Server 2008) or higher,
perform logical consistency checks on indexed views, XML indexes,
and spatial indexes.
DATA_PURITY - Check the table for column values that are not valid or out-of-range.
Use the DATA_PURITY option for databases upgraded from versions of SQL Server earlier than 2005.
To repair errors, restore from a backup; use the REPAIR options only as a last resort.
Examples
-- Check the integrity of MyTable in 'MyDatabase'
USE MyDatabase;
GO
DBCC CHECKTABLE ('MySchema.MyTable');
GO
"We're going to rebuild and rebuild stronger" - Mayor Rudolph Giuliani
Related
DBCC CHECKCONSTRAINTS - Check integrity of table constraints.
8. DBCC CLEANTABLE
Reclaim space from dropped variable-length columns in a table or indexed view.
Syntax
DBCC CLEANTABLE
( { 'database' | database_id | 0 }
, { 'table' | table_id | 'view' | view_id }
[ , batch_size]
) [WITH NO_INFOMSGS]
Key:
batch_size - The number of rows to process per transaction.
default (or if 0 is specified) = whole table
NO_INFOMSGS - Suppress all information messages.
If 0 is specified, the current database will be used.
Example
DBCC CLEANTABLE ('MyDatabase','MySchema.MyTable', 0) WITH NO_INFOMSGS;
GO
"It's hard to be funny when you have to be clean" - Mae West.
Equivalent Oracle command:
ALTER TABLESPACE COALESCE
9. DBCC dllname
Unload a DLL from memory.
Syntax
DBCC dllname (FREE) [ WITH NO_INFOMSGS ]
Key:
dllname - Name of the DLL to release from memory.
WITH NO_INFOMSGS - Suppress all information messages.
When an extended stored procedure is executed, the DLL will remain loaded until the server is shut down (or DBCC dllname is used to unload it)
Example
DBCC xp_my_stored_proc (FREE)
"...there is nothing that so much contributes to a survival of the trials and sufferings of the day as a sense of humor. It is like the buffers in the solid train, like the air cushion of a modern field gun. It saves the jolt; it takes up the recoil" - William Howard Taft
Equivalent Oracle command:
Set (for all packages) with the server parameters:
SHARED_POOL_SIZE / LARGE_POOL_SIZE / JAVA_MAX_SESSIONSPACE_SIZE
10. DBCC DROPCLEANBUFFERS
Remove all clean buffers from the buffer pool.
Syntax
DBCC DROPCLEANBUFFERS [WITH NO_INFOMSGS]
Key:
WITH NO_INFOMSGS - Suppress all information messages.
This command is useful to test queries with a cold buffer cache without shutting down and restarting the server.
To produce a 'cold' buffer cache with all dirty pages written to disk, first use CHECKPOINT.
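The cold-cache test sequence described above looks like this:

```sql
-- Write all dirty pages to disk first, then drop the clean buffers
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
```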
# Why can't I ease your doubtful mind, and melt your cold, cold heart # - Hank Williams
Related commands:
CHECKPOINT
DBCC FREE... CACHE
Equivalent Oracle command:
ALTER SYSTEM FLUSH SHARED_POOL
11. DBCC FREE... CACHE
DBCC FREEPROCCACHE - Remove all elements from the procedure cache.
DBCC FREESESSIONCACHE - Flush the distributed query connection cache.
DBCC FREESYSTEMCACHE - Release all unused cache entries from all caches.
Syntax
DBCC FREEPROCCACHE [WITH NO_INFOMSGS]
DBCC FREESESSIONCACHE [WITH NO_INFOMSGS]
DBCC FREESYSTEMCACHE
('ALL')
[WITH [MARK_IN_USE_FOR_REMOVAL] , [NO_INFOMSGS] ]
Key:
NO_INFOMSGS - Suppress all information messages
ALL - All supported caches
MARK_IN_USE_FOR_REMOVAL - Free up currently used entries asap (asynchronous)
MARK_IN_USE_FOR_REMOVAL will not prevent new entries being created in the cache.
Requires ALTER SERVER STATE permission on the server.
Examples
USE MyDatabase;
GO
DBCC FREESESSIONCACHE WITH NO_INFOMSGS;
GO
DBCC FREEPROCCACHE;
"It's hard to be funny when you have to be clean" - Mae West.
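The DBCC FREESYSTEMCACHE form above might be used like this (a sketch; requires ALTER SERVER STATE permission):

```sql
-- Release unused entries from all caches, freeing in-use entries asynchronously
DBCC FREESYSTEMCACHE ('ALL') WITH MARK_IN_USE_FOR_REMOVAL, NO_INFOMSGS;
GO
```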
Related commands:
DBCC DROPCLEANBUFFERS
Equivalent Oracle command:
ALTER SYSTEM FLUSH SHARED_POOL
12. DBCC HELP
Help and syntax information for DBCC commands.
Syntax
DBCC HELP ('dbcc_command') [WITH NO_INFOMSGS ]
DBCC HELP (@dbcc_command_var) [WITH NO_INFOMSGS ]
DBCC HELP ('?') [WITH NO_INFOMSGS ]
Key:
WITH NO_INFOMSGS - Suppress all information messages (severity levels 0-10)
dbcc_command - The DBCC 'xyz' command
Examples
DBCC HELP ('?')
GO
DBCC HELP ('USEROPTIONS')
GO
DECLARE @help_var sysname
SET @help_var = 'CHECKALLOC'
DBCC HELP (@help_var)
GO
"It is our special duty, that if anyone needs our help, we should give him such help to the utmost of our power" - Cicero
Equivalent Oracle command:
HELP topic - In SQL*Plus
13. DBCC INPUTBUFFER / OUTPUTBUFFER
INPUTBUFFER displays the last statement sent from a client to a database instance; OUTPUTBUFFER displays the current output buffer returned to that client.
Syntax
DBCC INPUTBUFFER (session_id [, request_id ]) [WITH NO_INFOMSGS ]
DBCC OUTPUTBUFFER (session_id [, request_id ]) [WITH NO_INFOMSGS ]
Key:
session_id - The session ID
request_id - A specific request (batch) within the session.
NO_INFOMSGS - Suppress all information messages (severity 0-10)
To find the request_ids for a given session id (@@spid = current session):
SELECT request_id
FROM sys.dm_exec_requests
WHERE session_id = @@spid
Example
-- Display session # 52
DBCC INPUTBUFFER (52);
DBCC OUTPUTBUFFER (52);
"A lawyer who represents himself has a fool for a client." - proverb
Equivalent Oracle command:
Select from v$SESSION
14. DBCC OPENTRAN
Display information about the oldest active transaction and the oldest replicated transactions.
Syntax
DBCC OPENTRAN [( 'database' | database_id | 0 )] [WITH TABLERESULTS [, NO_INFOMSGS] ]
Key:
TABLERESULTS - Output in a tabular format that can be loaded into a table.
NO_INFOMSGS - Suppress all information messages (severity 0-10)
Example
BEGIN TRAN
...Insert/Update/Delete
GO
DBCC OPENTRAN;
ROLLBACK TRAN;
"If love does not know how to give and take without restrictions, it is not love, but a transaction that never fails to lay stress on a plus and a minus" - Emma Goldman
Equivalent Oracle command:
Select from V$TRANSACTION
15. DBCC OUTPUTBUFFER - covered together with DBCC INPUTBUFFER in section 13 above.
16. DBCC PROCCACHE
Display information about the procedure cache (in table format)
Syntax
DBCC PROCCACHE [WITH NO_INFOMSGS]
Key:
NO_INFOMSGS - Suppress all information messages (severity 0-10)
All output figures are totals:
proc cache size
Entries in the procedure cache.
proc cache used
Entries that are currently being used.
num proc buffs
Pages used by all entries in the procedure cache.
num proc buffs used
Pages used by all entries that are currently being used.
proc cache active / num proc buffs active
For backward compatibility only.
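A minimal sketch; the columns described above come back as a single row of totals:

```sql
-- Display procedure cache totals in table format
DBCC PROCCACHE WITH NO_INFOMSGS;
GO
```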
“Misrepresentation and deception are standard operating procedure for this administration, which - to an extent never before seen in U.S. history - systematically and brazenly distorts the facts.” - Paul Krugman (NY Times)
Equivalent Oracle command:
Select from V$SYSSTAT
17. DBCC SHOW_STATISTICS
Display the current distribution statistics for the specified target on the specified table.
Syntax
DBCC SHOW_STATISTICS ('table' | 'view' , target)
[WITH [NO_INFOMSGS] option [, option...] ]
Options:
STAT_HEADER
DENSITY_VECTOR
HISTOGRAM
Key:
table / view - The table or indexed view for which to display stats
target - The column, index or statistics for which to display stats.
NO_INFOMSGS - Suppress all information messages (severity 0-10)
Specifying any of the 3 options will return only those statistics.
Example
DBCC SHOW_STATISTICS ('MySchema.MyTable', MyIndex);
GO
"A lawyer who represents himself has a fool for a client." - proverb
Related commands:
UPDATE STATISTICS
sp_createstats
sp_updatestats
Equivalent Oracle commands:
INDEX_STATS
DBA_PART_COL_STATISTICS
DBA_TAB_COL_STATISTICS
18. DBCC SHRINKDATABASE
Shrink the size of the database data and log files.
Syntax
DBCC SHRINKDATABASE ( 'database' | database_id | 0 [, option [, option]] ) [WITH NO_INFOMSGS]
Options:
target_percent
NOTRUNCATE
TRUNCATEONLY
Key:
0 - Shrink the current database
target_percent - Percentage of free space to remain in the database file
NOTRUNCATE - Free space at the end of the data file is not returned to the OS
(pages are still moved)
TRUNCATEONLY - Release free space at the end of the data file to the OS
(do not move pages)
NO_INFOMSGS - Suppress all information messages (severity 0-10)
Only one of the truncate options can be specified - they do not apply to log files.
Example
DBCC SHRINKDATABASE (MyDatabase);
"Men shrink less from offending one who inspires love than one who inspires fear" - Niccolo Machiavelli
Related commands:
ALTER DATABASE
DBCC SHRINKFILE
FILE_ID
sys.database_files
Equivalent Oracle command:
ALTER DATABASE... DATAFILE...
19. DBCC SHRINKFILE
Shrink the size of the current database data / log file or empty a file by moving the data.
Syntax
DBCC SHRINKFILE ( file , EMPTYFILE ) [WITH NO_INFOMSGS]
DBCC SHRINKFILE ( file , target_size
[, {NOTRUNCATE | TRUNCATEONLY }] ) [WITH NO_INFOMSGS ]
Key:
file - Logical file name or file_id
EMPTYFILE - Migrate data to other files in the same filegroup.
The file can be removed with ALTER DATABASE.
target_size - The size for the file in megabytes (an integer).
default = the size specified when the file was created, or
the last size set with ALTER DATABASE.
NOTRUNCATE - Free space at the end of the data file is not returned to the OS
(pages are still moved)
TRUNCATEONLY - Release free space at the end of the data file to the OS
(do not move pages)
NO_INFOMSGS - Suppress all information messages (severity 0-10)
You can shrink a transaction log file while the system is in use (DML commands are also being executed), however this will only affect the inactive portion of the transaction log file.
Discover the file_ID for each file with the SQL: SELECT file_id, name FROM sys.database_files;
After using TRUNCATE_ONLY you must perform a full backup
Examples
Shrink a datafile to 64 Mb:
DBCC SHRINKFILE (MyDataFile01, 64);
Shrink a Log file to 8 GiB (8192 MiB):
USE MyDatabase;
GO
DBCC SHRINKFILE (MyDatabase_Log, 8192)
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY
DBCC SHRINKFILE (MyDatabase_Log, 8192)
Afterwards, perform a full backup of the database.
To make the file as small as possible, specify 1 (for 1 MB) or leave out target_size completely. Be aware that this will slow the system a little, as the log file will have to grow again as soon as another transaction is started.
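The EMPTYFILE form can be sketched like this (file and database names are placeholders); it migrates the data to the other files in the filegroup so the emptied file can then be dropped:

```sql
-- Move all data off the file, then remove it from the database
DBCC SHRINKFILE (MyDataFile02, EMPTYFILE);
GO
ALTER DATABASE MyDatabase REMOVE FILE MyDataFile02;
GO
```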
In SQL Server 2008 the procedure is slightly different: first set the database to SIMPLE recovery, then shrink the file, then restore FULL recovery:
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
go
DBCC SHRINKFILE(MyDatabase_log)
go
EXEC sp_helpdb MyDatabase
go
ALTER DATABASE MyDatabase SET RECOVERY FULL
go
"Men shrink less from offending one who inspires love than one who inspires fear" - Niccolo Machiavelli
Related:
Why you want to be restrictive with shrink of database files - karaszi.com
DBCC SHRINKDATABASE
FILE_ID
sys.database_files
Equivalent Oracle command:
ALTER DATABASE DATAFILE '/oradata/ss64.dbf' RESIZE 64M;
20. DBCC SQLPERF
Display transaction-log space statistics. Reset wait and latch statistics.
Syntax
DBCC SQLPERF ( Option ) [WITH NO_INFOMSGS ]
Options:
LOGSPACE
('sys.dm_os_latch_stats', CLEAR)
('sys.dm_os_wait_stats', CLEAR)
Key:
LOGSPACE - Monitor log space, indicates when to back up or truncate the tx log.
dm_os_latch_stats - Reset the latch statistics.
dm_os_wait_stats - Reset the wait statistics.
NO_INFOMSGS - Suppress all information messages (severity 0-10)
Example
DBCC SQLPERF (LOGSPACE)
GO
"If you happen to be one of the fretful minority... never force an idea; you'll abort it if you do. Be patient and you'll give birth to it when the time is ripe. Learn to wait" - Robert A. Heinlein
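The reset forms use the quoted DMV names shown in the options list, for example:

```sql
-- Reset the accumulated wait statistics
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);
GO
```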
Related commands:
sp_spaceused
sys.dm_os_latch_stats
sys.dm_os_wait_stats
Equivalent Oracle commands:
Oracle deals with transaction logs in a *totally* different way
See V$LOG and V$LOGFILE
21. DBCC TRACEON - Enable trace flags.
DBCC TRACEOFF - Disable trace flags.
DBCC TRACESTATUS - Display the status of trace flags.
Syntax
DBCC TRACEON ( trace# [ ,...n ][ , -1 ] ) [WITH NO_INFOMSGS]
DBCC TRACEOFF ( trace# [ ,...n ] [ , -1 ] ) [WITH NO_INFOMSGS]
DBCC TRACESTATUS ( [ [trace# [,...n ] ] [,] [-1] ] ) [WITH NO_INFOMSGS]
Key:
trace# - Number of the trace flag(s)
-1 - Display the status of trace flags that are enabled globally.
NO_INFOMSGS - Suppress all information messages.
By default all trace flags that are enabled for the session are displayed.
Examples
-- Turn flag on
DBCC TRACEON (3205)
GO
-- Turn flag on globally
DBCC TRACEON (2528, -1)
GO
-- Turn flag off
DBCC TRACEOFF (3205);
GO
-- Show flag status
DBCC TRACESTATUS (2528, 3205)
GO
"Every branch of human knowledge, if traced up to its source and final principles vanishes into a mystery" - Arthur Machen
Related commands:
Trace Flags
Equivalent Oracle command:
Server Parameters Configuration settings
22. DBCC UPDATEUSAGE
Report and correct page and row count inaccuracies in catalog views; use this after upgrading a database to SQL Server 2005.
Syntax
DBCC UPDATEUSAGE ( database
[, {table | view} [,{index} ] ] )
[WITH [ NO_INFOMSGS ] [ , ] [COUNT_ROWS ] ]
Key:
database - 'database_name' or database_id or 0 (current db)
NO_INFOMSGS - Suppress all information messages.
COUNT_ROWS - Update the row count column.
The table/view and indexes may be specified by 'name' (in single quotes) or ID.
If 0 is specified, the current database will be used.
Example
DBCC UPDATEUSAGE ('MyDatabase','MySchema.MyTable');
GO
"No matter how cynical you get, it is impossible to keep up" - Lily Tomlin
Related commands:
sp_spaceused
sys.sysindexes
UPDATE STATISTICS
Equivalent Oracle command:
DBA_TABLES - Rows in table
DBA_TAB_COL_STATISTICS - Other column stats
23. DBCC USEROPTIONS
Return the SET options currently active (set for the current connection.)
Syntax
DBCC USEROPTIONS [WITH NO_INFOMSGS]
Options:
NO_INFOMSGS - Suppress all information messages.
Example
DBCC USEROPTIONS
"Eventually, all things merge into one, and a river runs through it. The river was cut by the world's great flood and runs over rocks from the basement of time" - Norman Maclean, A River Runs Through It
Related commands:
SET
SET TRANSACTION ISOLATION LEVEL
24. DBCC - Deprecated commands
DBCC concurrencyviolation (reset | display | startlog | stoplog)
DBCC DBREINDEX - use ALTER INDEX instead
DBCC DBREPAIR - use DROP DATABASE instead
DBCC INDEXDEFRAG
DBCC PINTABLE
DBCC SHOWCONTIG
DBCC UNPINTABLE
25. DBCC - Undocumented commands
These commands may affect system performance and/or force table-level locks.
There is no guarantee these commands will remain available in any future release of SQL Server.
DBCC activecursors [(spid)]
DBCC addextendedproc (function_name, dll_name)
DBCC addinstance (objectname, instancename)
DBCC adduserobject (name)
DBCC auditevent (eventclass, eventsubclass, success, loginname, rolename, dbusername, loginid)
DBCC autopilot (typeid, dbid, tabid, indid, pages [,flag])
DBCC balancefactor (variance_percent)
DBCC bufcount [(number_of_buffers)]
DBCC buffer ( {'dbname' | dbid} [, objid [, number [, printopt={0|1|2} ] [, dirty | io | kept | rlock | ioerr | hashed ]]])
DBCC bytes ( startaddress, length )
DBCC cachestats
DBCC callfulltext
DBCC checkdbts (dbid, newTimestamp)]
DBCC checkprimaryfile ( {'FileName'} [, opt={0|1|2|3} ])
DBCC cacheprofile [( {actionid} [, bucketid])
DBCC clearspacecaches ('database_name'|database_id, 'table_name'|table_id, 'index_name'|index_id)
DBCC collectstats (on | off)
DBCC config
DBCC cursorstats ([spid [,'clear']])
DBCC dbinfo [('dbname')]
DBCC dbrecover (dbname [, IgnoreErrors])
DBCC dbreindexall (db_name/db_id, type_bitmap)
DBCC dbrepair ('dbname', DROPDB [, NOINIT])
DBCC dbtable [({'dbname' | dbid})]
DBCC debugbreak
DBCC deleteinstance (objectname, instancename)
DBCC des [( {'dbname' | dbid} [, {'objname' | objid} ])]
DBCC detachdb [( 'dbname' )]
DBCC dropextendedproc (function_name)
DBCC dropuserobject ('object_name')
DBCC dumptrigger ({'BREAK', {0 | 1}} | 'DISPLAY' | {'SET', exception_number} | {'CLEAR', exception_number})
DBCC errorlog
DBCC extentinfo [({'database_name'| dbid | 0} [,{'table_name' | table_id} [, {'index_name' | index_id | -1}]])]
DBCC fileheader [( {'dbname' | dbid} [, fileid])
DBCC fixallocation [({'ADD' | 'REMOVE'}, {'PAGE' | 'SINGLEPAGE' | 'EXTENT' | 'MIXEDEXTENT'} , filenum, pagenum [, objectid, indid])
DBCC flush ('data' | 'log', dbid)
DBCC flushprocindb (database)
DBCC freeze_io (db)
DBCC getvalue (name)
DBCC icecapquery ('dbname', stored_proc_name [, #_times_to_icecap (-1 infinite, 0 turns off)])
Use 'dbcc icecapquery (printlist)' to see list of SP's to profile.
Use 'dbcc icecapquery (icecapall)' to profile all SP's.
DBCC incrementinstance (objectname, countername, instancename, value)
DBCC ind ( { 'dbname' | dbid }, { 'objname' | objid }, { indid | 0 | -1 | -2 } )
DBCC invalidate_textptr (textptr)
DBCC invalidate_textptr_objid (objid)
DBCC iotrace ( { 'dbname' | dbid | 0 | -1 } , { fileid | 0 }, bufsize, [ { numIOs | -1 } [, { timeout (sec) | -1 } [, printopt={ 0 | 1 }]]] )
DBCC latch ( address [, 'owners'] [, 'stackdumps'])
DBCC lock ([{'DUMPTABLE' | 'DUMPSTATS' | 'RESETSTATS' | 'HASH'}] | [{'STALLREPORTTHESHOLD', stallthreshold}])
DBCC lockobjectschema ('object_name')
DBCC log ([dbid[,{0|1|2|3|4}[,['lsn','[0x]x:y:z']|['numrecs',num]|['xdesid','x:y'] | ['extent','x:y']|['pageid','x:y']|['objid',{x,'y'}]|['logrecs', {'lop'|op}...]|['output',x,['filename','x']]...]]])
DBCC loginfo [({'database_name' | dbid})]
DBCC matview ({'PERSIST' | 'ENDPERSIST' | 'FREE' | 'USE' | 'ENDUSE'})
DBCC memobjlist [(memory object)]
DBCC memorymap
DBCC memorystatus
DBCC memospy
DBCC memusage ([IDS | NAMES], [Number of rows to output])
DBCC monitorevents ('sink' [, 'filter-expression'])
DBCC newalloc - please use checkalloc instead
DBCC no_textptr (table_id , max_inline)
DBCC page ( {'dbname' | dbid}, filenum, pagenum [, printopt={0|1|2|3} ][, cache={0|1} ])
DBCC perflog
DBCC perfmon
DBCC pglinkage (dbid, startfile, startpg, number, printopt={0|1|2} , targetfile, targetpg, order={1|0})
DBCC procbuf [({'dbname' | dbid}[, {'objname' | objid} [, nbufs[, printopt = { 0 | 1 } ]]] )]
DBCC prtipage (dbid, objid, indexid [, [{{level, 0} | {filenum, pagenum}}] [,printopt]])
DBCC pss [(uid[, spid[, printopt = { 1 | 0 }]] )]
DBCC readpage ({ dbid, 'dbname' }, fileid, pageid , formatstr [, printopt = { 0 | 1} ])
DBCC rebuild_log (dbname [, filename])
DBCC renamecolumn (object_name, old_name, new_name)
DBCC resource
DBCC row_lock (dbid, tableid, set) - Not Needed
DBCC ruleoff ({ rulenum | rulestring } [, { rulenum | rulestring } ]+)
DBCC ruleon ( rulenum | rulestring } [, { rulenum | rulestring } ]+)
DBCC setcpuweight (weight)
DBCC setinstance (objectname, countername, instancename, value)
DBCC setioweight (weight)
DBCC showdbaffinity
DBCC showfilestats [(file_num)]
DBCC showoffrules
DBCC showonrules
DBCC showtableaffinity (table)
DBCC showtext ('dbname', {textpointer | {fileid, pageid, slotid[,option]}})
DBCC showweights
DBCC sqlmgrstats
DBCC stackdump [( {uid[, spid[, ecid]} | {threadId, 'THREADID'}] )]
DBCC tab ( dbid, objid )
DBCC tape_control {'query' | 'release'}[,('\\.\tape')]
DBCC tec [( uid[, spid[, ecid]] )]
DBCC textall [({'database_name'|database_id}[, 'FULL' | FAST] )]
DBCC textalloc ({'table_name'|table_id}[, 'FULL' | FAST])
DBCC thaw_io (db)
DBCC upgradedb (db)
DBCC usagegovernor (command, value)
DBCC useplan [(number_of_plan)]
DBCC wakeup (spid)
DBCC writepage ({ dbid, 'dbname' }, fileid, pageid, offset, length, data)
26. DBCC PAGE: Use this command to look at contents of a data page stored in SQL Server.
Syntax:
DBCC PAGE ({dbid|dbname}, pagenum [,print option] [,cache] [,logical])
where:
Dbid or dbname: Enter either the dbid or the name of the database in question.
Pagenum: Enter the page number of the SQL Server page that is to be examined.
Print option: (Optional) Can be 0, 1, or 2.
0 - (Default) Print only the page header information.
1 - Print the page header information, each row of information from the page, and the page's offset table, with each row separated from the others.
2 - The same as option 1, except the page rows are printed as a single block of information rather than as separate rows. The offset table and header are also displayed.
Cache: (Optional) Can be 0 or 1.
0 - Retrieve the page from disk rather than checking whether it is in cache.
1 - (Default) Take the page from cache if it is there, rather than from disk.
Logical: (Optional) Use if the page number to be retrieved is a virtual page rather than a logical page. Can be 0 or 1.
0 - The page number is a virtual page number.
1 - (Default) The page number is the logical page number.
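A usage sketch following the syntax above (database name and page number are placeholders). Trace flag 3604 is commonly enabled first so that DBCC PAGE output is sent to the client rather than to the error log:

```sql
-- Redirect DBCC output to the client session
DBCC TRACEON (3604);
GO
-- Dump the header, rows and offset table of page 100
DBCC PAGE ('MyDatabase', 100, 1);
GO
```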
27. DBCC SHOWCONTIG: Used to show how fragmented the data and indexes are in a specified table. If the data pages storing data or index information become fragmented, it takes more disk I/O to find and move the data to the SQL Server cache buffer, hurting performance. This command tells you how fragmented these data pages are. If you find that fragmentation is a problem, you can reindex the tables to eliminate it. Note: this is fragmentation of data pages within the SQL Server MDF file, not of the physical file itself.
Since this command requires you to know the ID of both the table and the index being analyzed, you may want to run the following script so you don't have to look up the table ID and index ID manually.
Example:
DBCC SHOWCONTIG (Table_id, IndexID)
Or:
--Script to identify table fragmentation
--Declare variables
DECLARE
@ID int,
@IndexID int,
@IndexName varchar(128)
--Set the table and index to be examined
SELECT @IndexName = 'index_name' --enter name of index
SET @ID = OBJECT_ID('table_name') --enter name of table
--Get the Index Values
SELECT @IndexID = IndID
FROM sysindexes
WHERE id = @ID AND name = @IndexName
--Display the fragmentation
DBCC SHOWCONTIG (@ID, @IndexID)
While the DBCC SHOWCONTIG command provides several measurements, the key one is Scan Density. This figure should be as close to 100% as possible. If the scan density is less than 75%, then you may want to reindex the tables in your database.
See also: ss64.com/sql/
Wednesday, March 24, 2010
LOCKS IN SQL SERVER 2005
Lock modes
All examples are run under the default READ COMMITTED isolation level. The locks taken differ between isolation levels; these examples simply demonstrate each lock mode. Here's a short explanation of the three columns from sys.dm_tran_locks used in the examples:
resource_type This tells us what resource in the database the locks are being taken on. It can be one of these values: DATABASE, FILE, OBJECT, PAGE, KEY, EXTENT, RID, APPLICATION, METADATA, HOBT, ALLOCATION_UNIT.
request_mode This tells us the mode of our lock.
resource_description This shows a brief description of the resource, usually holding the id of the page, object, file, row, etc. It isn't populated for every type of lock.
The filter on resource_type <> 'DATABASE' just means that we don't want to see general shared locks taken on databases. These are always present. All shown outputs are from the sys.dm_tran_locks dynamic management view. In some examples it is truncated to display only locks relevant for the example. For full output you can run these yourself.
Shared locks (S)
Shared locks are held on data being read under the pessimistic concurrency model. While a shared lock is being held, other transactions can read but can't modify the locked data. After the locked data has been read, the shared lock is released, unless the transaction is run with a locking hint (READCOMMITTED, READCOMMITTEDLOCK) or under an isolation level equal to or more restrictive than REPEATABLE READ. In the example you can't see the shared locks directly, because they're taken only for the duration of the SELECT statement and have already been released by the time we select from sys.dm_tran_locks. That is why the WITH (HOLDLOCK) hint is added, to make the locks visible.
BEGIN TRAN
USE AdventureWorks
SELECT * FROM Person.Address WITH (HOLDLOCK)
WHERE AddressId = 2
SELECT resource_type, request_mode, resource_description
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
ROLLBACK
Update locks (U)
Update locks are a mix of shared and exclusive locks. When a DML statement is executed, SQL Server first has to find the data it wants to modify, so to avoid lock-conversion deadlocks an update lock is used. Like an exclusive lock, only one update lock can be held on the data at a time. The difference is that the update lock itself can't modify the underlying data; it has to be converted to an exclusive lock before the modification takes place. You can also force an update lock with the UPDLOCK hint:
BEGIN TRAN
USE AdventureWorks
SELECT * FROM Person.Address WITH (UPDLOCK)
WHERE AddressId < 2
SELECT resource_type, request_mode, resource_description
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
ROLLBACK
Exclusive locks (X)
Exclusive locks are used to lock data being modified by one transaction, preventing modifications by other concurrent transactions. You can read data held under an exclusive lock only by specifying the NOLOCK hint or using the READ UNCOMMITTED isolation level. Because DML statements first need to read the data they want to modify, you'll always find exclusive locks accompanied by shared locks on that same data.
BEGIN TRAN
USE AdventureWorks
UPDATE Person.Address
SET AddressLine2 = 'Test Address 2'
WHERE AddressId = 5
SELECT resource_type, request_mode, resource_description
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
ROLLBACK
Intent locks (I)
Intent locks are a means by which a transaction notifies other transactions that it intends to lock the data, hence the name. Their purpose is to ensure proper data modification by preventing other transactions from acquiring a lock on an object higher in the lock hierarchy. This means that before you obtain a lock at the page or row level, an intent lock is set on the table, which prevents other transactions from taking an exclusive lock on the table that would override the row/page lock. In the example we can see intent exclusive locks being placed on the page and the table containing the key, protecting the data from being locked by other transactions.
BEGIN TRAN
USE AdventureWorks
UPDATE TOP(5) Person.Address
SET AddressLine2 = 'Test Address 2'
WHERE PostalCode = '98011'
SELECT resource_type, request_mode, resource_description
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
ROLLBACK
Schema locks (Sch)
There are two types of schema locks:
Schema stability lock (Sch-S): Used while generating execution plans. These locks don't block access to the object data.
Schema modification lock (Sch-M): Used while executing a DDL statement. Blocks access to the object data since its structure is being changed.
In the example we can see the Sch-S and Sch-M locks being taken on the system tables and the TestTable plus a lot of other locks on the system tables.
BEGIN TRAN
USE AdventureWorks
CREATE TABLE TestTable (TestColumn INT)
SELECT resource_type, request_mode, resource_description
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
ROLLBACK
Bulk Update locks (BU)
Bulk Update locks are used by bulk operations when the TABLOCK hint is supplied by the import. This allows multiple bulk-load streams to insert into the same table concurrently while preventing other transactions from reading the data.
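A minimal sketch of a bulk load that takes BU locks; the file path, format options, and staging table are hypothetical and must match your input file:

```sql
-- TABLOCK lets multiple concurrent bulk loads take BU locks on the same table.
BULK INSERT dbo.StagingAddress
FROM 'C:\Imports\address.csv'
WITH (TABLOCK, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
```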
Conversion locks
Conversion locks are locks resulting from converting one type of lock to another. There are 3 types of conversion locks:
Shared with Intent Exclusive (SIX). A transaction that holds a Shared lock also has some pages/rows locked with an Exclusive lock.
Shared with Intent Update (SIU). A transaction that holds a Shared lock also has some pages/rows locked with an Update lock.
Update with Intent Exclusive (UIX). A transaction that holds an Update lock also has some pages/rows locked with an Exclusive lock.
In the example you can see the UIX conversion lock being taken on the page:
BEGIN TRAN
USE AdventureWorks
UPDATE TOP(5) Person.Address
SET AddressLine2 = 'Test Address 2'
WHERE PostalCode = '98011'
SELECT resource_type, request_mode, resource_description
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
ROLLBACK
Key - Range locks
Key-range locks protect a range of rows implicitly included in a record set being read by a Transact-SQL statement while using the serializable transaction isolation level. Key-range locking prevents phantom reads. By protecting the ranges of keys between rows, it also prevents phantom insertions or deletions into a record set accessed by a transaction. In the example we can see that there are two types of key-range locks taken:
RangeX-X - exclusive lock on the interval between the keys and an exclusive lock on the last key in the range
RangeS-U - shared lock on the interval between the keys and an update lock on the last key in the range
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN
USE AdventureWorks
UPDATE Person.Address
SET AddressLine2 = 'Test Address 2'
WHERE AddressLine1 LIKE '987 %'
SELECT resource_type, request_mode, resource_description
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
ROLLBACK
Lock Granularity
Lock granularity is the level at which SQL Server locks data: TABLE, PAGE, and ROW locks. If the table has a clustered index, you get a KEY lock instead of a ROW lock. Locking at a lower level increases concurrency but consumes more memory when many locks are taken, and vice versa for the higher levels. Note also that the more restrictive the isolation level, the higher the locking level needed to keep the data in a correct state. You can override the locking level with the ROWLOCK, PAGLOCK, or TABLOCK hints, but their use is discouraged since SQL Server knows which locks are appropriate for each scenario. If you must use them, be aware of the concurrency and data consistency issues you might cause.
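For example, forcing table-level granularity with the TABLOCK hint, following the same pattern as the earlier examples (expect an OBJECT-level lock in the output rather than KEY/PAGE locks):

```sql
BEGIN TRAN
USE AdventureWorks
SELECT * FROM Person.Address WITH (TABLOCK, HOLDLOCK)
WHERE AddressId = 2
SELECT resource_type, request_mode, resource_description
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
ROLLBACK
```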
System Databases in SQL Server 2005
Microsoft SQL Server 2005 uses five system databases:
master – uses master.mdf for data and mastlog.ldf for logging
model – uses model.mdf for data and modellog.ldf for logging
msdb – uses msdbdata.mdf for data and msdblog.ldf for logging
resource – uses mssqlsystemresource.mdf for data and mssqlsystemresource.ldf for logging
tempdb – uses tempdb.mdf for data and templog.ldf for logging
The master Database
The master database stores all the system-level information for SQL Server. The data stored by the master database includes information for: configuration settings, logon accounts, linked servers and endpoints, user database file locations, and properties.
Due to the nature of the data stored, SQL Server cannot operate without the master database, so it is a very good idea to back up this database after changing the SQL Server configuration or after modifying, adding, or removing any databases.
The model Database
SQL Server uses the model database as the template for creating new databases. When a CREATE DATABASE statement is executed, SQL Server copies the contents of the model database into the newly created database. If there are any common database objects that would be useful in every database created on the instance, you can add those objects to the model database; they will then be copied into each new database along with the default objects. Since SQL Server recreates the tempdb database every time it is started, the model database is required in order for SQL Server to start.
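For instance, any object created in model appears in every database created afterwards. A sketch with a hypothetical table:

```sql
USE model;
GO
-- Hypothetical table: it will be copied into each database created from now on.
CREATE TABLE dbo.DbCreationInfo (CreatedOn DATETIME DEFAULT GETDATE());
GO
```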
The msdb Database
The msdb database is used by SQL Server to store information on operations performed by SQL Server. This includes information for: the SQL Server Agent, Database Mail, the Service Broker, log shipping, backup, and maintenance plan job parameters.
The resource Database
The resource system database was introduced with SQL Server 2005. This database is used for storing all the system views and stored procedures. Logically, each SQL Server database will contain all these system objects, however, they are physically stored within the resource database. The resource database is read-only and does not include any user data.
In previous versions of SQL Server, the system objects were stored in the master database. The motivation behind moving them into a separate database is to make updating SQL Server more efficient: improvements and fixes to SQL Server mostly affect the system objects, and keeping those objects in a separate database reduces the number of files that need to be replaced during an update.
The tempdb Database
As the name implies, SQL Server uses the tempdb database for storing temporary data and objects. tempdb is used whenever an operation requires a temporary table, stored procedure, or other temporary database object. Intermediate data for large sort operations is also stored in tempdb, as is temporary data for internal SQL Server operations.
Every time SQL Server is restarted, the tempdb system database is recreated, clearing any temporary data stored during the last SQL Server session. Where a high volume of users and operations hit SQL Server, the tempdb database can grow to use a significant amount of disk space. It is important to plan accordingly in these scenarios, since running out of disk space on the drive where tempdb is stored has catastrophic effects on the operation of SQL Server.
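One way to keep an eye on tempdb space (SQL Server 2005 and later) is the sys.dm_db_file_space_usage view; a sketch that converts its page counts to KB:

```sql
-- Page counts are 8 KB pages; this view reports on tempdb.
USE tempdb;
SELECT SUM(user_object_reserved_page_count) * 8     AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count) * 8   AS version_store_kb,
       SUM(unallocated_extent_page_count) * 8       AS free_space_kb
FROM sys.dm_db_file_space_usage;
```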
THE SYSTEM DATABASES IN MORE DETAIL, PLUS ADDITIONAL FEATURE DATABASES
Master
Purpose – Core system database to manage the SQL Server instance. In SQL Server 2005, the Master database is the logical repository for the system objects residing in the sys schema. In SQL Server 2000 and previous editions of SQL Server, the Master database physically stored all of the system objects.
Prominent Functionality
Per instance configurations
Databases residing on the instance
Files for each database
Logins
Linked\Remote servers
Endpoints
Additional Information
The first database in the SQL Server startup process
In SQL Server 2005, needs to reside in the same directory as the Resource database
Resource
Purpose – The Resource database is responsible for physically storing all of the SQL Server 2005 system objects. This database has been created to improve the upgrade and rollback of SQL Server system objects with the ability to overwrite only this database.
Prominent Functionality
System object definition
Additional Information
Introduced in SQL Server 2005 to help manage the upgrade and rollback of system objects
Prior to SQL Server 2005 the system related data was stored in the master database
Read-only database that is not accessible via the SQL Server 2005 tool set
The database ID for the Resource database is 32767
The Resource database does not have an entry in master.sys.databases
TempDB
Purpose – Temporary database to store temporary tables (#temptable or ##temptable), table variables, cursors, work tables, row versioning data, indexes created or rebuilt with sorting in TempDB, etc. Each time the SQL Server instance is restarted, all objects in this database are destroyed, so permanent objects cannot be created in it.
Prominent Functionality
Manage temporary objects listed in the purpose above
Additional Information
Each time a SQL Server instance is rebooted, the TempDB database is reset to its original state
Model
Purpose – Template database for all user defined databases
Prominent Functionality
Objects
Columns
Users
Additional Information
User defined tables, stored procedures, user defined data types, etc. can be created in the Model database and will exist in all future user defined databases
The database configurations such as the recovery model for the Model database are applied to future user defined databases
MSDB
Purpose – Primary database to manage the SQL Server Agent configurations
Prominent Functionality
SQL Server Agent Jobs, Operators and Alerts
DTS Package storage in SQL Server 7.0 and 2000
SSIS Package storage in SQL Server 2005
Additional Information
Provides some of the configurations for the SQL Server Agent service
For the SQL Server 2005 Express edition installations, even though the SQL Server Agent service does not exist, the instance still has the MSDB database
Distribution
Purpose – Primary database to support SQL Server replication
Prominent Functionality
Database responsible for the replication meta data
Supports the data for transaction replication between the publisher and subscriber(s)
ReportServer
Purpose – Primary database for Reporting Services to store the meta data and object definitions
Prominent Functionality
Reports security
Job schedules and running jobs
Report notifications
Report execution history
ReportServerTempDB
Purpose – Temporary storage for Reporting Services
Prominent Functionality
Session information
Cache
Shrinking Truncate Log File
SQL SERVER – Shrinking Truncate Log File – Log Full
Sometimes it seems impossible to shrink the transaction log file. The following code shrinks the log file to the minimum size possible. DatabaseName_Log below is a placeholder for the logical name of your log file (find it with sp_helpfile). Note that BACKUP LOG ... WITH TRUNCATE_ONLY breaks the log backup chain and is deprecated, so take a full backup afterwards.
USE DatabaseName
GO
DBCC SHRINKFILE (DatabaseName_Log, 1)
BACKUP LOG DatabaseName WITH TRUNCATE_ONLY
DBCC SHRINKFILE (DatabaseName_Log, 1)
GO
Restore Database

1. Open SQL Server 2005 Management Studio.
2. Connect to the appropriate server.
3. Expand Databases.
4. Right-click the desired database, point to Tasks, then Restore, and select Database.
5. The Restore Database window will appear.
6. Ensure that the correct database name appears in the To database field. If not, select it from the dropdown.
7. Under Source for restore, select the From device radio button.
8. Click the button next to the textbox to select the device to restore from.
9. The Specify Backup window will appear.
10. Click Add and locate and select the backup file from the Locate Backup File Window. Click OK.
11. Under Select the backup sets to restore, select all three backups (full, transaction log, and differential).
12. In the left pane, select Options.
13. Under Recovery state, ensure that the Restore with Recovery radio button is selected.
14. Click OK to begin restoring the database.
15. Once the restore is complete, a notification box will appear. Click OK to close the box. You have now restored your database to the most recent state.
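The GUI steps above can also be scripted. A minimal sketch, assuming a hypothetical database and a single backup file that holds the full, differential, and log backups as backup sets 1 to 3 (FILE = n selects the backup set within the device):

```sql
RESTORE DATABASE MyDb FROM DISK = 'C:\Backups\MyDb.bak' WITH FILE = 1, NORECOVERY, REPLACE;
RESTORE DATABASE MyDb FROM DISK = 'C:\Backups\MyDb.bak' WITH FILE = 2, NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'C:\Backups\MyDb.bak' WITH FILE = 3, RECOVERY;
```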
Restore System Databases

I'm following the procedure from Microsoft: place the instance in single user mode first, then connect with SQLCMD, then perform the restore. Sounds simple enough.
C:\> SQLCMD
1> RESTORE DATABASE master FROM DISK = 'Z:\MINIDR\master_backup_200804200315.bak'
2> WITH RECOVERY, REPLACE;
3> GO
source server = server from which backups originated
target server = server onto which we wish to restore the system databases
Presuming that the paths to the system files differ between the servers (more difficult than when the paths are the same):
1. Ensure target server is same build revision as source server. Patch accordingly.
2. Start target server in single user mode (sqlservr -c -m -f)
3. Connect to SQL Server using sqlcmd
4. Restore master database (Note - does not require WITH MOVE option).
When done, SQL Server stops automatically.
5. Start target server in single user mode (sqlservr -c -m -f -T3608)
6. Connect to SQL Server using sqlcmd
7. Use ALTER DATABASE command to point SQL Server to the mssqlsystemresource database:
ALTER DATABASE mssqlsystemresource
MODIFY FILE (name = data, filename = ' \mssqlsystemresource.mdf')
GO
ALTER DATABASE mssqlsystemresource
MODIFY FILE (name = log, filename = ' \mssqlsystemresource.ldf')
GO
8. Stop SQL Server (Ctrl-C). Start in single user mode.
9. Use ALTER DATABASE command to point SQL Server to the model, msdb & tempdb databases:
modeldev = model.mdf
modellog = modellog.ldf
msdbdata = msdbdata.mdf
msdblog = msdblog.ldf
tempdev = tempdb.mdf
templog = templog.ldf
10. Stop SQL Server. Start SQL Server (either from cmd or as service).
11. Restore msdb, model if required.
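Step 9 above can be sketched as follows, for the model database only; the target path is a hypothetical example, and the pattern is repeated for msdb and tempdb using the logical names listed above:

```sql
-- Logical names come from the list in step 9; the path is a placeholder.
ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILENAME = 'D:\SQLData\model.mdf');
ALTER DATABASE model MODIFY FILE (NAME = modellog, FILENAME = 'D:\SQLData\modellog.ldf');
GO
```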
3. Rebuild procedure
We need to rebuild the system databases if the master database is corrupted or damaged. Let us discuss in detail how to rebuild the system databases in SQL Server 2005.
Step 1: Take a full backup of all the System and User databases prior to rebuilding the system databases as we need to restore it later to avoid data loss.
Step 2: Copy the SQL 2005 setup files from the CD to the hard disk. In the command prompt, navigate to the folder which contains the setup.exe file. In my case it is available in D:\Setups\SQL 2005\Servers folder. The SQL Server we are going to rebuild is currently running. Now type the below command,
start /wait setup.exe /qn INSTANCENAME="MSSQLSERVER" REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD="XXXX"
where XXXX is the sa password.
Use INSTANCENAME="MSSQLSERVER" for a default instance of SQL 2005 and
INSTANCENAME="MSSQL$INSTANCENAME" for a named instance of SQL 2005.
For example,
if you have a named instance "Deepak\Test", then type
INSTANCENAME="MSSQL$TEST" in the above command
Step 3: After executing the command in Step 2, the rebuild process will start and complete within about 5 minutes. You can verify that the databases were rebuilt by navigating to the folder containing the data and log files of the system databases: if you sort the files by modified date, it will clearly show that they were last modified at the time you executed the command in Step 2.
Step 4: Once the rebuild is completed, connect to the SQL Server using SSMS. In Object Explorer only the system databases will be available.
Any user databases that existed before the rebuild will no longer be attached; retrieve them in one of two ways:
1. Restore from the backup taken in Step 1, or
2. Attach the data and log files of the user databases, as they are not cleaned up during the rebuild process.
NOTE: There is no need to detach the user databases before the rebuild; their ldf and mdf files remain in the same path and are not overwritten.
Now we need to restore the system databases from the backup taken in Step 1.
The master database can be restored only in single-user mode (refer to Step 5); the other databases can be restored normally.
NOTE: The ldf and mdf files of the system databases are overwritten during the rebuild, so a detach/attach operation is not possible for them.
Step 5: To restore the master database, perform the following steps:
· Stop SQL Server, then start it from the command prompt with
· NET START MSSQLSERVER /c /m, which starts SQL Server in single-user mode.
Note: for a default instance the service name is MSSQLSERVER; for a named instance it is MSSQL$instancename.
· Run the RESTORE DATABASE master command. Once the restore is completed SQL Server will shut down, so start it normally and access the databases.
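The restore of master in single-user mode can be sketched as follows; the backup path is a placeholder, not from the original post:

```sql
-- Connect (e.g. via SQLCMD) while SQL Server is running in single-user mode (/m),
-- then restore master from the backup taken earlier.
RESTORE DATABASE master
FROM DISK = 'D:\Backups\master_full.bak'
WITH REPLACE;
-- SQL Server shuts down automatically after master is restored;
-- restart the service normally afterwards.
```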
Friday, March 12, 2010
Transaction Log – Part 4
Know The Transaction Log – Part 4 - Restoring Data
After looking at backups in SQL Server, it is time to learn about restore and recovery.
SQL Server supports three levels of restoring data:
1. Complete Database Restore: This is the basic restore strategy. For a simple recovery model database, a complete database restore simply involves restoring a full backup followed by the latest differential backup, if one is available. For a full or bulk-logged recovery model database, it involves restoring a full backup, then the latest differential backup, then the sequence of transaction log backups in the order they were taken, and finally the tail-log backup if one is available.
2. File Restore: This restore operation is very useful when one of the files in a filegroup is damaged. Its main advantage is that restore time is shorter, compared to a complete database restore. For a simple recovery model database, file restore works only with read-only secondary files.
3. Page Restore: This restore applies only to databases using the full or bulk-logged recovery model; it is not available for the simple recovery model. At this level, a particular page or pages can be restored.
How does a restore work?
Restoring is the process of copying data from a backup and applying transaction logs to bring the data to the point in time at which the backup was taken. The process runs in three phases: the data copy phase, the redo phase, and the undo phase.
Data Copy Phase: copies the data, index, and log pages from the backup media to the database files. Neither log backups nor the log information in data backups is applied in this phase.
Redo Phase: applies logged changes to the data by processing log information from the log backups (roll forward).
Undo Phase: rolls back any uncommitted transactions in the restored data and brings the database online.
At this point we have to understand the relationship between the WITH RECOVERY and WITH NORECOVERY options of the RESTORE command.
The default is WITH RECOVERY, which continues into the undo phase after completing the redo phase.
A restore operation stops at the redo stage if WITH NORECOVERY is included in the RESTORE statement. This allows roll forward to continue with the next statement in the restore sequence; a later backup, typically a differential or transaction log backup, completes the undo phase.
For a full or bulk-logged recovery model database, a restore operation is performed as a sequence of RESTORE statements, called a restore sequence.
For a simple scenario a restore sequence might be:
• restoring a recent full backup,
• applying the most recent differential backup,
• restoring the log backups, in the order they were taken, from after the most recent differential backup,
• finally, restoring the tail-log backup, if one was taken after the failure occurred.
More complex scenarios require careful sequence planning, and a recovery path is central to that planning. A recovery path is a complete sequence of data and log backups that can bring a database to a point in time. For more details about recovery paths, search Books Online.
Complete Database Restore:
A simple restore strategy. Let us walk through a complete database restore with an example. Suppose that for a full or bulk-logged recovery model database the following backup schedule is in place: a full backup on Monday at 10 PM; differential backups at 10 PM on Wednesday, Friday, and Sunday; and transaction log backups twice a day, at 6 AM and 6 PM. The database fails on Saturday at 4 PM. How do we restore it with the available backups?
When a database fails, the first thing to do is take the tail-log backup with the NO_TRUNCATE option, if possible. So take the tail-log backup first.
Every restore starts with a full backup, so begin by restoring the full backup taken on Monday at 10 PM with the NORECOVERY option. The latest differential backup was taken on Friday at 10 PM; apply it next, which lets us skip all log backups taken after Monday 10 PM and before Friday 10 PM. After restoring the differential backup, restore the log backup taken on Saturday at 6 AM, the latest scheduled log backup before the failure. The database is now current up to Saturday 6 AM. Finally, restore the tail-log backup taken after the 4 PM failure WITH RECOVERY. The database is now fully restored.
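The restore sequence just described can be sketched in T-SQL; the database and backup file names are placeholders:

```sql
-- 1. Tail-log backup taken right after the failure.
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_tail.trn'
WITH NO_TRUNCATE;

-- 2. Full backup from Monday 10 PM.
RESTORE DATABASE MyDB FROM DISK = 'D:\Backups\MyDB_full.bak'
WITH NORECOVERY;

-- 3. Latest differential backup (Friday 10 PM).
RESTORE DATABASE MyDB FROM DISK = 'D:\Backups\MyDB_diff.bak'
WITH NORECOVERY;

-- 4. Log backup from Saturday 6 AM.
RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_log_sat6am.trn'
WITH NORECOVERY;

-- 5. Tail-log backup, finishing WITH RECOVERY to bring the database online.
RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_tail.trn'
WITH RECOVERY;
```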
File or Filegroup Restore:
To restore a single file in a filegroup, or a complete filegroup, use the RESTORE command with the FILE or FILEGROUP option. All you need is an unbroken chain of log backups from the time the file or filegroup backup was made. Before restoring the file or filegroup backup, take a transaction log backup. After restoring the file or filegroup, restore all subsequent transaction log backups to synchronise that file or filegroup with the rest of the database.
Let us look at an example. Suppose SecondaryFG is a filegroup of a database that was backed up on Friday at 12 noon, and the database is still in use. Transaction log backups for this database are scheduled at 10 AM, 11:30 AM, 1:00 PM, 2:30 PM, 4:00 PM, 5:30 PM, and so on. Note that changes continue to be made in SecondaryFG and the other filegroups. At 5:15 PM, a media failure corrupts SecondaryFG. To restore it, first take the tail-log backup containing all log records since the 4:00 PM log backup, using NO_TRUNCATE and NORECOVERY, which puts the database in the restoring state so that no further modifications occur after the failure. Then restore the filegroup backup taken at 12 noon; SecondaryFG now has all changes made up to noon. Next, apply the transaction log backups in the order they were taken: the 1:00 PM backup first, then 2:30 PM, then 4:00 PM. SecondaryFG is now synchronised with the rest of the database up to 4:00 PM. Finally, apply the tail-log backup taken after the failure to make SecondaryFG fully consistent with all the files of the database.
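The filegroup restore above might look like this in T-SQL (names and paths are illustrative):

```sql
-- Tail-log backup taken after the 5:15 PM media failure.
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_tail.trn'
WITH NO_TRUNCATE, NORECOVERY;

-- Restore the damaged filegroup from the Friday 12 noon backup.
RESTORE DATABASE MyDB FILEGROUP = 'SecondaryFG'
FROM DISK = 'D:\Backups\MyDB_SecondaryFG.bak'
WITH NORECOVERY;

-- Roll the filegroup forward through the subsequent log backups.
RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_log_1300.trn' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_log_1430.trn' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_log_1600.trn' WITH NORECOVERY;

-- Finish with the tail-log backup and recover.
RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_tail.trn' WITH RECOVERY;
```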
Page Restore:
Page restore is only possible for databases using the full or bulk-logged recovery model. All editions of SQL Server other than Enterprise Edition support offline page restore, whereas SQL Server 2005 Enterprise Edition supports online page restore while the database is online.
A page may be marked as a suspect page when a query, DBCC CHECKTABLE, or DBCC CHECKDB fails to access it. Every page marked as suspect has an entry in the msdb..suspect_pages table. The event_type column of this table holds one of the following values: 1 for pages marked with error 824 other than a bad page ID or checksum error; 2 for a bad page ID; 3 for a checksum error; and 4, 5, and 7 for repaired pages. The table is limited in size, and if it fills up, new errors cannot be logged in it. It should therefore be part of a DBA's routine to periodically delete the rows in msdb..suspect_pages whose event_type is greater than or equal to 4.
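Checking and pruning the suspect_pages table can be done with simple queries like these (a sketch):

```sql
-- List currently suspect pages
-- (event_type 1-3 = 824 error / bad page ID / checksum error).
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb..suspect_pages
WHERE event_type BETWEEN 1 AND 3;

-- Routine cleanup: remove rows for pages already repaired or restored
-- (event_type >= 4) so the table does not fill up.
DELETE FROM msdb..suspect_pages
WHERE event_type >= 4;
```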
Get the file and page IDs of the pages to be restored from msdb..suspect_pages. Start the RESTORE with the full, file, or filegroup backup that contains the pages, using the PAGE option. Then apply the most recent differential backup, if one is available, followed by all subsequent log backups. Finally, take a new log backup and restore it as well, so the restored pages are rolled forward to the target LSN recorded in sys.master_files.
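A page restore sequence, sketched with placeholder names and page IDs:

```sql
-- Restore two damaged pages (file_id:page_id) from a full backup.
RESTORE DATABASE MyDB
PAGE = '1:57, 1:202'
FROM DISK = 'D:\Backups\MyDB_full.bak'
WITH NORECOVERY;

-- Apply the most recent differential and all subsequent log backups.
RESTORE DATABASE MyDB FROM DISK = 'D:\Backups\MyDB_diff.bak' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_log1.trn' WITH NORECOVERY;

-- Take a fresh log backup, then restore it to complete the roll forward.
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log_final.trn';
RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_log_final.trn' WITH RECOVERY;
```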
Limitations of Page Restore
• Only database pages can be restored, not log pages.
• The file boot page (page 0 of each file) and the database boot page (page 1:9) cannot be restored.
• GAM, SGAM, and PFS pages cannot be restored.
For more details see Books Online.
With this I conclude my Know The Transaction Log series. Some topics, such as point-in-time restore using RESTORE with the STOPAT option, were purposely omitted from this series to keep it from becoming overwhelming.
TRANSACTION LOG –PART- 3
KNOW THE TRANSACTION LOG –PART- 3
This is the third article in the Know The Transaction Log series. In Part 1 I explained the transaction log file and its behaviour. In Part 2 I explained the recovery models available in SQL Server 2005, which affect the behaviour of a database's transaction log file. In this Part 3 I am going to explain the various backup options available in SQL Server 2005, because backups are the backbone of restore recovery, and of course of a DBA too. :)
Backups in SQL Server 2005.
Two major categories of backup are available in SQL Server: data backups and log backups.
A data backup includes an image of one or more data files plus log record data. It has three types.
· Full Database Backup includes all data files in the database, a complete set of data. It also contains enough log records to allow the data to be recovered during a restore. This is called the base backup; every restore scenario needs at least one full backup as its base. For small databases, a full backup takes little time and occupies little disk space. As a database grows, the full backup takes longer to finish, and the restore likewise takes longer during recovery. For larger databases, combine full backups with differential and transaction log backups to reduce backup and restore time and the associated system overhead.
When restoring a database from a full backup, SQL Server re-creates the database in one step. Because full database backups include transaction log records, any transactions that were uncommitted at the time the backup was taken are rolled back once the restore completes. The restored database therefore matches the original database as it was when backed up, minus the uncommitted transactions.
· Differential Backup: A differential backup backs up only the data modified since the last base backup. It is small compared to a full database backup, so it runs fast and saves backup time. The base for the first differential backup after a full backup is that full backup, and subsequent differential backups continue to use the same base until the next full data backup is performed. This base is called the differential base. A simple recovery model database should have only one differential base; under the full recovery model multiple differential bases are allowed, but they are difficult to administer. For read-write, online databases, the sys.database_files catalog view returns information about the differential base in three columns: differential_base_lsn, differential_base_guid, and differential_base_time. For read-only databases, use the sys.master_files catalog view to get this information.
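A differential backup and a check of its differential base might look like this (placeholder names):

```sql
-- Take a differential backup against the most recent full backup.
BACKUP DATABASE MyDB
TO DISK = 'D:\Backups\MyDB_diff.bak'
WITH DIFFERENTIAL;

-- Inspect the differential base columns for a read-write, online database
-- (run in the context of that database).
SELECT name, differential_base_lsn, differential_base_guid, differential_base_time
FROM sys.database_files;
```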
For large, mission-critical databases, take a full database backup plus frequent differential backups to limit data loss. Because the differential backup process finishes quickly, restoring from it also takes minimal time.
When restoring from differential backups, restore the full backup first and then only the most recent differential backup, even if multiple differential backups were taken between the full backup and the most recent one. Log backups taken between the full backup and the differential backup do not need to be restored. Any tail-log backup taken before the restore began should be restored after the differential backup.
· Partial Backup includes the primary filegroup and all read-write filegroups; read-only filegroups are excluded by default, though specific read-only filegroups can be included when taking the backup. Partial backups are new to SQL Server 2005 and are different from differential backups; they are designed to provide flexibility for databases using the simple recovery model. A partial backup of a read-only database contains only the primary filegroup's files. To create a partial backup, use the READ_WRITE_FILEGROUPS option. A partial backup can be the base for a differential partial backup; differential partial backups back up all data extents modified since the base partial backup of the same set of filegroups was taken. This can be performed with the following command:
BACKUP DATABASE database_name READ_WRITE_FILEGROUPS [, FILEGROUP = 'filegroup_name'] TO <backup_device> WITH DIFFERENTIAL
· File or Filegroup Backup includes only the specified files or filegroups. An individual file of a database can be backed up on its own with this type of backup. It is very useful in failure situations where only one file of the database is damaged: that file alone can be restored instead of performing a full database restore, which can greatly reduce restore time. There are two types of file backup: file backups and differential file backups. A file backup of a database can be the base for a differential file backup. Performing a differential file backup produces an error if you changed a read/write file to read-only after the last full file backup, so whenever you switch a file between read/write and read-only, take a new full file backup.
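File and filegroup backups use the FILE and FILEGROUP options of BACKUP DATABASE; the names and paths below are placeholders:

```sql
-- Back up a single file.
BACKUP DATABASE MyDB
FILE = 'MyDB_Data2'
TO DISK = 'D:\Backups\MyDB_Data2.bak';

-- Back up an entire filegroup.
BACKUP DATABASE MyDB
FILEGROUP = 'SecondaryFG'
TO DISK = 'D:\Backups\MyDB_SecondaryFG.bak';
```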
An advantage of file backups is that recovery from damaged files, or from a file located on damaged media, is much faster, since only the damaged files need to be restored. The disadvantage is that maintaining a complete set of file backups is more time-consuming and increases administrative complexity.
A complete set of file or filegroup backups is equivalent to a full database backup. When performing filegroup backups for a full or bulk-logged recovery model database, take transaction log backups as well.
· Transaction Log Backup includes only log records. Regular transaction log backups are required for a full or bulk-logged recovery model database; if they are not taken, the transaction log file grows continuously until the disk is full. A log backup can be performed with the following command:
BACKUP LOG database_name TO <backup_device>
There is a special type of log backup: the tail-log backup. It is taken immediately after a database failure, provided the log disk is still accessible, by including the WITH NORECOVERY option in the BACKUP LOG command. When you use this option the database enters the restoring state and goes offline, which guarantees that no modifications can be made after the tail-log backup finishes. After taking a tail-log backup you have to restore the database.
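A tail-log backup, sketched with a placeholder database name and path:

```sql
-- Capture log records written since the last log backup and
-- leave the database in the restoring state (offline).
BACKUP LOG MyDB
TO DISK = 'D:\Backups\MyDB_tail.trn'
WITH NORECOVERY, NO_TRUNCATE;
```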
· Copy-Only Backup: This is a special-purpose backup. It does not affect the regular SQL Server backup and restore sequences, and taking a copy-only backup does not truncate the transaction log. As the name implies, it simply copies either the full database or the full log. It is performed by adding the WITH COPY_ONLY option to the BACKUP command.
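Copy-only backups are useful for out-of-band copies; placeholder names again:

```sql
-- Copy-only full database backup: does not affect the regular backup sequence.
BACKUP DATABASE MyDB
TO DISK = 'D:\Backups\MyDB_copyonly.bak'
WITH COPY_ONLY;

-- Copy-only log backup: does not truncate the log or break the log chain.
BACKUP LOG MyDB
TO DISK = 'D:\Backups\MyDB_log_copyonly.trn'
WITH COPY_ONLY;
```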
Backup History
Information about backup history is stored in the msdb database and is very useful for managing backups. The following system tables in msdb store backup history:
1) backupfile stores a row for each data and log file in the database, including a column is_present that indicates whether the file was backed up as part of the backup set.
2) backupfilegroup stores a row for each filegroup in the database at the time of a backup, but does not indicate whether the filegroup was backed up. This table is new to SQL Server 2005.
3) backupset stores a row for each backup set; a new backup set is created for each backup event.
4) backupmediaset stores one row for each media set to which backup sets are written.
5) backupmediafamily stores one row for each media family, or, for a mirrored media set, one row for each mirror in the set.
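A quick way to review recent backups from these history tables (a sketch; the column choices are illustrative):

```sql
-- Most recent backups per database, with type and destination.
SELECT TOP (20)
    bs.database_name,
    bs.backup_start_date,
    bs.type,                      -- D = full, I = differential, L = log
    bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_start_date DESC;
```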
digg_url="http://sqlthoughts.blogspot.com/2008/03/know-transaction-log-part-3.html";
This is third article in the KNOW THE TRANSACTION LOG series. In Part 1 I explained about the Transaction Log File and its behaviour. In Part 2 I explained about the Recovery Models available in SQL Server 2005 which affects the behaviour of Transaction log file of the database. In this Part 3, I am going to explain about the various Backup options available in SQL Server 2005, because, Backups are the backbone of the Restore Recovery of course for a DBA too. :)
Backups in SQL Server 2005.
Two major categories of Backups are available in SQL Server. They are Data backup and Log Backup.
Data backup includes image of one or more data files and log record data. It has three types.
· Full Database Backup includes all data files in the Database which is complete set of data. This also have enough log records that allow to restore the data during restore recovery. This is called base backup. Every restore situation need at least one base, full backup. For small databases, performing a Full Backup takes small amount of time and the backup occupies small amount of disk spaces. As database becomes larger, the full backup takes more time to finish, so as the restore takes more time during recovery. As for as larger databases concern, take Full backup along with supported differential backups, transaction log backups to reduce backup and restore time and associated system overhead.
When restoring a database from a Full backup , SQL Server re-creates the database in one step. As Full database backups include transaction log records within it, after restoring is over, all uncommitted transactions during the time of full database backup taken, are rolled back. So the restored database matched the original database when it was backed up minus the uncommitted transactions.
· Differential Backup : Differential backup of a database backs up only modified data since a last base database backup. It is small in size comparative to Full Database backup, obviously, runs fast, saves backup time. The base for first Differential backup after the full backup, is last full backup and for subsequent differential backups the base is the previous differential backup until the next full data backup is performed. This base is called as differential base. For a Simple Recovery model database, there should be only one differential base and for Full Recovery Model, there may be multibase Differential bases are allowed, but it is difficult to administer. For a Read-Write and online databases, sys.database_files system catalog view returns various information including three column information about differential base. That columns are differential_base_lsn , differential_base_guid, differential_base_time. For a read-only databases sys.master_files catalog view should be use to get the information about the differential base.
Have a full database backup and subsequent frequently taken differential backup for a large mission critical databases to avoid data loss. As the differential backup process takes smaller time to finish, the restore from it also takes minimum time.
When restoring from Differential Backups , a full backup restore should be done first and then a most recent Differential backup is to be restored even though a multiple differential backups has been taken between Full Backup and most recent Differential backup. No Log Backups that were taken between full backup and Differential backup need to be restore. If any tail log backup that has been taken before the full backup is restored, then that should be restored after restoring the differential backup.
· Partial Backup includes primary file groups and read-write file groups. Excludes read-only file groups by default. It can back up specified read-only file groups while taking backup. This is new to SQL Server 2005. It is different from differential backup. It is designed to provide flexibility for databases having simple recovery model. A Partial Backup of a read-only databases only have the Primary file groups files. To create Partial Backup we have to use READ_WRITE_FILEGROUPS [
BACKUP DATABASE database_name READ_WRITE_FILEGROUPS > ,
· File Or File Group Backup includes the file or file groups specified. An Individual file of a database alone can be backed up with this type of backup. This backup is very useful for the failure situation like if only one file is damaged in the database, we can restore that particular file only instead of having full database restore. This can minimize the restore time very much. There are two types of File backups. File Backup and Differential File Backup. A File Backup of a database can be the base for the Differential File Backup. Performing Differential File backup will give you an error if you changed the read/write file to read-only file after taking last full file backup. So whenever you change a read/write file to read-only file or a read-only file to read write file then take a full file backup.
An advantage of having file backup is recovery from damaged files or a file located in damaged media is very faster. The only damaged files can be restored. The disadvantage of this is maintaining complete file backup set can be more time consuming and complexity of administrative task is increased.
A complete set of file or file group backup is equivalent to Full database backup. When performing file group backups for a full or bulk logged model database do perform transition log backups additionally.
· Transaction LOG Backup includes only log records. For a full or bulk-logged recovery model regular transaction log backup is required. If not taken the transaction log file grows continuously till the disk is full. LOG Backup can be performed with the following command.
BACKUP LOG
There is a special type of LOG Backup that is Tail-Log backup. This log backup is taken immediately after the database failure if the log disk is accessible. This can be done if you include WITH NORECOVERY option in BACKUP LOG Command. When you issue this option the database becomes Restoring State and becomes offline to guaranty no modification can be done after finishing the tail-log backup. After taking Tail-Log Backup you have to restore the database.
· COPY-ONLY Backup: a special-situation backup that does not affect the regular SQL Server backup and restore sequences. After taking a copy-only backup, the transaction log is not truncated. As the name suggests, it only copies either the full database or the log. It is performed by specifying the WITH COPY_ONLY option in the BACKUP command.
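For example, an out-of-band full backup that does not disturb the differential or log backup chain could be taken like this (names are placeholders):

```sql
-- COPY_ONLY leaves the regular backup sequence untouched
BACKUP DATABASE MyDB
    TO DISK = 'D:\Backups\MyDB_CopyOnly.bak'
    WITH COPY_ONLY;
```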
Backup History
Information about backup history is stored in the msdb database and is very useful for managing backups. The following system tables in the msdb system database store history information about backups.
1) backupfile stores a row for each data and log file in the database, including a column is_present that indicates whether that file was backed up as part of the backup set.
2) backupfilegroup stores a row for each filegroup in a database at the time of a backup, but this table does not indicate whether the filegroup itself was backed up. This table is new in SQL Server 2005.
3) backupset stores a row for each backup set; a new backup set is created for each backup event.
4) backupmediaset stores one row for each media set to which backup sets are written.
5) backupmediafamily stores one row for each media family and, if the media family is part of a mirrored media set, one row for each mirror in the set.
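The tables above can be joined to list recent backups of a database; a sketch, with the database name as a placeholder:

```sql
-- Recent backup history for one database, newest first
SELECT bs.database_name,
       bs.type,                    -- D = full, I = differential, L = log
       bs.backup_start_date,
       bs.backup_finish_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
     ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = 'MyDB'
ORDER BY bs.backup_finish_date DESC;
```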
TRANSACTION LOG –PART 2
KNOW THE TRANSACTION LOG –PART 2
In my previous post, I wrote about Restart Recovery, which SQL Server 2005 performs automatically at startup. There is another type of recovery that is well known, commonly practiced, and a manual process: RESTORE RECOVERY.
Restore Recovery is triggered manually by DBAs after a data loss event, to bring the SQL Server database back to a particular working state. The data is recovered from a backup of the database that was stored on media, either tape or disk file.
Before looking further inside, let us discuss some basics of Backups.
What is the need of a Backup?
Backup is the backbone of any system involving mission-critical data. Even with a high availability system configured, with an appropriate RAID level in the disk subsystem or fully redundant Storage Area Networks, and with servers clustered for failover using Microsoft Cluster Services and SQL Server 2005 failover clustering, backups of mission-critical databases are still important for many reasons. Suppose a developer executes a DELETE FROM query, having forgotten the WHERE clause, against a production server instead of the development server where it was supposed to run. This is one simple example; many such situations will arise to test your database administration abilities, and you have to rely on your database backup.
In a Restore Recovery, the backup is essential, but the behaviour of the restore is based on a database property called 'Recovery'. There are three recovery models available in SQL Server: SIMPLE, FULL, and BULK_LOGGED. When you create a database, the default value of this option is FULL. It can be changed with the ALTER DATABASE command and its SET RECOVERY option, for example: ALTER DATABASE mydb SET RECOVERY SIMPLE.
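The current recovery model can be checked before changing it; a small sketch, with mydb as a placeholder name:

```sql
-- Inspect the current recovery model of a database
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'mydb';

-- Switch it to the simple recovery model
ALTER DATABASE mydb SET RECOVERY SIMPLE;
```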
Simple Recovery Model.
This model provides the simplest form of backup and minimizes the administrative overhead for the DBA. When this recovery option is set on a database, its transaction log is not included in backups, and it is not possible to take a transaction log backup at all. When you take a backup, SQL Server automatically truncates the transaction log by dropping the inactive log records and freeing the space they used.
This recovery model is advisable for:
· databases that are under development,
· databases that are mainly used for data warehouses,
· databases that are used for read-only purposes.
Because no log backups are involved in the simple recovery model, the database can be restored only to the end of the most recent backup, so work done after the last backup can be lost.
The simple recovery model has the following restrictions.
1. Page restore cannot be done.
2. File restore and piecemeal restore are available only for read-only secondary filegroups.
3. Point-in-time restore is not available.
Full Recovery Model.
This is the default recovery model when you create a database in SQL Server 2005. It provides full protection for the data, and is therefore the best option for preventing data loss. This recovery model relies fully on transaction log backups: to avoid data loss you have to back up the transaction log frequently, along with the data backups. If you have a transaction log backup taken after a failure, you can restore the data to the point in time when the failure occurred.
Because all activity, including bulk copy operations, SELECT INTO, and even CREATE INDEX, is logged to the transaction log file, and the log records are kept even after a data backup is taken, the transaction log file can grow very large on disk if auto-grow was specified when the database was created. This is one disadvantage of this recovery model, though it can be handled easily with close DBA attention. Because the log grows larger under this model, restoration time is also relatively longer. Each time a transaction log backup is performed, the inactive log records are truncated and the space they used is freed for future use.
The following scenarios are well suited to a database with the FULL recovery model.
1. The database contains multiple filegroups or read-only filegroups.
2. DBAs are available who can perform point-in-time recovery and individual page restores.
3. The disk cost of a fast-growing transaction log can be tolerated.
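The point-in-time recovery that the full model makes possible can be sketched as follows; the database name, file paths, and the STOPAT time are placeholders:

```sql
-- Restore the full backup without recovering, then roll the log
-- forward to a specific point in time
RESTORE DATABASE MyDB
    FROM DISK = 'D:\Backups\MyDB_Full.bak'
    WITH NORECOVERY;

RESTORE LOG MyDB
    FROM DISK = 'D:\Backups\MyDB_Log.trn'
    WITH STOPAT = '2008-03-01T10:30:00', RECOVERY;
```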
Bulk-logged Recovery model.
This model is very similar to the full recovery model, except that it does not fully log bulk copy operations, SELECT INTO, CREATE INDEX, ALTER INDEX REBUILD, DROP INDEX, or the WRITETEXT and UPDATETEXT BLOB operations; these operations are minimally logged, which lets them run fast. Transaction log records are still created for such operations, and the page extents affected by them are recorded in those log records. A log backup is still required to free the inactive transaction log records. The bulk-logged recovery model does not support point-in-time restore. This model is very useful where frequent bulk copy operations take place, because they are minimally logged and do not cause the performance degradation that full logging would.
What is LOG Truncation?
If inactive log records are never deleted, the transaction log file will grow in volume (depending, of course, on the file size specified in the CREATE DATABASE command) until it can completely fill your disk drive. So inactive transaction records should be deleted periodically. Deleting all inactive transaction records from the transaction log file is called log truncation.
Log truncation occurs automatically:
· for a simple recovery model database, after a checkpoint occurs;
· for the FULL and bulk-logged models, after a transaction log backup is taken, provided a checkpoint has occurred since the previous backup.
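To see how much log space is in use, and why the log of a given database cannot be truncated yet, these standard commands can help:

```sql
-- Log size and percentage used for every database on the instance
DBCC SQLPERF (LOGSPACE);

-- The reason each database's log is currently waiting to be truncated
SELECT name, log_reuse_wait_desc
FROM sys.databases;
```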
Transaction log part1
Know The Transaction LOG -Part 1
After I wrote about log shipping in my previous post, many questions were raised that urged me to share some knowledge about database logs in SQL Server 2005. As it is a large topic, I planned to write a series of posts on database transaction logs.
What is Database LOG?
A database log, usually called the transaction log, is a critical component of a database. It takes the form of one or more disk files, created with the CREATE DATABASE or ALTER DATABASE command. The transaction log is required to bring your database back to its last working state when a failure occurs.
Based on the database's RECOVERY option, the transaction log records every database modification as a series of log records, including information about the pages being modified, the data values added or changed, and the start and end times of the modification. Whenever 'undoing' or 'redoing' is required, SQL Server 2005 can perform it on your database with the help of the transaction log.
What is Undo or Redo ?
Before looking at these, it is necessary to know how SQL Server handles a transaction that modifies data.
Whenever SQL Server receives a data modification request, whether in an explicit or implicit transaction, all the underlying pages are loaded into the buffer cache. A series of log records is created for the transaction, including the page numbers on which the modifications will be carried out and the before and after images of the modification. All these log records are internally linked together. Then the modifications take effect on the pages loaded in cache.
After the modifications are made in cache, if a rollback request for this particular transaction is received, all the undo operations for the transaction are carried out from the transaction log records. This roll-backward operation is called 'undoing'.
If instead a commit request for this transaction is received, the log records are written to the log disk files first, before the data pages modified in cache are written to the data disk files. The buffer manager ensures this ordering: write log records first, then data records, to the disk files. This mechanism is called 'write-ahead logging' (WAL).
Suppose SQL Server stops due to a resource problem after the log records have been written to the log file or files, but before the modified data pages have been written to the data files. During restart, SQL Server uses these log records to recover all the transactions that are marked committed but not yet reflected in the data files. This recovery is called Restart Recovery, and it is always performed for all databases of an instance when that instance of SQL Server restarts. The part of restart recovery that rolls transactions forward into the data files is called 'redoing'.
LOG Sequence Number (LSN)
Every log record in the transaction log is associated with an LSN, a Log Sequence Number. Every transaction is associated with a series of log records, each carrying its own LSN, and all related log records are linked backward to support rollback operations. The LSN of a log record is unique, and it is always greater than the LSNs of earlier log records.
Every data page records in its header the LSN of the log record that last modified the page. Every log record associated with a page modification carries both that previous LSN and the new LSN for the current modification. When SQL Server carries out a redo operation, it compares these two LSNs; if the LSN already on the page is at least as high as the log record's LSN, redo is skipped for that page.
Active LOG Records and Inactive Log Records
The records in log files are marked as one of two types, active and inactive. Log records that are part of live transactions, not yet either committed or rolled back, are called active log records. Log records belonging to transactions that have already committed or rolled back are called inactive log records. Restart recovery's redo and undo operations work with the active portion of the log, while log truncation removes only inactive log records.
Virtual Log Files
The SQL Server Database Engine internally divides each physical log file into virtual log files (VLFs). The number and size of the virtual log files cannot be configured explicitly by the DBA; they are based on the database's auto-growth specification for its log file. The Database Engine tries to keep the number of virtual log files per physical log file small. When a log file is first created it may contain 4 to 16 VLFs, and the count grows as the physical log file is enlarged. Performance suffers when the number of virtual log files is high. Because SQL Server automatically creates VLFs each time the physical log file is expanded, creating the log file with an adequate initial size and an adequately specified growth increment reduces the number of virtual log files. To view the virtual log files, use the undocumented DBCC command DBCC LOGINFO. The following is DBCC LOGINFO output from one of the active databases on my development server; this log file has 16 virtual log files.
FieldID  FileSize  StartOffset  FSEQNO  STATUS  PARITY  CREATELSN
2        253952    8192         24      0       64      0
2        253952    262144       27      0       64      0
2        253952    516096       22      0       128     0
2        278528    770048       23      0       128     0
2        262144    1048576      25      0       64      24000000034800494
2        262144    1310720      26      0       64      25000000049500003
2        262144    1572864      28      0       64      27000000020000052
2        262144    1835008      29      0       64      28000000019100003
2        262144    2097152      30      0       64      29000000007300459
2        262144    2359296      31      0       64      29000000050700023
2        262144    2621440      32      0       64      31000000019800006
2        327680    2883584      33      0       64      32000000004600469
2        327680    3211264      34      0       64      33000000024100176
2        393216    3538944      35      0       64      34000000035400136
2        393216    3932160      36      0       64      35000000034400087
2        458752    4325376      37      0       64      36000000044200003
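Output like the table above is produced by running the command in the context of the database you want to inspect:

```sql
USE MyDB;  -- placeholder database name
-- One row per virtual log file; a STATUS of 2 marks a VLF
-- that is still part of the active log
DBCC LOGINFO;
```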
Checkpoints in Transaction Logs
Between the start and end of a transaction, we can force checkpoints with the CHECKPOINT T-SQL statement and mark savepoints with the SAVE TRANSACTION statement, controlling when partially done work is written to the disk files. Checkpoints are also triggered internally by SQL Server itself. What does a checkpoint do?
A checkpoint is a SQL Server process that writes all modified data pages in the buffer cache to the disk files. It also forces any pending transaction log records into the log file. Performing checkpoints reduces the time required for restart recovery, because the log records have been forced to the log files and the dirty pages already written to the data files, minimizing the roll-forward work restart recovery must do.
The checkpoint operation involves the following steps.
Writing out all dirty pages to the data disk files.
Writing a list of active transactions to the transaction log.
Storing checkpoint log records in the transaction log.
The scope of a checkpoint is the database level, so the checkpoint operation runs only for the current database.
A checkpoint occurs in the following cases.
Whenever we issue the CHECKPOINT T-SQL command for the current database.
Whenever SQL Server is shut down without the WITH NOWAIT option; this checkpoint runs for all databases in the instance. The WITH NOWAIT option skips the checkpoint.
Whenever data files are added to or removed from a database using the ALTER DATABASE command.
When a bulk copy or SELECT INTO operation is performed in a database set to the bulk-logged recovery model.
When a database's recovery model is changed from bulk-logged or FULL to SIMPLE.
For a simple recovery model database, when the transaction log becomes 70 percent full.
When the number of log entries exceeds the amount of recovery work estimated from SQL Server's 'recovery interval' configuration option.
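The first case in the list, an explicit checkpoint, is simply:

```sql
USE MyDB;  -- placeholder database name; scope is the current database
CHECKPOINT;
```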
Thursday, March 11, 2010
database suspect
I was stuck with a database in suspect mode and needed to recover it without any recent backups. I searched across the net and found these steps, which are quite useful.
Step 1: Clear the suspect flag on the database using sp_resetstatus 'DatabaseName'. This clears the suspect flag and makes the database available online.
Step 2: Change the database status to Emergency using the following command. Emergency mode allows you to access the database as normal, but with no consistency guarantee; it also allows us to export the table data to minimize the damage.
ALTER DATABASE DatabaseName SET EMERGENCY;
Step 3: Restrict the database to a single user by changing the access mode as below.
ALTER DATABASE DatabaseName SET SINGLE_USER;
Step 4: Run CHECKDB with the REPAIR_ALLOW_DATA_LOSS option. This should be tried only as a last resort, as it always behaves the way it is named: we cannot be sure what data it removes.
DBCC CHECKDB (DatabaseName, REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS, ALL_ERRORMSGS;
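The four steps above can be sketched as one script; MyDB is a placeholder name, and returning the database to MULTI_USER at the end is my addition, not part of the original steps:

```sql
EXEC sp_resetstatus 'MyDB';                 -- Step 1: clear the suspect flag
ALTER DATABASE MyDB SET EMERGENCY;          -- Step 2: emergency access
ALTER DATABASE MyDB SET SINGLE_USER
    WITH ROLLBACK IMMEDIATE;                -- Step 3: single-user mode
DBCC CHECKDB ('MyDB', REPAIR_ALLOW_DATA_LOSS)
    WITH NO_INFOMSGS, ALL_ERRORMSGS;        -- Step 4: last-resort repair
ALTER DATABASE MyDB SET MULTI_USER;         -- reopen the database afterwards
```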
There are some simple best practices that help prevent such failures. Below are some of them.
Back up your data frequently (daily or once every two days).
Keep multiple backups; move the backups to external drives or tapes frequently.
Validate that your backups are good by doing trial restores to an alternate server.
Run CHECKDB regularly.