SQL Server 2005 Performance Troubleshooting White Paper

 

Tempdb

Tempdb globally stores both internal and user objects and the
temporary tables, objects, and stored procedures that are created during
SQL Server operation.

There is a single tempdb for each SQL Server instance. It can be a
performance and disk space bottleneck. Tempdb can become overloaded
in terms of available space and excessive DDL/DML operations, which
can cause unrelated applications running on the server to slow down
or fail.

Some of the common issues with tempdb are as follows:

  • Running out of storage space in tempdb.

  • Queries that run slowly due to the I/O bottleneck in tempdb.
    This is covered under I/O Bottlenecks.

  • Excessive DDL operations leading to a bottleneck in the system
    tables.

  • Allocation contention.

Before we start diagnosing problems with tempdb, let us first look
at how the space in tempdb is used. It can be grouped into four main
categories.

User objects

These are explicitly created by user sessions and are tracked in the system catalog. They include the following:

  • Table and index.

  • Global temporary table (##t1) and index.

  • Local temporary table (#t1) and index.

    • Session scoped.

    • Scoped to the stored procedure in which it was created.

  • Table variable (@t1).

    • Session scoped.

    • Scoped to the stored procedure in which it was created.
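The scoping rules above can be illustrated with a short batch; the object names are hypothetical:

```sql
-- Local temporary table: visible to the creating session (or, when
-- created inside a stored procedure, dropped when the procedure returns).
CREATE TABLE #t1 (id int PRIMARY KEY);

-- Global temporary table: visible to all sessions until the creating
-- session ends and no other session still references it.
CREATE TABLE ##t1 (id int PRIMARY KEY);

-- Table variable: scoped to the batch or procedure in which it is
-- declared; no statistics are created or maintained on it.
DECLARE @t1 table (id int PRIMARY KEY);

INSERT INTO @t1 VALUES (1);
```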

Internal objects

These are statement scoped objects that are created and destroyed by SQL Server to process queries. These are not tracked in the system catalog. They include the following:

  • Work file (hash join)

  • Sort run

  • Work table (cursor, spool and temporary large object data type (LOB) storage)

As an optimization, when a work table is dropped, one IAM page and an extent are saved to be used with a new work table.

There are two exceptions: temporary LOB storage is batch scoped, and the cursor worktable is session scoped.

Version Store

This is used for storing row versions. MARS, online index, triggers and snapshot-based isolation levels are based on row versioning. This is new in SQL Server 2005.

Free Space

This represents the disk space that is available in tempdb.

The total space used by tempdb is equal to the User Objects plus the
Internal Objects plus the Version Store plus the Free Space.

This free space is the same as the Free Space in tempdb (KB)
performance counter.

By Muhammad Shujaat Siddiqi

Original article:

 

Published: October 1, 2005

ADDING A NEW PARTITION

This operation accommodates continuous growth: a new partition needs
to be added when the already existing partitions become very large.
This is technically called a partition split. To split a partition,
the ALTER PARTITION FUNCTION statement is used.

ALTER PARTITION FUNCTION MyPartitionFunction()
SPLIT RANGE (5000)

But before splitting, a new filegroup must be created and added to the
partition scheme that uses the partition function being modified;
otherwise, this statement causes an error at execution time.
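As a sketch of those prerequisite steps (the database, filegroup, and file names are hypothetical), the new filegroup is added and marked as NEXT USED on the partition scheme before the split:

```sql
-- Add a filegroup and a file to hold the new partition.
ALTER DATABASE MyDatabase ADD FILEGROUP FG5;
ALTER DATABASE MyDatabase
    ADD FILE (NAME = 'FG5_Data', FILENAME = 'E:\Data\FG5_Data.ndf')
    TO FILEGROUP FG5;

-- Tell the partition scheme which filegroup the next partition uses.
ALTER PARTITION SCHEME MyPartitionScheme NEXT USED FG5;

-- Now the split succeeds.
ALTER PARTITION FUNCTION MyPartitionFunction() SPLIT RANGE (5000);
```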

Detection

Inefficient query plans are usually detected comparatively. An
inefficient query plan may cause increased CPU consumption.

The query against sys.dm_exec_query_stats is an efficient way to
determine which query is using the most cumulative CPU.

select  
    highest_cpu_queries.plan_handle,  
    highest_cpu_queries.total_worker_time, 
    q.dbid, 
    q.objectid, 
    q.number, 
    q.encrypted, 
    q.[text] 
from  
    (select top 50  
        qs.plan_handle,  
        qs.total_worker_time 
    from  
        sys.dm_exec_query_stats qs 
    order by qs.total_worker_time desc) as highest_cpu_queries 
    cross apply sys.dm_exec_sql_text(plan_handle) as q 
order by highest_cpu_queries.total_worker_time desc

Alternatively, query sys.dm_exec_cached_plans, filtering on operators
that may be CPU intensive, such as ‘%Hash Match%’ or ‘%Sort%’, to look
for suspects.
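One way to phrase such a search, as a sketch: cast the cached showplan to text and filter on the operator name (the LIKE patterns and TOP limit are illustrative):

```sql
-- Find cached plans that contain potentially CPU-intensive operators.
SELECT TOP 20
    cp.plan_handle,
    cp.usecounts,
    st.[text]
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE CAST(qp.query_plan AS nvarchar(max)) LIKE '%Hash Match%'
   OR CAST(qp.query_plan AS nvarchar(max)) LIKE '%Sort%'
ORDER BY cp.usecounts DESC;
```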

GUI SUPPORT FOR PARTITIONING

SQL Server 2008 provides good GUI support for partitioning. If you
view the properties of a table, you can find partition-related
properties under the Storage tab.

Partitioning is not supported at table creation time, but you can use
the partitioning wizard afterward. A new partitioning wizard was
introduced in SQL Server 2008 Management Studio.

Solution:

Monitoring index usage

Another aspect of query performance relates to DML queries: queries
that delete, insert, and modify data. The more indexes that are
defined on a specific table, the more resources are needed to modify
data. In combination with locks held over transactions, longer
modification operations can hurt concurrency. Therefore, it can be very
important to know which indexes an application uses over time. You can
then figure out whether the database schema carries dead weight in the
form of indexes that never get used.

SQL Server 2005 provides the
new sys.dm_db_index_usage_stats dynamic management view that
shows which indexes are used, and whether they are in use by the user
query or only by a system operation. With every execution of a query,
the columns in this view are incremented according to the query plan
that is used for the execution of that query. The data is collected
while SQL Server is up and running. The data in this DMV is kept in
memory only and is not persisted. So when the SQL Server instance is
shut down, the data is lost. You can poll this table periodically and
save the data for later analysis.
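A simple polling scheme, as a sketch (the history table name is hypothetical), copies the DMV contents with a timestamp so the counters survive an instance restart:

```sql
-- One-time setup: create an empty history table with the DMV's shape
-- plus a capture timestamp.
SELECT GETDATE() AS capture_time, *
INTO IndexUsageHistory
FROM sys.dm_db_index_usage_stats
WHERE 1 = 0;

-- Run periodically (for example, from a SQL Agent job):
INSERT INTO IndexUsageHistory
SELECT GETDATE(), *
FROM sys.dm_db_index_usage_stats;
```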

The operations on indexes are categorized into user type and system
type. User type refers to SELECT and INSERT/DELETE/UPDATE operations.
System type operations are commands like DBCC statements, DDL
commands, or statistics updates. The columns for each category of
statements differentiate into:

  • Seek Operations against an index (user_seeks or system_seeks)

  • Lookup Operations against an index (user_lookups or
    system_lookups)

  • Scan Operations against an index (user_scans or system_scans)

  • Update Operations against an index (user_updates or
    system_updates)

For each of these accesses of indexes, the timestamp of the last access
is noted as well.

An index itself is identified by three columns: database_id,
object_id, and index_id. index_id=0 represents a heap table,
index_id=1 represents a clustered index, and index_id>1 represents
nonclustered indexes.

Over days of runtime of an application against a database, the list of
indexes getting accessed in sys.dm_db_index_usage_stats will
grow.

The rules and definitions for seek, scan, and lookup work as follows in
SQL Server 2005.

  • SEEK: Indicates how many times the B-tree structure was used to
    access the data. It does not matter whether the B-tree is used to
    read just a few pages of each index level in order to retrieve one
    data row, or whether half of the index pages are read in order to
    read gigabytes of data or millions of rows out of the underlying
    table. So it should be expected that most of the hits against an
    index accumulate in this category.

  • SCAN: Indicates how many times the data layer of the table was
    used for retrieval without going through one of the index B-trees.
    For tables that have no index defined, this is always the case. For
    a table with indexes defined on it, this can happen when those
    indexes are of no use for the query being executed.

  • LOOKUP: Indicates that a clustered index defined on a table was
    used to look up data that was identified by ‘seeking’ through a
    nonclustered index defined on the same table. This describes the
    scenario known as a bookmark lookup in SQL Server 2000. When a
    nonclustered index is used to access a table and that index does
    not cover the columns of the query select list and the columns in
    the WHERE clause, SQL Server increments the user_seeks value for
    the nonclustered index used plus the user_lookups value for the
    entry of the clustered index. This count can become very high if
    there are multiple nonclustered indexes defined on the table. If
    the number of user_seeks against the clustered index of a table is
    pretty high, the number of user_lookups is pretty high as well, and
    the number of user_seeks of one particular nonclustered index is
    also very high, one might be better off making the nonclustered
    index with the high count the clustered index.

The following DMV query can be used to get useful information about the
index usage for all objects in all databases.

select object_id, index_id, user_seeks, user_scans, user_lookups  
from sys.dm_db_index_usage_stats  
order by object_id, index_id

One can see the following results for a given table.

object_id    index_id  user_seeks  user_scans  user_lookups 
-----------  --------  ----------  ----------  ------------ 
521690298    1         0           251         123 
521690298    2         123         0           0

In this case, there were 251 executions of a query directly accessing
the data layer of the table without using one of the indexes. There were
123 executions of a query accessing the table by using the first
nonclustered index, which does not cover either the select list of the
query or the columns specified in the WHERE clause since we see
123 lookups on the clustered index.

The most interesting category to look at is the ‘user type statement’
category. Usage indication in the ‘system category’ can be seen as a
result of the existence of the index. If the index did not exist, it
would not have to be updated in statistics and it would not need to be
checked for consistency. Therefore, the analysis needs to focus on the
four columns that indicate usage by ad hoc statements or by the user
application.

To get information about the indexes of a specific table that has not
been used since the last start of SQL Server, this query can be executed
in the context of the database that owns the object.

select i.name 
from sys.indexes i  
where i.object_id=object_id('<table_name>') and 
    i.index_id NOT IN  (select s.index_id  
                        from sys.dm_db_index_usage_stats s  
                        where s.object_id=i.object_id and      
                        i.index_id=s.index_id and 
                        database_id = <dbid> )

All indexes which haven’t been used yet can be retrieved with the
following statement:

select object_name(i.object_id), 
    i.name, 
    s.user_updates, 
    s.user_seeks, 
    s.user_scans, 
    s.user_lookups 
from sys.indexes i  
    left join sys.dm_db_index_usage_stats s 
        on s.object_id = i.object_id and  
           i.index_id = s.index_id and s.database_id = <dbid> 
where objectproperty(i.object_id, 'IsIndexable') = 1 and 
    -- index_usage_stats has no reference to this index (not being used) 
    (s.index_id is null or 
    -- index is being updated, but not used by seeks/scans/lookups 
    (s.user_updates > 0 and s.user_seeks = 0 
     and s.user_scans = 0 and s.user_lookups = 0)) 
order by object_name(i.object_id) asc

In this case, the result lists the table name and the index name,
sorted by table name.

The real purpose of this dynamic management view is to observe the usage
of indexes in the long run. It might make sense to take a snapshot of
that view or a snapshot of the result of the query and store it away
every day to compare the changes over time. If you can identify that
particular indexes did not get used for months or during periods such as
quarter-end reporting or fiscal-year reporting, you could eventually
delete those indexes from the database.

PARTITIONING AND COMPRESSION

After the long debate about CPU versus data size, your organization
might have decided to include compression in your design. Don’t forget
to consider how your partitioning strategy would be affected by this
new world order.

The good thing is that you can employ compression in your partitions as
well. The greatest part is that different partitions may have different
compression settings.

All of you who have played around with this feature (compression) know
that the first step in compressing any object is to see whether or not
compression would be beneficial for the object. The
sp_estimate_data_compression_savings stored procedure is available
to help with this. The report generated by this stored procedure
contains information about expected benefits of compression for each
partition individually.
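A sketch of such an estimate for one partition; the parameter values here are illustrative:

```sql
-- Estimate PAGE compression savings for partition 2 of the table.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'MyPartitionedTable',
    @index_id         = NULL,   -- NULL = all indexes of the table
    @partition_number = 2,
    @data_compression = 'PAGE';
```

The report returned includes the current size and the estimated compressed size for the requested partition.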

Let us check our table ‘MyPartitionedTable’ and see how compression
could benefit us. Since we have no data in the table, first we insert
some rows into the table.


SLIDING WINDOW TECHNIQUE

SQL Server cannot have an unlimited number of partitions. The limit is
1,000 partitions in SQL Server 2005. Because of this limitation, we
cannot keep switching in partitions forever. So, how do we resolve
this? The Sliding Window Technique is the answer.

With this technique, a table always has a specified number of
partitions. As a new partition comes in, the oldest partition is
switched out of the table and archived. This process can go on
forever. Additionally, it is very fast, since switching is a metadata
operation.

This technique can give a drastic boost in performance because, as
long as data is being loaded into the source staging table, your
partitioned table is not locked at all. You can improve the loading
further by using bulk inserts. Once the staging table is loaded, the
switch-in is basically a metadata operation.

This process is so regular that you can automate it using dynamic SQL.
The tasks are as follows:

  1. Manage staging tables for switching partitions in and out.
  2. Switch the new partition in and the old partition out of the table.
  3. Archive the staging table that holds the switched-out data, then
    remove both staging tables.
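The three steps above can be sketched as follows, assuming hypothetical boundary values, filegroup, and staging table names:

```sql
-- 1. Switch the oldest partition out into an empty, identically
--    structured archive staging table (metadata-only operation).
ALTER TABLE MyPartitionedTable SWITCH PARTITION 1 TO ArchiveStaging;

-- 2. Merge away the now-empty oldest range, then split a new range
--    at the leading edge (after marking a NEXT USED filegroup).
ALTER PARTITION FUNCTION MyPartitionFunction() MERGE RANGE (1000);
ALTER PARTITION SCHEME MyPartitionScheme NEXT USED FG_New;
ALTER PARTITION FUNCTION MyPartitionFunction() SPLIT RANGE (6000);

-- 3. Switch the freshly loaded staging table in as the new partition,
--    then archive ArchiveStaging and drop both staging tables.
ALTER TABLE LoadStaging SWITCH TO MyPartitionedTable PARTITION 5;
```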

Detection

You can use System Monitor (PerfMon) or SQL Trace (SQL Server Profiler)
to detect excessive compiles and recompiles.

System Monitor (Perfmon)

The SQL Statistics object provides counters to monitor compilation
and the type of requests that are sent to an instance of SQL Server. You
must monitor the number of query compilations and recompilations in
conjunction with the number of batches received to find out if the
compiles are contributing to high CPU use. Ideally, the ratio of SQL
Recompilations/sec to Batch Requests/sec should be very low unless
users are submitting ad hoc queries.

The key data counters to look at are as follows.

  • SQL Server: SQL Statistics: Batch Requests/sec

  • SQL Server: SQL Statistics: SQL Compilations/sec

  • SQL Server: SQL Statistics: SQL Recompilations/sec

For more information, see “SQL Statistics Object” in SQL Server Books
Online.

SQL Trace

If the PerfMon counters indicate a high number of recompiles, the
recompiles could be contributing to the high CPU consumed by SQL Server.
We would then need to look at the profiler trace to find the stored
procedures that were being recompiled. The SQL Server Profiler trace
gives us that information along with the reason for the recompilation.
You can use the following events to get this information.

SP:Recompile / SQL:StmtRecompile. The SP:Recompile and
the SQL:StmtRecompile event classes indicate which stored procedures
and statements have been recompiled. When you compile a stored
procedure, one event is generated for the stored procedure and one for
each statement that is compiled. However, when a stored procedure
recompiles, only the statement that caused the recompilation is
recompiled (not the entire stored procedure as in SQL Server 2000). Some
of the more important data columns for the SP:Recompile event class
are listed below. The EventSubClass data column in particular is
important for determining the reason for the
recompile. SP:Recompile is triggered once for the procedure or
trigger that is recompiled and is not fired for an ad hoc batch that
could likely be recompiled. In SQL Server 2005, it is more useful to
monitor SQL:StmtRecompiles as this event class is fired when any
type of batch, ad hoc, stored procedure, or trigger is recompiled.

The key data columns we look at in these events are as follows.

  • EventClass

  • EventSubClass

  • ObjectID (represents stored procedure that contains this statement)

  • SPID

  • StartTime

  • SqlHandle

  • TextData

For more information, see “SQL:StmtRecompile Event Class” in SQL Server
Books Online.

If you have a trace file saved, you can use the following query to see
all the recompile events that were captured in the trace.

select  
    spid, 
    StartTime, 
    Textdata, 
    EventSubclass, 
    ObjectID, 
    DatabaseID, 
    SQLHandle  
from  
    fn_trace_gettable ( 'e:\recompiletrace.trc' , 1) 
where  
    EventClass in(37,75,166)

EventClass 37 = SP:Recompile, 75 = CursorRecompile,
166 = SQL:StmtRecompile

You could further group the results from this query by the SqlHandle
and ObjectID columns, or by various other columns, in order to see if
most of the recompiles are attributed to one stored procedure or are
due to some other reason (such as a SET option that has changed).

Showplan XML For Query Compile. The Showplan XML For Query Compile
event class occurs when Microsoft SQL Server compiles or
recompiles a SQL statement. This event has information about the
statement that is being compiled or recompiled. This information
includes the query plan and the object ID of the procedure in question.
Capturing this event has significant performance overhead, as it is
captured for each compilation or recompilation. If you see a high value
for the SQL Compilations/sec counter in System Monitor, you should
monitor this event. With this information, you can see which statements
are frequently recompiled. You can use this information to change the
parameters of those statements. This should reduce the number of
recompiles.

DMVs. When you use the sys.dm_exec_query_optimizer_info DMV,
you can get a good idea of the time SQL Server spends optimizing. If you
take two snapshots of this DMV, you can get a good feel for the time
that is spent optimizing in the given time period.

select *  
from sys.dm_exec_query_optimizer_info 

counter          occurrence           value                 
---------------- -------------------- ---------------------  
optimizations    81                   1.0 
elapsed time     81                   6.4547820702944486E-2

In particular, look at the elapsed time, which is the time elapsed due
to optimizations. Since the elapsed time during optimization is
generally close to the CPU time that is used for the optimization (since
the optimization process is very CPU bound), you can get a good measure
of the extent to which the compile time is contributing to the high CPU
use.
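The two-snapshot approach can be sketched as follows. Note that for counters such as “elapsed time”, the value column is an average per occurrence, so the total for the interval is the delta of occurrence × value:

```sql
-- Snapshot 1
SELECT counter, occurrence, value
INTO #opt_snap1
FROM sys.dm_exec_query_optimizer_info;

WAITFOR DELAY '00:01:00';  -- measurement interval

-- Snapshot 2, joined back to compute the per-counter delta.
SELECT s2.counter,
       s2.occurrence - s1.occurrence                    AS occurrence_delta,
       (s2.occurrence * s2.value)
         - (s1.occurrence * s1.value)                   AS total_value_delta
FROM sys.dm_exec_query_optimizer_info s2
JOIN #opt_snap1 s1 ON s1.counter = s2.counter;
```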

Another DMV that is useful for capturing this information
is sys.dm_exec_query_stats.

The data columns that you want to look at are as follows.

  • sql_handle

  • total_worker_time

  • plan_generation_num

  • statement_start_offset

For more information, see the SQL Server Books Online topic
on sys.dm_exec_query_stats.

In particular, plan_generation_num indicates the number of times the
query has recompiled. The following sample query gives you the top
25 stored procedures that have been recompiled.

select top 25 
    sql_text.text, 
    sql_handle, 
    plan_generation_num, 
    execution_count, 
    dbid, 
    objectid  
from  
    sys.dm_exec_query_stats a 
    cross apply sys.dm_exec_sql_text(sql_handle) as sql_text 
where  
    plan_generation_num >1 
order by plan_generation_num desc

For additional information, see Batch Compilation, Recompilation, and
Plan Caching Issues in SQL Server 2005 on Microsoft TechNet.

PARTITION ALIGNED INDEX VIEWS

Partition aligned index views allow to efficiently create and manage
summary aggregates in relational data. The query results are
materialized immediately and persisted in physical storage in the
database. This is an extension of Indexed views which existed in SQL
Server 2005. Earlier, it was difficult to manage index views on
partitioned table. This is because switching in and out the partition
was not possible as the data was not distributed in a partitioned way in
the indexed view. Indexed views were required to be dropped before this
operation. In SQL Server 2008, this became possible with the
introduction of Partition Aligned Index Views. In this way Indexed views
have now evolved to become ‘Partition Aware’.

These indexed views can increase the speed and efficiency of queries
on partitioned data. The following conditions must be true for an
indexed view to be partition-aligned with the table on which it is
defined.

  • The partition functions of the indexes of the indexed view and table
    must define the same number of partitions, their boundary values and
    the partition must be based on the same column.
  • The projection list of the view definition includes the partitioning
    column (as opposed to an expression that includes the partitioning
    column) of the partitioned table.
  • Where the view definition performs a grouping, the partitioning
    column is one of the grouping columns included in the view
    definition.
  • Where the view references several tables (using joins, sub queries,
    functions, and so on), the indexed view is partition-aligned with
    only one of the partitioned tables.
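A minimal sketch satisfying these conditions, assuming MyPartitionedTable is partitioned on OrderDate by MyPartitionScheme (the view and column names are illustrative):

```sql
-- The view projects the partitioning column itself and groups by it.
-- SCHEMABINDING and COUNT_BIG(*) are required for any indexed view.
CREATE VIEW dbo.MyPartitionedSummary
WITH SCHEMABINDING
AS
SELECT OrderDate,
       COUNT_BIG(*) AS OrderCount,
       SUM(Amount)  AS TotalAmount
FROM dbo.MyPartitionedTable
GROUP BY OrderDate;
GO

-- Creating the clustered index on the same partition scheme and the
-- same column makes the indexed view partition-aligned.
CREATE UNIQUE CLUSTERED INDEX IX_MyPartitionedSummary
    ON dbo.MyPartitionedSummary (OrderDate)
    ON MyPartitionScheme (OrderDate);
```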

These views can be local or distributed. In a local partitioned indexed
view, all partitions lie on the same SQL Server instance. In a
distributed one, the different table partitions queried in a single
view reside on different SQL Server instances across the network.
Distributed partitioned views are especially used to support a
federation of SQL Server instances.

Monitoring tempdb space

It is better to prevent a problem than to work to solve it later. You
can use the following performance counters to monitor the amount of
space tempdb is using.

  • Free Space in tempdb (KB). This counter tracks free space
    in tempdb in kilobytes. Administrators can use this counter to
    determine if tempdb is running low on free space.

However, identifying how the different categories, as defined above, are
using the disk space in tempdb is a more interesting, and
productive, question.

The following query returns the tempdb space used by user and by
internal objects. Currently, it provides information
for tempdb only.

Select 
    SUM (user_object_reserved_page_count)*8 as user_objects_kb, 
    SUM (internal_object_reserved_page_count)*8 as internal_objects_kb, 
    SUM (version_store_reserved_page_count)*8  as version_store_kb, 
    SUM (unallocated_extent_page_count)*8 as freespace_kb 
From sys.dm_db_file_space_usage 
Where database_id = 2

Here is one sample output (with space in KBs).

user_objects_kb  internal_objects_kb  version_store_kb  freespace_kb 
---------------- -------------------- ----------------- ------------ 
8736             128                  64                448

Note that these calculations don’t account for pages in mixed extents.
The pages in mixed extents can be allocated to user and internal
objects.

REMOVING AN EXISTING PARTITION

The Merge operation is used to remove an existing partition. The
syntax of the MERGE statement is nearly the same as that of SPLIT. The
following statement removes the partition created by the SPLIT command
shown earlier.

ALTER PARTITION FUNCTION MyPartitionFunction()
MERGE RANGE (5000)


Resolution

  • Determine if cursors are the most appropriate means to accomplish
    the processing or whether a set-based operation, which is generally
    more efficient, is possible.

  • Consider enabling multiple active results (MARS) when connecting to
    SQL Server 2005.

  • Consult the appropriate documentation for your specific API to
    determine how to specify a larger fetch buffer size for the cursor:

    ODBC – SQL_ATTR_ROW_ARRAY_SIZE

    OLE DB – IRowset::GetNextRows or IRowsetLocate::GetRowsAt

SWITCHING A PARTITION

This operation is used to switch a partition in or out of a
partitioned table. Remember that for switching in, the corresponding
partition in the destination table must be empty. In the following
example, we switch in a table MyNewPartTable as partition 4 of
MyPartitionedTable.

ALTER TABLE MyNewPartTable switch TO MyPartitionedTable PARTITION 4

In the following example, we switch partition 4 out of
MyPartitionedTable to the table MyNewPartTable:

ALTER TABLE MyPartitionedTable switch PARTITION 4 TO MyNewPartTable

We partitioned our table because of its large amount of data. To speed
up data retrieval, we added several indexes to the table. This
certainly helps satisfy users’ query requirements, but it adds time
when updating or inserting records, given the large amount of data in
the table. So it is better to insert new records into an empty table
that is exactly the same as the partitioned table, and then switch
that table into the partitioned table as a partition. We discuss this
further when we cover the Sliding Window Technique.

When switching a partition in or out, both tables must have the same
clustered and nonclustered indexes and, additionally, the same
constraints.
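For a switch-in, the staging table also needs a CHECK constraint proving that its rows fall inside the target partition’s boundary; a sketch, with hypothetical column name and boundary values:

```sql
-- The staging table mirrors the partitioned table's structure and must
-- constrain its rows to the target partition's range (non-nullable).
ALTER TABLE MyNewPartTable
    ADD CONSTRAINT CK_Partition4_Range
    CHECK (PartitionKey > 3000 AND PartitionKey <= 4000
           AND PartitionKey IS NOT NULL);

ALTER TABLE MyNewPartTable SWITCH TO MyPartitionedTable PARTITION 4;
```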

Resolution

If you have detected excessive compilation/recompilation, consider the
following options.

  • If the recompile occurred because a SET option changed, use SQL
    Server Profiler to determine which SET option changed. Avoid
    changing SET options within stored procedures. It is better to set
    them at the connection level. Ensure that SET options are not
    changed during the lifetime of the connection.

  • Recompilation thresholds for temporary tables are lower than for
    normal tables. If the recompiles on a temporary table are due to
    statistics changes, you can change the temporary tables to table
    variables. A change in the cardinality of a table variable does not
    cause a recompilation. The drawback of this approach is that the
    query optimizer does not keep track of a table variable’s
    cardinality because statistics are not created or maintained on
    table variables. This can result in nonoptimal query plans. You can
    test the different options and choose the best one.

    Another option is to use the KEEP PLAN query hint. This sets the
    threshold of temporary tables to be the same as that of permanent
    tables. The EventSubclass column indicates that “Statistics
    Changed” for an operation on a temporary table.
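The two alternatives described above look like this in a sketch (the table and column names are illustrative):

```sql
-- Alternative 1: a table variable; changes in its cardinality do not
-- trigger recompilation (but no statistics are kept on it).
DECLARE @work table (id int PRIMARY KEY, payload varchar(100));

-- Alternative 2: keep the temporary table, but raise its recompile
-- threshold to that of a permanent table with the KEEP PLAN hint.
SELECT t.id, t.payload
FROM #work t
WHERE t.id > 100
OPTION (KEEP PLAN);
```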

  • To avoid recompilations that are due to changes in statistics (for
    example, when the plan becomes suboptimal due to change in the data
    statistics), specify the KEEPFIXED PLAN query hint. With this option
    in effect, recompilations can only happen because of
    correctness-related reasons (for example, when the underlying table
    structure has changed and the plan no longer applies) and not due to
    statistics. An example might be when a recompilation occurs if the
    schema of a table that is referenced by a statement changes, or if a
    table is marked with the sp_recompile stored procedure.

  • Turning off the automatic updates of statistics for indexes and
    statistics that are defined on a table or indexed view prevents
    recompiles that are due to statistics changes on that object. Note,
    however, that turning off the “auto-stats” feature by using this
    method is usually not a good idea. This is because the query
    optimizer is no longer sensitive to data changes in those objects
    and suboptimal query plans might result. Use this method only as a
    last resort after exhausting all other alternatives.

  • Batches should have qualified object names (for example, dbo.Table1)
    to avoid recompilation and to avoid ambiguity between objects.

  • To avoid recompiles that are due to deferred compiles, do not
    interleave DML and DDL or create the DDL from conditional constructs
    such as IF statements.

  • Run Database Engine Tuning Advisor (DTA) to see if any indexing
    changes improve the compile time and the execution time of the
    query.

  • Check to see if the stored procedure was created with the WITH
    RECOMPILE option or if the RECOMPILE query hint was used. If a
    procedure was created with the WITH RECOMPILE option, in
    SQL Server 2005, we may be able to take advantage of the statement
    level RECOMPILE hint if a particular statement within that procedure
    needs to be recompiled. This would avoid the necessity of
    recompiling the whole procedure each time it executes, while at the
    same time allowing the individual statement to be compiled. For more
    information on the RECOMPILE hint, see SQL Server Books Online.

PARTITIONING AND PARALLEL EXECUTION

If you have multiple processor cores on your server, partitioning your
large tables can also result in more optimized parallel execution
plans from the database engine. Remember that in SQL Server 2005, at
most a single thread was created per partition. So if there were fewer
partitions than processors, not all cores were utilized and some
processors sat idle. In SQL Server 2008, this restriction has been
removed: all processors can now be used to satisfy the query.

When SQL Server uses a parallel execution plan for partitioned tables,
you can see the Parallelism operator in the query’s execution plan. In
reality, multiple threads work on different partitions of the table.
We can control the parallelism of our queries by using the MAXDOP hint
as part of the query or in a plan guide. First, we see how to control
the degree of parallelism using a query hint:

SELECT * FROM MyPartitionedTable
ORDER BY OrderDate DESC
OPTION (MAXDOP 4)

The plan can also be created to support parallelism:

EXEC sp_create_plan_guide 
    @name = N'Guide1', 
    @stmt = N'SELECT * FROM MyPartitionedTable
ORDER BY OrderDate DESC', 
    @type = N'SQL', 
    @module_or_batch = NULL, 
    @params = NULL, 
    @hints = N'OPTION (MAXDOP 4)';

Remember that if you are creating a custom plan guide for your query,
you must not have specified MAXDOP = 1 (maximum degree of parallelism)
as an option. Additionally, the server’s max degree of parallelism
configuration option should be set appropriately.

sp_configure 'show advanced options', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
sp_configure 'max degree of parallelism', 0;
GO
RECONFIGURE WITH OVERRIDE;
GO

You can also set the configuration settings of the instance from the
Properties window of the instance selected from the object explorer in
the SQL Server Management Studio.

Detection

Intra-query parallelism problems can be detected using the following
methods.

System Monitor (Perfmon)

Look at the SQL Server:SQL Statistics – Batch Requests/sec counter
and see “SQL Statistics Object” in SQL Server Books Online for more
information.

Because a query must have an estimated cost that exceeds the cost
threshold for the parallelism configuration setting (which defaults to
5) before it is considered for a parallel plan, the more batches a
server is processing per second the less likely it is that the batches
are running with parallel plans. Servers that are running many parallel
queries normally have small batch requests per second (for example,
values less than 100).

DMVs

From a running server, you can determine whether any active requests are
running in parallel for a given session by using the following query.

select  
    r.session_id, 
    r.request_id, 
    max(isnull(exec_context_id, 0)) as number_of_workers, 
    r.sql_handle, 
    r.statement_start_offset, 
    r.statement_end_offset, 
    r.plan_handle 
from  
    sys.dm_exec_requests r 
    join sys.dm_os_tasks t on r.session_id = t.session_id 
    join sys.dm_exec_sessions s on r.session_id = s.session_id 
where  
    s.is_user_process = 0x1 
group by  
    r.session_id, r.request_id,  
    r.sql_handle, r.plan_handle,  
    r.statement_start_offset, r.statement_end_offset 
having max(isnull(exec_context_id, 0)) > 0

With this information, the text of the query can easily be retrieved by
using sys.dm_exec_sql_text, while the plan can be retrieved
using sys.dm_exec_query_plan.
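For instance, a minimal sketch (assuming VIEW SERVER STATE permission)
that joins sys.dm_exec_requests with these functions to pull back the
statement text and XML plan for currently running requests in one pass:

```sql
-- Sketch: retrieve batch text and showplan for active requests.
select
    r.session_id,
    t.text,            -- full batch text
    p.query_plan       -- XML showplan
from
    sys.dm_exec_requests r
    cross apply sys.dm_exec_sql_text(r.sql_handle) t
    cross apply sys.dm_exec_query_plan(r.plan_handle) p
```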

You may also search for plans that are eligible to run in parallel. This
can be done by searching the cached plans to see if a relational
operator has its Parallel attribute as a nonzero value. These plans
may not run in parallel, but they are eligible to do so if the system is
not too busy.

-- 
-- Find query plans that may run in parallel 
-- 
select  
    p.*,  
    q.*, 
    cp.plan_handle 
from  
    sys.dm_exec_cached_plans cp 
    cross apply sys.dm_exec_query_plan(cp.plan_handle) p 
    cross apply sys.dm_exec_sql_text(cp.plan_handle) as q 
where  
    cp.cacheobjtype = 'Compiled Plan' and 
    p.query_plan.value('declare namespace  
p="http://schemas.microsoft.com/sqlserver/2004/07/showplan"; 
        max(//p:RelOp/@Parallel)', 'float') > 0

In general, the duration of a query is longer than the amount of CPU
time, because some of the time was spent waiting on resources such as a
lock or physical I/O. The only scenario where a query can use more CPU
time than the elapsed duration is when the query runs with a parallel
plan such that multiple threads are concurrently using CPU. Note that
not all parallel queries will demonstrate this behavior (CPU time
greater than the duration).

Note: Some parts of the code snippet presented in the following
table have been displayed in multiple lines only for better readability.
These should be entered in a single line.

select  
    qs.sql_handle,  
    qs.statement_start_offset,  
    qs.statement_end_offset,  
    q.dbid, 
    q.objectid, 
    q.number, 
    q.encrypted, 
    q.text 
from  
    sys.dm_exec_query_stats qs 
    cross apply sys.dm_exec_sql_text(qs.plan_handle) as q 
where  
    qs.total_worker_time > qs.total_elapsed_time 
SQL Trace

Look for the following signs of parallel queries, which could be either
statements or batches that have CPU time greater than the duration.

select 
    EventClass, 
    TextData 
from 
    ::fn_trace_gettable('c:\temp\high_cpu_trace.trc', default) 
where 
    EventClass in (10, 12)     -- RPC:Completed, SQL:BatchCompleted 
    and CPU > Duration/1000    -- CPU is in milliseconds, Duration in microseconds

Or look for Showplans (unencoded) that have Parallelism operators in them:

select 
    EventClass, 
    TextData 
from 
    ::fn_trace_gettable('c:\temp\high_cpu_trace.trc', default) 
where 
    TextData LIKE '%Parallelism%'

COLLOCATION

This is a strong form of alignment. In collocation, objects are joined
with equijoin predicates on their partitioning columns. Collocation is
useful for writing queries that run much faster than regular queries.
It applies to tables that are partitioned on the same partition key
using the same partition function.

To ease the creation of partitions in an un-partitioned table,
collocation support is provided in the wizard to specify the collocation
table. This collocation table is used to copy the definition of
partition settings and we don’t need to specify the partition settings
again for each table having the same partition settings.

[Figure: specifying the collocation table in the partitioning wizard]

Though T-SQL has no support for explicitly specifying a collocation
table, this option has been added to the wizard to make partitioning a
table easier. On the performance side, if two tables are collocated,
excellent join performance can be expected.


Slow-Running Queries

Slow or long running queries can contribute to excessive resource
consumption and be the consequence of blocked queries.

Excessive resource consumption is not restricted to CPU resources, but
can also include I/O storage bandwidth and memory bandwidth. Even if SQL
Server queries are designed to avoid full table scans by restricting the
result set with a reasonable WHERE clause, they might not perform as
expected if there is not an appropriate index supporting that particular
query. Also, WHERE clauses can be dynamically constructed by
applications, depending on the user input. Given this, existing indexes
cannot cover all possible cases of restrictions. Excessive CPU, I/O, and
memory consumption by Transact-SQL statements are covered earlier in
this white paper.

In addition to missing indexes, there may be indexes that are not used.
An unused index does not improve the performance of any query, but
because all indexes must be maintained, it does slow down DML queries.

Queries can also run slowly because of wait states for logical locks and
for system resources that are blocking the query. The cause of the
blocking can be a poor application design, bad query plans, the lack of
useful indexes, and an SQL Server instance that is improperly configured
for the workload.

This section focuses on two causes of a slow running query—blocking and
index problems.

MULTITHREADED PARTITION ACCESS

Threading support for partitions has also been improved in SQL Server
2008. In SQL Server 2005, one thread was assigned to each partition.
So if, for example, 40 threads were available and a table had only 2
partitions, only 2 threads could be used to access the partitions'
data. In SQL Server 2008, all partitions are accessed by all the
available threads, which access the table partitions in a round-robin
fashion.

You need to make sure that the ‘max worker thread’ configuration is
properly set for your SQL Server instance.

SELECT * FROM sys.configurations
WHERE name = 'max worker threads'

This feature is also important for non-partitioned tables. In SQL
Server 2005, even a large non-partitioned table could be served by only
one thread in this scenario. In SQL Server 2008, all the available
threads can work on it, which boosts the performance of data access.

Partitioning allows many operations to be done in parallel, such as
loading data, backup and recovery, and query processing.

Memory errors

701 – There is insufficient system memory to run this query.

Causes

This is a very generic out-of-memory error for the server. It indicates
a failed memory allocation. It can be due to a variety of reasons,
including hitting memory limits on the current workload. With increased
memory requirements for SQL Server 2005 and certain configuration
settings (such as the max server memory option), users are more
likely to see this error than in SQL Server 2000. Usually the
transaction that failed is not the cause of this error.

Troubleshooting

Regardless of whether the error is consistent and repeatable (same
state) or random (appears at random times with different states), you
will need to investigate server memory distribution during the time you
see this error. When this error is present, it is possible that the
diagnostic queries will fail. Start investigation from external
assessment. Follow the steps outlined in General troubleshooting steps
in case of memory
errors.aspx#_General_troubleshooting_steps_in%20ca).

Possible solutions include:

  • Remove external memory pressure.

  • Increase the max server memory setting.

  • Free caches by using one of the following commands: DBCC
    FREESYSTEMCACHE, DBCC FREESESSIONCACHE, or DBCC FREEPROCCACHE.

  • If the problem reappears, reduce the workload.
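As a sketch of the cache-freeing option (try this on a test system
first; flushing the plan cache forces recompilation of all subsequent
queries and can cause a temporary CPU spike):

```sql
-- Flush system caches, the session cache, and the plan cache.
DBCC FREESYSTEMCACHE ('ALL');
DBCC FREESESSIONCACHE;
DBCC FREEPROCCACHE;
```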

802 – There is insufficient memory available in the buffer pool.

Causes

This error does not necessarily indicate an out-of-memory condition. It
might indicate that the buffer pool memory is used by someone else. In
SQL Server 2005, this error should be relatively rare.

Troubleshooting

Use the general troubleshooting steps and recommendations outlined for
the 701 error.

8628 – A time out occurred while waiting to optimize the query. Rerun
the query.

Causes

This error indicates that a query compilation process failed because it
was unable to obtain the amount of memory required to complete the
process. As a query goes through the compilation process, which
includes parsing, algebrization, and optimization, its memory
requirements may increase. Thus the query competes for memory
resources with other queries. If the query exceeds a predefined timeout
(which increases as the memory consumption for the query increases)
while waiting for resources, this error is returned. The most likely
reason for this is the presence of a number of large query compilations
on the server.

Troubleshooting

  1. Follow general troubleshooting steps to see if the server memory
    consumption is affected in general.

  2. Check the workload. Verify the amounts of memory consumed by
    different components. (See Internal Physical Memory
    Pressure.aspx#_Internal_physical_memory_pressure) earlier
    in this paper.)

  3. Check the output of DBCC MEMORYSTATUS for the number of waiters at
    each gateway (this information will tell you if there are other
    queries running that consume significant amounts of memory).

    Small Gateway                  Value 
    ------------------------------ -------------------- 
    Configured Units               8 
    Available Units                8 
    Acquires                       0 
    Waiters                        0 
    Threshold Factor               250000 
    Threshold                      250000 
    
    (6 row(s) affected) 
    
    Medium Gateway                 Value 
    ------------------------------ -------------------- 
    Configured Units               2 
    Available Units                2 
    Acquires                       0 
    Waiters                        0 
    Threshold Factor               12 
    
    (5 row(s) affected) 
    
    Big Gateway                    Value 
    ------------------------------ -------------------- 
    Configured Units               1 
    Available Units                1 
    Acquires                       0 
    Waiters                        0 
    Threshold Factor               8
    
  4. Reduce workload if possible.

8645 – A time out occurred while waiting for memory resources to
execute the query. Rerun the query.

Causes

This error indicates that many concurrent memory intensive queries are
being executed on the server. Queries that use sorts (ORDER BY) and
joins may consume significant amount of memory during execution. Query
memory requirements are significantly increased if there is a high
degree of parallelism enabled or if a query operates on a partitioned
table with non-aligned indexes. A query that cannot get the memory
resources it requires within the predefined timeout (by default, the
timeout is 25 times the estimated query cost, or
the sp_configure 'query wait' amount if set) receives this error.
Usually, the query that receives the error is not the one that is
consuming the memory.

Troubleshooting

  1. Follow general steps to assess server memory condition.

  2. Identify problematic queries: verify if there is a significant
    number of queries that operate on partitioned tables, check if they
    use non-aligned indexes, check if there are many queries involving
    joins and/or sorts.

  3. Check the sp_configure parameters max degree of
    parallelism and min memory per query. Try reducing the degree
    of parallelism, and verify that min memory per query is not set to
    a high value. If it is set to a high value, even small queries will
    acquire the specified amount of memory.

  4. To find out if queries are waiting on RESOURCE_SEMAPHORE,
    see Blocking later in this paper.

8651 – Could not perform the requested operation because the minimum
query memory is not available. Decrease the configured value for the
‘min memory per query’ server configuration option.

Causes

The causes are in part similar to those of the 8645 error; it may also
be an indication of a general low-memory condition on the server. A
min memory per query option setting that is too high may also
generate this error.

Troubleshooting

  1. Follow general memory error troubleshooting steps.

  2. Verify the sp_configure min memory per query option setting.
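A quick sketch for checking that setting (show advanced options is
needed first because min memory per query is an advanced option; the
value is reported in kilobytes, with a default of 1024):

```sql
-- Display the current 'min memory per query' setting.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min memory per query';
```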


PARTITION FUNCTIONS

The Partition Function is the function that defines the number of
partitions. This is the first step in the implementation of partitioning
for your database object. One partition function can be used to
partition a number of objects. The type of partition is also specified
in the partition function, which currently can only be ‘RANGE’.

Based on which partition each boundary value belongs to, we can divide
partition functions into two types:

  1. Left: the boundary value is the maximum value of the partition to
    its left, so the first boundary value is the maximum value of the
    first partition.
  2. Right: the boundary value is the minimum value of the partition to
    its right, so the first boundary value is the minimum value of the
    second partition.

The syntax for the creation of a partition function is as follows:

CREATE PARTITION FUNCTION partition_function_name ( input_parameter_type )
AS RANGE [ LEFT | RIGHT ]
FOR VALUES ( [ boundary_value [ ,...n ] ] ) [ ; ]
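As an illustration of the LEFT/RIGHT distinction, consider two
hypothetical functions with the single boundary value 1000:

```sql
-- RANGE LEFT: 1000 is the maximum of partition 1, so 1000 maps to partition 1.
CREATE PARTITION FUNCTION pfLeft (int) AS RANGE LEFT FOR VALUES (1000);

-- RANGE RIGHT: 1000 is the minimum of partition 2, so 1000 maps to partition 2.
CREATE PARTITION FUNCTION pfRight (int) AS RANGE RIGHT FOR VALUES (1000);

SELECT $PARTITION.pfLeft(1000)  AS LeftResult,   -- 1
       $PARTITION.pfRight(1000) AS RightResult;  -- 2
```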

Generally the input_parameter_type is numeric; SQL Server supports
only certain input parameter types. The list of partition functions
defined can be obtained by using the sys.partition_functions catalog
view:

SELECT * FROM sys.partition_functions

The first thing you will want to do is test whether your partition
function behaves as intended, especially on the boundary values. You
can check this with the special $PARTITION function. Here we test the
partition function MyPartitionFunc2 created earlier; this SQL verifies
which partition the partition key value 100 would belong to.

SELECT $PARTITION.MYPARTITIONFUNC2(100) [PARTITION NUMBER]

To find out the boundary values defined by partition functions, the
sys.partition_range_values catalog view is available.

SELECT * FROM SYS.PARTITION_RANGE_VALUES

Though SQL Server does not directly support list partitioning, you can
create list partitions by having the partition function specify each
value in the list with the LEFT clause. After that, put a CHECK
constraint on the table so that no value outside the 'list' can be
inserted into the partition key column.
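A minimal sketch of this trick (hypothetical table and region codes;
the CHECK constraint restricts the partition key column to the listed
values):

```sql
-- Each listed value becomes its own boundary with RANGE LEFT,
-- so every value in the 'list' maps to a distinct partition.
CREATE PARTITION FUNCTION pfRegionList (int)
AS RANGE LEFT FOR VALUES (10, 20, 30);

CREATE TABLE RegionSales
(
    RegionCode int NOT NULL CHECK (RegionCode IN (10, 20, 30)),
    Amount money
);  -- would be created ON a partition scheme based on pfRegionList
```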

CPU Bottlenecks

A CPU bottleneck that happens suddenly and unexpectedly, without
additional load on the server, is commonly caused by a nonoptimal query
plan, a poor configuration, or design factors, and not insufficient
hardware resources. Before rushing out to buy faster and/or more
processors, you should first identify the largest consumers of CPU
bandwidth and see if they can be tuned.

System Monitor is generally the best means to determine if the server is
CPU bound. You should look to see if the Processor:% Processor
Time
 counter is high; values in excess of 80% processor time per CPU
are generally deemed to be a bottleneck. You can also monitor the
SQL Server schedulers using the sys.dm_os_schedulers view to see if
the number of runnable tasks is typically nonzero. A nonzero value
indicates that tasks have to wait for their time slice to run; high
values for this counter are a symptom of a CPU bottleneck. You can use
the following query to list all the schedulers and look at the number of
runnable tasks.

select  
    scheduler_id, 
    current_tasks_count, 
    runnable_tasks_count 
from  
    sys.dm_os_schedulers 
where  
    scheduler_id < 255

The following query gives you a high-level view of which currently
cached batches or procedures are using the most CPU. The query
aggregates the CPU consumed by all statements with the same
plan_handle (meaning that they are part of the same batch or
procedure). If a given plan_handle has more than one statement, you may
have to drill in further to find the specific query that is the largest
contributor to the overall CPU usage.

select top 50  
    sum(qs.total_worker_time) as total_cpu_time,  
    sum(qs.execution_count) as total_execution_count, 
    count(*) as  number_of_statements,  
    qs.plan_handle  
from  
    sys.dm_exec_query_stats qs 
group by qs.plan_handle 
order by sum(qs.total_worker_time) desc

The remainder of this section discusses some common CPU-intensive
operations that can occur with SQL Server, as well as efficient methods
to detect and resolve these problems.

ADDITIONAL OPERATIONS WITH PARTITIONS

SQL Server also supports other operations with partitions. These
operations help in managing a partitioned object. They are as follows:

  1. Split
  2. Merge
  3. Switch

Remember that these are metadata operations and do not involve any
movement of data.
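A sketch of the three operations (hypothetical function, scheme, and
table names; SWITCH requires the staging table to be on the same
filegroup with an identical schema):

```sql
-- SPLIT: introduce a new boundary value, creating an extra partition.
ALTER PARTITION SCHEME MyScheme NEXT USED [FG1];
ALTER PARTITION FUNCTION MyFunc() SPLIT RANGE (6000);

-- MERGE: remove a boundary value, collapsing two partitions into one.
ALTER PARTITION FUNCTION MyFunc() MERGE RANGE (6000);

-- SWITCH: move partition 2 of a table into an empty staging table.
ALTER TABLE MyPartitionedTable SWITCH PARTITION 2 TO MyStagingTable;
```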


LOCK ESCALATION AND PARTITIONING

Lock escalation is the process of converting many fine-grain locks into
fewer coarse-grain locks, reducing system overhead. SQL Server 2005
supports lock escalation on only the table level. Now in SQL Server
2008, lock escalation is also supported on the partition Level. This was
introduced to improve the availability of data in the partitioned table.
This is not the default behavior but it can be changed for any table. So
changing this option can further improve the performance for querying
your table.

If you want to find out the lock escalation setting of any of your
existing tables, you can check it in the sys.tables catalog view.

SELECT lock_escalation_desc FROM sys.tables WHERE name = 'MyPartitionedTable'

You can change the new LOCK_ESCALATION property of any of your
existing table. The possible options are as follows:

1. Auto: This option lets SQL Server decide: if the table is
partitioned, escalation is to the partition level; otherwise it is to
the table level.

2. Table: The lock escalation will always be on the table level no
matter if the table is partitioned or not. This is the same behavior as
in SQL Server 2005. This is also the default behavior in SQL Server
2008.

3. Disable: This prevents lock escalation in most cases. Table-level
locks are not completely disallowed; for example, when you are scanning
a table that has no clustered index under the serializable isolation
level, the database engine must take a table lock to protect data
integrity.

After you are done partitioning your table, set LOCK_ESCALATION on the
table to AUTO so that the database engine may allow locks to escalate
at the partition, i.e., HoBT (heap or B-tree), level for that table.

ALTER TABLE MyPartitionedTable SET (LOCK_ESCALATION = AUTO)

It must be remembered that if partition-level escalation is not
implemented correctly, it may create deadlock problems for clients
expecting table-level lock escalation. This is the reason it is not
the default option.

You can also specify the lock escalation settings by changing the
properties of your table.

[Figure: Lock Escalation setting in the table properties window]

You can view the locking details in your server by using the sp_lock
stored procedure.

Caches and memory pressure

An alternative way to look at external and internal memory pressure is
to look at the behavior of memory caches.

One of the internal implementation differences of SQL Server 2005
compared to SQL Server 2000 is its uniform caching framework. In order
to remove the least recently used entries from caches, the framework
implements a clock algorithm. Currently it uses two clock hands: an
internal clock hand and an external clock hand.

The internal clock hand controls the size of a cache relative to other
caches. It starts moving when the framework predicts that the cache is
about to reach its cap.

The external clock hand starts to move when SQL Server as a whole gets
into memory pressure. Movement of the external clock hand can be due to
external as well as internal memory pressure. Do not confuse movement of
the internal and external clock hands with internal and external memory
pressure.

Information about clock hands movements is exposed through
the sys.dm_os_memory_cache_clock_hands DMV as shown in the
following code. Each cache entry has a separate row for the internal and
the external clock hand. If you see
increasing rounds_count and removed_all_rounds_count, then the
server is under the internal/external memory pressure.

select * 
from  
    sys.dm_os_memory_cache_clock_hands 
where  
    rounds_count > 0 
    and removed_all_rounds_count > 0

You can get additional information about the caches such as their size
by joining with sys.dm_os_cache_counters DMV as follows.

Note: Some parts of the code snippet presented in the following
table have been displayed in multiple lines only for better readability.
These should be entered in a single line.

select  
    distinct cc.cache_address,  
    cc.name,  
    cc.type, 
    cc.single_pages_kb + cc.multi_pages_kb as total_kb,  
    cc.single_pages_in_use_kb + cc.multi_pages_in_use_kb 
    as total_in_use_kb,  
    cc.entries_count,  
    cc.entries_in_use_count, 
    ch.removed_all_rounds_count,  
    ch.removed_last_round_count 
from  
    sys.dm_os_memory_cache_counters cc  
    join sys.dm_os_memory_cache_clock_hands ch on 
    (cc.cache_address =ch.cache_address) 
/* 
--uncomment this block to have the information only 
for moving hands caches 
where  
    ch.rounds_count > 0 
    and ch.removed_all_rounds_count > 0 
*/ 
order by total_kb desc

Note that for USERSTORE entries, the amount of pages in use is not
reported and thus will be NULL.

3. STEP # 03 (Create table or index based on the partition scheme)

The important clause related to partitioning is the ON clause of the
CREATE TABLE statement. We create our table MyPartitionedTable as follows:

CREATE TABLE MyPartitionedTable
(
    ID int PRIMARY KEY,
    Name VARCHAR(50)
)
ON MyPartitionScheme(ID)

Virtual address space and physical memory

In Microsoft Windows®, each process has its own virtual address space
(VAS). The set of all virtual addresses available for process use
constitutes the size of the VAS. The size of the VAS depends on the
architecture (32- or 64-bit) and the operating system. In the context of
troubleshooting, it is important to understand that virtual address
space is a consumable memory resource and an application can run out of
it even on a 64-bit platform while physical memory may still be
available.

For more information about virtual address space, see “Process Address
Space” in SQL Server Books Online and the article Virtual Address Space
on MSDN.

PARTITIONED INDEX

We can also partition our indexes, which should contribute to improved
performance. If indexes are partitioned, they serve as local indexes
for each partition. We can also use global indexes on a partitioned
table without caring about the different partitions of the table.

It must be remembered that indexes can be partitioned using a different
partition key than the table. The index can also have a different
number of partitions than the table. We cannot, however, partition the
clustered index differently from the table. To partition an index, the
ON clause is used when creating the index, specifying the partition
scheme along with the column:

CREATE NONCLUSTERED INDEX MyNonClusteredIndex ON dbo.MyTable(MyID1)
ON MyPartitionScheme(MyID2);

You can see that MyTable is indexed on the MyID1 column, but the index
is partitioned on the MyID2 column. If your table and index use the
same partition function, they are called aligned. If they also use the
same partition scheme, they are called storage aligned (note this in
the figure below). Using the same partition function for the index as
for the table generally improves performance.

[Figure: storage-aligned table and index partitions]

Methodology

There can be many reasons for a slowdown in SQL Server. We use the
following three key symptoms to start diagnosing problems.

  • Resource bottlenecks: CPU, memory, and I/O bottlenecks are
    covered in this paper. We do not consider network issues. For each
    resource bottleneck, we describe how to identify the problem and
    then iterate through the possible causes. For example, a memory
    bottleneck can lead to excessive paging that ultimately impacts
    performance.

  • Tempdb bottlenecks: Since there is only one tempdb for each
    SQL Server instance, this can be a performance and a disk space
    bottleneck. A misbehaving application can overload tempdb both
    in terms of excessive DDL/DML operations and in space. This can
    cause unrelated applications running on the server to slow down or
    fail.

  • A slow running user query: The performance of an existing query
    may regress or a new query may appear to be taking longer than
    expected. There can be many reasons for this. For example:

    • Changes in statistical information can lead to a poor query plan
      for an existing query.

    • Missing indexes can force table scans and slow down the query.

    • An application can slow down due to blocking even if resource
      utilization is normal.

    Excessive blocking, for example, can be due to poor application or
    schema design or choosing an improper isolation level for the
    transaction.

The causes of these symptoms are not necessarily independent of each
other. The poor choice of a query plan can tax system resources and
cause an overall slowdown of the workload. So, if a large table is
missing a useful index, or the query optimizer decides not to use it,
this not only causes the query to slow down but it also puts heavy
pressure on the I/O subsystem to read the unnecessary data pages and on
the memory (buffer pool) to store these pages in the cache. Similarly,
excessive recompilation of a frequently running query can put pressure
on the CPU.

.aspx#mainSection)Top
Of
Page.aspx#mainSection) 

PARTITION ELIMINATION / PRUNING

This is the process used by the optimizer to figure out which partitions
are necessary to access in order for SQL Server to provide the results
of any query. It means those partitions that do not fulfill the criteria
of the query are not even checked to find the result. The optimizer
figures this out based on the filter specified in the query. These
filters are specified in WHERE, IN or other filtering clauses.

If you want to know whether the database engine is using partition
elimination while executing your queries, run the SET STATISTICS
PROFILE ON command before running them. We can see partition
elimination in the execution plan of the following query:

SELECT * FROM [MyPartitionPracticeDB].[dbo].[MyPartitionedTable]
WHERE ID > 1001

To find out which partition are used to satisfy the required query
result, we can look at <RunTimePartitionSummary> in the XML plan
of execution.

<RunTimePartitionSummary>
  <PartitionsAccessed PartitionCount="1">
    <PartitionRange Start="2" End="2" />
  </PartitionsAccessed>
</RunTimePartitionSummary>

It is apparent that only the 2nd partition is used; the other
partitions are not part of the search.

There are two types of partition elimination schemes that SQL Server may
use. They are as follows:

  1. Static Partition Elimination
  2. Dynamic Partition Elimination

As the name suggests, with the dynamic partition elimination scheme the
decision about which partitions are needed to satisfy the query
criteria is made at query execution time; with static partition
elimination the decision is made at compile time, when the filter
values are known in advance. In SQL Server 2008, dynamic partition
elimination is much more efficient than it was in SQL Server 2005.


PARTITIONED TABLE

After creation of a partition scheme, a table may be defined to follow
that scheme; such a table is called partitioned. A partitioned table
may have a partitioned index, and partition-aligned indexed views may
also be created for it. These indexes and views may be based on a
different partition strategy (partition function and partition scheme).

You may wonder whether it is possible to partition your table using
multiple columns. The answer is both yes and no. No, because there is
no direct support for this in SQL Server. Yes, because you can still
achieve it by using a persisted computed column based on any number of
columns you want. It must be remembered that this is still a
single-dimension partition.
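A sketch of the computed-column workaround (hypothetical names; the
column must be PERSISTED to be usable as a partition key):

```sql
-- Combine year and month into one persisted computed column
-- and partition on that single column.
CREATE TABLE SalesHistory
(
    SaleYear  int NOT NULL,
    SaleMonth int NOT NULL,
    Amount    money,
    YearMonth AS (SaleYear * 100 + SaleMonth) PERSISTED
);  -- would be created ON a partition scheme keyed on YearMonth
```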

Now you might be wondering whether your existing tables could be
partitioned or not. For partitioning your existing table just drop the
clustered index on your table and recreate it on the required partition
scheme.
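One common way to do this in a single step, rather than an explicit
drop and recreate, is CREATE INDEX with DROP_EXISTING (a sketch with
hypothetical names):

```sql
-- Rebuild the existing clustered index onto the partition scheme;
-- this moves the table's data into the partitions in one operation.
CREATE CLUSTERED INDEX CIX_MyTable ON dbo.MyTable (ID)
WITH (DROP_EXISTING = ON)
ON MyPartitionScheme (ID);
```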

CREATE TABLE
    [ database_name . [ schema_name ] . | schema_name . ] table_name
    ( { <column_definition> | <computed_column_definition> }
      [ <table_constraint> ] [ ,...n ] )
    [ ON { partition_scheme_name ( partition_column_name ) | filegroup | "default" } ]
    [ TEXTIMAGE_ON { filegroup | "default" } ] [ ; ]

SQL Server 2008 has introduced an exciting new feature, FILESTREAM.
Remember that if any of your columns is based on filestream data, you
have to follow some additional steps to get things to work with
partitioning.

Resolution

If the contention in tempdb is due to excessive DDL operation, you
will need to look at your application and see if you can minimize the
DDL operation. You can try the following suggestions.

  • If you use stored procedure scoped temporary tables, consider
    whether these tables can be moved outside of the stored procedure.
    Otherwise, each execution of the stored procedure causes a
    create/drop of the temporary table.

  • Look at query plans to see if some plans create a lot of temporary
    objects, spools, sorts, or worktables. You may need to eliminate
    some temporary objects. For example, creating an index on a column
    that is used in ORDER BY may eliminate the sort.
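The ORDER BY suggestion in the last bullet can be sketched as follows; the table and column names are illustrative, not from the source:

-- Without an index on OrderDate, this query typically needs a sort
-- operator, which may spill to a worktable in tempdb. An index that
-- delivers rows in the required order can eliminate the sort.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate);
GO
SELECT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate;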

If the contention is on the SGAM and PFS allocation pages, you can
mitigate it by trying the following:

  • Add more tempdb data files, all of equal size, to distribute the
    workload across all of the disks and files. Ideally, you want to
    have as many files as there are CPUs (taking CPU affinity into
    account).

  • Use TF-1118 to eliminate mixed extent allocations.
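Both suggestions can be sketched as follows; the file path and sizes are illustrative, not prescriptive:

-- Add an equally sized tempdb data file (repeat per CPU as needed).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'c:\tempdb\tempdev2.ndf',
          SIZE = 100MB,
          FILEGROWTH = 10MB);
GO
-- Enable TF-1118 server-wide for the current run so new objects get
-- uniform (not mixed) extents. To make it permanent, add -T1118 as a
-- startup parameter for the SQL Server service.
DBCC TRACEON (1118, -1);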


Enough Study, Let’s Practice!

CREATE A PRACTICE DATABASE

Before looking at an example implementation of partitioning, let us
create a new database to test this feature, with two filegroups, FG1
and FG2. Make sure that you create a PartitionPractice folder on the
C:\ drive before running the following query:

USE master
GO

CREATE DATABASE MyPartitionPracticeDB
ON PRIMARY
( NAME = db_dat,
  FILENAME = 'c:\PartitionPractice\db.mdf',
  SIZE = 4MB),
FILEGROUP FG1
( NAME = FG1_dat,
  FILENAME = 'c:\PartitionPractice\FG1.ndf',
  SIZE = 2MB),
FILEGROUP FG2
( NAME = FG2_dat,
  FILENAME = 'c:\PartitionPractice\FG2.ndf',
  SIZE = 2MB)
LOG ON
( NAME = db_log,
  FILENAME = 'c:\PartitionPractice\log.ldf',
  SIZE = 2MB,
  FILEGROWTH = 10% );
GO

USE MyPartitionPracticeDB
GO

1. STEP # 01 (Create partition function)

CREATE PARTITION FUNCTION MyPartfunc (int) AS
RANGE LEFT FOR VALUES (1000, 2000, 3000, 4000, 5000);

2. STEP # 02 (Create partition scheme)

-- Five boundary values produce six partitions, so the scheme must
-- map six filegroups; here we alternate between FG1 and FG2.
CREATE PARTITION SCHEME MyPartitionScheme AS
PARTITION MyPartfunc TO
([FG1], [FG2], [FG1], [FG2], [FG1], [FG2])
GO
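With a partition function and scheme in place, a table can be created on the scheme and the row-to-partition mapping verified. This sketch assumes the MyPartfunc/MyPartitionScheme names from the steps above; the table name is illustrative:

-- 3. Create a table on the partition scheme.
CREATE TABLE PartitionedOrders
(
    OrderID int NOT NULL,
    Amount  money NULL
)
ON MyPartitionScheme (OrderID);
GO
INSERT INTO PartitionedOrders VALUES (500, 10.00);
INSERT INTO PartitionedOrders VALUES (1500, 20.00);
INSERT INTO PartitionedOrders VALUES (9999, 30.00);
GO
-- $PARTITION evaluates the partition function for a given value,
-- showing which partition each row landed in.
SELECT OrderID, $PARTITION.MyPartfunc(OrderID) AS PartitionNumber
FROM PartitionedOrders;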
Resolution

Consider the following options if you have detected inefficient query
plans.

  • Tune the query with the Database Engine Tuning Advisor to see if it
    produces any index recommendations.

  • Check for issues with bad cardinality estimates.

    Are the queries written so that they use the most restrictive WHERE
    clause that is applicable? Unrestricted queries are resource
    intensive by their very nature.

    Run UPDATE STATISTICS on the tables involved in the query and check
    to see if the problem persists.

    Does the query use constructs for which the optimizer is unable to
    accurately estimate cardinality? Consider whether the query can be
    modified to avoid the issue.

  • If it is not possible to modify the schema or the query, SQL
    Server 2005 has a new plan guide feature that allows you to specify
    query hints to add to queries that match certain text. This can be
    done for ad hoc queries as well as inside a stored procedure. Hints
    such as OPTION (OPTIMIZE FOR) allow you to impact the cardinality
    estimates while leaving the optimizer its full array of potential
    plans. Other hints such as OPTION (FORCE ORDER) or OPTION (USE PLAN)
    allow you varying degrees of control over the query plan.
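The plan guide feature described above can be sketched as follows; the query text, table, and parameter values are hypothetical, chosen only to illustrate the shape of the call:

-- Attach an OPTIMIZE FOR hint to a matching parameterized SQL
-- statement without changing the application (SQL Server 2005+).
EXEC sp_create_plan_guide
    @name = N'Guide_OrdersByCustomer',
    @stmt = N'SELECT * FROM dbo.Orders WHERE CustomerID = @cust;',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = N'@cust int',
    @hints = N'OPTION (OPTIMIZE FOR (@cust = 1000))';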

CREATE PARTITIONS

A table created without partitions can be partitioned afterwards. To
make partitioning easy, GUI support is available in SQL Server
Management Studio: not only can you create partitions on a
non-partitioned table, but you can also manage partitions on a table
that is already partitioned.

Just right-click a non-partitioned table and look for the Create
Partition option under the Storage submenu of the context menu:

[Screenshot: the Create Partition option under the Storage submenu]

When you select this option, an easy-to-use wizard starts to walk you
through creating partitions. Remember, you can use an already existing
partition function and scheme to partition the table, or create new
ones: the wizard lets you enter the details of the partition function
(including the LEFT or RIGHT boundary option), the details of the
partition scheme, and the filegroups for the new partitions.

[Screenshot: the Create Partition wizard]

This wizard is very easy to use and lets the developer generate
partition functions, partition schemes, and related options if none of
the existing ones fits the user’s needs. There are also third-party
solutions available for managing the partitioning of your objects
outside SQL Server Management Studio.