


Working with Confidence: How Sure Is the Oracle CBO About Its Cardinality Estimates, and Why Does It Matter?

Iordan K. Iotzov
News America Marketing (NewsCorp)

Overview

Ask Why

A typical SQL tuning exercise starts with a statement that takes too long to execute. Excluding bugs and locking, there are two major reasons for that behavior. Oracle may do its best and find the optimal execution plan, or one close to it, yet still be unable to deliver the desired execution time because of missing supporting structures, such as indexes and materialized views, or inadequate hardware resources, such as disk, CPU, memory or networking. The way to achieve the desired performance in this case is to identify and resolve the hardware bottleneck, or to identify the missing supporting structure and create it. It is also possible that Oracle picks a subpar execution plan that leads to poor SQL performance even though a significantly better plan exists. One of the most effective methods for rectifying subpar execution plans consists of comparing the estimated result-set cardinalities with the actual result-set cardinalities; it is well explained in the "Tuning by Cardinality Feedback" paper (Breitling, n.d.).
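For readers who want to try the cardinality-feedback check themselves, below is a minimal sketch using two standard Oracle features, the GATHER_PLAN_STATISTICS hint and DBMS_XPLAN.DISPLAY_CURSOR; the table and column names are illustrative only. The E-Rows (estimated) and A-Rows (actual) columns in the output are the ones to compare.

-- run the statement once, collecting rowsource statistics
select /*+ gather_plan_statistics */ t2.*
from   tab1 t1, tab2 t2
where  t1.str like '%BAA%'
and    t1.id = t2.id;

-- show the last plan of this session with estimated vs. actual row counts
select *
from   table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));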
Let's go a step further and discover why the Oracle cost-based optimizer (CBO) fails to get the correct estimates, which may eventually lead to the generation of an inefficient execution plan. Issues with table and index statistics are one reason for suboptimal CBO behavior. All statistics stored in the data dictionary must be in sync with the corresponding data at all times. The default statistics gathering mechanisms usually work fine for non-volatile data, even though we might occasionally need to manually adjust some statistics (Lewis, 2006). Dealing with volatile data presents a bigger challenge, but there are a number of effective techniques (Iotzov, 2012).

Making guesses that turn out to be incorrect is the other major reason why the CBO would pick a bad execution plan. Typically, that is not an indication of a software problem. The Oracle CBO is an excellent piece of software that is able to decipher very complex statements. It does not have a crystal ball, though. Since it is expected to assign a selectivity to every predicate, no matter how complex, the CBO is frequently left with no option but to guess. Unnecessary guesswork in design and troubleshooting is rightfully criticized by most responsible database professionals. There is even a party dedicated to that cause (BAAG, n.d.). Less attention is paid to guesses made by complex software, such as the Oracle CBO. Even though we cannot change the software, and most of its assumptions are the best that could be made under the circumstances, we still need to know about the guesses made by the optimizer. That knowledge would not only help us write better SQL, but would also enable us to troubleshoot more effectively. The amount of guesswork included in a cardinality estimate could also be used by the optimizer itself to deliver better execution plans. The CBO could decide to go with a robust option, such as a hash join, when it detects that it is not confident about a cardinality estimate. Those topics are beyond the scope of this paper, though.

Confidence of Cardinality Estimates in Oracle, current state

Throughout the paper, I assume that the Oracle CBO does not universally compute or use a confidence level for its cardinality estimates. That is, in most cases, the CBO is not aware whether an estimate is based on sound mathematical models or on guesses. My assumption about the Oracle CBO's behavior is based on the following three factors.

First, I was not able to find any mention of confidence, maximum error of estimate, or anything else that might imply that Oracle computes or uses this type of statistic. Next, I could not find any information about the topic from independent experts. In fact, Jonathan Lewis was kind enough to confirm my assumptions regarding this matter (Lewis, 2012). Finally, I was not able to find any information in the 10053 trace files that may relate to confidence of estimates.

I ran a trace on two similar queries. Figure 1 shows a query with a LIKE predicate with a leading wildcard, a construct whose selectivity is genuinely difficult to assess.

select tab2.*
from   tab1, tab2
where  tab1.str like '%BAA%'
and    tab1.id = tab2.id

Figure 1: Query that forces the CBO to make a wild guess

Figure 2 shows a query that contains an equality predicate, a popular construct whose selectivity can be reliably measured, particularly if the underlying data is not skewed.

select tab2.*
from   tab1, tab2
where  tab1.NUM = 14
and    tab1.id = tab2.id

Figure 2: Query that forces the CBO to make a reasonable guess

The execution plans for those two queries, shown in Figure 3 and Figure 4, are very similar. They have the same sequence of operations, the same join types, and the same estimated cardinalities.

----------------------------------------------------------------------
| Id | Operation            | Name | Rows  | Bytes | Cost | Time     |
----------------------------------------------------------------------
|  0 | SELECT STATEMENT     |      |       |       |  38K |          |
|  1 |  HASH JOIN           |      |  488K |   21M |  38K | 00:08:49 |
|  2 |   TABLE ACCESS FULL  | TAB1 |  488K |   11M |  11K | 00:02:20 |
|  3 |   TABLE ACCESS FULL  | TAB2 | 9766K |  210M |  10K | 00:02:06 |
----------------------------------------------------------------------

Figure 3: Execution plan for the query that forced the CBO to make a wild guess

----------------------------------------------------------------------
| Id | Operation            | Name | Rows  | Bytes | Cost | Time     |
----------------------------------------------------------------------
|  0 | SELECT STATEMENT     |      |       |       |  38K |          |
|  1 |  HASH JOIN           |      |  488K |   15M |  38K | 00:08:45 |
|  2 |   TABLE ACCESS FULL  | TAB1 |  488K | 4395K |  11K | 00:02:20 |
|  3 |   TABLE ACCESS FULL  | TAB2 | 9766K |  210M |  10K | 00:02:06 |
----------------------------------------------------------------------

Figure 4: Execution plan for the query that forced the CBO to make a reasonable guess

There is only a small difference in the number of estimated bytes. Nothing in the execution plans or the 10053 traces, as far as I can see, indicates that the Oracle CBO computed or used any information related to the level of guesswork it had to employ for the query. The SQL Tuning Advisor may issue some recommendations that are indirectly related to cardinality confidence.
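For completeness, this is one common way to produce the 10053 traces used in the comparison above; the trace file is written to the database trace directory, and the statement has to be optimized (hard parsed) while the event is set.

alter session set tracefile_identifier = 'cbo_confidence_test';
alter session set events '10053 trace name context forever, level 1';

explain plan for
select tab2.*
from   tab1, tab2
where  tab1.str like '%BAA%'
and    tab1.id = tab2.id;

alter session set events '10053 trace name context off';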
Cardinality Confidence in Other Databases

Teradata has four discrete cardinality confidence levels that are computed for almost every intermediary set (Ballinger, 2009). Figure 5 shows those confidence levels, along with a very simple definition of the circumstances under which each would be assigned.

Figure 5: Confidence levels in Teradata

Below is an example of an explain plan in Teradata for a very simple query:

select *
from   iiotzov.dep
where  dep_name = 'IT'

Explanation
 1) First, we lock a distinct iiotzov."pseudo table" for read on a RowHash to prevent global deadlock for iiotzov.dep.
 2) Next, we lock iiotzov.dep for read.
 3) We do an all-AMPs RETRIEVE step from iiotzov.dep by way of an all-rows scan with a condition of ("iiotzov.dep.DEP_NAME = 'IT '") into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with high confidence to be 1 row. The estimated time for this step is 0.01 seconds.
 4) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
 -> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.01 seconds.

The statistics of the table were collected, so the cardinality estimate in step 3 has high confidence.

Here is the explain plan for a slightly more complex query:

select b.*
from   iiotzov.dep a, iiotzov.emp b
where  a.dep_name = 'IT'
and    a.dep_id = b.dep_id

Explanation
 1) ...
 2) ...
 3) ...
 4) We do an all-AMPs RETRIEVE step from iiotzov.b by way of an all-rows scan with no residual conditions into Spool 2 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 2 by row hash. The size of Spool 2 is estimated with high confidence to be 2 rows. The estimated time for this step is 0.00 seconds.
 5) We do an all-AMPs JOIN step from iiotzov.a by way of a RowHash match scan with a condition of ("iiotzov.a.DEP_NAME = 'IT '"), which is joined to Spool 2 (Last Use) by way of a RowHash match scan. iiotzov.a and Spool 2 are joined using a merge join, with a join condition of ("iiotzov.a.DEP_ID = DEP_ID"). The result goes into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with low confidence to be 2 rows. The estimated time for this step is 0.00 seconds.
 6) ...

We see that even though the plan started with high confidence at step 4, the confidence level dropped to low after only one join. Since increasing the confidence as tables are joined is almost impossible, lower confidence levels coming from earlier steps dominate the confidence levels of the steps they feed into (Ballinger, 2009). Most non-trivial queries end up with low or no confidence despite the effort we put into gathering statistics.

Let's take a look at Figure 6, which shows the confidence levels of a real query. The initial steps often involve working directly with tables, so their respective cardinalities usually come with high confidence. As the query goes on, and more of the work is done on intermediate sets rather than on tables, the confidence of the cardinality estimates goes down. In this case, the Teradata optimizer works with very little visibility after step 10.

Figure 6: Confidence level of a real query

Foundations of Estimating Cardinality

Joins

Cardinality has a central role in SQL optimization. It drives most optimizer decisions, from the order of processing and the type of joins to the way the records are accessed. Figure 7 depicts a join in Oracle. All joins have two source sets and one result set. Most joins also have a predicate that links the two source sets.

Figure 7: Join of two sources

The cardinality of the result can be computed using the following formula (Lewis, 2006):

Card(Result) = Card(Source1) × Card(Source2) × Sel(Pred)

Even though the above formula is used in the Oracle CBO, it does not account for errors.
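As a quick sanity check of the formula against the plan in Figure 3, assume that id is unique in TAB2, so that Sel(Pred) is roughly 1/9,766,000:

Card(Result) = 488,000 × 9,766,000 × (1/9,766,000) ≈ 488,000

which matches the 488K rows estimated for the HASH JOIN step.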
Errors are inevitable, and accounting for them can enhance any model. Figure 8 shows a join of two sources with consideration for errors. The cardinality of each of the sources is not a specific number, but rather a range, shown with light blue lines. The cardinality of the result is also a range. The ranges for the result set are significantly wider than the ranges of the source sets.

Figure 8: Join of two sources, accounting for errors

The formulas for the lower and upper ends of the result cardinality range are (Ioannidis, 1991):

(1 + e+R) × Card(Result) = (1 + e+S1) × Card(Source1) × (1 + e+S2) × Card(Source2) × Sel(Pred)
(1 - e-R) × Card(Result) = (1 - e-S1) × Card(Source1) × (1 - e-S2) × Card(Source2) × Sel(Pred)

where the relative cardinality error of the result set is between -e-R and +e+R. Respectively, the relative cardinality error of the source 1 set is between -e-S1 and +e+S1, and that of the source 2 set is between -e-S2 and +e+S2. For clarity, we do not take into consideration the errors introduced by the predicate.

These formulas imply that the maximum error grows exponentially with the number of joins. An illustration of that behavior is shown in Figure 9. If we assume that each basic set comes with a 10% error, we can observe that after six joins the maximum relative cardinality error goes up to 110%.

Figure 9: Propagation of errors

Filters

Figure 10 depicts filtering a source set with two filters. Oracle allows enormous flexibility in specifying filters, so this is a rather simple scenario.

Figure 10: Two filters applied to a source set

Assuming that the two filters are linked with an AND logical operator, the cardinality estimate of the result set would be

Card(Result) = Card(Source) × Sel(Filter1) × Sel(Filter2)

according to Metalink Note 10626.1 (MOS 2012a). Figure 11 shows the same two filters, along with the errors that they would introduce.

Figure 11: Two filters applied to a source set, accounting for errors

Since our focus here is on filters, we will ignore any errors that may come from the source set. Below are the formulas for the lower and upper ends of the result cardinality range:

(1 + e+R) × Card(Result) = Card(Source) × (1 + e+F1) × Sel(Filter1) × (1 + e+F2) × Sel(Filter2)
(1 - e-R) × Card(Result) = Card(Source) × (1 - e-F1) × Sel(Filter1) × (1 - e-F2) × Sel(Filter2)

The relative error of filter 1 is between -e-F1 and +e+F1, and the relative error of filter 2 is between -e-F2 and +e+F2. Figure 12 illustrates how errors in multiple filters affect the error of the result set.

Figure 12: Aggregation of errors from multiple filters

Each filter contributes to the error, so the more filters we have, the higher the error. It must be noted that we are looking at the relative error. It is possible for a single filter to produce a higher absolute error than a combination of that filter and some other filters; the relative error of the combination would be higher, though.
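To make the aggregation concrete, suppose the selectivity of filter 1 can be overestimated by at most 20% (e+F1 = 0.2) and that of filter 2 by at most 10% (e+F2 = 0.1). Applying the first formula above:

1 + e+R = (1 + 0.2) × (1 + 0.1) = 1.32

so the result cardinality can be overestimated by up to 32%, more than either filter could cause on its own.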
Confidence of Cardinality Estimates in Oracle

An attempt to measure the confidence of cardinality estimates in Oracle

To model the confidence of Oracle's cardinality estimates, I created the XPLAN_CONFIDENCE package. It measures the maximum relative error (e+R) for every step in the execution plan. The output is a continuous variable, which enables us to gain good insight into the strengths and the limitations of the Oracle CBO. The package has multiple limitations and is intended for demonstration purposes only. I accept no liability or responsibility for the XPLAN_CONFIDENCE package, nor would I be able to support it. Use it at your own risk! The package uses only very basic factors and ignores many important ones. It does not recognize sub-queries, inline views and other "complex" structures. It has limited ability to parse filters and to properly handle built-in and custom PL/SQL functions. It is not aware of profiles, dynamic sampling, adaptive cursor sharing and many other features. It is not aware of materialized views and query rewrite. It also has very limited ability to handle extended statistics. In short, the package, and the model it represents, is very far from accurate and complete.

Figure 13 shows the package in action. It is invoked similarly to the DBMS_XPLAN.DISPLAY_CURSOR function. The output is also similar, but with one additional column, "Max. Error". That column shows the maximum cardinality error at every execution plan step, a proxy for the CBO's confidence in its estimate at that step. It is assumed that the statistics completely represent the underlying data, so the package assigns zero cardinality error to full table scans without filters. The errors start when filters and predicates are applied.

Figure 13: Sample run of XPLAN_CONFIDENCE

The package uses V$SQL_PLAN to get the execution plan and the filters, so the SQL statement must be present there. The package works best when executed through SQL Developer. The definition of the package is shown in Figure 14.

Figure 14: Specification of XPLAN_CONFIDENCE

Figure 15 gives an overview of the architecture of the package. The first step is parsing the execution plan from V$SQL_PLAN. As mentioned, the ability to handle complex statements is very limited, so unless the filters fall under a small number of predefined templates, they are classified as complex and not parsed further. After that, a procedure that computes the maximum relative errors is invoked recursively. Once all calculations are done, the execution plan in text form is retrieved using the DBMS_XPLAN.DISPLAY_CURSOR function. The plan is then parsed, and the maximum error information is appended to the relevant lines.

Figure 15: XPLAN_CONFIDENCE architecture

The maximum error computation algorithm is based on the formulas discussed in the previous section. We also need to assign specific maximum errors to most of the common filters in SQL. Table 1 shows the specific numbers I used in the package.

Table 1: Maximum relative errors

| Predicate / DB structure    | Complex | Bind/Variable | Histograms | Assigned max. error | Opportunities for improving CBO's confidence          |
|-----------------------------|---------|---------------|------------|---------------------|--------------------------------------------------------|
| =                           | Y       |               |            | 20%                 | Substitute with simple predicates                      |
| =                           | N       | Y             |            | 5%                  | Consider literals (be aware of parsing implications)   |
| =                           |         | N             | Y          | 1%                  |                                                        |
| =                           |         |               | N          | 5%                  | Consider histograms if appropriate                     |
| >                           | Y       |               |            | 40%                 | Substitute with simple predicates                      |
| >                           | N       | Y             |            | 10%                 | Consider literals (be aware of parsing implications)   |
| >                           |         | N             | Y          | 1%                  |                                                        |
| >                           |         |               | N          | 10%                 | Consider histograms if appropriate                     |
| LIKE                        |         |               |            | 200%                | Force dynamic sampling                                 |
| MEMBER OF                   |         |               |            | 200%                | IN predicate, if the number of records is low; store the records in a DB table and make sure table stats are available |
| Unique index                |         |               |            | 0%                  |                                                        |
| Extended statistics columns |         |               |            | 5%                  | A very effective way to deal with correlated columns as well as a large number of filter conditions |

It is important to note that the assigned maximum error numbers are purely subjective and are not the result of a scientific study. That is not a big problem, because the purpose of the package is not to forecast, but rather to show general trends. We assume that we get the smallest error when a unique key is used. The next best thing is using histograms and literals. Using binds or not having histograms is assumed to give a slightly higher maximum error. Extended statistics can also help reduce errors; they are assigned a 5% maximum error, but the underlying predicates are not counted. Complex structures, that is, everything the simple parser could not recognize, are assigned a bigger error. Since the CBO has a difficult time getting sound selectivity coefficients for LIKE and MEMBER OF, they are assigned a 200% maximum error. The last column of the table provides guidance on how we can modify our code and database structures to minimize the guesses the CBO has to make.
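The "opportunities" column of Table 1 largely maps to standard DBMS_STATS calls. The sketch below, with illustrative table and column names, shows two of the most common ones: a column group (extended statistics) on two correlated columns, and a histogram on a skewed filter column. It is a sketch only, not a complete statistics-gathering strategy.

-- column group (extended statistics) on two correlated columns
select dbms_stats.create_extended_stats(user, 'TAB1', '(COL1, COL2)') from dual;

-- a regular gather afterwards populates the new column group
begin
  dbms_stats.gather_table_stats(ownname => user, tabname => 'TAB1');
end;
/

-- histogram (up to 254 buckets) on the skewed column NUM
begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'TAB1',
    method_opt => 'for columns num size 254');
end;
/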
XPLAN_CONFIDENCE has only limited capabilities, so it does not take into account many important SQL constructs. Table 2 shows some important constructs that are not included in XPLAN_CONFIDENCE but are challenging for the CBO.

Table 2: Constructs not included in the model

| Predicate           | Opportunities for improving CBO's confidence          |
|---------------------|--------------------------------------------------------|
| PL/SQL functions    | Force dynamic sampling; utilize ASSOCIATE STATISTICS   |
| Pipelined functions | Force dynamic sampling; utilize ASSOCIATE STATISTICS   |
| CONNECT BY LEVEL <  |                                                        |

Even though integrating SQL and PL/SQL opens up many interesting design options, we should be aware of how the CBO costs PL/SQL functions, and of what we can do to supply more information to the optimizer (Senegacnik, 2010). Pipelined functions are also a very useful feature, but they too force the CBO to make unsubstantiated assumptions (Billington, 2009).

A Proactive Design Approach

It is very important to understand how the Oracle CBO computes the selectivity and cardinality for any SQL predicate or structure we intend to use. Due to their simplicity and prevalent use, common predicates are almost always optimally processed by the CBO; that is, the CBO makes only the absolutely necessary guesses regarding their selectivity. New or complex structures might present a challenge to the CBO. The CBO will not complain or warn about it, except when the SQL Tuning Advisor is invoked, but will rather make blanket assumptions and move on. I believe it is the responsibility of the database administrator/architect to vet all new SQL features and uses and to see how well the CBO can handle them. The vetting process should include running the SQL with the new feature with different inputs. Does the execution plan change as you change the size of any of the inputs? A look at the explain plan usually suffices, but sometimes reviewing a 10053 trace is needed. If the default CBO behavior is not satisfactory, is it possible to supply the CBO with more information? How difficult would that process be?

Cardinality Confidence in Action

Normal Confidence Deterioration

Figure 16 shows an execution plan of a typical query. Each line is colored in a shade of red, where the brightness of the color is proportional to the maximum cardinality error for the respective step. We can see a slow and steady deterioration of the CBO's confidence in the size of the result set. Those findings are in line with the observations in Figure 9. The more tables are involved in a query, the higher the maximum cardinality error, and the higher the chance of a suboptimal plan. That is because most SQL constructs either pass on or amplify the cardinality errors from the previous steps. Few constructs, such as "col in (select max(col1) from subq)", reduce cardinality errors; that is, the errors accumulated in subq are not passed on to the next step. XPLAN_CONFIDENCE does not recognize any such construct at this time.

Figure 16: Normal confidence deterioration
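As an illustration of such an error-absorbing construct (the table names below are hypothetical), the MAX aggregate guarantees that the subquery produces a single value, so whatever errors accumulate while estimating the subquery do not get multiplied into the cardinality of the outer query:

select f.*
from   fact_sales f
where  f.load_date in (select max(load_date) from load_log);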
Queries that include a large number of tables may get a suboptimal execution plan for reasons explained in Metalink Notes 212809.1 and 73489.1 (MOS 2011)(MOS 2004). In theory, for a query with n joins, the CBO needs to review n! possible permutations, and that number grows very fast. For instance, 13! is about 6 billion, while 14! is about 87 billion. Therefore, the CBO has to abandon some permutations, possibly missing good plans. Faster CPUs and improvements in the CBO, such as heuristics to reduce the search space, allow more permutations to be reviewed. On the other hand, the increased complexity of computing cost restricts the number of permutations that can be reviewed. A sensible solution to this problem is to either avoid or logically split large SQL statements, so that the issues related to confidence deterioration and the exponential growth of possible execution plans are minimized. Figure 17 illustrates that point. The higher the maximum error of a block, the higher the chance that the CBO will choose a suboptimal plan at a subsequent step. The query on the left has more blocks with high maximum error than the query on the right. Please note that the top block does not count: since it is the last step, the errors there do not affect the execution plan.

Figure 17: Logically "splitting" a query

Figure 18 shows how to use hints to achieve a logical split. The logical split is not always possible, and when it is, it is usually not as simple as shown here.

Figure 18: Logically "splitting" a query using a NO_MERGE hint

The recommendation here does not contradict the well-established policy of consolidating logic into a single SQL statement (Kyte 2007); it is rather a caveat for queries with more than 10-15 tables.

Rapid Confidence Deterioration

Figure 19 shows a query with a different cardinality confidence pattern. The level of confidence for most of the steps is significantly lower compared to the previous query. There are significant jumps of the maximum relative error at steps 18 and 31. This behavior illustrates that the number of joins is only one factor, and usually not the most important one, in confidence degradation. The complexity of the filters and predicates we use is the most important reason for suboptimal execution plans.

Figure 19: Rapid confidence deterioration

Figure 20 shows the filters that cause the biggest increase in maximum error for the query. The filter for step 31 contains two constructs that require the CBO to make a wild guess, MEMBER OF and LIKE. Step 18 contains SQL functions that are classified as complex by XPLAN_CONFIDENCE. Even though that filter does not have a high-error clause, the large number of mid-error clauses still decreases the cardinality confidence significantly.

Figure 20: Problem filters

Fortunately, there are a number of ways to deal with clauses that challenge the Oracle CBO.

Dynamic sampling is a well-established technique for gathering ad-hoc statistics. By default, it is triggered when objects do not have proper statistics. It can also be used to get correct cardinalities when the filter clause would otherwise force the CBO to guess. This can be done either by increasing the dynamic sampling level or by forcing dynamic sampling on a specific table using hints.
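Below is a sketch of the hint-based approach, reusing the hard-to-estimate LIKE predicate from Figure 1; the level (here 4) controls how much data is sampled at parse time:

select /*+ dynamic_sampling(t1 4) */ t2.*
from   tab1 t1, tab2 t2
where  t1.str like '%BAA%'
and    t1.id = t2.id;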
Oracle 11g Release 2 can also trigger dynamic sampling without explicit instructions if it decides it would be beneficial (MOS 2012b).

Another strategy is to provide as much information as possible about a predicate. Extended statistics and virtual columns are simple and effective ways to do that. ASSOCIATE STATISTICS is a valuable mechanism for supplying statistics for PL/SQL and pipelined functions (Senegacnik 2010)(Billington 2009), but it requires more effort.

Replacing the offending filter clause with a semantically identical clause that uses simpler predicates is also a great option. For instance, the following clause, which uses a "complex" predicate,

and col1 <= nvl(col2, to_timestamp('12-31-9999','mm-dd-yyyy'))

can be replaced with this simple clause:

and (col1 <= col2 or col2 is null)

The CBO can get the selectivity of simple predicates, such as <= and IS NULL, with a small margin of error, but it has to guess the selectivity of most SQL functions.

Yet another way to deal with "problem" predicates is to push them as late in the execution plan as possible. That way, the damage they do is minimal. Figure 21 illustrates that point. By moving block B to the end of the execution, the CBO can work with minimal errors almost until the very end of the execution plan, greatly reducing the chance of generating a suboptimal plan.

Figure 21: Pushing the problem block/predicate

The last option is splitting the problem SQL into two or more SQL statements. That is a radical option that has its drawbacks. The major one is that we need to guarantee that all the SQL statements work on data as of the same point in time, something that is automatic when only one SQL statement is involved. If we are to split a query, we should do it immediately after a filter with a high error is applied. Also, the statistics for the intermediate table must be gathered, either explicitly or via dynamic sampling. Figure 22 shows an example of splitting a query. By creating an explicit temporary result and gathering statistics on it, we make sure that the cardinality error stays in check.

Figure 22: Splitting a query

Still relevant with Oracle 12c?

At the time of writing, Oracle 12c has not been publicly released, so any information about 12c features and functionality is an interpretation of indirect sources (Hall 2012). Adaptive execution plans look like a powerful feature that is likely to change Oracle performance tuning. Figure 23 shows my understanding of how the feature works. Some intermediary sets would have a cardinality threshold; if the actual cardinality hits that threshold, a new plan kicks in. Again, this is just a guess on my part, as I have not had the chance to use Oracle 12c.

Figure 23: Adaptive cursor sharing

This feature is a reactive one; that is, it does not help the CBO get the optimal plan the first time, but rather minimizes the damage when the query execution strays from the expected path. As with any advanced feature, we can expect to incur some costs, most likely higher parse time.

Conclusion

Ask not what the optimizer can do for you - ask what you can do for the optimizer...

The Oracle CBO is responsible for generating an execution plan for every syntactically valid SQL statement. It is a very difficult, yet extremely important job. It is critical for us to understand, at least at a high level, how the optimizer works. We need to know what structures and predicates are reliably handled by the CBO and what structures and predicates confuse it.
Armed with that knowledge, we need to provide the best environment for it to succeed. We can get the best results from the CBO by supplying it with as much relevant information as possible, by limiting the use of structures and predicates that force it to make unnecessary guesses, and by mitigating the effects of the inevitable guesses it has to make.

Huge SQL statements are not the solution to our problem, huge SQL statements are the problem...

We should be aware that the more tables are involved in a SQL statement, the higher the chance of a suboptimal plan. While in most cases the recommendation to issue as few SQL statements as possible is still valid, there are situations where we need to help the CBO with the inevitable burdens of processing large statements.

Cool new features - trust, but verify...

As an industry leader, the Oracle database regularly introduces new features. As great and innovative as those features are, they need to be vetted by a DBA to ensure that the Oracle CBO can handle them without issues.

References

BAAG (n.d.). BAAG Party - Battle Against Any Guess.
Ballinger, C. (2009). Can We Speak Confidentially? Exposing Explain Confidence Levels.
Billington, A. (2009). Setting Cardinality for Pipelined and Table Functions.
Breitling, W. (n.d.). Tuning by Cardinality Feedback: Method and Examples.
Ioannidis, Y. and Christodoulakis, S. (1991). On the Propagation of Errors in the Size of Join Results. ACM SIGMOD '91.
Iotzov, I. (2012). Advanced Methods for Managing Statistics of Volatile Tables in Oracle. Hotsos 2012.
Kyte, T. (2007). On Cursors, SQL, and Analytics.
Lewis, J. (2006). Cost-Based Oracle Fundamentals. New York, NY: Apress.
Lewis, J. (2012). Oracle Discussion Forums.
My Oracle Support (2012a). Cost Based Optimizer (CBO) Overview [ID 10626.1].
My Oracle Support (2012b). Different Level for Dynamic Sampling Used than the Specified Level [ID 1102413.1].
My Oracle Support (2011). Limitations of the Oracle Cost Based Optimizer [ID 212809.1].
My Oracle Support (2004). Affect of Number of Tables on Join Order Permutations [ID 73489.1].
Senegacnik, J. (2010). Expert Oracle Practices. New York, NY: Apress.