Responsibly Forecasting Oracle System Performance

Craig A. Shallahamer, OraPub, Inc.

Abstract

Forecasting Oracle system performance is a proactive performance management requirement. This is especially true when both budgets and workloads fluctuate wildly. The questions are always “how,” “how long,” “how much money,” and “how accurate.” This paper introduces an industry-standard systematic method for responsible forecasting and provides thorough explanations of the many ways performance can be forecast, along with their associated risks, costs, capabilities, and limitations. Special attention is given to making forecasting part of a DBA’s daily routine.

Introduction

It’s absolutely necessary. Performance management is a fundamental responsibility of IT. And it can be divided into two main areas: reactive and proactive. Reactivity is where most people live. It’s the daily grind of trying to keep up with the river of performance related problems. Proactivity is where those who desire to survive must live.

Curiously, and sometimes maddeningly, reactive work is what most of us were trained for. Our university experience and our yearly training courses are nearly always aimed at solving a pre-existing problem. I can simply compare my Proactive and Reactive Performance Management course enrollment numbers. For every ten people I train in reactive performance management, I train around one person in proactive performance management. That’s a ratio of 10 to 1! That’s absurd, but it’s also a sobering reality when it comes to both our own mindset and that of the world’s companies.

Though no one talks very much about it, every training company knows why. The money is in the reactive arena, not the proactive. They know people do not want to “pay up front.” Just take a look at any training company’s course offerings. If you can find a proactive type of class, look at how often it’s offered.

Even words like support and problem solver assume there is a current and serious problem that must be solved…that’s reactive. Can you imagine answering the phone, “Hello. Proactive Support, this is Craig speaking. What would you like to forecast?” It’s laughable!

Sure, it’s fun and exciting to live in a reactive performance world for a while, but don’t fool yourself into thinking you can live there forever. Have you ever noticed that the really highly reactive jobs are full of younger people? It’s certainly not that reactive jobs aren’t fun or that older people have forgotten how fun they are. No way. It’s that unless your job has a mix of proactive and reactive work, for sanity reasons, you will either naturally migrate to a less reactive job, get promoted away from the madness, or become a complete and total computer geek.

As physically healthy people know, preventative health care is the secret to minimizing sickness. And it’s the same for our work. If you don’t plan for your systems to be inundated with the sickness of insane users, pushy vendors, smart-ass hackers, and dehumanizing life-sucking corporate executives, then you’re in for a miserable life. So you have got to do something about this, and do it fast, before you get sucked into the DBA churn and either burn out and quit or get promoted.[1]

So what do you do? You learn how to manage performance. That means basic planning, which really means forecasting. Forecasting is your way to tell what will probably happen in the future. And armed with a good forecast model and decent anticipated business changes, you can prepare yourself for just about anything.

Forecasting has been around for a long time in computing. But with the introduction of UNIX and Oracle, things got so intense that people just seemed to forget about forecasting. But now things have changed. Budgets are extremely tight and system workloads are constantly changing.

Forecasting is not that hard. Sure, it takes some time and some practice, but if you have survived this long, you can learn how to responsibly forecast performance. That is what this paper and my training courses are all about: helping you learn how to manage performance. And this paper specifically is about introducing you to forecasting, how to do it, how to get started, and how to use forecasting in your daily work. I hope you enjoy the journey.

The Method

Without method there is chaos. With too much method there is death. So somehow we have got to live in the middle. I have read, been indoctrinated, been rebuked, and developed ways to do things…that is method.

Because forecasting is scientific in nature, there has got to be a method. The simplest, most appropriately complete, and industry-accepted method I have found consists of five main steps. I believe in this method so much, I have even designed my web-based simulation forecasting service around this methodology. The five steps are: determine the study question, characterize the workload, develop the model, validate the model, and forecast.

Determine the Study Question

This is where you truly determine what you want to forecast. Every forecast answers at least one question, and it’s almost always directly related to a very specific business question or concern. What you must do in this phase is determine what that question is. And everyone must understand and agree with the question, so when it’s answered they will understand the relevance of your revelatory work.

The question can be as simple as, “If the system workload doubles, can our system handle the load?” Or it can be more complex such as, “If we upgrade our CPUs, add 20 more IO devices, and increase the workload by 20%, what will happen to response time, CPU utilization, and average IO utilization?” Once you know the study question, write it down and make sure everyone knows what it is and agrees.

Workload Characterization

Forecasting work involves a workload or workloads. Whether you are doing a standard benchmark or performing a "back of the envelope" forecast, it is impossible to responsibly forecast anything if you have not categorized something. For example, how can you ask the question, "What will happen to the IO subsystem when the number of devices is doubled?" if you have not determined how many devices are active in your system? Or how can you ask, "What will happen to response time if the workload is doubled?" if you have not determined what the workload is?

An important part of workload characterization is data gathering. Most likely you will have to create your own data gathering tools. And the data that is to be gathered will be highly dependent upon which forecast model you use. So as I discuss the various forecast models in this paper, be thinking, “How would I get that data?”

A detailed discussion of characterizing your workload is out of scope for this paper. But please remember this: keep it as simple as possible, just complicated enough to enable input into your forecast model and to responsibly answer the study question at the desired precision level.

Model Development

Every forecast uses some kind of model. Even a standard benchmark attempts to model a real-life system. We use models every day of our lives. We draw pictures of an Oracle instance, we work on an ER diagram, we play with model airplanes, cars, and trucks (some of us still do), and we pretend we are building bridges and buildings with wood blocks. These are all models, and they are fundamental to understanding reality. In this paper, I will introduce a number of forecast models that I have personally used on large production Oracle based systems.

But models are not reality…no model is reality or it would not be a model. When someone begins to believe their model is reality, it’s time to give them a good slap across the head, get on another project real fast, or both. What we attempt to do is find the most appropriate model that matches our available input (the results of workload characterization) and will produce the appropriate output in terms of precision and actual forecasted items (e.g., CPU utilization).

Most models are like airplane model kits. While they provide the basic building blocks, you must customize them and sometimes tune/optimize them to become better representations of the real thing. For example, I commonly use a tunable queuing circuit model, which contains multiple queues and a few decision points that alter the flow of transaction activity within the circuit/model. Before the forecast model is useful, I must determine the optimal decision point values. Once I have determined these values, the model is tuned or optimized, and ready to be validated (the next phase).

Validation

Once a forecast model has been developed, we must statistically understand just how well the model actually forecasts. The validation process takes an optimized model, runs unseen historical data (i.e., data the model was not trained/optimized with) through it, and compares the forecast values against what we know to be true. It’s the classic predicted value minus the actual value…hence we have an opportunity to statistically measure error and randomness. The statistical information will be used when making our final forecast related statements.

If we skip the validation phase, we will have no responsible understanding of forecast variation. Now before you go to sleep, listen to this. Just saying “The CPUs will average 65% busy” is usually not good enough. In fact, it can be very misleading and irresponsible. It is impossible for people (including you and me) to understand what is meant by “an average of 65%.” Could the CPUs be 85% busy? If so, how often?

Very basic and simple statistics become very useful and extremely powerful.[2] After we validate our forecast model, we can responsibly make a statement like, “Our forecast is the CPUs will average 65% busy and that 92% of the time they will be busy between 55% and 75%.” That’s powerful and very useful. And it’s also very freeing because everyone knows the CPUs will never be exactly at 65% busy anyway. The difference is, we can quantify the range and its likelihood of occurring so people can make more informed business decisions.
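To make this concrete, here is a minimal sketch of how validation residuals can become a statement like the one above. The numbers and the 65% point forecast are invented for illustration; I use Python with NumPy here, though the same arithmetic fits easily in an MS-Excel workbook.

    import numpy as np

    # Hypothetical validation data: forecasted vs. actual CPU utilization (%)
    # from historical periods the model was not tuned with.
    forecast = np.array([62, 68, 64, 66, 71, 63, 65, 69, 60, 67])
    actual   = np.array([65, 66, 61, 70, 74, 60, 68, 72, 58, 66])

    residuals = forecast - actual            # predicted minus actual
    print("mean error: %.2f  std dev: %.2f"
          % (residuals.mean(), residuals.std(ddof=1)))

    # Empirical 90% interval around a 65% point forecast;
    # since actual = forecast - residual, invert the residual percentiles.
    lo, hi = np.percentile(residuals, [5, 95])
    print("90%% of the time the CPUs should be between %.1f%% and %.1f%% busy"
          % (65 - hi, 65 - lo))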

Forecasting

This is where the fun is. Determining the study question, characterizing the workload, developing and optimizing the model, and validating the forecast model are necessary, but can be exhausting. The real fruit and fun of our effort is in running many scenarios through our validated forecast model. What many people forget is that with numeric based forecast models (including simulation), many different scenarios can be forecasted very quickly. And if we forget to run a scenario or someone wants a different scenario, it’s no problem.

While forecasting is definitely the fun part for me, I am also very aware of the seriousness of my forecast related statements. And as you probably have guessed, basic statistics is a large part of this. The statistics we calculated during the validation phase will allow us to make amazingly strong (they often seem that way to me) and responsible predictive statements. And you can make them without that feeling in your gut that you may be wrong…because you’re not wrong, if you follow the above steps and phrase your forecast statements with basic and easy to understand statistics.

Forecast Models

There are many, many forecast models available today and each has its advantages over the others. What I have found most useful for people starting to forecast is to gain a practical understanding of five forecast models. You cannot survive on just one or two models because your study question and workload characterization will almost always dictate the type of model you can responsibly use.

Discovering the Appropriate Forecast Model

The question invariably comes down to, “Which forecast model should I use?” By understanding the basic characteristics of each forecast model, you should be able to determine the most appropriate model. When I go through the process of selecting a forecast model, I ask myself five questions regarding forecast scope, available input data, system status, required precision, and available time. I’ll explain each of these in more detail and then show them in table format.

Forecast Model. There are five forecasting models that I have professional experience using: simple math, ratio modeling, linear regression, simple queuing, and simulation. Used appropriately, each of these forecast models works very well. Used inappropriately, each will still forecast, but the results will be inappropriate and possibly even dangerous.

Scope. A model either forecasts a specific component (e.g., CPU), the system (e.g., response time), or both (e.g., CPU and response time). When a model forecasts at both the component and system levels, it takes into consideration the interaction between the various components. This results in a much more realistic and broader forecast.

Required Data. Each model has specific data requirements. For example, a simple queuing model requires transaction level data whereas ratio modeling needs only basic workload data. The more detailed the data, typically the more costly the forecasting model is to use (in terms of time, money, and resources).

System Status. Some models require production system data, some only require good guesses or transaction samples, and some require a creative combination. For example, while linear regression requires real production data, ratio modeling uses a combination of informed guessing and data from a similar production system.

Precision Possibility. Forecast variance tells us about precision. A variance of 8 CPUs is not very precise, but a variance of 1 CPU is very precise. Each model, in combination with other factors, has a predisposed precision capability or possibility. Matching desired forecast precision with potential forecast precision is very important to ensure financial, labor, time, and forecast precision expectations are carefully met. Ratio modeling is easy and quick, but the precision is very low, making it good for general budgeting or architectural exploration discussions. Simulation, on the other hand, is more rigorous and requires a higher commitment level (in terms of time, money, and resources), but the precision can be much higher.

Project Duration. Every forecast study, project, program, or whatever you want to call it, has an implied duration in terms of money, time, and resources. Forecast models generally fit into a short, medium, or long project category. A short project can last a couple of hours to a few days. A medium duration project can last a few days to a few weeks. A longer duration project will take a minimum of three to four weeks and can last a couple of months.

However, the key project duration factor is how many components and/or systems must be forecasted. For example, are we only forecasting database server CPU utilization, or are we forecasting each component of the database server, application server, web servers, client machines, etc.? This makes a monumental difference in the project duration.

Figure 1. This table can be used to determine an appropriate forecast model. Ask yourself about the project scope, the forecast model’s required data, the system status, the precision possibility and precision required, and how much time you have. Asking and answering these questions will lead you to your best forecast model options. Keep in mind that typically multiple forecasting models are appropriate.

Here is a simple example of how you might use this matrix to select the most appropriate forecast model. Suppose your company is going to add 25 more Oracle Manufacturing users to the system next month and management wants to know if the CPUs can handle the load. Here is how this situation maps to the above matrix.

• What is the Scope? Since we are only concerned about CPU activity, our scope is at the component level. Any of our forecast models will work just fine (so far).

• What is the System Status? The Oracle Manufacturing system is already in production.

• What is the Precision Requirement? This is something that you may need to ask more questions about, but this type of question usually requires only a low precision answer, which means any of our models will provide the required precision.

• What is the Project Duration? These types of questions don’t typically allow us much time. Usually I find management wants to know the answer in a day or two. Unless you have already been using a simulation forecast model, your best bet would be Ratio Modeling.

• What is the Data Available? We certainly don’t want to take the time to sample individual transactions. But we do have a working production system, which means we can get plenty of basic workload information. All but Simple Queuing theory would probably work fine.

When taking each of these items into consideration, Ratio Modeling is the clear winner. Sure, we could use alternatives, but Ratio Modeling will provide the required precision at the lowest cost (effort, time, etc.).

Here’s a more complex example. Suppose you have a production database server, the workload is expected to increase by 50%, the vendor has new CPUs that are “25% faster” than your existing CPUs, and your IO vendor says that their new cache has reduced service times by 15%. You need to know what the minimal configuration could be without degrading response time and keeping with standard company risk levels.

• What is the Scope? For this example, Scope is the key defining question. Since we are concerned about response time, the CPU subsystem, and the IO subsystem, we clearly need a system level forecast model or we could use multiple component level models. (Which is not uncommon.) For example, we could forecast CPU using Simple Queuing and use Simple Math for IO. Response time could also be forecasted using Simple Queuing although the precision would be lower than using Simulation since it would not take into consideration component level interaction. While all of our models could participate, Simulation would provide a single comprehensive forecast model.

• What is the System Status? The computing system is already in production.

• What is the Precision Requirement? Clearly this is a very complicated forecast situation that will have a number of people very interested in your results. While more information is obviously needed than provided above, a higher precision forecast will be required. This will probably eliminate Simple Math. However, keep in mind that sometimes IO forecasts do not need to be as precise as CPU forecasts. So Simple Math could be used for the IO forecast. But if possible, other forecast models should be considered.

• What is the Project Duration? This information is not known at this time. However, unless the DBA has already developed a forecast model that has been previously successfully used, a forecast question like this will take at least one week (at the very, very least).

• What is the Data Available? The question we are answering does not refer to a specific workload category transaction mix, but rather the generic workload. Any of our forecast models would work.

So what’s the most appropriate forecast model? Personally, I would choose simulation or a combination of simple queuing and simple math. However, simulation is clearly the best choice because it naturally takes into consideration component interaction and provides a solid statistical foundation.

At this point, I’m hoping you understand there are many forecast model options and opportunities. And that you are feeling more comfortable in your ability to pick the most appropriate model in a variety of circumstances. You have been patient and now it is time to explain each of these forecast models in more detail.

Proven Forecast Models

If you have gotten this far in this paper, you know there are many forecast model options. In this section, I will introduce five different forecast models intertwined with some of my personal experiences in an attempt to bring out their unique characteristics.

It is important to be comfortable with each model so you don’t favor one model over another. Have you ever seen a little boy get a new hammer for Christmas…everything becomes a nail. The same thing can happen with forecast models. When you get really good with one model, you tend to desperately try to always use it. That’s a recipe for disaster.

In presenting the different forecast models, I will start with the simplest and then work towards the most complicated. The more complicated models are not necessarily more difficult to use. It’s like algebra and calculus. Calculus is not really more complicated, but it does contain the elements of algebra. In the same way, the more complicated forecast models simply contain elements from the less complicated models. I’ll start with simple math (almost embarrassed to write about it), ratio modeling (low precision but very fast), linear regression (easy and precise), simple queuing (fun and precise but very time consuming), and finally simulation (easy, very precise, very powerful).

Simple Math. As the name states, it’s simple. The funny thing is, we use simple mathematical forecasting all the time without ever thinking about it. For example, if we know an Oracle client process consumes 10MB of non-shared resident memory and plans are to add another 50 users (resulting in 50 additional client processes),[3] then the system will require 500MB of additional memory. Because there is no queuing involved with Simple Math, it’s only appropriate for forecasts like memory, basic IO predictions, and basic networking. I would resist using simple math for CPU, advanced IO, and advanced network predictions.
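As a trivial sketch of the memory example above (the 10MB-per-process figure comes from the example; measure your own system’s processes, and note footnote 3 about users versus processes):

    # Simple Math forecast: memory for 50 additional dedicated server processes.
    mb_per_process   = 10    # measured non-shared resident memory per process
    additional_users = 50    # assumes one additional process per new user

    print("Additional memory: %d MB" % (mb_per_process * additional_users))  # 500 MB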

Ratio Modeling. Ratio modeling is a fast way to make low precision forecasts. Two colleagues and I developed the Ratio Modeling Technique back in the mid-1990’s in response to finding ourselves in situations where we simply had to deliver a forecast, yet we did not have the time to use an alternative model. Ratio Modeling works very well when you are quickly budgeting hardware, assessing and exposing technical risk, validating alternative technical architecture designs, and especially when sizing packaged applications. The technique enables you to define the relationship between process categories (e.g., batch processes) and a specific system resource (e.g., CPU). Ratio Modeling produces no statistical data, so it is truly a back of the envelope forecast technique. It has been so useful, there is a separate paper devoted entirely to understanding and using Ratio Modeling.
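The separate Ratio Modeling paper details the actual technique; the sketch below only illustrates the general idea of category-to-resource ratios, so treat every number as an invented assumption.

    # Hypothetical ratios: CPU-% consumed per process, by workload category,
    # as observed on a similar production system.
    cpu_ratio = {"oltp": 0.5, "batch": 4.0, "reporting": 2.0}
    processes = {"oltp": 80,  "batch": 5,   "reporting": 3}   # anticipated counts

    total_cpu_pct = sum(cpu_ratio[cat] * n for cat, n in processes.items())
    print("CPUs needed at a 65%% busy target: %.1f" % (total_cpu_pct / 65.0))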

Linear Regression (LR). Linear regression is fantastic because it’s simple to use (assuming you have a good tool[4]) and provides statistical measurements to add strength to your forecast statements. I typically use LR when I want to quickly predict CPU utilization for a given business activity like orders shipped per hour or web hits per second. Regression analysis requires data and data requires a system to sample from. Therefore, LR requires a production environment.

Figure 2. This is the graphical chart from OraPub’s simple (single independent variable) linear regression workbook. If you carefully draw a vertical line at around 75% busy, you will notice that to the left of the line the data is fairly linear, but to the right the data is increasingly non-linear. This is why one should never use linear regression to forecast past 75% utilization.

The relationship between just about anything and CPU utilization is not linear. Forecast techniques and reality demonstrate that at around 55% utilization, queue time is already consuming around 5% of total response time. And at around 75% utilization, queue time is already consuming around 15% of total response time. So while I enthusiastically use linear regression, I will never make a forecast with a utilization over 75%. Even though the forecast model says it’s OK, I know it’s not, because the model expects the data to be linear when it’s clearly not. So be careful when using LR, but use it liberally and appropriately.
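Here is a minimal single-variable regression sketch in Python (invented sample data, SciPy’s linregress) that refuses to forecast past the 75% ceiling discussed above:

    import numpy as np
    from scipy import stats

    # Hypothetical samples: orders shipped per hour vs. avg CPU utilization (%).
    orders  = np.array([100, 150, 200, 250, 300, 350, 400])
    cpu_pct = np.array([ 18,  26,  35,  42,  52,  60,  68])

    fit = stats.linregress(orders, cpu_pct)

    def forecast_cpu(orders_per_hour):
        u = fit.intercept + fit.slope * orders_per_hour
        if u > 75:
            raise ValueError("forecast is past 75% utilization; "
                             "the linear model no longer holds")
        return u

    print("R-squared: %.3f" % fit.rvalue**2)
    print("CPU at 420 orders/hr: %.1f%%" % forecast_cpu(420))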

Simple Queuing. Queuing theory is a wonderful and powerful forecast technique. It can even be implemented in an MS-Excel workbook.[5] Understanding queuing is a fundamental performance management concept. The components are simple: a transaction, a queue (or line, as we say in the USA), and a server. When a transaction enters a queue, response time starts and queue time starts. When the transaction finally begins getting served by the server (a CPU perhaps), queue time stops and service time begins. When the transaction is finished being served, service time stops and response time stops. This can be symbolically[6] shown mathematically:

Rt = St + Qt

Based upon basic queuing theory and your personal experience, you should be able to feel that the below graph is correct.

Figure 3. This is a classic response time curve taken from OraPub’s MS-Excel based queuing theory workbook. Notice that for quite a while, response time equals service time. But with an arrival rate of around 10 trx/sec, queuing starts to enter the system. This occurs at around 55% utilization. By the time utilization is around 75%, queue time accounts for around 15% of response time, and after that it’s exponential. This is why a system may have an acceptable response time, but then all of a sudden performance dramatically degrades.
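For readers who want to reproduce a curve like Figure 3, below is a sketch of the standard M/M/m (Erlang C) formulas in Python. The 12 CPUs and 0.6 seconds of CPU time per transaction are assumptions chosen only to give the curve a similar shape.

    import math

    def erlang_c(m, rho):
        # Probability an arriving transaction must queue behind others (M/M/m).
        a = m * rho                               # offered load in erlangs
        tail = (a**m / math.factorial(m)) / (1 - rho)
        return tail / (sum(a**k / math.factorial(k) for k in range(m)) + tail)

    def response_time(arrival_rate, service_time, m):
        # Rt = St + Qt for an M/M/m queue.
        rho = arrival_rate * service_time / m     # per-CPU utilization
        if rho >= 1:
            return float("inf")                   # the queue grows without bound
        qt = erlang_c(m, rho) * service_time / (m * (1 - rho))
        return service_time + qt

    for trx_per_sec in (2, 6, 10, 14, 16, 18, 19):
        print("%2d trx/sec -> Rt = %.3f sec"
              % (trx_per_sec, response_time(trx_per_sec, 0.6, 12)))

Rerunning the loop with more CPUs (a larger m) or faster CPUs (a smaller service time) makes it easy to see why tuning and hardware changes shift this curve.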

With a solid basic understanding of queuing theory, both reactive and proactive performance management will take on a new meaning. This is why I talk about response time and its components in my Reactive Performance Management course and spend half a day on basic queuing theory in my Proactive Performance Management class.

When more than one queue is involved, a network or circuit queuing model is born. For Oracle based systems, I have found the only way to get respectable predictions is by using a multiple queue network. While a single queue model does model the CPU subsystem rather precisely, it cannot model the entire system and the effect one subsystem (e.g., CPU) has on the other subsystems very well. Plus, for detailed forecasts, actual transaction data must be gathered, organized, and entered into the queuing model. This can take a relatively long time. So long, in fact, that many times it is less expensive simply to purchase more hardware instead of paying for a transaction based queuing theory forecast study.
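As a rough illustration of why component interaction matters, the sketch below (all figures assumed) treats each subsystem as an independent M/M/1 queue and sums the residence times a transaction accumulates as it visits the CPU and IO subsystems. Real circuit models are considerably more sophisticated than this.

    # (visits per transaction, service time in seconds per visit), both assumed
    visits = {"cpu": (4, 0.050), "io": (10, 0.008)}
    util   = {"cpu": 0.65, "io": 0.30}        # measured utilizations (assumed)

    def residence_time(st, u):
        # M/M/1 residence time per visit: service plus queue time, St / (1 - U)
        return float("inf") if u >= 1 else st / (1 - u)

    rt = sum(n * residence_time(st, util[res]) for res, (n, st) in visits.items())
    print("Forecast response time: %.3f sec/trx" % rt)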

Simple queuing theory forecasting models are also excellent learning tools. For example, it is very easy to see the results of tuning, adding additional CPUs, or using faster CPUs.

Simulation. While common in the world’s research community, simulation is virtually ignored when forecasting Oracle system performance. Cost and education are the primary reasons, but with some instruction, a little experience, and some basic statistics, simulation can be a fantastic solution that yields surprisingly good precision.

Many commercial simulation packages are graphical and can produce some eye-popping “ah ha” moments when presenting the animated results to management. However, these commercial packages can be relatively expensive and have a substantial learning curve. But there are alternatives as I will discuss below.

Basically, a simulation forecast model simulates transactions working their way through a system (in our case, an Oracle based system), while watching and recording what is happening. For usability and precision[7] reasons, the model is highly abstracted, with the entire computing system represented with only a few queues. To enable the simulation to mimic your production system, it must have the ability to be tuned. The tuning process can take a long time if done by hand.

Suppose the simulation model had just four tunable parameters. The possible combinations are virtually unlimited, yet only a few combinations will yield an optimized/tuned forecast model that will provide the precision we desire. Even when intelligently adjusting the parameters, it could take literally hundreds of individual simulation runs to begin approaching acceptable precision. Obviously, an automated optimization solution is needed, and I think you can see that the computational power required could be tremendous. Fortunately, there are also solutions to this challenge that make low cost simulation forecast models possible. Keep in mind that for basic demonstrations and when used as an educational tool, running a few simulations can work beautifully.
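To give a feel for what a simulator actually does, here is a deliberately tiny discrete-event sketch: one queue, one server, exponential arrivals and service. A real tunable circuit model has several queues and decision points, but the bookkeeping is the same in spirit.

    import random

    def simulate(arrival_rate, service_time, n_trx=100000, seed=1):
        # Single queue, single server, FIFO: track each transaction's response time.
        rng = random.Random(seed)
        clock = server_free_at = total_rt = 0.0
        for _ in range(n_trx):
            clock += rng.expovariate(arrival_rate)     # next arrival instant
            start = max(clock, server_free_at)         # wait if the server is busy
            server_free_at = start + rng.expovariate(1.0 / service_time)
            total_rt += server_free_at - clock         # queue time + service time
        return total_rt / n_trx

    # service_time plays the role of a tunable parameter here; adjust it until
    # the simulated response times match what production actually reports.
    print("Average Rt: %.3f sec" % simulate(arrival_rate=8.0, service_time=0.1))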

With the right tools, simulation models can be built rather quickly, optimized rather quickly (assuming the computing power is available), yield precise results, and allow for both component level forecasts as well as the system as a whole. This makes simulation an extremely powerful forecasting tool.

Figure 4. There are many ways to represent, that is, model a system using queuing theory and simulation. After working with a number of simulation model alternatives, I found that when this tunable closed circuit queuing model was simulated, it produced very precise forecasts.

Because simulation is such a powerful forecasting model, yet graphical simulation tools are very expensive to develop, non-graphical simulation tools for Oracle based systems are virtually non-existent, and optimizable tools are non-existent, I developed a tunable multiple queue simulation model (shown in the above figure). The model was placed into a specially designed web-site to help guide the user through the entire forecasting process. To supply the massive computing power needed, a grid computing provider was used. You can learn and test this out for yourself at no charge. Just go to sim. .

Getting Started

Now that you are equipped with the method and some tools, you may be asking, “Yeah…but how do I get started?” Even in my Proactive Performance Management class, I tell my students to start forecasting right away. It doesn’t have to be big or complicated, but get started. If that doesn’t happen, the knowledge and enthusiasm will begin to fade.

Here are some suggestions.

Use a production system. Start with a real production system. People tend to start with non-production systems, like a development box. This can be frustrating because the results are often very strange, since you’re modeling a non-production system. Starting with a production server will help you begin to feel the results instead of just looking at them.

Go where the tools are. While one forecast model may be more complicated than another, if you have the tools available, it will actually be much easier. Personally, I would start with either regression analysis or simulation. I would not suggest simulation unless I had access to personnel or a solid simulation forecasting tool environment. This is one reason why I developed the OraPub simulation web-based service…to help people start using simulation without the high cost and complexity.

Ask and answer very straightforward questions. Start with a very straightforward study question like, “If the workload is doubled, will response time be significantly impacted?” or “If we purchase 5 more CPUs, how will that affect CPU utilization?” or “How many orders per hour can our current system handle?” If you start too aggressively, you might get stuck in the details and the complexity and just give up. Or you may end up spending too much time on the project/effort and be forced to stop by management. Make it practical, that is, of some real value, but also make it simple.

Don’t get caught up in the math. Honestly, there are a lot of books out there (I’ve listed the good ones in the References section) and even those can get really mathematical. Authors just love to impress you with their mathematical abilities, but we have more important things to do. So while having basic queuing theory and basic statistics knowledge is required, understanding proofs and knowing who discovered this or that is, in my opinion, worthless.

Daily Forecasting

The real power of forecasting and its real value to your company will only be realized when forecasting is integrated into your daily DBA experience. This implies automation and customized tools. After a while, you will notice that people typically ask you the same kinds of forecasting questions. Concentrate on answering those questions quickly and precisely by automating the forecasting method.

Let me give you a simple example. Let’s say business is good and management seems to be always asking your opinion about how many orders the system could ultimately push through each day. Instead of guessing, you could perform a linear regression forecast. But that’s just the beginning. Sure, you can give them the answer once, but workloads change because economic activity changes, people change, organizations change, and applications change. Since you already have categorized your workload and developed the data gathering tools, it should be no big deal to place the data into an MS-Excel based LR forecast model once a week to produce a refined forecast. People love this stuff. It’s current, real, and nobody ever does it…until now. You could even post your LR graph marking where the company is currently operating and where the forecasted maximum orders per hour mark is.
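A sketch of that weekly refresh, assuming your gathering scripts append rows of “orders_per_hour,cpu_pct” to a file (the file name and the 75% utilization ceiling are my assumptions):

    import numpy as np
    from scipy import stats

    data = np.loadtxt("workload_history.csv", delimiter=",")  # hypothetical file
    orders, cpu = data[:, 0], data[:, 1]

    fit = stats.linregress(orders, cpu)
    max_orders = (75.0 - fit.intercept) / fit.slope  # stay under the 75% ceiling
    print("Forecasted maximum orders/hour: %.0f (R^2 = %.2f)"
          % (max_orders, fit.rvalue**2))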

Here’s a more complex but more powerful example using OraPub’s web-based system performance forecasting service. Let’s suppose people are also asking you about response time changes in relation to workload changes, CPU changes, and IO subsystem changes. This requires the ability to answer an ever changing number of “what if” questions with good precision. This is perfect for simulation. The way the web-site interface is designed, once you optimize and validate your model, you can run an unlimited number of forecasts…today or tomorrow.

If someone comes up to you and asks, “Hey, what would happen to the system if we doubled the workload?” you can ask, “Do you want to know about response time, the CPU subsystem, or the IO subsystem?” knowing darn well they will say, “Can you give me everything?” You then say, “Wow, that’s pretty complicated, but I’ll see what I can do. Can you wait a few days?” After they leave, you simply submit the scenario, and in a few minutes you will have your answer. After a day or two, you give them the results. Just like with the linear regression example above, you can continue to refine and strengthen your model by adding additional data/training information and by refining the forecast model tuning parameters.

Conclusion

Thank you for taking the time to read this paper. I hope it has not only renewed your interest in forecasting but has given you some additional forecasting alternatives, some guidance in method, and also some direction and ideas to integrate forecasting into your daily routine. Thank you and good forecasting!

About the Author

Quoted as being “An Oracle performance philosopher who has a special place in the history of Oracle performance management,” Mr. Shallahamer brings his unique experiences to many as a keynote speaker, a sought after teacher, a researcher and publisher for ever improving Oracle performance management, and the founder of the grid computing company, BigBlueRiver. He is a recognized authority in the Oracle server technology community and is making waves in the grid community as a result of founding a company which provides “Massive grid processing power—for the rest of us.”

Mr. Shallahamer spent nine years at Oracle Corporation personally impacting literally hundreds of consultants, companies, database administrators, performance specialists, and capacity planners throughout the world. He left Oracle in 1998 to start OraPub, Inc., a company focusing on "Doing and helping others Do" both reactive and proactive Oracle performance management. He continues to push performance management forward with his research, writing, consulting, highly valued teaching, and speaking engagements.

Combining his understanding of Oracle technology, the internet, and self organizing systems, Mr. Shallahamer founded BigBlueRiver in 2002 to help meet the needs of people throughout the world living in developing countries. People with limited technical and business skills can now start their own businesses which supply computing power into BigBlueRiver's computing grid. In a small way, this is making a difference in potentially thousands of people's lives.

Whether speaking at an Oracle, a grid computing, or a spiritual gathering, Mr. Shallahamer combines his experiences and his purpose toward communicating his unique insight into the technologies, the challenges, and the controversies of both Oracle and grid computing.

References

"Advanced Reactive Performance Management For Oracle Based Systems" Class Notes (2003). OraPub, Inc.,

" Advanced Proactive Performance Management For Oracle Based Systems " Class Notes (2003). OraPub, Inc.,

Bailey, James; After Thought. BasicBooks, 1996. ISBN 0-465-00781-3

BigBlueRiver, Inc., "Massive grid processing power—for the rest of us."

Chatterjee, S.; Price, B. Regression Analysis by Example. John Wiley & Sons, 1991. ISBN 0-471-88479-0

Gunther, Neil J.; The Practical Performance Analyst. McGraw Hill, 2000. ISBN 0-595-12674-X

Kelton, W. David; Sadowski, Randall P.; Sadowski, Deborah A.; Simulation With Arena. WCB/McGraw-Hill, 1998. ISBN 0-07-027509-2

Levin, R.; Kirkpatrick C.; Rubin, D. Quantitative Approaches to Management. McGraw-Hill Book Company, 1982. ISBN 0-07-037436-8

Menascé, D.; Almeida, V.; Dowdy, L. Capacity Planning and Performance Modeling. PTR Prentice Hall, Englewood Cliffs NJ, 1994. ISBN 0-13-035494-5

Menascé, D.; Almeida, V.; Dowdy, L. Capacity Planning for Web Performance. PTR Prentice Hall, Englewood Cliffs NJ, 1998. ISBN 0-13-693822-1

“OraPub System Performance Forecasting (OSPF) web-based service” OraPub, Inc.

"OraPub System Monitor (OSM)" tool kit (1998-2003). OraPub, Inc.,

Rowntree, Derek; Statistics Without Tears. Allyn and Bacon, 1981. ISBN 0-02-404090-8

Shallahamer, C.; Forecasting Oracle System Performance Using Simulation. OraPub, Inc., 2003.

Shallahamer, C.; Predicting Computing System Throughput and Capacity. White Paper, 1995.

Shallahamer, C.; Total Performance Management. OraPub, Inc., 1994-.

Shallahamer, C.; The Ratio Modeling Technique. OraPub, Inc., 1990’s-.

Tanner, Mike; Practical Queueing Analysis. McGraw-Hill, 1995. ISBN 0-07-709078-0

-----------------------

[1] Unfortunately, I chose all three. I lived in daily “no win” performance engagements yet always won, I got burned out on corporate madness which stripped my team of dignity, and I got promoted to try and make a difference. Finally I quit to make a real difference for others and myself.

[2] The best statistics book I have ever read is called Statistics Without Tears by Derek Rowntree. The statistics module I teach in both my proactive and reactive performance classes is based, in part, on this book.

[3] We all know that adding fifty more users does not equal fifty more server processes. This is when understanding the concepts of named, connected, and active users becomes important.

[4] You will find free MS-Excel based linear regression tools that are very simple to use on OraPub’s web-site. While simple, they are very powerful and useful.

[5] Again, you’ll find a very nice free MS-Excel based queuing theory workbook on OraPub’s web-site.

[6] Symbols can be powerful models. Mathematics is only one of many symbolic languages. Symbols are powerful because they represent much more than they are. This is one reason why symbolism is so important to us as a people.

[7] It may seem odd, but it is nearly always the case that the more detailed or complex a forecast model becomes, the less precise and usable it becomes. Take, for example, linear regression. Adding more and more independent variables nearly always reduces the forecast precision. Don’t be fooled into thinking that detailed forecast models are “better predictors”…because they are not.
