Qualifications.pearson.com



Slide 1: Your starter for 10 (Using the data set)

But they’ll not know who has voted for what. OK, that’s all I need to go through for now. Your host today is Phoebe Pennerton, and your presenters are Mark Heslop and Narsh Srikanthapala. Mark and Narsh are going to take you through the session, so it’s with great pleasure that we hand over to them now. Over to you, Mark and Narsh.

Good evening everybody, my name is Mark Heslop and I’m head of mathematics at Alton Grammar School for Boys. For my sins I was on the LCAT panel that wrote this specification, so if you’ve got any comments you can hurl them at Narsh and he’ll reply to them in the background. While you’ve been waiting we’ve had a little starter for ten up on the screen, one that I’ve been using with my classes; the idea is just to get you used to the data. So we’re going to go through some of the answers.

Slide 2: Starter for 10 (Answers)

So, which is further north, Hurn or Camborne? We’re going to spend a little bit of time later on looking at the data and where we might get this information from. If you found any of these answers tonight, well done to you; unfortunately there are no prizes apart from brownie points or commendations, depending on which you have in your school. The location which is the UK’s highest is Camborne, at 87 metres. What scales are used for measuring wind speed? Well, on the database we’ve got the Beaufort conversion and knots. Now one of the things I’ve found in my classes is that not many of our students actually know what a knot is, so it’s something we can spend a few minutes talking through with them when we use the database, what the actual units are. How are snow and hail measured? Well, as the first page of the database explains, they’re melted first and measured as rain. Why can’t we test the hypothesis that it’s sunnier in the UK than in Perth?
Interestingly, if you had a chance to look through the database, we’ve actually got different data fields for the UK locations than we have for the overseas ones, and not all the UK data fields are in the overseas ones, so we don’t have any sunshine data for the overseas stations. One of the questions was to find some discrete and some categorical data. For discrete data we’ve got cloud cover and a wind direction of 20; we’ll talk a little bit later about what a wind direction of 20 actually means. And for categorical data, a wind cardinal direction of north north west. Interestingly, when I did this with my students, not many of them realised that a wind direction of north north west, or a direction of 20, which is 20 degrees, is where the wind is blowing from, rather than towards. So if we say it’s a northerly wind we mean that the wind is blowing from the north, usually bringing down cool air. If we say it’s a south westerly wind then it’s bringing up warm moist air from the Atlantic, and a southerly or south easterly wind is coming up from the continent, bringing us nice warm weather, one would hope. And I hope it’s as sunny for you today as it is here in Manchester, although having seen we’ve got delegates here from Egypt and from the Far East, I think it’s probably going to be a little bit better weather there than it is over here. So why might the daily mean wind direction not be a reliable indicator? Well, to be honest it doesn’t give an indication of how variable the wind has been, it’s just an indication of the average wind direction. The winds could have been quite changeable during the day, but it still just calculates an average wind direction. What does a wind direction of north east mean? The wind is blowing from the north east, like we said, and not towards the north east.
Trace, tr in the database: you’ll find that in some of the rainfall data, and it’s where there’s basically a trace of rain but not enough to measure. And what is normal air pressure? So when the weatherman’s talking about high air pressure or low air pressure, well, normal air pressure is about 1013 hPa, which is the average sea level pressure. High pressure depends on which website you look at; some people regard high pressure as being above 1020 hPa, or millibars in old money, as it was when I was taught it. And when the weatherman’s talking about low pressure, some people mean anything below 1005 hPa and some anything below 1000, and the general rule is that low pressure generally means poor weather and high winds.

Slide 3: The A level reforms

So we’re going to move on to the next slide. In the presentation today we’re going to be talking a little bit about the A level reforms, and I’m then going to go on to talk about some normal distribution calculations and how we might use those. I’m then going to hand over to Narsh, who’s going to talk to you about hypothesis testing and the binomial distribution. So the first part is to do with what used to be in the old Edexcel Statistics 1; the middle part of the presentation is for those of you who haven’t really taught Statistics 2 before, which is where the binomial and the hypothesis testing were; and in the final part, the last half hour, I’m going to talk about how you can use the new style of calculators that have come out, and ways to work with large data sets. So I should go back a slide. Just before we go on, could we have the next poll please?

Slide 4: Poll

So, just so we can find out a little bit about you. OK, I think everyone’s just about voted. We’ve still got a few results coming in.
And hopefully you can see on the screen: we’ve got quite a few people who’ve never taught this before, and hopefully this will be useful to you, and quite a few people are experienced teachers, so hopefully we’re going to add some new material to what you’ve got. Can we go back to the presentation please?

Slide 5: The A level reforms

OK. So in terms of the reforms, Ofqual was tasked by the Department for Education with reviewing the current A levels, the ones we’re doing at the moment, and asking whether they are fit for purpose. As a general rule it was considered that the Core 1 to 4 content was fit for purpose. We got a lot of feedback from the universities saying please don’t mess anything up, don’t change anything too much, our courses rely on it. But it turns out there was never any specification written before for the applied side, so all the boards could submit what they liked to the DfE, which kind of explains why, if you’ve taught different specifications before, if you’ve tried to teach mechanics or statistics or decision maths, you’ve come in and found different content there. So all the AS levels and A levels will be assessed at the same standard as they are currently; we’re not seeing as big a leap as the one at GCSE that we’ve all gone through this summer. They’ll all be fully linear, so the AS levels will be standalone qualifications and won’t count towards the A2 levels. In terms of university application points we’re talking about 40% of the tariff of an A2 level. And the AS content was designed to be a subset of the A level content, to allow co-teachability; the marks at AS won’t count towards A2 level.
That was important for co-teachability: you can have students in your group who just intend to finish at AS level, teach the year 12 content to both those finishing in year 12 and those continuing to year 13, and those just doing the AS level simply stop the course in the summer and take the exams. It’s absolutely co-teachable.

Slide 6: The A level reforms (content)

In terms of the content, it’s now 100% core content. What does that mean? It just means it’s 100% prescribed. So we’re talking two thirds of the course from Core 1 to Core 4, which all students will take, and the remainder of the course is made up of content that mainly comes from Mechanics 1 and 2 and Statistics 1 and 2. And if you’re looking in the specification, all the AS content will be shown in bold font. So in terms of the A level reform, what we’ve got here is from the DfE document, and my key message for you is not to panic. Although we now have a requirement for assessment of problem solving, communication, proof, modelling and applications of technique, this has always been in the A level maths and further maths specifications. All the boards have done this, but it was Ofqual that decided there should be standardisation. So my key message: I don’t want people to panic, I don’t want people thinking they’re going to go through the change in style of questions that we’ve seen at GCSE. And if you’ve had a chance to look at the sample assessment materials, you’ll find that actually it’s pretty similar to the style of questions that you’ve seen previously. There’s a requirement now to have a pre-released large data set. This was in some ways copied from the GCSE Statistics course, where it’s gone down very well, with students manipulating and working through a data set as they go through the course itself.
In terms of calculator requirements, we now need the ability to compute summary statistics and access probabilities from standard statistical distributions. It was felt we were operating a little bit in the dark ages: although we were looking up trig values on our calculators, we were still looking up binomial tables in a data book, which is the equivalent of looking up trigonometric values from a data table, so it was decided to move things forward. In terms of an iterative function, well, we’ve always had that on the Casio and Texas Instruments calculators: you can use the answer key to put the answer back into the formula. New for further maths is the ability to perform calculations with matrices up to three by three. Any of the calculators on the market that have these summary statistics tables in can also do this.

OK, I’m just getting a message from Karl, so I’m going to pause for a second so that he can improve my sound. Karl, do you want to come on mic and just talk about that one?

Hi Mark, yes, it’s Karl here. Sorry to interrupt you, but some people are having a little bit of difficulty following because your sound level is a bit low, and we just want to see what we can do about that. First thing to do is just click on the down arrow next to the mic icon and look at the volume setting there. If it’s not up to about 80%, turn it up to there please.

How’s that now?

I think it’s a little bit louder; just try to say a few words so I can get a better idea.

Testing testing one two three is the old classic. Is that any better, Karl?

A little bit better. How far up is it on the slider?

I’m at about 95%.

OK. It’s still not great to be honest, but it’s better than it was.

I’ve turned it up to 100% now.
How’s that now?

OK, let’s carry on with that then. The other thing: just make sure that the mic is fairly close to your mouth, although not right in front of it; it may just have wandered a little bit perhaps.

OK, I’ll just check that one. So if you’ve got any problems, just comment in the chat window to Narsh in the background.

Lovely Mark, thanks very much, and sorry for the interruption. Cheers.

Slide 7: Our design principles

OK. So in terms of the design principles, we’ve got separate pure and applied papers, with a simple 2 to 1 ratio of pure to applied, so two thirds of the course is pure and one third is applied. And the large data set will be available for the lifetime of the qualification. There’s a comment here, "may be amended", but that’s only if Ofqual feels that the questions are getting quite repetitive; it is envisaged that it’ll be there for the lifetime of the course. Further mathematics is designed to aid parallel delivery with mathematics, so if you look at some of the schemes of work, they highlight where the prerequisites come in between further mathematics and AS and A level mathematics. And there are no non-calculator assessments; that was never a requirement of the specification, it’s just what the boards have decided to do in this generation of A level maths.

Slide 8: Overview of the specification (AS level)

Moving on in terms of the content, we’ve got two papers at AS level. Paper 1 is where the pure mathematics will be, two hours and 100 marks, and paper 2 is where the mechanics and the statistics are grouped together. From feedback to Pearson Edexcel it was felt that people wanted the mechanics and statistics in a separate paper from the pure.

Slide 9: Overview of the specification (A level)

At A level we’ve got three papers: pure 1 and pure 2, and any of the content can be assessed in either of those two papers.
If you look at previous iterations, there was content defined for pure 1 and content defined for pure 2, but Ofqual requested that any content could be assessed in either of those two papers. And then at the end we’ve got a mechanics and statistics paper where the applied content will be assessed.

Slide 10: Applied content changes

Now in terms of the applied content changes, this is where there’s never been a specification before. The key message is that it’s now standardised content across all boards, so if you’re moving between boards or between schools, at least you haven’t got too much of a CPD issue when you’re moving around. If you’ve had a look through the specification and content from the board, you’ll see more of an emphasis on statistics at AS level and more of an emphasis on mechanics at A2 level. The thinking behind that is that a lot of the students who stop the course at AS level might be using it to go on to do the social sciences, and it might be more useful for them to have a bit of a background in statistics. And as we’ve said before, statistics relies heavily on the use of a calculator.

Slide 11: AS level content

So in terms of the content, I’ll do this briefly. Statistical sampling, which a lot of you have seen before: a lot of this is in the current Statistics 1 modules and common to the majority of them. Calculation and interpretation of measures of location: although we’ve had measures of location before, the emphasis in the new AS level is more on the interpretation of data than on the actual calculation. Understand the use of coding, which we’ve had in the specifications. Interpret diagrams for single variable data: it was felt that too much exam time was being spent on students regurgitating standard calculations, and the emphasis should instead be on comparing distributions, comparing graphs and drawing conclusions from them.
Probability, mutually exclusive and independent events: if you’ve used Statistics 1 with Edexcel before, this will feel familiar to you. In terms of the difference between AS level and A2 level: AS level is where we’ve got discrete distributions, so we’ve got the binomial distribution at AS level, and we’ll go on to talk about the continuous distributions, with the normal distribution at A2 level. The big change is that we’ve now got statistical hypothesis testing, which has come down from S2 if you were with Pearson Edexcel previously, so if you have S2 textbooks, that’s a good place to start looking for resources to support your course.

Slide 12: A level content

In terms of A level, this is where we’ve got regression and correlation coming in. Students are expected to be able to interpret correlation, but not necessarily calculate the regression line within the exams themselves. Obviously there’s no harm in doing that within your lessons, to get them to understand the calculations behind the correlation and regression equations. Hypothesis testing then involves testing for zero correlation of correlation coefficients. And this is where we start bringing in the probability work that you’ve previously seen in S1, so we’ve got set notation, conditional probability and the assumptions behind probability calculations. Then at A2 level, this is where we bring in the normal distribution, and also the use of the normal distribution for hypothesis testing. We do have to talk within the course about where the normal distribution can be used as an approximation to the binomial distribution, but that’s something Narsh will talk about later on. And hypothesis testing for the mean of the normal distribution.
Slide 13: Standard Deviation and the Normal Distribution

So just to go on, if you’ve not taught the normal distribution before: standard deviation and the normal distribution. This next section is basically for those of you who haven’t taught Statistics 1 before.

Slide 14: What is the normal distribution?

These diagrams show the distribution of adult males in a particular city, and as the class width reduces, the distribution gets smoother…

Slide 15: The normal distribution

…until, in the limit, we end up with what we call the normal distribution. One of the key measures of the normal distribution is what we call the standard deviation. Some of you might be screaming at the microphone as I say this, but a very rough way to think about standard deviation is the average difference from the mean. That’s only loosely what we’ll be calculating, but it’s a nice way to think about it. And if you have a normal distribution with a mean of zero, then within one standard deviation either side of the mean we’ve got 68% of the population, within two standard deviations either side of the mean we’ve got 95.4% of the population, and within three standard deviations either side of the mean we’ve got 99.7% of the population. Now as I was saying before, standard deviation is a measure of spread, the key measure of spread at this level, so what we’re going to do is just look at how we find the standard deviation of the following numbers. In terms of notation, this is something we have to get you used to if you haven’t taught AS level before: the mean is the sum of the frequencies times the numbers, basically adding all the numbers up, divided by the number of numbers. Here we’ve got a frequency of one for each of the numbers, so we’re just going to add them all up and divide by the number of numbers; apologies, that should be a 6 on the slide, and we get a mean of 54 over 6, which is 9.
So we’re just calculating the mean, which at this level is called x bar. Next, we look at the difference between the mean and each of the values: we take x bar away from each of the values and see how far each value is from the mean. Now if we were to average these differences we’d get a value of zero, because we’ve got a load of positive numbers and a load of negative numbers. So the way we get around that is to square the differences first, then add them all up, then divide by how many numbers there are. What this calculation gives us is the variance. So we take the difference between each value and the mean, square it, add them all up and divide by the number of numbers, and that gives us a variance of 32 over 9. For the standard deviation we square root the variance, and we get 1.86. So we can loosely think of the average distance from the mean as being about 1.86.

Slide 16: Easier method

How do we calculate that more easily? That’s quite a long-winded method, really, and there are easier ways to do it. This calculation here, the sum of the values divided by n, is the mean; what we can do instead is square the values, sum them and divide by n, and then subtract the square of the mean. Here’s the short version, and a nice way to remember it: the mean of the squares minus the square of the mean. In terms of the calculation we’ve just done, instead of doing it all in the table, we square all the values, find their sum and divide by six, take away the square of the mean, and that will give us the standard deviation once we’ve square rooted it.
A much quicker way than taking the mean away from each value.

Slide 17: Grouped data

In terms of grouped data, we’ve got to take into account that we have a frequency for each class, so we’ve got four lots of 2.5 here rather than just one 2.5, and the short way to add that up is to do the frequency times the midpoint; remember that for grouped data we’re dealing with the midpoint of each class. So we do four times 2.5 for the sum of those values, and for the sum of the squares we need four times 6.25, to give us 25. We add those up, and the key totals that you might be given in an exam are sigma fx squared is 6487.5, sigma fx is 285 and sigma f is 27; can we calculate the standard deviation from those? For grouped data, again, it’s the mean of the squares minus the square of the mean, which gives us the variance, and we square root that to get the standard deviation.

Slide 18: Grouped data (formula)

I’m not going to spend too much time on that; it’s a slide that you can look through in your own time. The short version of the formula is this: again we’ve got the mean of the squares minus the square of the mean, with sigma fx over sigma f being the mean, squaring it, and square rooting at the end to get 3.08.

Slide 19: Standard notation

Now in terms of standard notation, we have here that the probability that X is greater than 8 is 0.5. Basically what we’re saying is that the mean of this distribution is 8 and half the population is above that. Since 8 is the mean of the distribution, and the normal distribution is symmetrical, for any normally distributed random variable the probability that X is greater than mu is 0.5; we’re talking about half the population. So X is normally distributed, the mean is 8, and we’ve got a standard deviation of 0.2.
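For anyone who wants to check these figures electronically, here is a minimal sketch in Python (my assumption; the session itself only assumes the new calculators). The six data values are hypothetical stand-ins chosen to give the mean of 54 over 6 quoted earlier, since the actual values are only on the slide; the grouped totals are the ones quoted above.

```python
import math

def variance_definition(xs):
    """Variance by the definition: mean of squared deviations from the mean."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def variance_shortcut(xs):
    """Variance by the shortcut: mean of the squares minus the square of the mean."""
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

data = [7, 8, 9, 9, 10, 11]  # hypothetical data with sum 54, so mean 54/6 = 9
assert math.isclose(variance_definition(data), variance_shortcut(data))

# Grouped data, using the summary totals quoted above:
sum_fx2, sum_fx, sum_f = 6487.5, 285, 27
mean = sum_fx / sum_f
variance = sum_fx2 / sum_f - mean ** 2   # mean of squares minus square of mean
sd = math.sqrt(variance)
```

The same two-method check is a useful classroom exercise: students can confirm that the long table method and the shortcut formula always agree.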
One thing to be careful of in examinations or in textbooks is that the second parameter might be given as the variance rather than as 0.2 squared, and a common mistake is for candidates to get the variance and the standard deviation mixed up. So it’s also a case of getting the students used to this notation. The alternative notation, which we’ll cover later on, is when this N is replaced by a B; that’s where X is distributed binomially.

Slide 20: Key cases

So, the key cases that you need to be aware of. The first is where we could be asked, for instance, to calculate the probability that X is less than 33. We’re going to talk later on about how to use the calculators; previously we would have had to standardise this to a standard distribution with a mean of zero and a standard deviation of one and look up values in the tables. Now what we can do with the new calculators is put this data into the calculator straight away and work out the probabilities. So this is the standard case where we’re asked to find the probability of being less than a value above the mean. A second case would be if we’re asked to find the probability of being above a certain value below the mean. If you’ve done the course before, what you’d have had to do previously is find the probability of the area below the mean and take it away from one; with the new calculators what you can do is calculate the area under the curve straight away. That’s a slight difference even if you’ve taught the course before. We could also be asked to find the probability of being between two values, 33.5 and 38.2, and again with the new calculators we can plug that information in and get the answer straight away. The key question, which we’re going to talk about later on, is what working the students will need to show, because Ofqual only actually asked for the tables to be built into the calculators.
There’s a lot more functionality there than was actually specified, and we just need to make sure we’re not bypassing understanding by getting students to simply plug values into their calculators. We can also do a "not equals" situation, which is rarer in textbooks; in that situation we’d use our calculators to find the area between the two values and do one minus that.

Slide 21: Standard normal distribution

OK. So one of the things we might have to do is standardise the normal distribution, as you would have had to do previously on the S1 course. The standard normal distribution has a mean of zero and a standard deviation of one. Previously we’d say: if X is distributed normally with a mean of 50 and a standard deviation of 4, find the probability that X is less than 53. We could use coding to relate it to the standard normal distribution, find how many standard deviations it is away from the mean, or what multiple of the standard deviation it is away from the mean, and look that up in the tables.

Slide 22: Coding

With the new specification we could do it on the calculator, but we still need to be able to do it for some of the questions in the textbook. So coding: X is distributed normally with a mean of 50 and a standard deviation of 4, and we’re after the area that is less than 53. In terms of coding, that translates to being 3 above the mean; the standard deviation is 4, so it’s 0.75 standard deviations above the mean, and that’s what you’d go and look up in the tables previously. Where does that come in now? Well, sorry, I should have said that this is the calculation we’ve just run through: 53 take away 50, divided by 4, gives us that it’s three quarters of a standard deviation above the mean, and then looking that up in the tables or on the calculator gives 0.7734.
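The coding calculation just described can be reproduced outside the classroom calculators as well; here is a sketch using Python’s standard-library `statistics.NormalDist` (Python is my assumption, it is not mentioned in the session). It checks that coding to the standard normal and working directly with the original distribution give the same 0.7734 figure, and illustrates the "between two values" key case; the bounds 47 and 53 here are hypothetical, chosen around the N(50, 4) example rather than taken from the slide.

```python
from statistics import NormalDist  # standard library, Python 3.8+

X = NormalDist(mu=50, sigma=4)

# Coding: how many standard deviations is 53 above the mean?
z = (53 - 50) / 4                # 0.75 standard deviations above the mean

# Looking this up on the standard normal (mean 0, sd 1) matches
# evaluating the original distribution directly:
p_coded = NormalDist().cdf(z)
p_direct = X.cdf(53)

# "Between two values" key case, with hypothetical bounds 47 and 53:
p_between = X.cdf(53) - X.cdf(47)
```

Both `p_coded` and `p_direct` come out at 0.7734 to four decimal places, matching the table lookup described above.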
So the coding formula is your value minus the mean, to get the difference between it and the mean, divided by the standard deviation, and that’s what we look up in the tables.

Slide 23: Coding 2

So we still need coding in the specification: the calculators can’t work it backwards to give us a missing mu or standard deviation. Suppose a random variable is distributed normally with a missing mean but a given standard deviation, and we’re given some other information: given that the probability that X is greater than 20 is 0.2, find the value of mu. We need to use the probability that X is less than 20 with the tables that come along with the specification, because for some of these calculations we can’t work it back using the calculators. So we’re going to look that up in the tables. Now in terms of the working, this is where students tend to struggle: how to lay it out. This is the value that’s going to be looked up in the tables or on the calculator, and this is a nice recommended layout: we look up how many standard deviations relate to a probability of 0.8 and find the figure 0.8416. Using the coding formula, we can then say that 20 minus mu, divided by 3, is equal to 0.8416. We can rearrange that and work out that the mean is about 17.5. In my experience this is on the harder end of the statistics specification; a lot of students really struggle to lay their working out, and my advice would be to follow something like the working on these slides.

Slide 24: Binomial distribution AS level

Now I’m going to hand over to Narsh, if I may, to talk to you about the binomial distribution. This is the content that was previously in Statistics 2 if you’re with Edexcel, and Narsh is going to hand back to me a little bit later on.

Hello everybody, hope you can all hear me.
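Before moving on: although the calculators can’t recover a missing mu directly, the table-lookup step in the working above can be reproduced with an inverse-normal function. A sketch in Python (my assumption; not part of the session), following the sigma = 3 example just described; the exact rearranged value is 17.48 to two decimal places.

```python
from statistics import NormalDist

# P(X > 20) = 0.2 with sigma = 3 and mu unknown, so P(X < 20) = 0.8.
# inv_cdf reproduces the table lookup: the z with Phi(z) = 0.8.
z = NormalDist().inv_cdf(0.8)    # the 0.8416 figure from the tables

# Coding formula: (20 - mu) / 3 = z, so rearranging:
mu = 20 - 3 * z
```

Laying the work out this way, first the lookup, then the coding equation, then the rearrangement, mirrors the recommended written working on the slide.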
I’m going to talk to you about the binomial distribution today, so let’s get cracking.

Slide 25: Binomial distribution

I’m going to start with a pure question: in the following expansion, how many different ways could one obtain an a cubed b squared term? This is a pure question, and if you’ve taught pure before then you’ll be absolutely fine with it. If not, don’t worry, because I know there’ll be some people here who aren’t teaching pure and have never taught pure; that’s not a problem, you’ll still be able to access this, but I’m just going to tie the pure work in with the binomial distribution. So hopefully you’ll be familiar with expanding these five brackets, and if I’m going to get an a cubed b squared term then I’m going to need three a’s, for example it could be this a, this a and let’s say this a, and a couple of b’s, maybe those two b’s over there, and that would get me an a cubed b squared term. And there are a bunch of different ways we could get that a cubed b squared term; that’s one way, and here’s another, so if I just flick between those two slides you can see a couple of ways there, and my question is how many ways there are altogether. I’ll give you five seconds to ponder that if you’re not familiar with the binomial. If you are, then hopefully you’ll be aware that it’s five choose three. When I say five choose three, what I mean is this here; we’re not going to get into the practicalities of where that comes from, but I’m sure if you’ve taught pure, or the binomial in statistics previously, then you’ll be familiar with how to calculate five choose three and where that solution comes from. The binomial expansion is called the binomial expansion because there are two parts being expanded, and we’re going to look at the binomial distribution, which is about variables that have two possible outcomes.
The outcomes need to be mutually exclusive, and so we can use the binomial distribution to model repeated trials of a variable with two possible outcomes. We’re going to look at an example, and this section is very much for people who have never taught the binomial before; I’m going to spend a lot more time on the binomial than on the hypothesis testing, just to give you an early heads up. The example here is that we throw a biased coin five times and we want to evaluate the probability of obtaining a head at least three times, when the probability of throwing a head is 0.7. That question could, I suppose, in theory have been a GCSE question; it would be a beast of a tree diagram. To start with we pop down the key information, which is exactly what I would suggest my students do, and we use the standard notation for this: n here is our number of trials, and n will always represent our number of trials; the probability of success is p; and the question is asking us to calculate the probability that X is greater than or equal to 3, X being the number of times we get a head. In order to do that, with this being discrete, to get the probability of at least 3 heads we can just take the probability of getting 3, the probability of getting 4 and the probability of getting 5, and add them all together; obviously if you get 3, 4 or 5 heads then you have successfully got at least 3. And we can work each of those out individually. To do that I’m going to start by looking at the case of exactly 3 heads.
And if we were to get exactly 3 heads, if we were to go back to doing this in a GCSE kind of way, then we would draw ourselves a tree diagram, and as I said it's quite a ghastly looking tree diagram, so I've just drawn a skeleton here of the five events that are occurring, the five throws of the coin, and here are the outcomes now. And that first outcome, the very first outcome at the very top, would be if we were to get five heads, but obviously we know we're not looking for five heads, we're looking for the case where x equals 3. And when x equals 3, we need 3 heads and 2 tails, and there's a couple of ways: we could have head head, first two heads, then a tail, then another tail, then a head, or we could have tail, followed by two heads, followed by a tail, followed by a head. And there are a few other ways. And so my question for you is, and I'll just give you a few seconds to have a think about this, how many other final outcomes would result in exactly three heads, where along this tree diagram would you get three heads? Might be worth, if you're in a room with others, chatting with your faculty, maybe the group would like to discuss that. Another way of asking that question is how many times would I have ended up with exactly 3 heads, and that is very similar to how many terms will have h cubed if I was to expand out h add t to the power of five. So if I look here, h add t to the power of five, how many terms would have h cubed in it? Which of course is how many terms would have h cubed t squared in it, and this is now very similar to what I mentioned earlier on binomial expansion, where we had a cubed b squared in an expansion of a add b to the power of 5. So that hopefully ties in binomial expansion with binomial distribution for you, for those of you who are familiar with binomial expansion, and that gives us 5 factorial over 3 factorial, 2 factorial.
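For anyone who wants to sanity-check that counting argument, here's a minimal sketch in Python (my own illustration, not part of the session):

```python
from math import comb, factorial

# Number of ways to place exactly 3 heads among 5 throws: "5 choose 3"
ways = comb(5, 3)

# The same thing via the factorial formula from the slide: 5! / (3! 2!)
ways_by_factorials = factorial(5) // (factorial(3) * factorial(2))

print(ways, ways_by_factorials)  # both give 10
```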
Now don't worry too much about this if you're unfamiliar with it, your calculators will get you these figures nice and easily these days, not like when I was learning it myself, so you can actually use the nCr facility on your calculators to do this, you don't need to do it as a factorial calculation. I'll just get my pointer, right. OK, and so we know how to do that and we're going to get 10 different ways, so we should have in theory 10 ways to get 3 heads exactly. Here they all are. And so I'm going to look now specifically at the very first two, and the first two, if we look at the probabilities, so just go back, the first one there is head head head tail tail, and the second one is head head tail head tail. So the first one, head head head tail tail, well that's the probability of getting a head followed by a head followed by a head followed by a tail followed by a tail, and that is because each of those is independent: when you throw the coin once it doesn't affect the second throw of the coin or the third throw or the fourth throw or the fifth throw. So we can split that up and use a bit of GCSE maths there and say well that's the probability of a head and the probability of a head and the probability of a head and the probability of a tail and the probability of a tail, which is going to be 0.7 times 0.7 times 0.7, that was given to us in the question, multiplied by the probability of a tail, and of course if it's binomial and there's only two possible outcomes and they're mutually exclusive, then the probability of a tail is going to be one minus the probability of a head, so 0.7 times 0.7 times 0.7 multiplied by 0.3 squared, and so there we have our first probability. And if we look at the second one I said we were going to look at, we've got basically the same calculation, we've just had to flip the order round a little bit, head head tail head tail, and obviously multiplication being commutative means that we end up with exactly the same result.
So in effect when we go back to our tree diagram we can see hopefully that we've got 0.7 cubed times 0.3 squared, and we're going to get that for all of the 10 options, all along here, that give us exactly 3 heads, so in order to calculate the probability of getting 3 heads, we're going to need to multiply by 10 for our 10 different ways of doing it. OK, so we had the number of ways to get 3 heads, the probability of getting 3 heads and the probability of getting 2 tails, and in order to calculate the probability of getting exactly 3 heads, we needed to multiply all those together. It's really a multiplication of the latter 2 parts there, the probability of the heads and the tails, and then it's 10 additions of that same thing for the number of different ways that you could get 3 heads. And so we get 5 choose 3 times the probability of a head cubed times the probability of a tail squared, but obviously this probability of a tail is related to the probability of a head, so we can change that into one minus the probability of a head, and if we call it the probability of success, rather than calling it the probability of a head, if we think more generally about all binomials, we're going to talk about success and failure, and we're going to get in this example the probability of success cubed and the probability of failure squared, and there were 5 choose 3 ways of doing it, 5 because we threw the coin 5 times, so in another binomial we might throw it n times, and in this particular example we had 3 successes, so if we were looking at r successes then it would be n choose r, and this time we'd need r successes, so the probability of success to the power of r, and n minus r failures, so one minus p to the power of n minus r. And that then is a general way to find the probability of obtaining exactly r heads, or r successes, when you have a binomial event.
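That general formula, n choose r, times p to the r, times one minus p to the n minus r, can be sketched in a couple of lines. This is my own illustration in Python, checking it against the coin example from the slides:

```python
from math import comb

def binomial_pmf(r: int, n: int, p: float) -> float:
    """P(X = r) when X counts successes in n independent trials, each with probability p."""
    return comb(n, r) * p ** r * (1 - p) ** (n - r)

# The coin example: n = 5 throws, P(head) = 0.7
print(round(binomial_pmf(3, 5, 0.7), 4))   # 0.3087, i.e. 10 x 0.7^3 x 0.3^2

# P(X >= 3) is the sum P(3) + P(4) + P(5)
at_least_3 = sum(binomial_pmf(r, 5, 0.7) for r in (3, 4, 5))
print(round(at_least_3, 5))                # 0.83692
```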
So there we have something more general, and we can now apply this to the case of four heads or five heads rather than using our tree diagram. So here it is. We've done the first bit here, the probability that x equals 3, we've already done that. The probability that x equals 4, I'm going to follow through here, so we have the number of ways of getting 4 different, sorry, 4 heads, and we're going to need 4 heads and therefore one tail, so we're going to multiply those together, and that will give us the probability that x equals 4, and the same for the probability that x equals 5. And we have 3 separate probabilities, and because we cared about the probability of being greater than or equal to 3 we're going to add those together, because it could be any of those situations, and that will give us a final probability of 0.83692. Now if you care to note it, this value here you will need later on. Slide 26Expectation Value and Variance of a Binomially Distributed VariableOK, now I'm going to talk a bit about the expected value and the variance of a binomially distributed variable. The binomial distribution has two possible outcomes, success and failure, so we can represent that with some binary, using success as 1 and failure as zero. And if I do that, and if I were to for example repeat my trial 20 times and the probability of success was 0.8, then our expected value, hopefully fairly intuitively, would be 20 multiplied by 0.8. I'm going to give you 20 seconds to just have a quick chat with anyone around you, or to have a think to yourself, about why it would be 20 multiplied by 0.8 as our expected value.Sorry Narsh, I think we've got a problem with your microphone.Quite right Mark, thanks very much. Hopefully that's back on now. Right, very sorry folks, I'm just going to go through everything I said entirely to myself again for you all.
So here I've got a table for one trial; the probability of a head is 0.8, so the probability of a tail is therefore 0.2, and if we're calculating the expected value of x, if you're familiar with this from discrete random variables in the old spec, or if you're teaching it in the new spec, you'll know that the expected value of x can be calculated by multiplying the value here by the probability, and you'll get 0 for this first one, the tail. Similarly for the head you'll get 0.8, and we're going to total those to get our expected value, and then if we do the same thing for x squared we get these values here, and obviously you can see that they're the same in this particular case, and we can get the expected value of x squared in exactly the same way then by totalling those two values there, on this bottom row, which gives us 0.8, and we can use those in pretty much exactly the way that Mark showed you earlier on. This is expressed slightly differently to how Mark showed you it but it does show you the same thing: he had sigma notation here for the sum of the x squareds over n, i.e. the expected value of x squared, or the mean of the x squareds, and we subtract the mean squared, the square of the expected value of x, and for this case we'll get a variance of 0.16. Now we're going to look at a couple of different cases, so here we have pretty much the same thing, but we're going to do it for a more general case: rather than having 0.8 this time we're going to say the probability of success is p, and we know that these two are inextricably linked, and if this is p then this has to be 1 subtract p. And obviously 0 times anything is 0, so the probability that x equals zero multiplied by zero is zero, and similarly here when we multiply these we get p. The expected value of a binomially distributed variable with a single trial is p. And the expected value of x squared is p, and therefore the variance of it is p into 1 minus p.
Just going to leave that there for a moment for you to just mull over before we move onto the next slide which will take this on a step further. Slide 27Expectation Value and Variance of a Binomially Distributed VariableSo this time we have 2 repetitions of the same event. So we're going to have a success or failure, and we know that the probability of success is p, and if you think about your tree diagram you could then draw this and you'd be able to work out the probability of getting one success, and if you've done 2 repetitions then you could have 2 successes and of course you could also potentially have no successes. So there's three possible eventualities now, no successes, one success or two successes, and here are our probabilities. Now in order to see where these come from you may need to draw yourselves perhaps at a later date the tree diagram that would go with this, and your tree diagram would have success failure, followed by success failure, followed by success failure. And then you could draw a list of your outcomes and hopefully get those probabilities. Now we have the same row that I had earlier for finding the expected value with two repetitions, and so this time we get 2p for our expected value. Same row again, so this time with x squared in order to be able to find the variance, and if we follow this through we’re going to get a variance of 2p into 1 minus p. And if I flip back to here, here we had a variance of p into 1 minus p, that was with one repetition. Here we have 2p into 1 minus p. OK. And now we're going to look at n repetitions. Now this is a bit more complicated and drawing a tree diagram with n repetitions is pretty tricky. Hopefully you're convinced by the fact that the number of, the probability of getting n successes would be p to the n, you would need success followed by success followed by success etcetera, and same for failure. And we're going to get an expected value of x of np. 
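If you'd like to build this up slowly as suggested, a brute-force check of the expected value and variance is easy to sketch. This enumerates every success/failure sequence, so keep n small; it's my own illustration, not from the session:

```python
from itertools import product

def binomial_moments(n: int, p: float):
    """E(X) and Var(X) for X = number of successes in n trials, by enumerating all 2^n outcomes."""
    e_x = e_x2 = 0.0
    for outcome in product((0, 1), repeat=n):   # 1 = success, 0 = failure
        prob = 1.0
        for o in outcome:
            prob *= p if o else 1 - p
        x = sum(outcome)
        e_x += x * prob
        e_x2 += x * x * prob
    return e_x, e_x2 - e_x ** 2                 # Var(X) = E(X^2) - E(X)^2

# Two repetitions: theory says E(X) = 2p and Var(X) = 2p(1 - p)
print(binomial_moments(2, 0.8))

# Five throws of the biased coin: theory says np = 3.5 and np(1 - p) = 1.05
print(binomial_moments(5, 0.7))
```

The second call reproduces the 3.5 and 1.05 that appear later when the normal approximation is checked against the five-throw example.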
Now if I just scroll back up, the expected value of x with 2 repetitions was 2p, and with 1 repetition it was just p, and so hopefully it's believable at least that with n repetitions it's np. I asked you earlier to think about why, when we had 20 repetitions with a probability of 0.8, you would do 20 times 0.8 in order to get the expected value, and so similarly here hopefully that will enable you to understand np as the expected value with n repetitions. The variance of x here is beyond the scope of the course. It doesn't require an awful lot more time, but you'll need to think about how many ways, having repeated n times, you could obtain a total of, let's say, 5 or 6 or 7 successes. And in order to do that initially you might be best to think about n equals 3 and n equals 4 and build it up slowly.Slide 28Binomial DistributionOK, so, throwing a coin 500 times now, so a slightly different example, still a biased coin, this time we're going to throw it 500 times and we're looking to evaluate the probability of obtaining a head exactly 300 times when the probability of obtaining a head is 0.56. So if you've got a calculator to hand, this is our calculation, so this time I'm not asking for at least 300 times, just exactly 300. So very similar to what we did earlier, instead of 5 choose 3 I've got 500 choose 300, the probability of success is 0.56 to the power of 300, and 1 minus the probability of success will give me the probability of failure, so these two ought to add up to 1, which they look like they do, happy days. And if we've got 300 successes then we must have 200 failures. And if you try this on a calculator I expect that what you'll probably get is an issue, because this combination calculation at the start of it is too big for the calculator to compute.
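Incidentally, software that works in logarithms sidesteps the overflow: 500 choose 300 has well over a hundred digits, which is why a scientific calculator gives up, but summing logs keeps every intermediate value small. A sketch of that idea (my own, not part of the session):

```python
from math import lgamma, exp, log

def log_binomial_pmf(r: int, n: int, p: float) -> float:
    """log P(X = r) for X ~ B(n, p), using log-gamma so nothing overflows."""
    log_comb = lgamma(n + 1) - lgamma(r + 1) - lgamma(n - r + 1)
    return log_comb + r * log(p) + (n - r) * log(1 - p)

# The 500-throw example: P(X = 300) when p = 0.56
p_exact = exp(log_binomial_pmf(300, 500, 0.56))
print(p_exact)   # roughly 0.007 -- small, but perfectly computable
```

In Python specifically, `math.comb(500, 300)` is exact integer arithmetic and would also work, but the log trick mirrors what statistical software does internally.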
Slide 29Binomial Distribution and Central Limit TheoremSo what we do in that scenario is we approximate it with the normal distribution, and the normal distribution can be used as an approximation when you have lots and lots of something, and I think Mark talked a little bit about the central limit theorem earlier. So obviously we know that the normal distribution is bell shaped and that we have a symmetrical distribution. So this is only really valid when the probability is pretty close to 0.5. And it's a bit of a, well, what counts as close to 0.5, you have to use your judgement to decide that. Obviously the closer to 0.5 the more accurate the approximation is. So in our example we've got 500 repetitions, we've got a probability of success of 0.56 and a probability of failure of 0.44, and we know that to calculate the expected value of a binomially distributed variable we can do n times p, so in this case 500 times 0.56, and we can get our mean that way. So we're going to use that, and that's 280, and we also know from a few slides ago that the variance of n trials of a binomially distributed variable is np into 1 minus p, so to calculate the variance here we're going to do 500 times 0.56 times 0.44, so that's the number of repetitions, the probability of success and the probability of failure, and that gives us a variance of 123.2. Slide 30Normal approximation of binomial distributionSo now we need to calculate the probability, and I'm just going to show you a bit of notation here, so here we've got some notation showing that x is distributed binomially, 500 repetitions, probability of success 0.56, and that gives us these two values. And we're going to create our own variable, which is our approximated variable, and this is distributed normally with a mean of 280 and a variance of 123.2.
And our example asked for the probability that x equals 300, and if we're trying to find the probability that x equals 300 then, given that we're using a continuous variable now, because the normal distribution is a continuous distribution, we need to think about the probability that y lies between 299.5 and 300.5, and that's the continuity correction, which you would need to do whenever you're approximating discrete data using a continuous scale. So hopefully that's something you're familiar with already. OK, and then to do the rest of that, I've put it all up on the screen for you, this is now just a normal distribution calculation which Mark effectively talked you through earlier, he did that quite quickly, but hopefully that all makes sense to you and that's how you would use a normal distribution to approximate when n is large. I'm going to just leave that open for a moment for you to read through if you want to. I'm not going to talk you through it. OK, let's move on. So I want to just have a quick look at how good this normal approximation is, how valid is it to do what we're doing. So let's have a look at how it would have got on before. So this is the example we looked at earlier, we did five trials with a probability of heads being 0.7, and we know that the expected value of x is 3.5 and the variance of x is 1.05, this is just using np for the expected value of x, and np into 1 minus p for the variance, and we wanted the probability that we had at least 3 heads. So this time we're going to look at the probability that y, the normal distribution variable, is greater than or equal to 2.5. I've done this the old way of doing normal distribution; if you were listening intently to Mark earlier you'd know that your calculators mean that you don't need to do this any more, and you would just work out the probability that it's greater than or equal to 2.5 on your calculator. Now we got 0.83692 earlier.
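Both continuity-corrected calculations can be checked with nothing more than the error function. This is my own sketch, assuming the means and variances derived on the slides (280 and 123.2 for the 500-throw example, 3.5 and 1.05 for the 5-throw one):

```python
from math import erf, sqrt

def normal_cdf(x: float, mean: float, var: float) -> float:
    """P(Y <= x) for Y ~ N(mean, var)."""
    return 0.5 * (1 + erf((x - mean) / sqrt(2 * var)))

# 500 throws: P(X = 300) approximated by P(299.5 <= Y <= 300.5)
p_300 = normal_cdf(300.5, 280, 123.2) - normal_cdf(299.5, 280, 123.2)

# 5 throws: P(X >= 3) approximated by P(Y >= 2.5), with continuity correction
p_at_least_3 = 1 - normal_cdf(2.5, 3.5, 1.05)

print(round(p_300, 4), round(p_at_least_3, 4))
```

On my arithmetic the five-throw approximation comes out at about 0.836, close to the exact binomial answer of 0.83692, which is the comparison the next slide is making.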
Using the normal distribution as an approximation we get about 0.8355, which seems pretty good as an approximation. Obviously though the more trials you have and the more symmetric your probabilities the better the approximation will be. Slide 31Hypothesis testing AS and A levelOK, so that's the end of the binomial section, and we're going to look very quickly at hypothesis testing, really just a flavour of what it's all about. Slide 32HypothesisSo here's a hypothesis for you: all school subjects are equally brilliant. There's the null hypothesis, and the alternative hypothesis, h1, which is that maths is the greatest subject ever. Now I know what we're all thinking, and it's not really testable within the scope of this course, but as an intro to teaching it on the new spec A level, here are some key points for hypothesis testing. Slide 33Hypothesis testingYour types of hypothesis, which can be the null hypothesis or the alternate hypothesis; within the alternative hypothesis you're going to get an idea of whether the test is one tailed or two tailed, you'll be able to see that from what you write or what you're being asked. The significance level, and then finally drawing your conclusion. So I'm going to do this through an example, but to start with let me just give you some theory. So the null hypothesis is the boring, everything's as it should be hypothesis. The alternate hypothesis is I've got a view, I've got a speculation here that this is actually the case, it's not all as boring as we thought. And here's an example then: in a manufacturing plant they bottle lemonade on an industrial scale, and they're supposed to be 500 millilitre bottles, so the process aims to ensure that the volume of lemonade in each bottle is 503 millilitres, but there is speculation from the public that the bottles regularly have less than this.
Just going to give you another 10 seconds again to have a think or have a chat with the other people in the room with you: why would a manufacturing plant aim to have 503 millilitres if the bottles are supposed to have 500 millilitres? OK, so obviously what the manufacturers would prefer is that people aren't able to complain about them under-providing, and so it's safer to make sure that there's at least 500 millilitres; it's unlikely that they would receive complaints if there are regularly more than 500 millilitres in their bottles. So our hypotheses then. The null hypothesis, the boring hypothesis: it's supposed to provide 503 millilitres according to the process, and so our boring, everything's as it should be hypothesis is that the mean is 503 millilitres. Our alternate hypothesis: well, the public have a speculation and they speculate that actually they're not getting what they should be getting, they think that the company are trying to fiddle them out of some value for money and providing maybe less than 500 millilitres itself, and certainly less than the 503 that the process claims. So that is a one tailed test, and it's one tailed because the public is stating explicitly that it's less than 503 millilitres, OK? So in this scenario we've got a one tailed test, but on the other hand we could have a test in which we say simply that the test statistic is wrong, in which case it would be conducted as a two tailed test. So for example in the earlier one we said less than 503; instead the chairman comes along and he says the foreman in this manufacturing plant is certainly not doing a good enough job, and he thinks that the mean volume is not 503. And if he's saying not 503, then he could of course be saying it's less or more.
He's simply saying that either way he's got a problem: if it's less then he's not happy with the foreman because he's exposing the company to potential complaints from the public, and if it's more then he's potentially exposing the company to lower profits, because they're providing way too much lemonade when they should be keeping every possible drop of it and maximising their profits. OK, and then we've got the significance level, and the significance level is something that should be set ahead of conducting the test. And it's your threshold probability for the test statistic. So if we think about that bottle example just now, we think it's distributed normally with a mean of 503 and a standard deviation of 13, and we set the significance level at 5%. So if we set it at 5% then we're going to use that significance level, and if we find that the probability of getting what we got when we looked at a bunch of bottles was less than that 5%, then we're going to say to ourselves, actually, this alternative hypothesis is starting to look a bit plausible, much more plausible than we originally thought. So we take a sample of 10 bottles and we test its mean. We know the sample mean of the bottles because we can work it out, and we know that it should be distributed normally with a mean of 503 and a variance of 169 over 10. And that's because of the information I've previously given you: we said the variance was 169. OK. So if the probability of the sample mean being less than or equal to the value we observed, x bar, is less than or equal to 5%, then we can say that there is sufficient evidence to reject the null hypothesis. This doesn't mean quite the same thing as accepting the alternate hypothesis; it just means that we have enough evidence to say actually, 503 doesn't look like the mean that we thought it was, OK, and so that gives us the one tail test.
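To make the lemonade test concrete, here's a sketch of the one tailed calculation. The observed sample mean of 495 millilitres is a made-up figure purely for illustration; the distribution N(503, 169/10) for the sample mean is the one from the slides:

```python
from math import erf, sqrt

def normal_cdf(x: float, mean: float, var: float) -> float:
    """P(Y <= x) for Y ~ N(mean, var)."""
    return 0.5 * (1 + erf((x - mean) / sqrt(2 * var)))

mu0 = 503            # null hypothesis mean (ml)
var_xbar = 169 / 10  # variance of the mean of a sample of 10 bottles
x_bar = 495          # HYPOTHETICAL observed sample mean, for illustration only

# One tailed test: how likely is a sample mean this low if H0 is true?
p_value = normal_cdf(x_bar, mu0, var_xbar)
reject_h0 = p_value < 0.05  # compare against the 5% significance level

print(round(p_value, 4), reject_h0)
```

For the chairman's two tailed version you would compare against 2.5% in each tail, which is the same as doubling this p-value before comparing it with 5%.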
If we were looking at the two tail test, then we'd be looking at whether or not 503 is right, and it could be above or it could be below. So now if we take our sample of 10 bottles and test its mean, we're looking for whether the mean that we get falls within a two and a half percent level below or above, and that's probably easiest to explain with some diagrams. So in a one tail test we're capturing, in this example where we thought that it was less than 503, the probability of there being a mean in this section here. And that section is 5%. Whereas in the two tail test we're saying we could be wrong, but we could be wrong at either end. So now we need to think about the two and a half percent at the bottom end and the two and a half percent at the top end, and if our mean falls within either of these sections then that's statistically significant and we have enough evidence then to at least reject the null hypothesis. OK. And so as I said earlier, we don't have enough evidence to reject, sorry, we have enough evidence to reject the null hypothesis. Of course your test may well provide you with a mean that was very close to 503; it might be 503.0001, which is obviously very very slightly higher than 503. The chances are that that wouldn't be significant, so then you would say that you don't have enough evidence to reject the null hypothesis. I think that it would be unwise to go so far as to say I'm going to accept the alternate hypothesis because it's a bit higher. That 'bit' is what we're testing here when we're doing hypothesis testing, and the significance level is going to make sure that you're above a certain threshold before you say yes, this alternate hypothesis looks plausible. Just a final note: it isn't quite the same to say that I'm accepting the null hypothesis if I say there isn't enough evidence to reject the null hypothesis.
For example, if we set the significance level at 5% and actually what we find is a mean which falls within 6% probability at the tail at the end, then we probably have enough evidence to at least be questioning it. We're not going to reject, sorry, we’re not going to accept our null hypothesis at that point because we probably are saying to ourselves hmm, this looks dodgy but it hasn’t fallen within a level of significance that makes me say yes, I reject the null hypothesis, and say yes, there's enough evidence here to say the alternate hypothesis looks a bit more plausible.OK, so if we look back at the binomial from earlier, we had a biased coin, probability of a head was 0.7, the coin was thrown five times and the probability of obtaining at least 3 heads was 0.83692. Sam throws the coin five times and obtains 4 heads. He conjectures that the probability of a head is in fact higher than 0.7 and decides to test his claim with a 10% significance level. So we're going to set up the hypothesis test. So there are our two hypotheses, normal boring everything is as it should be null hypothesis, probability of a head is exactly as it was claimed to be, 0.7. The alternate hypothesis, Sam's thrown it, he's got 4 heads, he says nah, the probability of a head is definitely higher than 0.7, and now he's going to test it. And he’s testing it at a 10% significance level. So we’ve got these values from earlier, you'll be able to have a look at those on the slides if you've downloaded them, so these were on my earlier slides that I presented. So just over here these values, and this gives us 0.52822, 52.822%. The probability that x is greater than or equal to 4 then is greater than 10%, and we would only reject the null hypothesis if the probability that we’d got here was in that 10% tail right at the top. 
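Sam's test boils down to one binomial tail probability. A quick sketch, again in Python purely for illustration:

```python
from math import comb

def binomial_pmf(r: int, n: int, p: float) -> float:
    """P(X = r) for X ~ B(n, p)."""
    return comb(n, r) * p ** r * (1 - p) ** (n - r)

# H0: p = 0.7.  Sam observed 4 heads in 5 throws, so the tail is P(X >= 4).
p_tail = binomial_pmf(4, 5, 0.7) + binomial_pmf(5, 5, 0.7)
print(round(p_tail, 5))   # 0.52822 -- well above the 10% significance level

reject_h0 = p_tail <= 0.10
print(reject_h0)          # False: not enough evidence to reject H0
```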
This is much higher than that, so therefore there is not enough evidence at this stage to reject the null hypothesis, and we're going to stick with our assumption for the time being that the probability of getting a head is 0.7. Hopefully that all makes sense.Slide 34Calculator supportOK, I'm now going to hand back over to Mark, and he's going to talk you through the calculator.Thanks Narsh, can I just check that everyone can still hear me? Oh good, Narsh has just sent me a quick message in the background to say that you can still hear me, that's good. Slide 35Calculator supportSo we're going to talk for a little bit about how to use calculators on the new specification. We've kind of geared this specification around the Casio; Casio's done some work with Pearson to support it, and if you look at the textbooks you'll see references to some of the Casio calculators. There are other calculators available that will meet the needs of the specification, so the Texas Instruments TI30 Pro will meet the needs of the specification, but we'll just base this presentation around the Casio; the Texas Instruments one is very similar. OK, so the new Casio is a menu based system, and it's not a million miles different in terms of layout from the scientific calculators that you've seen in the past, most people have the FX83 or the FX85. So it's a very similar layout to what you've seen in the past, not too much of a shock to students. It's a menu based system, so basically you've got a number of options here: you've got calculate, which is your bog standard screen, you've got complex numbers, which is useful if you're doing further maths, various base calculations, matrix for further maths. Vectors is useful for single maths, as is the statistics and the distribution section. There's also an equation solving function.
Now again, these are functions that were popped into the calculator that weren't actually required by the government or Ofqual, but are very useful for getting your students to check their calculations. They can solve inequalities, they can solve quadratic inequalities, they can solve polynomials and simultaneous equations, but as a general rule of thumb, if they make a mistake, or if they're not showing their working, they're not necessarily going to get full marks, so it's just a case of maybe introducing your students to these functions later on. They may not, they probably won't even realise they're in the calculators. A lot of people, when I first showed them this, were up in arms about the amount that's in these new calculators. Actually the majority of this stuff has been in a previous iteration of the Casio calculators, the 991, the S, the silver calculator which a lot of students have; that can solve equations, can solve simultaneous equations, can do matrices, and a lot of students just don't know this functionality is there until you show them. So it's just worthwhile getting them to do it manually and then, towards the exams and their preparation, walking them through the functions that are actually in the calculators. So we're going to concentrate on the distributions, how we might use the distribution functions within the calculator.Slide 36DistributionsSo in terms of the distributions, we have basically menu 7, so the golden rule is if in doubt, press the menu key that's near the top right hand side of the keyboard. When we get this we come to a number of options. We have the normal distribution, and I'm going to move onto option 2 here, the cumulative distribution, that's what we'd normally call the normal distribution, the inverse normal distribution, and the binomial probability distribution for calculating, for instance, the probability of 10 heads out of 20 throws.
The cumulative distribution is what's programmed in, or sorry, I shouldn't say programmed, is what's listed in the tables on the previous data sheets. The normal probability distribution is a bit of a strange one really. It's effectively giving you the heights at any point on the bell curve. Now in my experience I can't really find a question that would actually need just those heights. There was actually an Edexcel question from a couple of years ago where they said, on a normal distribution, find the probability that x equals 15. Now in some ways it was almost a trick question, because if x is exactly 15 then it has no thickness, therefore it has no area, therefore the probability should have just been zero, but I wouldn't want students to be confused looking up 15 in the probability distribution and actually coming up with an actual point on the line. It has no area, so therefore its probability is zero. If we use the down arrow key, that's one of the four keys just underneath the display, we can scroll down to the next window which gives us the binomial cumulative distribution, and the Poisson distribution and the Poisson cumulative distribution. Now Poisson is Further Maths. And we should say there is no inverse binomial distribution. Now when this is explained, a lot of people suddenly have a lightbulb moment when they realise why there are still binomial tables in the data book from Edexcel. So when it comes to hypothesis testing, where students are asked to look up which event corresponds with a probability of 5%, as Narsh was saying in the hypothesis testing, you can't look that up on a calculator. You could potentially create a table of events in your calculators, but not all the calculators actually have that, so that's why Edexcel has actually published an extract from the binomial tables, specifically to work with the hypothesis testing. They have said basically that you will only be able to use that table within hypothesis testing.
If it's just a standard calculation of a binomial probability or a cumulative probability, for example in 20 throws of a fair coin, what is the probability of up to and including ten heads, that won’t be information that you'll be able to look up in the table. So if you have your calculators with you, and unfortunately when this got translated through it's actually snarled up our presentation slightly, rather embarrassing, so I’m just going to put the whole thing on the screen: find the probability of exactly 3 heads out of 5 throws of a biased coin, given that the probability of a head is 0.7. So this is one of the examples that Narsh did before. So if you have your calculators, I’m going to leave this on the screen for a minute or so for you to work it through, just to check that you get the same answer. So if you press the menu button, and we go down to option 4 and we press the probability distribution, we don’t need to scroll down and do the cumulative distribution because that would be up to and including three heads out of five coins. Now once we've pressed the probability distribution we've got two options, we've got list and we’ve got variable. So if we wanted to create a list of events, so probability of 1 head, 2 heads, 3 heads, 4 heads, 5 heads etcetera, we can actually set that up as a list and get the calculator to calculate them all for us. We're going to use just a variable, just a single event probability. So when you use your calculator for the first time, basically it pops up this screen with zeros in. So we’re after the probability of 3 heads, so we need to put x equals 3. There's 5 trials in total so we set n to 5, and we’ve got basically the probability of the event as being 0.7.
When we press equals we should pop out a probability of 0.3087, and I’m just going to give people a minute or so to try and replicate that if you’ve got one of the Texas Instruments or one of the Casio calculators. So hopefully you’ve been able to give that a go and are not now screaming at the screen in anguish because I haven’t given you enough time, but this is it there in the presentation if you want to have another go at it later on. Interestingly, we were originally looking at maybe supporting the course with our existing S1 and S2 textbooks, and we found out something a little bit odd, that we were getting slightly different values when we were doing calculations, especially when there were several calculations that came together, and it was to do with rounding errors. When we were using a calculator we would get an accurate figure, and when we looked up in the back of the book for the answers we were getting a slightly different figure to what the textbook was doing. And it was because when you're looking something up in the tables, if your value doesn’t quite exist in the tables you might go with the nearest entry, and likewise with the normal distribution, look up the probability for the nearest listing to it. And we actually had some quite big rounding errors creeping in, so it’s just a word of caution if you're actually dealing with existing resources. So, the next one: find the probability of up to and including five heads out of 15 throws of a fair coin. So this is an up to and including question. So if we go into the menu we need to scroll down so we can actually find the binomial cumulative distribution.
So we don’t want the binomial probability distribution, if we put this data in it will give us the probability of exactly 5 heads out of 15, and we want up to and including, so it’s the binomial cumulative distribution. So if we press 1 on this one to go for the binomial cumulative distribution, we press 2 cause we’re just doing a single calculation rather than an actual list of results. What actually comes up in here is the information we might have entered last time round, so we sometimes need to overwrite this. We’ve actually set this now to 5 heads out of 15 events, a probability of 0.5, and this is the figure that pops out of the calculator, .1508789062. And that then leads to a conversation with your students about the level of accuracy they need; four significant figures is absolutely fine from the calculators. So I’m just going to leave that on the screen for about 30 seconds or so, so if you’ve got your calculators you can actually try to replicate that.

Slide 37: Normal distribution

OK, so hopefully you’ve been able to have a go at that. Just going to go onto the next screen and we’ll talk about some normal distribution calculations, how we might use the calculators. So here we've got x distributed normally with a mean of 50 and a standard deviation of 4, and we’re going to find the probability that x is less than 53. So previously under the old spec we'd need to code this, work out how many standard deviations off the mean it is, and actually look that up in a table. I am going to come to it in a second what students may need to show in terms of their working. So again, menu, it’s always the get out of jail free card, press the menu button. I’m going to go for the normal cumulative distribution; basically it comes set with an upper and a lower limit, currently set to zero, and a standard deviation of 1. Now here's the odd part.
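If you want a way to sanity-check both binomial answers away from the calculator, the two results above can be reproduced in a few lines of Python (standard library only; the helper names here are my own, not anything from the specification or the calculator):

```python
from math import comb

def binom_pmf(k, n, p):
    # P(X = k) for X ~ B(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    # P(X <= k): the pmf summed up to and including k
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

# Exactly 3 heads out of 5 throws of a biased coin, P(head) = 0.7
print(round(binom_pmf(3, 5, 0.7), 4))  # 0.3087
# Up to and including 5 heads out of 15 throws of a fair coin
print(binom_cdf(5, 15, 0.5))           # 0.15087890625
```

The same values come straight out of `scipy.stats.binom` if you have it installed; the point is just that the calculator figures are easy to verify independently.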
The odd part is when we're doing a lot of less than calculations in terms of the cumulative distributions, the calculator doesn’t do a straight less than calculation like the tables used to. So if we looked it up in the tables, 1 standard deviation above the mean, it would basically do everything less than 1 standard deviation, including the whole left hand side of the distribution. With the new calculators you actually need to set in two values, you need to set in an upper limit and a lower limit. So that might be useful if we’re doing a calculation where we’re trying to find the area between two values, say x greater than 53 and less than 54. But when we're doing an entire to the left of calculation, we need to set a lower limit that's sufficiently low to cover the entire left hand half of the distribution; equally, if we were doing greater than 53, we'd set a lower limit of 53 and an upper limit of several thousand. As long as there’s a sufficiently big value it doesn’t really matter; the only time when students might get caught out, just tell them to be careful when the standard deviation is high. If the standard deviation was, say, 100, then a lower limit that looks low might not really be at the extreme end of the distribution, and we'd actually get a bit of a spurious answer. So basically we set the upper limit to 53, the standard deviation to 4, and the mean to 50, and if you have a calculator to hand I’m going to let you just try and replicate that one for a second. OK. Hopefully you can hear me again. From this I get an answer of .7733726476. The interesting one is if you code this and look it up in the tables, you'll actually get a very slightly different answer to four significant figures, so it's just again one to be careful of if you're using existing resources.
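The same check works for the normal calculation; `math.erf` gives the normal cdf with no tables needed, and the sketch below also illustrates the upper/lower limit behaviour described above (the helper names are my own):

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # P(X <= x) for X ~ N(mu, sigma^2), via the error function
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def normal_between(lower, upper, mu, sigma):
    # Mimics the calculator: the area between a lower and an upper limit
    return normal_cdf(upper, mu, sigma) - normal_cdf(lower, mu, sigma)

# P(X < 53) for N(50, 4^2): the lower limit just needs to be far below the mean
print(normal_between(-1e9, 53, 50, 4))  # 0.7733726...
# A lower limit that is not low enough relative to sigma loses some of the area
print(normal_between(45, 53, 50, 4))
```

The second line prints a smaller area, which is exactly the "spurious answer" trap mentioned above when the lower limit isn't extreme enough.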
Slide 38: Inverse normal calculations

Inverse normal calculations: so if we have a distribution where the mean is 30 and the standard deviation is 5, find the value of a where the probability that x is greater than a is basically .4. Now in terms of the inverse distributions in the calculator we can only use a less than, one of the quirks of the new calculators. So we basically have to ask the calculator, what value goes along with an area up to and including 0.6. So we’re going to go for the inverse normal, and we can set the area as 0.6, the standard deviation as 5 and the mean as 30. Now if you’re looking for error checking in this one, we should get an answer greater than 30 because our area should include more than half of the distribution. And I’ll give you 30 seconds or so just to have a go at that. And we'll pop the answer up to put you out of your misery. So that basically gets an inverse value of 31.26, and again it's higher than 30, so we're in the right ballpark.

Slide 39: Storing answers

Now storing answers: so a student might want to actually take this material and work in the normal calculations mode, at which point they're going to have to press the menu key afterwards and press 1 to get back to the standard calculations. But they might want to store these answers, so what we can do is basically go through the same memory operations that we go through on our calculators, and if your students are anything like my students, all the way up to further mathematicians, I seem to have to remind them every five minutes how to store things in a calculator, and they will always ask me the question, how do I wipe it when I want to do another one, and I always just have to tell them, you just store something else over the top.
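There's no closed form for the inverse normal, but it can be checked with a simple bisection on the cdf; a rough Python sketch (standard library, my own helper names) that reproduces the 31.26 value above:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # P(X <= x) for X ~ N(mu, sigma^2)
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def inverse_normal(area, mu, sigma):
    # Find a with P(X <= a) = area, by bisection on the cdf
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    for _ in range(100):
        mid = (lo + hi) / 2
        if normal_cdf(mid, mu, sigma) < area:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# P(X > a) = 0.4 with mean 30, sd 5, so look up the area 0.6
a = inverse_normal(0.6, 30, 5)
print(a)  # 31.2667..., the value the slide quotes as 31.26
```

It also gives you the error check from the slide for free: the result is above 30 because the area covers more than half the distribution.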
So basically once we’ve pressed store we actually get a little icon appearing at the top here, and then we can press any of the buttons with an A, B, C, D, E, F to actually store it in that memory, and this value here will then be burned into the memory of their calculator, and if they want to go into one of the other modes of the calculator, they can continue doing sums with that.

Slide 40: Recalling answers

Recalling answers: I thought I’d actually include these slides in here as something you can actually do with your students to remind them. So we can press alpha and then A, just to recall anything that has been stored in A, which will pop that one back out. Or we can actually use the, oh, I should skip back one. We can actually use the recall function, so a second function, so shift and store to get recall and then the button with an A on it. I quite like using the alpha key, the red one, and A, so they can do a separate calculation. So they could type in 3A here and it would automatically triple the value of A. I quite like that functionality.

Slide 41: New statistics question

So just in terms of some of the statistics questions that they might get, we just need to look briefly at what new stuff is in the mark scheme. So here's a calculation: Helen models the number of hours of sunshine each day for the month of July at Heathrow by a normal distribution with a mean of 6.6 and a standard deviation of 3.7. Use Helen’s model to predict the number of days in July at Heathrow when the number of hours of sunshine is more than one standard deviation above the mean. 2 marks. Conceivably your students could actually pop this into a calculator and get .15865 out with no working. They could then multiply it by 31 in their calculator and get a final answer without writing anything down on the page. In some ways this is the drawback of the new calculators, that I don’t want them to undermine students' statistical thinking.
We should encourage our students to be writing something down, if only at the very least, if they've messed this up completely they would get 0 marks if nothing else was written on the page. If they have something in there, so we’ve said here use Helen’s model to predict the number of days in July when the number of hours of sunshine is more than a standard deviation above the mean, so the probability of h is greater than 10.3 represents more than one standard deviation above the mean, they're going to get a method mark. What I don’t want students to do is just type things into their calculator, be broadly correct in their method but actually mess the value up and get no marks, so we should still be encouraging our students to write information down on the page, it's good statistical thinking, it can help frame their thoughts and not just rely overly on banging things into a calculator. So the discrete random variable x is distributed binomially with 40 trials and a probability of .27 of success. Find the probability that x is greater than or equal to 16. So because of the nature of the cumulative distribution tables within the calculators for the binomial we're going to have to do, work out the probability that x is less than or equal to 15 and do a 1 minus calculation. Now my worry is some of my students in the pressure of an exam will just be banging that into a calculator and trying to write down an answer here, .0509. If they get it wrong they're going to get no marks. 
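Both mark-scheme examples above can be checked the same way; a short Python sketch (standard library only, my own helper names) of the Helen calculation and the 1 minus binomial calculation:

```python
from math import comb, erf, sqrt

def normal_cdf(x, mu, sigma):
    # P(X <= x) for X ~ N(mu, sigma^2)
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ B(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Helen's model: H ~ N(6.6, 3.7^2); more than one sd above the mean
p = 1 - normal_cdf(6.6 + 3.7, 6.6, 3.7)
print(p)       # 0.15865..., as quoted above
print(p * 31)  # expected number of July days, about 4.9

# X ~ B(40, 0.27): P(X >= 16) = 1 - P(X <= 15)
print(1 - binom_cdf(15, 40, 0.27))  # compare with the .0509 quoted above
```

This is a teacher-side cross-check, of course; in the exam the students still need the written statements (P(H > 10.3), 1 − P(X ≤ 15)) for the method marks.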
Now a correct method can be implied by a correct answer if there's nothing else there, but they're also running a real risk that basically if they've made a mistake, if there's nothing else there, they’re not going to get anything at all.

Slide 42: Working with the large data set

OK, so now we’re going to talk about some strategies for working with the large dataset.

Slide 43: General points

So my general advice, because I’ve been playing with this with my classes for a number of months, is don’t try and upload the entire large data set into Geogebra or Autograph. I’ve made these mistakes so you don’t have to. Autograph and Geogebra just can’t cope with multiple page spreadsheets. Copy and paste any data you need, so general advice is if you're going to be doing any filtering, any data selection, any treating of the data, do it all in Excel first and then copy and paste it if you're going to use Geogebra or Autograph to plot graphs. Like we said, sample your data in Excel first before pasting it into graphing programmes; you can use filters in Excel, I’m going to show you a little bit about that in a sec, I’ve got a couple of exciting videos for you to watch. My advice is be careful when you're plotting histograms. Geogebra doesn’t seem to cope well with histograms, it seems to plot frequency. There are options in Autograph, if you get your students to plot histograms, which you can turn to frequency density, but they can be a little bit dicey. It's one of the more complicated options. So what we're going to talk about now is ways in which you can work with the large database. So hopefully the first slide in the presentation, your starter for ten, gave you an idea that even with just a few short questions, it's actually very quick and easy to get used to working with the data. Ten questions, just like that, giving them to your students, they can actually get quite familiar with the dataset and be asking the right questions. What does KN mean?
What is this a measure of? You know, looking on the front sheet in terms of how is rainfall measured, how is wind speed measured, how do we measure snow, what does trace mean in the data? These are the kind of questions that it can take very little time at all to get your students familiar with the database. Kind of my big message here is not to panic. It's not, the intention is that this isn't something that you're going to have to devote hours and hours and hours and hours of lessons to just to get them through the course. The idea of a large database is that it should support the course and be used when appropriate. It is an excellent teaching tool. Someone asked me a few weeks back of when I was planning to use the large database. They said are you going to be using it in the autumn term, are you, first half, second half, in the spring term? Now I asked a question in a slightly different way, what's wrong with using it in entirely different year groups? Some of the tasks we're going to go through this evening, I actually did it with my year nines and they coped absolutely fine with it. Now you could use it as a teaching tool, if you’re not a college, if you have a sixth form attached to your school there's absolutely nothing wrong with you using the large database all the way through so they're entirely familiar with it when they come to the sixth form, and you don’t have to spend time familiarising them with the database and you can just get on with using it as a statistical tool. Slide 44Random samplingSo I’m just going to move on a little bit, now if we could have the random sampling video, we actually changed this, it did originally say insert random video here. So I’m just going to press play and talk you through this video where I’ve looked at how you might want to do some random sampling with your students. So on this first page here, this is where we've actually got all the information. 
This is where a lot of the information that was on the starter for ten was actually contained. I’m just going to keep going with the presentation. We can scroll down and look at the instructions of how data is recorded, so this is where your students can actually get a lot of that information from. We've got a map with the locations around the UK, and the locations around the world. And along the bottom we've got various sheets, basically one for every area of the UK and internationally, and also for 1987 to 2015. We've got an absolute ton of data to go at. So we're going to look at Camborne from October 2015, and up at the top here, just going to pause the video for a second, just bear with me, scroll back a little bit, OK, at the top here, that's where the information was on the actual altitude of where your locations are, so we asked which is the highest one, or which is the, when we look on a map which is furthest south and which is furthest west, we've also got grid references and we've got altitude. Now this is actually a little video about how you might do some random sampling. So one of the first points is, well you could generate random numbers and spend a lot of time putting numbers in very very tediously, one by one, but actually we’ve got a lot of tools in Excel that can do this for us, I’m just going to press play. So the first step, and if you want to get your students to do some random sampling, is to insert a column, insert a second column, and what we're going to do, we're going to number these rows on the left hand side, and then we're going to use a random number generator, here's the short way to do it, if you just do the first four, what you can actually do is just drag the bottom side of it and use it to generate numbers for the entire row. 
Now what we can do is use Google to actually generate some random numbers and I’m just going to press pause for a second, oh no, wait, it's going to go and do it on this one, so if you literally type in random number generator into Google, it is the first thing to come up, you can set in a max, so for however many rows we’ve actually got on the spreadsheet, you can set a minimum, so 1, and you can just tell it to generate random numbers. Now when my students did this they were generating a random number, then going back to the spreadsheet and putting in an x by the side of the number. We're going to use filters in a second to just pull all our data together, doing it one by one by one. If you want to generate a sample of 30 it's far more efficient, again we've made this mistake so you don’t have to, is to just get Google here to generate 30 random numbers, write them down on a piece of paper, then go back to the spreadsheet and tick off the ones that you're interested in. So this is where we're generating the random numbers, so out of 184, 39, 96, 154, my lottery numbers are not going to come up tonight. So now what we can do is actually put crosses by the ones that we're interested in. Now remember that the emphasis in this specification is on interpreting data rather than spending too much time drawing tedious graphs. Now I’ve got no problem with students drawing graphs but not too much of it when it actually gets in the way of the interpretation. So what we're going to do after this section is show you some hypotheses that you can get your students to work through and find evidence to support or evidence that does not support the hypothesis. 
And what we'd like to do is make sure the database does not get in the way of interpreting the data, that drawing the graphs doesn’t get in the way of interpreting the actual data. So I’ll just press play: so we've selected an amount of data using the random number generators, and now we're going to highlight the area of the database I’m interested in, I’m going to scroll all the way down to the bottom. And I’m going to use a feature in Excel basically called filters. And what I’m going to do is filter that left-hand column for all the rows that have got an x in it, and there is my data, ready to go, good to go. So if we go into the data menu, click on filters there at the top, we can click on the drop down menu there, we can untick select all and we can just select the xs, and there we have all the data gathered together. OK, and that's the end of that video, can we move back to the presentation please?

Slide 45: When was the best time to go on holiday in 2015?

Waiting for it to load up, so, moving onto the next slide. So you could ask your students, and we've got a series of these slides, there's nothing wrong with you just showing these slides to your students and asking them to go away and investigate. So question or hypothesis: when was the best time to go on holiday in 2015? This is actually, I should say, a question rather than a hypothesis. Get your students to choose a location from the UK: Camborne, Heathrow, Hurn, Leeming, Leuchars. My advice, from teaching the GCSE stats course, is that students have a nasty tendency to massively overcomplicate things. They might select several of these and investigate several strands of data and tie themselves in a knot.
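Incidentally, the whole random-sampling workflow from the video (number the rows, generate 30 distinct random numbers, mark them, filter) can be sketched in a few lines of Python as a cross-check; the 184 rows here just stand in for one location's sheet, and the names are illustrative only:

```python
import random

# Stand-in for one sheet of the large data set: row numbers 1 to 184
rows = list(range(1, 185))

# Draw all 30 random row numbers at once, with no repeats,
# rather than generating them one at a time and ticking them off
random.seed(1)  # fixed seed so the class can reproduce the sample
chosen = set(random.sample(rows, 30))

# The "filter for the rows marked with an x" step
sample = [r for r in rows if r in chosen]
print(len(sample))  # 30
```

`random.sample` draws without replacement, which is exactly the property you otherwise have to enforce by hand when students generate numbers one at a time in Google.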
So the key point is, and we're asking when's the best time to go on holiday, to choose a location initially, and choose how you’re going to investigate that. You can discuss with your students, well, how are we going to decide when is the best time to go on holiday? Some of them might say it's when there's the least rainfall; that's absolutely fine to go and investigate. Some might say, I like to go on holiday when it's warmest. So we could get our students to highlight the date column and the temperature column, and we could plot a time series graph, so in this case here we're not doing any sampling, we're just going to go with the census. Get your students to plot a line graph, get your students to insert a title and axis labels in Microsoft Excel, get them to discuss how they could improve the graph. Excel isn’t brilliant at plotting graphs, and at this stage here I haven’t got too much of an issue with that, but if we were doing professionally-produced graphs we'd be looking at using some more expensive tools perhaps; in terms of interpreting the data, it's absolutely fine in Excel. Get the students to copy and paste the graph into Word, and get them to write up a conclusion in relation to the question. So we could maybe get them to extend it: is there a pattern across the UK? So we could get them to maybe look at the other locations, so do we have a common weather pattern across the UK, and if we arrive at a conclusion about the best time to go on holiday in 2015, is there a similar pattern in 1987?

Slide 46: Graph 1

So if we do that we end up with quite a hideous looking graph here, and well, OK, we can maybe get a bit of a conclusion, sorry you haven’t quite got the detail here, but about June, July time is the best time to go on holiday in the UK, for Camborne. But it's a bit of a messy graph, so maybe we should look at whether that pattern is consistent around the UK.
Slide 47Graph 2And we end up with an absolute mess of a graph, and no one can conclude anything and it's pretty meaningless, so the question for you is, how we could make the trend clearer? Now there's nothing wrong with this in terms of thinking statistically, bring in small items from outside the course. One of the things we could introduce is a moving average to smooth out the peaks and troughs. It's not something that's in the GCSE any more but it literally takes us five minutes to produce and it's quite easy to do on Microsoft Excel.Slide 48Patterns are clearer with a moving averageSo here's a moving average for Camborne, and you can see we actually get a bit of better idea of what the trend is doing. Slide 49Is there a common pattern across the UK?So what we could do is look at that across the UK, do a moving average for all the locations around the UK, and we arrive at the conclusion bizarrely enough that the best place to go on holiday in the UK was Heathrow at the end of June, so that's where you should all book your holidays to next year. Next up was Leeming in Yorkshire, so we could then, this is quite a nice graph to discuss with the students. So I had this discussion with my year nines, and they came to the conclusion that actually these lines, broadly speaking, follow the same trends, so therefore broadly speaking we actually have a general weather pattern for the UK. Now that's quite high level thinking for a year nine, but it's something that when I chatted about it with my sixth formers, they actually saw it fairly quickly. We shouldn’t underestimate how intuitively our students can think statistically, it was quite a nice thought to come out of them that if it’s warmer down south it’s warmer up north generally speaking, we might find some exceptions but we have a broad UK weather pattern. 
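The moving average itself is only a few lines of work anywhere, not just in Excel; a minimal Python sketch, assuming a 7-day window like the graphs above and using made-up temperatures:

```python
def moving_average(values, window=7):
    # Average of each run of `window` consecutive values,
    # smoothing out the daily peaks and troughs
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Hypothetical daily maximum temperatures for a fortnight
temps = [14, 16, 21, 13, 15, 18, 20, 22, 17, 16, 19, 21, 18, 20]
print(moving_average(temps))  # 8 smoothed values from 14 days
```

Note the smoothed series is shorter than the raw one by window − 1 days, which is worth pointing out to students when they compare the graphs.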
Slide 502015 daily max temperature 7 day averageSo what about different years, so this peak here is, we could then ask our students to go on and investigate well is the end of June always the best time to go on holiday in the UK, and we could get them to plot the data for 1987. And we do have a bit of a peak here in 1987, but we’ve also got some peaks in August, which is probably the rarity these days, rather than the rule. So there's lots of nice discussions that they could come up with on this. Get them to write up four or five bullet points, keep it short, keep it sweet, in my experience when they’re doing the GCSE statistics course which we've done for a good few years here, which is that if they try and write too much they will tie themselves in knots and confuse themselves. Get them to compare things directly and just write four or five quality bullet points.Slide 51Comparing distributionsSo one of the things we need to be able to do is to compare distributions. So we could ask students to investigate where's the sunniest place to live in the UK, and what we could get them to do is copy the total hours on sunshine data from two of the UK locations onto a new spreadsheet and get them to work with that data in Geogebra. The nice thing about Geogebra is that it plots very nice box and whisker plots, and plots outliers for us. So first things first, get them to open Geogebra, select the spreadsheet option, paste the data into the Geogebra spreadsheet and check that you’ve got more than 100 rows. Again, I’ve made mistakes so you don’t have to. Bizarrely when you copy and paste Excel data into Geogebra sometimes it doesn’t paste more than 100 rows, so it's worthwhile just checking, scrolling down that you've got more than 100 pieces of data. Get your students to highlight the columns and create box and whisker plots, and again there's nothing wrong with putting this entire slide on the board as a task for your students. 
Export the box plots as a picture and insert it into Word, and then write some conclusions.

Slide 52: Total hours of sunshine

Now we’ve actually got a little bit of a video coming in a second, so this is where I did a screengrab from Geogebra. You'll have to excuse us, when this presentation was translated onto the online system it’s made a bit of a mess of these boxes here. In my experience, changing the labels on Geogebra is a bit of a pain, so it's best just to do a screengrab, put it into Microsoft Word or Microsoft Excel, and insert text boxes to actually relabel the actual locations around the UK. So in terms of hours of sunshine, it's quite nice that Geogebra automatically includes outlier tests. Now, can we run the Geogebra video for a second please?

Slide 53: Video

Oh yes, my fault, user idiot error, I should have pressed play, sorry. So if we're going to be looking at Leuchars, basically we can go along to the Leuchars data sheet. I’ve already basically set these data up from the other sheet, ready in a second spreadsheet; it's always nice to have a separate working spreadsheet to gather your data together in. If you're trying to collate data from different years, set yourself up a separate spreadsheet, gather it together in one, and use Excel to paste that into Geogebra. So I’ve got to now go into Leuchars, I’m going to scroll across the bottom of the screen. There we go, Leuchars 2015, apologies if I’m saying that wrong. I’ve already got the column highlighted, do that again for good measure. And you could go into the home menu and copy and paste, or you could right click and go to copy. I’m going to right click here, and I’m going to click paste. Now I’m going to select all of this data, so I’ve finally gathered all of my data from the different locations together.
I’m going to copy that and I’m going to paste it into Geogebra, so if we go into the spreadsheet function. Basically I can clear that box in the middle of the screen, I don’t really need that. I can click on here and I can right click and I can paste the data in. There we go, there's all the data, and it's always worthwhile scrolling down just to check that you’ve got more than 100 pieces of data, so we're going to do that now, and hallelujah, this time around it pasted it in. I get it about one time in five, it didn’t paste all the data. So now we're going to go into the graphing option, so what we want here is multivariable analysis, so what we've got to do now is highlight a number of the columns. And there’s nothing there so we're going to highlight a number of the columns, select multivariable analysis and we're going to go for a box and whisker plot. There we go, there's our box and whisker plot with our outlier down here, there’s our outlier nicely tested. Now if we want to export this we've got a number of options, maybe you could do a print screen and a screen grab, you can actually say export this image in the settings and the options. We can also in the options turn on and off the outliers. We can look at summary statistics from it. And I actually may just move on slightly just to speed things up so we can get you all finished on time. And this is where I’ve copied and pasted it into Microsoft Word, and we're inserting a box over the top to basically relabel them. OK, can we move back to the presentation please?Slide 54Categorical dataSuper, thank you for that. OK. So looking at categorical data, so tasks that you could do with categorical data with your students, so we could do a hypothesis, the prevailing wind of the UK is from the south west, and we could get our students to count the frequency of north, north north east, north east etcetera for one of the UK locations, and ask them to plot a suitable graph to see if there's any prevailing wind. 
You could use filters to help you count up the data or get the data in order. And if you've got more advanced students who are fairly competent on Microsoft Excel, maybe we could use count statements to count the frequency, so we're not having to count the frequency of these things by hand, and we could get our students to plot a graph.

Slide 55: Wind direction: Pie chart

So I got my students to select an appropriate graph for categorical data in Excel, and it was an absolute car crash because they chose a pie chart, and it was an absolute mess, it was horrible. It was completely misleading because east was pointing somewhere up to the right hand side, west was pointing down to the left, it was absolutely horrible. It was a nice learning point cause I could then get the students to discuss what would be a better graph, and they came up with the conclusion of a bar chart.

Slide 56: Wind direction: Bar chart

It’s not a bad way to do it, but you've got to appreciate that it does wrap round 360 degrees; it did allow them to see the frequency of the actual wind direction, the principal mean direction. And they did come to the conclusion that for Camborne in 2015 we had a bit of a bimodal distribution, at the very least we could describe it as that, and that the prevailing wind for Camborne seemed to be somewhere around north north west, sorry, west north west, and north north west. Now I had students asking, what about this bar? Well there's a lot of things that come up in these that give you a nice opportunity for discussion, so it might be that there was some physical structure blocking the wind coming from a certain direction in Camborne, maybe there was a hill behind the weather station; you have some absolutely lovely discussions coming up on this one.
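For the counting step, Excel's COUNTIF is one route; the equivalent in Python is `collections.Counter`, shown here with made-up direction data as a quick way to check your students' tallies:

```python
from collections import Counter

# Hypothetical daily mean wind directions for part of a month
directions = ["SW", "W", "WNW", "SW", "NNW", "W", "SW", "WNW",
              "NNW", "SW", "E", "WNW", "NNW", "SW"]

# One COUNTIF-style tally per compass category
freq = Counter(directions)
for direction, count in freq.most_common():
    print(direction, count)  # SW is the mode of this made-up sample
```

`most_common()` sorts the categories by frequency, which hands students the mode for free before they even plot the bar chart.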
So extension: does the prevailing wind vary around the UK, does it vary across the years?

Slide 57: Bivariate data

Maybe we could look at some bivariate data with your students. Hypothesis: increased cloud cover results in fewer hours of sunshine. We could ask our students to test whether there's a correlation by plotting a scatter graph of the daily mean total cloud cover against the daily total hours of sunshine. With a hint: Excel will only use the left hand column for your x axis, so you may need to adjust your spreadsheet before plotting graphs. You might need a second, working spreadsheet to get these things in the right order before you actually plot them out in Excel. And just get your students to remember to add axis labels and a title.

Slide 58: Cloud cover graph

So what does that look like? Well, cloud cover is measured in oktas, which was a bit of a strange one in the database. Oktas is a bit of an aviation term, that's where I’ve come across it in the past, and it basically relates to eighths of the sky being covered. So eight oktas is the entire sky covered in cloud, four oktas is half covered, two oktas is a quarter covered. And, classic Excel, it's given me a graph with an axis that goes below zero, but we could adjust that. So we can get our students to plot this. Considering this is real world data, I don’t think this is a bad graph, it's quite nice, and we could talk to our students about why the points are in columns. Well, oktas is discrete data, it only comes in eighths, so logically it should come out like this. If you want to extend this you could maybe get your students to look at calculating the regression line and using it to make a prediction.
The issue with the regression line is that we could potentially extend it beyond eight oktas and come up with a nonsense prediction, but that's a nice classic exam question: why can’t we use the regression line to extrapolate outside of the database?

Slide 59: Hypothesis: more cloud cover means higher rainfall

We could ask them to test a different hypothesis, maybe more cloud cover means higher rainfall. These are all just mini tasks that you can get your students to do. When my students first looked at this, they said, well, we can’t really conclude much because there's a lot of data here, and it was the students who actually came up with the suggestion: maybe we should cleanse the data a little bit, maybe we should only include days where there is significant rainfall.

Slide 60: Adjusted hypothesis: on days where there is significant rainfall higher cloud cover means there is more rain

So they came to the conclusion that maybe we should test an adjusted hypothesis: on days where there is significant rainfall, higher cloud cover means that there is more rain. Basically they filtered to days with rainfall above 2 millimetres. The general conclusion is that data is messy, this isn’t your classic nice-fit graph, but that doesn't mean we can't use it to draw the conclusion that, broadly speaking, when there is more cloud cover there is a higher incidence of rainfall. You’re going to find this with the students. Now in terms of how long it took to do these tasks: I set my students about 20 minutes to work with the starter for ten that you had at the start of this presentation, and we then talked about that for 10 or 15 minutes. I then set them a task within lessons to look at filtering the data and taking a sample, then gave them homework where they produced the graphs and wrote up their conclusions, and then half an hour the next lesson to discuss their conclusions.
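The cleansing step the students proposed can be sketched directly. The (cloud cover, rainfall) pairs below are invented rows, not the real data set; the 2 millimetre threshold is the one from the adjusted hypothesis.

```python
# Hypothetical (cloud cover in oktas, rainfall in mm) pairs --
# invented rows, not the real data set.
days = [(2, 0.0), (3, 0.2), (8, 6.4), (7, 3.1), (5, 0.4),
        (6, 2.5), (8, 9.8), (4, 2.2), (7, 5.0), (1, 0.0)]

# Cleanse as the students suggested: keep only days with
# significant rainfall, taken as more than 2 mm.
wet = [(c, r) for c, r in days if r > 2]

# Compare mean rainfall on cloudier wet days (7+ oktas) with the rest.
cloudy = [r for c, r in wet if c >= 7]
clearer = [r for c, r in wet if c < 7]
print(sum(cloudy) / len(cloudy), sum(clearer) / len(clearer))
```

With messy real data the gap between the two means will be far less tidy than in this toy example, which is exactly the discussion point the slide is after.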
If you've got a weaker group, maybe you could look at just getting them to produce graphs at home and then discussing them in the next lesson before they write up their conclusions. But it took less than two hours to get quite a good task done with a group of about 20 students. When it came to the second and third tasks I didn’t actually need to do any preparation work, I could just set them as a homework task: right, ladies and gentlemen, go and investigate whether, on days when there is significant rainfall, higher cloud cover means that there is more rain, and get them to go away and discuss it.

Slide 61: Example questions

Now we're going to be hanging around for a little while, and this is pretty much our last slide, so I’m just going to quickly go through a couple of sample exam questions. Here's an example: from your knowledge of the large data set, explain why the process might not generate a sample of 20 when doing systematic sampling. Now if your students have been working with these tasks, when they look at some of the data they'll see that some of it is missing, so it may not generate a sample of 20. A lot of the questions can be set in the context of the data set but don’t necessarily rely on the information: this first part here doesn’t actually rely on information from the data set, only the last part does. And the idea is that students who are familiar with the data set will understand that this is in the context of the data set, and won't panic: there's actually no information they need to extract from the data set. The students never need to memorise something, but they'll be asked questions in the context of the data set where being familiar with it will make their life more straightforward.

Slide 62: Free support

Now just as a final slide, we're going to go on to some free support and resources. You've got schemes of work, course planners and content mapping documentation on the Edexcel website.
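The systematic sampling point in that example question can be illustrated with a short sketch. The column below is fabricated, with None standing in for missing readings such as the data set's blank or "n/a" entries:

```python
# Hypothetical data column of 180 rows with periodic gaps (None marks
# a missing reading) -- invented values, not the real data set.
column = [5.0, None, 7.5, None, 6.0, 8.0] * 30

# Systematic sample: take every k-th row, aiming for a sample of 20.
k = len(column) // 20
sample = [column[i] for i in range(0, len(column), k)]

# Missing readings mean the usable sample falls short of 20.
usable = [x for x in sample if x is not None]
print(len(sample), len(usable))
```

The sampling process selects 20 rows, but some of the selected rows have no reading, so the sample that can actually be used is smaller, which is exactly the answer the exam question is looking for.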
We’ve got topic based resources to use in the classroom, particularly based around some of the new schemes of work, including the online scheme of work which is free for everybody. There'll be specimen and secure mock assessment papers for your students, questions will be divided up onto ExamWizard and ResultsPlus, and we'll also be doing a further series of pre-recorded events and training events as the year progresses, just to make sure that you’re fully supported in delivering the new specification. That’s it from myself and Narsh, and I’m quite proud of the fact that we’ve managed to hit 18.30 pretty much bang on. So we’re going to be hanging around for a little …