Talk:Law of large numbers/Archive 2

764:
It could literally be anything between 0 and 1. In other words, you have no way of knowing, after any finite number of trials, that you are in fact anywhere near the expectation value of the random variable. It could be that, today, you encounter the unfortunately weird scenario in which the first 1 million trials give you H. You *should not* conclude, after any one of those trials, that the expectation value *is actually* 1, or even near 1. Likewise, if you happen to get a sample mean of 0.5 even after 3.7 trillion trials, you cannot guarantee me that the coin is truly fair. What is true is that your sample mean is *probably* getting closer to the expectation value. The more trials you do, the more I'd be willing to bet on your sample average as being near the expectation value. But no matter how many trials you sample, your sample mean can, in a worst case scenario (such as a distribution for which the random variable can take on any real number), be arbitrarily far off from the expectation value. The LLN gives us comfort that, "most of the time," our intuition is sound to believe that, by some reasonably large number of trials, we're in the vicinity of the expectation value. But it seems that many people who are not familiar with the technicalities of "almost sure convergence" will be misled by the "apples" example to think that it is the same as "sure convergence," or that those who are not familiar with convergence in metric spaces in general will be misled to think that the first n points in a sequence tell us anything *for certain* about where the sequence is going.
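A minimal simulation sketch of this point; the seed, number of flips, and checkpoints below are arbitrary illustrative choices, not anything taken from the discussion:

```python
# Running sample mean of fair-coin flips (H counted as 1, T as 0).
import random

random.seed(1)
n_flips = 10_000
heads = 0
for i in range(1, n_flips + 1):
    heads += random.random() < 0.5          # True counts as 1
    if i in (10, 100, 1_000, 10_000):
        print(f"after {i:>6} flips: sample mean = {heads / i:.4f}")
# The running mean typically drifts toward 0.5, but no finite prefix
# guarantees closeness -- only the probability of a large deviation shrinks.
```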
1862:
which a coin flip operates. We have gravity. We have mass. But really, it is a thought experiment. There can be only four possible alternatives: 1) The coin turns up heads. 2) The coin turns up tails. 3) The coin lands on its edge. 4) The coin vanishes into thin air when tossed. Ok, clearly the last is not reasonable. Coins actually CAN come to rest on their edges and sometimes do, but for this experiment we PRESUME that there are only two final outcomes -- heads or tails -- even if the coin lands on its edge it falls one way or the other. No other outcomes are allowed to be thought about. Assuming (this is a thought experiment) that the coin is "fair" -- that means that the coin is perfectly round and has no weight anomalies that favor landing on one side or the other -- any toss may result in either a head or a tail. Since there are only two possible outcomes and neither outcome is favored by the coin, it will "choose" to fall one way sometimes and the other way other times, in a fashion that is entirely random. On average, this randomness will not favor either side but will be a perfect split between the two choices -- 50%. --
955:
size can be almost any positive number and CLT still works -- if the number of samples is large enough). It addresses how the means of samples of ANY size follow a normal distribution. It is the Law of Large Numbers that addresses sample size. Of course the LLN also addresses variance, but not really directly. In the same way, CLT addresses sample size, but not directly. I think that this distinction is why these two things are called the first and second fundamental theorems of probability: they work together. LLN describes sample size and CLT describes the distribution of possible sampling results around the true mean. These work together, and the results are stronger when you have both large sample size and large numbers of samples. But the normality of the distributions follows the number of samples more than it follows the sample size of the samples. In addition, CLT was preceded by LLN both historically and logically. CLT is, in a sense, a refinement of LLN in that it takes one very large sample and "looks" at what happens when it is divided into a number of smaller samples.
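An illustrative sketch of that "many smaller samples" picture; the exponential population, the sample size of 30, and the 5,000 repetitions are arbitrary assumptions:

```python
# Distribution of sample means drawn from a skewed (exponential) population.
import random, statistics

random.seed(0)
sample_size = 30        # size of each sample
num_samples = 5_000     # how many sample means we collect
means = [statistics.fmean(random.expovariate(1.0) for _ in range(sample_size))
         for _ in range(num_samples)]

print("mean of sample means:", round(statistics.fmean(means), 3))  # near 1.0, the population mean
print("sd of sample means:  ", round(statistics.stdev(means), 3))  # near 1/sqrt(30), about 0.18
```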
208:
conclusion from analysis under the law of large numbers. But that is not exactly what the law of large numbers says when we examine the equations in the article. They do not directly refer to the rate of convergence and, in fact, I can find no source that discusses rate of convergence as an intrinsic part of the Law of Large Numbers; rather, all sources simply point out that as the numbers get larger, they get closer to the population mean. It is discussed as a limit as the sample size approaches infinity. As Bernoulli (who first described the law of large numbers) said: Even the stupidest man knows by some instinct of nature per se and by no previous instruction that the greater the number of confirming observations, the surer the conjecture.
1262:
provide a probability. However, the LLN certainly considers variance. Look at the assumptions. It assumes a population with a fixed mean and a fixed variance. Consider: As N increases toward infinity, the sample variance will approach the expected variance asymptotically. If you were to know, in advance, the population variance, you could use that information to provide a measure of closeness, if not exactly probability. I do not think anyone tries to do such analysis in depth because the research has gone in the direction of the CLT, but I believe it would have been a natural progression had the CLT not been discovered.
803:
especially as sample sizes get larger." I think it would be more accurate to say that LLN arises out of CLT rather than the other way around. CLT gives the mean, the variance, and the distribution of sample means. Because the variance of that distribution collapses as the sample size grows larger, it follows that the sample mean converges to a number (the population mean) in probability. This is the LLN. So LLN is a result that can be obtained from the CLT rather than the other way around. Of course there have to be some assumptions for this to work, for example finite variance etc. I am correcting the article accordingly.
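A sketch of that derivation, assuming i.i.d. observations with mean $\mu$ and finite variance $\sigma^2$ (this only yields the weak law; the strong law needs more work):

$$\frac{\bar X_n - \mu}{\sigma/\sqrt{n}} \xrightarrow{\;d\;} N(0,1)
\quad\Longrightarrow\quad
P\bigl(|\bar X_n - \mu| > \varepsilon\bigr)
= P\!\left(\frac{|\bar X_n - \mu|}{\sigma/\sqrt{n}} > \frac{\varepsilon\sqrt{n}}{\sigma}\right) \longrightarrow 0
\quad\text{for every } \varepsilon > 0,$$

i.e. $\bar X_n \to \mu$ in probability.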
303:
result and in practice can be used for samples that exceed, say, 100, without worrying about convergence. For example, if you are trying to estimate a binomial distribution mean, and the population mean is 0.5, then a sample of 100 would give the standard deviation of the sample mean as sqrt(0.5 * 0.5 / 100) = 0.05. So the probability that the sample mean will lie outside 0.4 to 0.6 (an interval 4 sd long) is about 5%. I suppose whether you regard that interval as sufficiently convergent to 0.50 is a matter of taste. If you increase the sample size to 10,000, the standard deviation falls to 0.005 and the interval shrinks to 0.49 to 0.51.
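A quick sketch checking those figures; the formula and the simulation parameters below are just one way to do it:

```python
# Standard error of a binomial sample mean, by formula and by simulation.
import math, random, statistics

def se(p, n):
    return math.sqrt(p * (1 - p) / n)

print(se(0.5, 100))      # 0.05
print(se(0.5, 10_000))   # 0.005

random.seed(2)
means = [sum(random.random() < 0.5 for _ in range(100)) / 100 for _ in range(2_000)]
print(round(statistics.stdev(means), 3))   # close to 0.05
```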
1146:
really think about "furthermore" and instead see it this way: the CLT says that "a large enough sample can give you the mean and variance of the population in a normally distributed response". All complete. You see CLT as including LLN and I see CLT as additive to LLN. Perhaps this is like a comparison of the set of counting integers with the set of rational numbers. The rational numbers may include the counting integers, while the counting integers do not include the rational numbers, but they do form the foundation for the rational numbers. I could go on, but this is long enough for now. --
998:
saying "any sum of many independent identically-distributed random variables". Note the use of the word "many". 3) Also go down to the proofs of CLT and you will see they are for n approaching infinity. 4) As a simple example, consider a binomial sample of heads = 0 and tails = 1 of size just 2. The probability distribution for the mean will be 0 with probability 1/4, 0.5 with probability 1/2, and 1 with probability 1/4. This certainly isn't a normal distribution. 5) So saying "sample size can be almost any positive number and CLT still works" is not right. Regards,
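A small sketch of point 4, enumerating the exact distribution of the mean of two fair coin values (H = 0, T = 1):

```python
# Exact distribution of the sample mean for a Bernoulli(1/2) sample of size 2.
from itertools import product
from collections import Counter
from fractions import Fraction

dist = Counter()
for outcome in product([0, 1], repeat=2):          # 4 equally likely outcomes
    dist[Fraction(sum(outcome), 2)] += Fraction(1, 4)

for mean, prob in sorted(dist.items()):
    print(f"mean {float(mean):.1f}  probability {prob}")
# mean 0.0 with probability 1/4, 0.5 with 1/2, 1.0 with 1/4 -- clearly not normal
```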
692:
approaches the population average precisely because of the LACK OF INDEPENDENCE. But the LLN relies very heavily on the assumption of independence. Picture sampling apples WITH REPLACEMENT, so that trials are independent. Maybe only 100 apples are there, but the sample size may be 1 million. As the sample size approaches infinity, the sample average approaches the population average, and that has nothing at all to do with whether anything near the whole population is represented when the sample size is big enough.
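A hedged sketch of that with-replacement picture; the 100 made-up apple weights and the sample of 1,000,000 draws are arbitrary illustrative choices:

```python
# Sampling WITH replacement from a small population, with a sample far larger than the population.
import random, statistics

random.seed(3)
population = [random.uniform(100, 200) for _ in range(100)]    # 100 invented apple weights (grams)
pop_mean = statistics.fmean(population)

sample = random.choices(population, k=1_000_000)               # independent draws, n >> population size
print(round(pop_mean, 2), round(statistics.fmean(sample), 2))  # the two means differ only slightly
```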
1061:
some odd reason) not sure if your sample will be normally distributed, or if you are looking at the practical implications of the range in a hypothesis test. However, with regard to the Central Limit Theorem describing how the distribution of sample means will be normal, how does the distribution of the means get affected by the size of the samples that create those means? There may be some cross talk but essentially the normal distribution will occur in the sample means whether you use sample sizes of 2 or 200.
922:
"CLT does not say that as the sample size increases, the estimate of the mean improves." That is not right. The standard deviation of the normal distribution given by CLT decreases at the rate of the square root of the sample size (and so the variance decreases at the rate of the sample size). Hence the estimate of the mean becomes more accurate as the sample grows larger. In the CLT, as the sample size approaches infinity, the variance approaches zero and the sample mean converges in probability to the population mean. This is the LLN. Regards,
243:
the whole population -- as long as the population is rather large. So I would change the example from 100 apples to 1000 apples and then point out that, while a sample of 999 would be the most accurate partial sample, a sample of only 40 or 50 will typically suffice under conditions where we accept the risk of certain errors. It is not that this sample size is expressly described by the Law of Large Numbers, but rather that it is a corollary in most cases.
80:
the population average. It suggests that LLN works because 99% of the apples have been counted, which is not correct. The correct example to illustrate LLN would be, say: "If you took a sample of 1,000,000 apples out of 100,000,000 apples, the average would be almost exactly the same as the average for all 100,000,000 apples." That is, even though 99% of the apples have not been counted, the sample average will still be very close to the population average.
2380:
"Theoretical and experimental probabilities are linked by the Law of Large Numbers. This law states that if an experiment is repeated numerous times, the relative frequency, or experimental probability, of an outcome will tend to be close to the theoretical probability of that outcome. Here the relative frequency is the quotient of the number of times an outcome occurs divided by the number of times the experiment was performed." (
2675:
explicitly. Secondly, one will see "LLN" used when explaining the phenomenon whereby the rate of increase tends to slow as the numbers get larger. Since it appears my two examples of usage are not addressed in the current article, a lay person is left scratching their head. I'm not suggesting these particular usages are proper, but their discussion, if only to explain their incorrectness, would help the average user.---
1677:
large numbers. Because, as with the roll of the dice, the law of large numbers says that each side will get, on average, very close to its fair share of events given enough rolls. Thus, with enough trials, something with a probability of .000000001 may still be likely to occur at least once or more often. So, though the lottery may pay off with a probability of .0000001, it will almost certainly happen if enough people buy tickets. --
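A hedged worked example of that point; the single-ticket probability and the number of tickets below are made-up figures:

```python
# Chance that at least one of n independent tickets wins, each with probability p.
p = 1e-7          # invented probability that a single ticket wins
n = 50_000_000    # invented number of tickets sold
p_at_least_one = 1 - (1 - p) ** n
print(round(p_at_least_one, 3))   # about 0.993: an individually rare event becomes near-certain
```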
128:
probability that the mean of the sample and the mean of the population are equal converges to 1.00"? This is like saying that as you get closer and closer to 100 out of 100 apples, you will get closer and closer to the population mean. So the apples example is a specific and detailed example of the concept that is described later in the equations. It is what the Law of Large Numbers really says.
2299:
introduction "The law of large numbers (LLN) is a fundamental concept in probability that states: If an event of probability p is observed repeatedly during independent repetitions, the ratio of the observed frequency of that event to the total number of repetitions converges towards p as the number of repetitions becomes arbitrarily large." by this edit
513:
that infinite is best. You are trying to make it say something else -- something about a small sample size being enough. That certainly falls out of the analysis that can be conducted later but that is NOT what the Law of Large Numbers says. But again, if you can find a valid source that says so, then use that source and I shall be satisfied.
1542:
"The strong law states that almost every sample mean will approach the population mean arbitrarily close as the sample size increases. Although one can theoretically conceive samples for which this does not hold (for example, throwing infinitely many fives with a dice), the strong law implies that these samples jointly have a probability
875:
Maybe it does. LLN talks to the mean of the sample vs the mean of the population but not to the distribution around the mean of either. I would agree however that the CLT certainly IMPLIES the LLN. But to settle the issue, can you find a source that says CLT contains LLN or that if you have proven CLT you have done more than imply LLN? --
2387:
"in statistics, the theorem that, as the number of identically distributed, randomly generated variables increases, their sample mean (average) approaches their theoretical mean." (Encyclopedia Britannica). Almost readable, but it makes the LLN a statistical concept rather than a concept in probability theory that extends to statistics.
2377:
"A fundamental law in probability theory and statistics stating that if an event of probability p is observed repeatedly during independent repetitions, the proportion of the observed frequency of that event to the number of repetitions converges (see convergence) towards p as the number of repetitions becomes large."
2647:
The focus on the toss of a coin is superfluous, and if the article were written with the focus on probability with statistics being used as examples of how the LLN is applied, then the statement "The law of large numbers works equally well for proportions" would be redundant as well. As it stands, it
2222:
One thing you notice immediately is how long it is. The number of final possible outcomes is the number of outcomes possible on a toss (2) raised to the number of trials or 2^4. That equals 16. If we had gone with 10 tosses our table would have been 2^10 =1024 rows long. Way too long. That is why
1948:
Finally, since there are 2 possible outcomes but only 1 of these will actually occur, the chance of that one occurrence is the number of things that will actually occur divided by the number that might have possibly occurred. In this case, for example, the chance that the coin will land heads is 1/2
1934:
Third, and maybe this is the hardest one of all, you must recognize that the coin is "fair", that is that it does not "prefer" to fall one way or the other and it does not "remember" how it fell in the past. Each toss is its own toss and heads may turn up just as easily as tails each and every time.
1861:
It's not a silly question. It gets to the heart of what "probability" is. One of the reasons that this is called the "Law" of large numbers is that it is considered somewhat axiomatic -- it "just is". Physically, perhaps the reasons are rooted in physics. We have a certain number of dimensions in
1729:
Here is part of my problem: The central limit theorem is about the distribution of the sample mean. The law of large numbers not only says that the average of all rolls will approach the expected mean, but also that the relative frequency of each outcome will approach exactly 1/6. This is not quite the same thing as
1663:
are very low; however, the odds that someone will win the lottery are quite good, provided that a large enough number of people purchased lottery tickets." I would like to see some cites that this is really a "less technical way to refer". Actually LLN is inappropriate for this example, as LLN speaks
1493:
At the end of a later paragraph which ends with the sentence "For example, the odds that you will win the lottery are very low; however, the odds that someone will win the lottery are quite good, provided that a large enough number of people purchased lottery tickets." the removed sentences make more
1188:
Regarding the original question. It is fairly easy to obtain a weak LLN from a CLT. This is because convergence in distribution to a constant implies convergence in probability to the constant. However, the strong LLN does not follow immediately from the CLT, since there is no such implication for
1137:
3)I agree generally and accept the blame. Sometimes it is that I have done an answer and then somehow it gets lost and I have to retype and I get impatient. (That just happened) I do not understand your last request though ("You need to define what these phrases mean: make a comment; approaches; view
1095:
1) If you take a sample of 3 points, and the underlying distribution is binomial, then the possible values of the sample mean will number 4. This certainly is not a normal distribution, which has an infinite number of possible values. You cannot do hypothesis testing by approximating a distribution that
1021:
3. The proofs do use N going to infinity, but this is for a different effect. They are looking at dividing a continuum and it says that as N increases the distribution of the sample approaches Normality. But this says NOTHING about the original population. Indeed, the original population might be
874:
I am not sure that is true. I have to think about it. The proof for both LLN and CLT can be similar but I do not think that CLT certainly contains LLN. CLT talks to the distribution around the mean of the sample. I have to think about whether it alone refers the mean to the mean of the population.
397:
Numbers." Yes, strictly speaking LLN applies to infinitely large samples, but of course samples are never infinitely large. Does this mean LLN has no application in real life? No, all it means is that LLN can only be applied when the sample is regarded "large" enough, for which we need rules of thumb.
396:
Hello Blue, you wrote "Yes there are rules of thumb, but the LLN does not give them." and "Regarding rules of thumb for sample size, this article is not about sample size or statistical inferencing. I think you should address those issues in these other articles. This is just about the Law of Large
79:
The article had the following example: "If you took a sample of 99 apples out of 100 apples, the average would be almost exactly the same as the average for all 100 apples." I removed it because in this example there is only one apple left to be counted, so the sample average will be close to
1874:
Thanks! I guess what bothers me is the concept of probability in the first place. Since the coin (obviously) doesn't "know" which way to fall in one case -- i.e. the result is entirely random, or let us suppose so for the sake of argument -- then there is no principle at work in the individual case other
1754:
There have been large scale changes made in one day to this article. In my opinion it has made the article less helpful to a reader wanting to understand LLN. Specifically, the history of LLN and the quotes provided are confusing and unhelpful. If other editors agree then we should revert these
1343:
You wrote "pollsters use an "error margin of XX %". But what that margin really means is not well defined." That is not correct. Pollsters are pretty good statisticians. They usually use 95% confidence intervals when they give error margins. If you delve into their reports you will find the details,
1171:
If you do have only the CLT result, you can get to LLN. But if you have only LLN, you cannot get to CLT. But for the lay reader to know this may be unimportant, I would be satisfied if they were described as "related results". You can accordingly change the article if you believe the current version
954:
I think that this is an example of the problem. The current wording is: "The variance as given by CLT collapses as the sample size grows larger, it follows that the mean converges to a number (which CLT says is the population mean)." But CLT does not directly address sample size. (In fact, sample
912:
But the CLT does not say that as the sample size increases, the estimate of the mean improves. It is a theoretical construct looking at a variety of possible sample means and explaining that these fall into a normal distribution pattern. Sure, it gives the mean, but it does not say whether the mean
812:
Since LLN was proposed prior to CLT and since LLN is more basic than CLT there is no way that LLN could have come out of CLT. It may well be that LLN does not produce CLT (I think a case can be made that it does) but logically it is impossible that the LLN comes from CLT and historically it did not
763:
Question: doesn't the idea that something "will be almost exactly the same as" neglect the idea of probability in the convergence? For instance, I cannot guarantee that after 100 tosses of a fair coin, assigning a value of 1 to H and 0 to T, that your sample average "will be almost exactly" 0.5.
633:
Hello Blue, I said "It can work even if only 0.001% or less have been counted as long as the number counted is 'large'". You replied "Although that statement is true -- sometimes -- it is not specifically what the LLN says." As this is math, there should be no ambiguity about what LLN says. When you
470:
You wrote "Read the example. 3 is not as good as 9. 99 is better than 9. It is simply a restatement of the LLN." I agree that 99 being better than 9, and 9 better than 3, is in the spirit of LLN. I propose we change the example from a population size of 100 to 100,000 apples. You should not have a problem with that as the
430:
I repeat, when the example says 99 out of 100, it is suggesting to the reader that the means are close because most apples have been counted. If the example said 99 out of 10,000, or 999 out of 30,000 or 999 out of 20,000 or 100 out of 5,000 or 120 out of 60,000 or 1,354 out of 2,245,523 I would not
127:
But look at it this way: Doesn't the law of large numbers explicitly say that as N approaches infinity (for an infinite population) the probability that Xn = mu approaches exactly 1.00? Doesn't that translate into "As a sample measurement of a finite population encompasses the full population, the
2271:
Suppose the difference of the number of heads from the 50% value is d. Then the expected magnitude of d grows at the rate of square-root of N (where N is the number of tosses). Magnitude of d grows at square-root of N, rather than N, due to the randomness of the process. In comparison the number of
1848:
I'm not a mathematician but I can follow the basic idea in the article (I think!) It explains how the law describes certain behavior. My question, though, is why the behavior occurs in the first place. What is it that maintains the tendency for randomness to "even out" over large samples of events?
1788:
I think the structure is not quite right. I thought so last night, but it was late. So I will try to fix it. But my main interest is to make the opening paragraphs more accessible to people who do not care about the mathematical proofs -- people who hear about the Law and want to know briefly what
1528:
This is not true. Consider a population with values 0 and 1, where 1 has probability 1/sqrt(2). Then the population mean is also 1/sqrt(2), which is an irrational number. The sample means are always rational numbers however. So the probability that the sample mean is exactly equal to the population
1221:
An asymptotic result IS a distribution. The LLN provides for an estimate of both the population mean and the population standard deviation or variance. However, to define this in terms of likelihood may indeed require the Central Limit Theorem. So, rather than it permitting a precise measurement
884:
The CLT does give the mean. It gives the entire normal distribution which includes the mean and variance. For example see the Wikipedia article on the Central Limit Theorem. It says that the modified distribution Z approaches the standard normal distribution N(0,1). Z is obtained by subtracting the n *
849:
If you have evidence that LLN preceded CLT in time (which it probably did as LLN is a weaker result), and you regard the history of development of these results worth including, you are welcome to add that information. However the current statement "So LLN is a result that can be obtained from the
512:
I would agree with that if you would agree that the example should go to 99,999 apples. But then, there would be no point for the change would there? You see the Law of Large Numbers does not explicitly say that there is some small number that is sufficient. It simply says that more is better and
431:
have any problem. The problem is 99 out of 100 very strongly suggests that LLN works because most apples have been counted, whereas LLN does NOT require most of the population to be counted. It can work even if only 0.001% or less have been counted as long as the number counted is "large". Regards,
302:
6.I would say the example you suggest would be improved by saying that if there were 1,000,000 apples, then 99 would give a good estimate, 999 an even better estimate, 9,999 still better, 99,999 best yet and so on... The point is that "yes, larger samples improve accuracy". But LLN is an asymptotic
285:
1.There are rules of thumb about what number may be regarded as "large", and one can use convergence results without worrying about the rate of convergence when the sample size exceeds that number. I am not a statistician, but I do remember an example of 30 (or was it 60?) being mentioned as a sample size large
170:
Population Mean = Mean of 99 apples * (99/100) + Mean of last apple * (1/100). As the mean of the last apple is divided by a large number (100), its impact is small and the mean of 99 is "close" to the population mean. This is just algebra, there is no probability involved. The means are "close"
131:
You are right that the sample mean is an estimate of the population mean -- an estimate with some degree of error to it. And the Law of Large Numbers says that is the case. What you are describing is an outgrowth of analysis under the Law of Large Numbers, which leads to the Central Limit Theorem
2635:
Compared to other pictures on wikipedia, the graph is huge. While I agree that it could be more refined, I think that its size is good; it's an integral part of the intuitive explanation of the LLN. An illustration of that many rolls of a die has to be big to illustrate the point, especially for
2583:
Could somebody explain to me what basic contributions to the LLN, relevant to the topics discussed in this article, have been done by Vapnik and Chervonenkis, since their names appear right after those of Chebyshev, Markov, Borel, Cantelli and Kolmogorov? I mean the statements of the LLN as given
1962:
Maybe what you are really asking is "Why does it not work perfectly? Why is it that I can toss a coin 10 times and sometimes it will be 7 heads and 3 tails. Other times it will be 4 heads and 6 tails. Why is it not exactly 5 heads and 5 tails?" I will answer that, but I have to use a smaller
1676:
You are missing the point. It is a description of how some people use the term and usually this is their informal way of viewing it. Though informal, it is not technically incorrect. With a large enough number of trials, events with low probability may happen... and this derives from the law of
1326:
1) You wrote "LLN certainly considers variance". Most statistical results require population variance to be finite and LLN is no different. However I never said that LLN does not consider population variance; what I said was that LLN does not provide the variance for the sample mean. And sample variance is
1297:
But without original research, let's look outside of wikipedia. If the CLT answered the same question as the LLN then there would be no need for the LLN -- it would not be the 1st Fundamental Theorem of Probability and CLT would be the 2nd. Instead, CLT would be the only one mentioned. It is
1167:
Hello Blue, I do not care much how LLN and CLT are connected as long as they are accurately described. You wrote "I think of LLN as saying "large enough samples can help you determine population mean and variance" while CLT says: "FURTHERMORE the sample is normally distributed"." I agree with what
1145:
I think that part of my problem with this is how I connect the two concepts. I think of LLN as saying "large enough samples can help you determine population mean and variance" while CLT says: "FURTHERMORE the sample is normally distributed". To me, the furthermore is important. I think you do
1060:
about assumptions, I agree. If I said that, (I do not see it above but if I said it I was wrong). When you talk about the Central Limit Theorem, sample size does not matter, unless for some reason you are trying to discuss some practical application of the Central Limit Theorem where you are (for
997:
1) You wrote "CLT does not directly address sample size." That is not correct. CLT does consider the sample size; in fact the standard deviation it provides for the normal distribution depends upon sample size (inversely proportional to its square root). 2) Also see the article on CLT on Wikipedia. It starts by
637:
Specifically look at the assumptions of LLN and you will find no assumption that says the sample size has to be larger than some fraction of the population. What you are saying is that for LLN to apply the sample size has to be (at least sometimes) larger than a certain fraction of the population.
550:
Let me be clear. I want the example to say that the Law of Large numbers works because MORE have been counted and when all but 1 have been counted that is next to the very best thing and when they are all counted that is best. I specifically disagree with this point that you keep making: "It can
351:
5.In the 99 out of 100 apples examples, LLN works because MORE apples have been counted. Read the example. 3 is not as good as 9. 99 is better than 9. It is simply a restatement of the LLN. The larger the sample the more correct or accurate the mean. Your problem with the example is invalid.
242:
So, I again would argue that the Apple example is not a bad one, but it may not be exactly complete, and by itself it could be misleading. What is missing is that the rate of convergence is rapid (a square law) so that reasonable conclusions may be developed with a sample that is much smaller than
2432:
edit an article that they cannot improve. I agree that the introductory section should be comprehensible to a broad audience, but not at the cost of an incorrect statement of the LLN. Perhaps a special case could be stated first, but only with a caveat saying that it is a special case. I'm not
2298:
Whereas previously the introduction was "The law of large numbers is a fundamental concept in statistics and probability that describes how the average of a randomly selected large sample from a population is likely to be close to the average of the whole population.", it has now been replaced by the
2259:
LLN is a mathematical result. In math you start with axioms and build up. So the axioms of probability, statistics, number theory etc lead to LLN. Math results can inform us about the real world, for example LLN informs us about coin tosses (your example). I understand your concern about LLN, and
1070:
If I take a sample of say --- 3 points and if someone else takes a sample of 30 points... both these samples are testable against some hypothesis by virtue of the central limit theorem. How did the sample size change that in either case? Certainly I can wish I had done the 30 observations -- I
750:
I do not know about a common misperception that the sample needs to be a large fraction of the population, but the problem with the example of 99 Apples is not that some people would not understand it (That may also be a problem but a different one). The problem is that the independence criteria
207:
I understand what you are saying. In essence, you are talking about the notion of events that may have an infinite number of trials, having success p and that a less than infinite number of repetitions is required to estimate p. I agree that this is true. I also agree that this is a reasonable
107:
The probability of the sample mean being "significantly" different from the population mean becomes smaller as the sample grows larger. This probability "approaches" zero (convergence in probability) as the size of the sample becomes "large". This is based on an analysis of probabilities. In fact
1971:
Looking into the future, the chance that you will get a head or a tail is 50% on any throw. So what is the chance that you will get 2 heads and 2 tails out of 4 throws? First, figure out how many different ways the coin can come up in 4 tosses. This is called an "Outcome Table". Here it is:
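A minimal sketch that generates the kind of outcome table described above (the original table itself is not reproduced here):

```python
# Enumerate every possible sequence of 4 coin tosses.
from itertools import product

outcomes = list(product("HT", repeat=4))       # 2**4 = 16 equally likely sequences
for row in outcomes:
    print("".join(row))
print(len(outcomes), "possible outcomes")
```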
802:
Currently the article says "One of the most important conclusions of the Law of Large Numbers is the Central Limit Theorem which, generally, describes how sample means tend to occur in a Normal Distribution around the mean of the population regardless of the shape of the population distribution,
736:
Hello Jules, I think the photograph example may be hard to understand. Normally when we say sample average, we think of a scalar. But a photograph is a vector of facial feature points, and it is not easy to associate sample average with such a vector. I think the old apple example with the total
725:
male American college students, even though that population is much larger than the sample. Moreover, if you take a new sample of about 30 male American college students, and project their 30 portraits upon each other, then you will see almost the same face again. That is, the first sample of 30
716:
I agree completely that the apple example is misleading. It encourages a common misconception (that the sample needs to be a large fraction of the population). I would rather suggest an example that contradicts this misguided intuition: "If you take photographs of the faces of a random sample of
138:
The example of 99 Apples was developed to help people who had no familiarity with Statistics to grasp the basic concept of the Law of Large Numbers. With that in mind, recognizing what the equations say, and realizing that the focus of the article is not sampling confidence probabilities, is it
1949:
=50%. Incidentally, the chance that the coin will land tails is also 1/2 = 50%. This means that the chance the coin will land either heads or tails is 1/2 + 1/2 = 1, i.e. 100% of the time it will be either heads or tails. (We ignored landing on its edge or disappearing into thin air.) Is that part clear?
1261:
As I reflect on it, I do not think you can show likelihood without the CLT. For example, when those assumptions are not used, pollsters use an "error margin of XX %". But what that margin really means is not well defined. The CLT actually defines such things because it uses a distribution to
1107:
4) The current sentence in the article that you find objectionable was (many posts back): "The variance as given by CLT collapses as the sample size grows larger, it follows that the mean converges to a number (which CLT says is the population mean)." To this you said "But CLT does not directly
2674:
A couple of suggestions that would improve this article for the lay user; this article doesn't address two more common usages of "LLN". The first is the invocation of "LLN" when discussing the occurrence of improbable events. Perhaps this usage is already embodied by the current article, but not
2517:
The binomial bit is necessary as LLN talks about a sample mean approaching a population mean, and then suddenly we have a form of LLN that talks about frequency. It therefore needs to be clarified to the reader that for a binomial rv with values of 0 and 1, the sample mean is the same thing as
1733:
Another part of my problem is that this is an article about the Law of large numbers and not about the central limit theorem. The law of large numbers does not depend upon the central limit theorem and no discussion of the CLT is required to understand the LLN. So, in this article it becomes
1568:
I believe Jules is right: the current "exactly equal" has a specific meaning in mathematics, different from "approaches" or "converges". A well known fact is that an irrational number cannot be expressed as the ratio of two integers, and that is what Jules is referring to here. But Jules,
1103:
3) I feel the discussion is digressing now. You wrote "CLT uses sample size, but it does not make a comment about how the mean of the sample approaches the mean of the population as sample size increases. It does view the variance as changing with sample size". There is a lack of mathematical
691:
The apple example is horribly wrong. It is NOT because something near the whole population has been counted that the LLN works. The apple example implies trials being nowhere near independent when the sample size approaches the whole population size. In the apple example, the sample average
1433:
Variance of the distribution of the sample means is important, because if statistical inferences are to be made it is required. My point was that as LLN doesn't provide the variance (or the distribution) it is not sufficient for estimating likelihood (testing statistical hypotheses). Regards,
358:
I would ask that, if you feel the LLN addresses the issues you raise, you find a source that says so. But in the meantime, look at the equations and see that the LLN simply states that as the sample size gets larger, the mean approaches more surely the mean of the population or the
1208:
The article had the sentence "In particular, it permits precise measurement of the likelihood that an estimate is close to the "right" or true number." I removed it as I think this is wrong. "measurement of likelihood" requires the distribution, whereas LLN is only an asymptotic result. For
2334:
First of all, wikipedia is the encyclopedia that ANYONE can edit. If you do not like that approach, you should find a different venue. It is entirely inappropriate to tell other editors to leave an article. You should read the agreement here. If you do not want your contributions to be
1013:
1. CLT uses sample size, but it does not make a comment about how the mean of the sample approaches the mean of the population as sample size increases. It does view the variance as changing with sample size -- but LLN did this first and it is somewhat irrelevant to the point that it does
1557:
The sample mean will approach 1/sqrt(2) as the sample size goes to infinity. There is no problem with this. Your restatement would not be right, because as long as the assumptions hold, the law will hold also... not "almost all". I will try to return this weekend to go further, but not
2230:
50-50 heads and tails is only 6/16 = 37.5% - a bit more than 1 in 3. That means we are more likely to get some other combination of heads and tails. You would think that with a fair 50-50 coin you would get exactly 2 heads and 2 tails more often. But you won't. This is called the
1429:
You wrote "Doesn't LLN hold true even if a sample has only 2 observations? Doesn't that also hold for CLT?" As far as I understand, both LLN and CLT require sample sizes to go to infinity. However these results are applied whenever the statistician believes the sample size is "large"
2341:
Third, the way you have changed it may not be quite so correct. (Mind you, I am the one who wrote the sentence that you changed it to). But the law of large numbers is really a result of probability theory. It is then extrapolated to statistics. So, I think that this distinction is
1279:
Getting more to your point, I believe that the two fundamental theorems are co-equal and describe different things. The CLT looks at the distribution of sample means -- regardless of sample size -- while the LLN looks at the size of a sample without regard to how sample means are
2272:
tosses grows at the rate of N. This is the fact that lies at the heart of LLN. So the ratio of the difference from the 50% value (that is d) to the number of tosses (that is N) collapses to zero as N grows (even though d itself is growing). This leads the ratio to approach 50% in
1239:
One requires the distribution, and (estimate of) variance to estimate "likelihood". Do you have any reference showing LLN provides these? The reason I ask is that if LLN indeed provided the distribution and variance, then it appears to me that it would provide the results of CLT.
1828:
Generally I think there's been a tendency to clutter the article with superfluous explanations -- all in good intent, I'm sure, but the article is not the place for defining and discussing convergence in probability and almost sure convergence; that should be done elsewhere.
1396:
3)I do not know either, but they exist and are standard. The history would be interesting. Referring to my previous comment before and relating it to your current comment... would you say that Rational Numbers lead to Counting Integers or that Counting Integers lead to Rational
1609:
mean that the complement is the empty set (= the impossible event). The situation is entirely analogous to the length of a mathematical point on a line. The point has length 0 (exactly 0) even though it exists. Anyhow, many other readers will probably not know the meaning of
100:
CLT makes a statement about the (normal) distribution of means of samples. Here I am referring to LLN which does not say anything about distributions but rather convergence in probability of one particular estimator (the sample mean) to one particular number (the population
1588:
Blue Tie, of course there is no problem with the law, the problem is the formulation in the quoted sentence of the article. My restatement is exactly right, and not in contradiction with the law. The only problem is that you obviously do not know the meaning of the term
298:
5.My problem with the 99/100 example is it seems to say "LLN works because almost all apples have been counted", whereas LLN really says "Leave 99% uncounted, or even 99.99% uncounted, just make sure the number you count is large and you will get close to the population
1344:
though they may not be reported for the lay public. For example Gallup describing one of its polls says "For results based on this sample, one can say with 95% confidence that the maximum error attributable to sampling and other random effects is ±3 percentage points."
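A rough sketch of where a figure like "±3 percentage points at 95% confidence" comes from; the sample size of about 1,000 is an assumption typical of such polls, not a figure from the Gallup report:

```python
# 95% margin of error for an estimated proportion.
import math

n = 1_000
p = 0.5                                   # worst case for a proportion
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(round(100 * margin, 1), "percentage points")   # about 3.1
```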
1017:
2. The article on wikipedia is written by just anyone. Perhaps it is a mistake to say "many" or perhaps they meant many "sets" or many "types". It is not the typical wording for CLT definitions and unless you are referring to types of variables then it does not make
1777:
I think it is helpful to read what Bernoulli originally said as he discussed the theory. He expressed his views in a way that are helpful for non-mathematicians to understand. When someone can do that, it is useful. Einstein tried to do the same thing with his
1713:
sample as the mean is a sum. The variance as given by CLT collapses as the sample size grows larger, it follows that the mean converges to a number (which CLT says is the population mean). This is the LLN. So LLN is a result that can be obtained from the CLT.
634:
say "sometimes" you imply that there exist examples where 0.001% is not sufficient even though the sample is "large". If you can find such an example where the sample size is "large" but LLN is not true you would have disproved LLN as it is currently stated.
2631:
The graph, which was a very good addition and provides a good way to illustrate the concept, is somewhat unrefined in appearance and is way too big for wikipedia standards. Maybe to illustrate the point well it cannot be too small, but it is too big now.
2567:
to do with the LLN or whether or not it is true. Why do people continue to interpret this mathematical theorem beyond its premise? It is about the convergence of the sample mean of identically distributed random variables. Nothing more and nothing less. -
1042:
It's not altogether true that NOTHING is assumed in the CLT about the original population. Usually it's assumed to have finite variance, and sometimes weaker assumptions are involved. These assumptions may be weakened but not simply discarded, since the
1464:
." These sentences seem to have no connection with LLN and I am removing them. The issues addressed by these sentences are sampling with or without replacement, independence of draws etc. Not central to LLN and confusing to the reader to have them here.
1640:
I just noticed that the sentence before it, about the weak law, is wrong too. The person who wrote this seems to confuse the weak and the strong law! I wonder why someone writes about something he obviously didn't understand. I will change it shortly.
2308:
The current introduction is WRONG because it implies the random variable being described is a binomial (either occurs with probability p or doesn't occur with 1-p), whereas LLN applies to all random variables (subject to some regularity conditions).
2627:
The first sentence is good, but needs something to follow it. However what follows is not a discussion of probability but of an application of probability, namely statistics. It is somewhat illusory that it is a descriptive follow-on to the first
1333:
3) I do not know how the nomenclature 1st, 2nd etc. arose. Nor can you prove that CLT does not lead to LLN by saying "CLT would be the only one mentioned". These are mathematical issues, and there should be no reason for this kind of ambiguity.
845:
CLT does contain within it LLN. CLT "exceeds" LLN in the sense that not only does it give the mean value for the distributions of sample means to be equal to the population mean, but it also gives the variance and the nature of the distribution
132:
and Hypothesis Testing using sample means. But that concept is better handled under those topics. This article is more limited; It is just the Law of Large Numbers... not an article about sampling probability, which is what you are addressing.
1047:
would then provide a counterexample to such a proposed modified version of the CLT. Also, the sample size matters, in the sense that with too small a sample, the distribution will fail to be a close approximation to a normal distribution.
163:
Suppose you wish to know how 100 million voters are going to vote in the elections (a binomial distribution). You do not have to sample 99 million of them (as the example suggests), you only have to sample 10,000 to estimate the mean quite
2651:
But within the example of the throw of a die, we are missing an opportunity to explain another aspect of the LLN: namely that each of the individual results (1,2,3,4,5,6) will ALSO approach 1/6 of the total throws. Another graph comes to
1755:
changes. Also the reasons for the creation of the two sections "Probability" and "Statistics" and the distinction between the two are not apparent to me. To make it easier to judge, here is the version of the article prior to the changes:
1099:
2)I looked at the Rice simulation, but couldn't get it to work. What was the distribution of the population? Can you select binomial? If the distribution of the population is, say, normal, then samples of size N=2 will indeed have normal
2374:
"the theorem in probability theory that the number of successes increases as the number of experiments increases and approximates the probability times the number of experiments for a large number of experiments" (Dictionary.com).
2599:
Thanks. Their contribution has indeed little to do with the topics discussed in this page (it's about uniform laws of large numbers, which actually would seem to fit more naturally on a page dealing with concentration of measure).
2367:
"If the probability of a given outcome to an event is P and the event is repeated N times, then the larger N becomes, so the likelihood increases that the closer, in proportion, will be the occurrence of the given outcome to N*P." (
1896:
Lets see if we can understand where the confusion is. When the coin comes down, there is absolutely no way to know, in advance which side will be up. It could be heads or it could be tails. But it can ONLY be one of those two
2241:
We can also see the probability of getting 4 heads in a row. (That is the one on the top row) It is 1 in 16. About 6%. If we looked at getting 10 heads in a row it would be 1 in 1024 -- less than 1 chance in a thousand.
2541:
I've created an article that is the converse of this/the law of averages, and welcome any comments/contributions. I've not yet x-ref'd it, in case people feel it ought to simply be merged and a redirect placed in its stead.
2409:
I have asked Michael Hardy to give his opinion. Who first wrote some particular text is relatively unimportant, but since you mention it, I will point out that the introduction I had reverted to was written by Michael Hardy
885:
nu (where n is sample size and Greek letter nu is population mean) from sum of random variables and then dividing by sigma * sq-root(n). So CLT gives not only the nature of the distribution, but also the mean and variance.
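For reference, a sketch of that standardization in the usual notation (the comment writes "nu", but the conventional symbol for the population mean is $\mu$; $\sigma$ is the population standard deviation):

$$Z_n \;=\; \frac{\sum_{i=1}^{n} X_i \;-\; n\mu}{\sigma\sqrt{n}} \;=\; \frac{\bar X_n - \mu}{\sigma/\sqrt{n}} \;\xrightarrow{\;d\;}\; N(0,1).$$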
2335:
mercilessly edited then do not edit here. The article WILL change over time. That is fundamental to wikipedia. Other editors WILL contribute. That is fundamental. If you cannot stand this, you are in the wrong place.
1459:
The article had the sentences "However, in an infinite (or very large) set of observations, the value of any one individual observation cannot be predicted based upon past observations. Such predictions are known as the
1141:
4)Actually, my main problem is that you are declaring LLN to be superseded or replaced by CLT. But maybe you are right. I don't think so, but maybe. I have been tossing it back and forth in my mind. And I can see the
1614:
either, so on this point a rephrasing of my formulation is needed. The present formulation in the article is evidently an error though. Believe me, or read a good book like Billingsley, Probability and measure, 1986.
344:
3.Regarding rules of thumb for sample size, this article is not about sample size or statistical inferencing. I think you should address those issues in these other articles. This is just about the Law of Large
1130:
1)Yes, you can do testing on a small sample but I understand what you mean. As sample size increases, you are more confident of a normal distribution. Thus the small sample adjustments that modify the normal
180:
The 99/100 example would also lead a reader to wrongly believe that most of the population (~99%) has to be sampled for LLN to work, whereas LLN will work for samples 1% or smaller as long as they are "large".
159:
But LLN says much more than that. It says that even if you sample only 1% (or even less) of the population, the sample mean will still approach the population mean as long as the sample is
2359:
"in repeated, independent trials with the same probability p of success in each trial, the chance that the percentage of successes differs from the probability p by more than a fixed positive amount, e
2493:
It is now going a bit further away from better. It is now a bit too wordy and technical an introduction. I would like to see it start simply and become more complex later on. Is there no way to do this?
1664:
of sample means, whereas this is about the probability of one success in a sample of very low probability events. Besides as it says "win the lottery", it violates the independence requirement of LLN.
1390:
1)LLN does not provide variance for the sample mean, but it is a normally contemplated statistic for such samples (separate from either LLN or CLT). I have lost track about why this might be important.
2238:
If we had chosen to look at 10 coin tosses, out of 1024 possible outcomes we would have seen exactly 5 heads and 5 tails in 252 of them. That would be 252/1024 = about .246 -- not quite one in four.
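A one-line check of that figure, using the binomial coefficient:

```python
# Number of 10-toss sequences with exactly 5 heads, and its probability.
from math import comb

print(comb(10, 5), comb(10, 5) / 2**10)   # 252 and about 0.246
```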
1597:
). This meaning is that the complement of the event has measure 0, or in the special case of a probability space, probability 0. This implies that the probability of the stated event is 1. Exactly 1.
1032:
Try N=2 and # of Samples =1000 you will see a normal curve. That is with N=2. As I said, sample size does not matter, CLT still works. This is part of the miracle of the Central Limit Theorem. --
1849:
In other words, why should I expect that the coin would come up roughly 500 times heads and roughly 500 times tails, rather than (e.g.) 499 heads and 1 tail (etc.)? Can anyone answer this? Thanks.
1783:
The difference between probability and statistics is this: The Law of Large Numbers is expressed in terms of probability. It may not be exactly clear to some readers how it applies to statistics.
842:
I am not sure about the nomenclature, what is given what name. But I think it is true that if you have proven CLT, then you have also proven LLN. While on the way to proving CLT, you may prove LLN.
1393:
2)The proofs of both LLN and CLT contemplate infinity, but is that really required in either case? Doesn't LLN hold true even if a sample has only 2 observations? Doesn't that also hold for CLT?
355:
6.Again, you are trying to address statistical inferencing and confidence. That is not really part of LLN but are things that come out of the Law of Large Numbers and the Central Limit Theorem.
704:
I think you are right... The lack of independence invalidates the example. I would like to address this in more detail and produce a better example, but I am busy right now. Maybe later. --
2364:
). Technically correct, but it would be nice to find a definition more accessible to non-math majors. Notice also that the same imaginary binomial problem exists that you complain about.
785:
The LLN is not supposed to estimate parameters such as whether a coin's chance of flipping heads is really 50%. Other areas of Statistics can do that, with appropriate levels of confidence.
2384:
) Probably one of the clearest definitions, but it is not 100% correct. For example it misses the requirement of independence. And we cannot use it word for word -- copyright violations.
551:
work even if only 0.001% or less have been counted as long as the number counted is 'large'". Although that statement is true -- sometimes -- it is not specifically what the LLN says.
153:
Hello Blue, You wrote "Doesn't the law of large numbers explicitly say that as N approaches infinity (for an infinite population) the probability that Xn = mu approaches exactly 1.00?"
1578:
It seems to me what I stated in the above post would remove the difference between the weak law and the strong law. As this is "technical" I will remove myself from this discussion.
111:
The example 99 out of 100 on the other hand is not based on probabilities, but merely because most (99%) of the population has been counted. The example is based on algebra. Thanks!
341:
2. It is not that I am worried about the rate of convergence. I am saying that the LLN does not specifically address that. It simply talks about sample size going to infinity.
2276:
. So even though the difference from the 50% value (that is d) grows as N gets larger (rather than growing smaller as was your concern), it does not grow as fast as N, leading the
1767:
12:23, 4 March 2007 (UTC) I am okay with removing the section about CLT as it may or may not aid the reader in understanding LLN, but the rest of the edits are problematic.
90:
I believe what you have described is the "Central Limit Theorem" which derives from the Law of Large Numbers but is not the same thing. Consequently I have reverted back. --
104:
The question is whether we can apply LLN when we have a large sample but have sampled only, say 1%, of a population. I believe we can, because of convergence in probability.
1330:
2) CLT does not look at the sample mean distribution "regardless of sample size". Look at the proofs of CLT and you will see they require sample size to be "large" (infinite).
2260:
that concern would be justified if LLN actually said the absolute number of coin tosses would approach the 50% value. However LLN does not say that. It actually says the
1875:
than randomness. What I don't understand is why randomness multiplied by n equals some kind of pattern. (I mean, I understand that it does--I just don't understand why.)
289:
2.If you worry about the rate of convergence and wish to be strict, you can always use CLT rather than just LLN to estimate confidence intervals for the population mean.
2226:
If you look at the rows that are highlighted, you can see that those rows have exactly 2 heads and 2 tails. That means that the chance of tossing the coin and getting
address sample size." Define "directly address". Also, please state what part of the sentence in the current article is wrong, and precisely what the error is. Regards,
The previous introduction was easy to understand and correct; the current one is confusing and wrong. Also, many other changes have made the article lose focus and ramble.
I doubt that any of the serious contributors to the article on the law of large numbers believe that the average is the be-all and end-all of any distribution.
I have no problem with you or anyone else editing Wiki articles; all I am requesting is that you please be very knowledgeable about the subject if you edit this article.
I don't understand why things are so difficult with this article. I had to fill pages over weeks just to get errors removed (for example, the 99/100 apples example).
"The strong law states that as the sample size grows larger, the probability that the sample mean and the population mean will be exactly equal approaches 1."
Fourth, there is a way to handle this better. Use Wikipedia standards. This means that we should use verifiable, reliable sources and not Original Research.
Big mistake. The gambler's fallacy is frequently mentioned in connection with misperceptions about the Law of Large Numbers. They should not be removed. --
Next you must recognize that out of these possible outcomes, there is only going to be one result each time - either heads or tails. Do you recognize that?
you wrote, except that LLN only provides the mean, not the variance. I think what "precedes" what can be argued either way, and really I think it is unimportant.
4. Yes, you are right about the SD decreasing according to a square-root law, and you are right to cite the CLT for that, but this is not the CLT, this is the LLN.
I can prove the equality of sample and population means for a large population of size M, if all but one data point is counted, without resorting to probabilities.
about a sample and extrapolate their results or conclusions to the population from which the sample was derived, with a certain degree of confidence. See Statistical hypothesis testing.
2) You can choose a variety of non-normal distributions, such as skewed, uniform or custom distributions. You have to hit the button at the top left.
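The same experiment the applet runs can be sketched in a few lines of Python; the settings below (an exponential parent distribution, samples of size 30, 5,000 repetitions) are my own illustrative choices.
 import random
 import statistics
 random.seed(3)
 n, reps = 30, 5000
 means = [statistics.fmean(random.expovariate(1.0) for _ in range(n)) for _ in range(reps)]
 print(f"mean of the sample means = {statistics.fmean(means):.3f}  (parent mean = 1)")
 print(f"sd of the sample means   = {statistics.stdev(means):.3f}  (roughly 1/sqrt({n}) = {1 / n ** 0.5:.3f})")
Even though the parent distribution is strongly skewed, a histogram of these sample means already looks roughly bell-shaped, which is the point the applet makes visually.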
It is better. I would like someone in high school who is unfamiliar with statistics or probability to look at it and see if they understand it. --
The Bernoulli quote about stupid people should have quote marks around it. And his book was published posthumously ... that should be mentioned.
will get a tighter result. But, how did the lack of a large sample make the CLT void in that case? (I think this article makes the same case:
4. Why do you believe that is not a normal distribution? It looks like one to me -- with just 4 points extracted. How do you figure it isn't?
the CLT. Furthermore, this paragraph justifies this by saying that the variance collapses as the sample size grows, which is not exactly true.
A really excellent, short, concise statement, but the phrase "empirical probability approaches the actual probability" could be clearer.
But, of course, when we apply the law, we find that the convergence generally exists and operates roughly as a square-root function of the sample size: the typical error of the sample mean shrinks in proportion to one over the square root of the number of observations.
Check it now. I still think it would be nice to quote Bernoulli more fully, but this is probably sufficient. I hope it reads better --
of the likelihood, it provides an estimate of the mean and an analysis of the "reasonable" range in which the mean may be found. --
The Ars Conjectandi should be added as a link. I tried to find a version of the original. I could not, but an English translation is available.
Again, this has your imagined but nonexistent binomial problem. And again, it is not so clearly worded for the non-math major.
If you are not an expert on statistics AND are not sure of what you are doing, I request that you please stay away from this article.
1, it says that a certain probability is exactly 1, namely the probability that the sample mean converges to the population mean.
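For reference, the standard symbolic form of the strong law (stated here for convenience, not quoted from any source discussed above) is
<math>\Pr\!\left(\lim_{n \to \infty} \bar{X}_n = \mu\right) = 1,</math>
where <math>\bar{X}_n</math> is the mean of the first n observations and <math>\mu</math> the common expected value. The probability itself is exactly 1; it is the event inside the parentheses that involves a limit.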
Shouldn't the better formulation be "samples have a probability 'approaching' or 'converging' to 0", rather than "exactly" zero?
not. That is because CLT does not describe the behavior of the mean with respect to the population as N approaches infinity. --
The 99/100 apples example is misleading because the logic behind it is algebra, whereas the logic of LLN is probability theory.
of the random variables (as long as the distribution has finite variance), as long as the number of random variables added is large.
I'll probably take a look at this later today. The fact that anyone can edit a Knowledge article does not mean that anyone should.
There is no question that there is a nonzero chance for any finite sample's average to be far off from the expected value.
persons has approximately the same average face as the second sample, even though these samples have no persons in common!"
precision in your statements. You need to define what these phrases mean: make a comment; approaches; view the variance.
converges to the expected value. "Almost surely" happens with a probability of 1. You might want to look up the subject.
First, I'm glad the apples example is gone; it had more to do with sampling than the LLN. Now for the above questions:
LLN, on the other hand, says the means will be close even if only 1% (or less) are counted, as long as the sample is "large".
about 30 male American college students, and project them upon each other, then you will see a pretty good picture of the average face.
"If an experiment is repeated over and over, then the empirical probability approaches the actual probability."
Population Mean = Sample Mean * (M-1)/M + Last Value * (1/M) = Sample Mean + (Last Value - Sample Mean) * (1/M)
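A quick numeric check of that identity, using a made-up five-value population:
 population = [3.0, 7.0, 8.0, 12.0, 20.0]                 # made-up data, M = 5
 M = len(population)
 last = population[-1]
 sample_mean = sum(population[:-1]) / (M - 1)             # all but the last value counted
 pop_mean = sum(population) / M
 print(pop_mean, sample_mean * (M - 1) / M + last / M)    # 10.0 10.0
 print(pop_mean, sample_mean + (last - sample_mean) / M)  # 10.0 10.0
The correction term (Last Value - Sample Mean)*(1/M) is what shrinks as M grows, which is the algebraic point being made here.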
So essentially the introduction of the article has been changed from something RIGHT to something WRONG!!!
some distorted, perhaps even discontinuous, function, and yet the sample means will be normally distributed.
in my reformulation. That there are exceptions with probability 0 is explained in the sentence after it.
3. I suppose it would help the lay reader if we were able to provide some rules of thumb for sample sizes.
You are conflating the concept of statistical confidence with the related but different concept of LLN.
CLT." is true and should be retained. Essentially it tells the reader that LLN is contained within CLT.
as M approaches infinity we have Sample Mean = Population Mean, since (Last Value - Sample Mean) remains finite.
I've also made it clear in the introduction that there are various different versions of the LLN.
, which has an exact meaning in measure theory and probability theory (where it is usually rephrased as
CLT allows statisticians to evaluate the reliability of their results because they are able to make
(CLT) gives the distribution of sums of identical random variables, regardless of the shape of the underlying distribution.
4. The standard deviation of sample means decreases at the rate of the square root of the sample size (CLT).
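In symbols, this is the standard textbook formula (added only for reference): if the individual observations have standard deviation <math>\sigma</math>, then
<math>\operatorname{SD}(\bar{X}_n) = \frac{\sigma}{\sqrt{n}},</math>
so quadrupling the sample size halves the spread of the sample mean.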
I do agree that the version you changed back to is more readable but I am not sure it is the best.
Second, there is no assumption of binomial probability. You are reading that into the statement.
I do not understand why every time I come back to this important article it has deteriorated!!!
5. Let's see... you can see with your eyes that what I have said is right. Take a look here:
I feel pretty strongly about this. And apparently so do you. Shall we seek outside comment?--
So, if you do not like the current version, we should use some reliable source to proceed. --
There is no magical force that will cause it to suddenly prefer to fall one way or the other.
happen that way. Any changes you made to suggest that CLT produced LLN should be removed. --
confusing. It is further confusing in this paragraph because the wording is a bit obtuse.
0, converges to zero as the number of trials n goes to infinity, for every positive e." (
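That quoted fragment is describing the weak law; a standard symbolic paraphrase (mine, not the source's wording) is
<math>\lim_{n \to \infty} \Pr\!\left(\left|\bar{X}_n - \mu\right| > \varepsilon\right) = 0 \quad \text{for every } \varepsilon > 0.</math>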
...and now I've rewritten the introduction. "Concept" is the wrong word. The LLN is a
distributions we need the CLT. LLN does not enable "precise measurement of likelihood".
First you must recognize that there are only two possible outcomes. Is that a problem?
number of apples increased to, say 100,000 (rather than just 100) could work. Regards,
And the binomial bit is simply unnecessary complexity. It only makes matters worse --
Michael, thanks! The introduction now looks good, no longer confined to binomial rvs.
In fact, isn't LLN the First Fundamental Theorem of Probability and CLT the second? --
2666: 2591: 2543: 2497: 2474: 2398: 2246: 1863: 1803: 1790: 1741: 1678: 1559: 1477: 1401: 1299: 1223: 1147: 1076: 1033: 977: 914: 876: 826: 814: 779: 752: 705: 661:
I agree that an opinion by an outsider, especically a statistician, would be helpful.
589: 360: 261: 140: 91: 2590:
is a page that mentions their work with regard to a Uniform Law of Large Numbers. --
2637: 2569: 1830: 2556:
exists in some distriutions e.g. the distribution of wealth, this is nothing new.
46:
If you wish to start a new discussion or revive an old one, please do so on the
http://en.wikipedia.org/search/?title=Law_of_large_numbers&oldid=112521145
http://en.wikipedia.org/search/?title=Law_of_large_numbers&oldid=112389587
http://www.bookrags.com/research/probability-and-the-law-of-large-nu-mmat-03/
Yes, LLN does say that as N approaches infinity the probability approaches 1.
this paragraph would probably be a reasonable way to explain LLN to a layman.
It occurs to me that for non-technical readers I can just delete the word "almost".
1072: 2553: 1660: 2584:
here certainly do not seem to owe Vapnik and Chervonenkis anything...
1963:
example of 4 tosses instead of 10. You will see why in just a moment.
1073:
http://en.wikipedia.org/Central_limit_theorem#Convergence_to_the_limit
2369:
http://www.probabilitytheory.info/topics/the_law_of_large_numbers.htm
1327:
also different from the variance of the distribution of sample means.
1529:
mean is 0, and it remains 0 if the sample size goes to infinity.
139:
really a good idea to remove that example? I do not think so. --
Sample Mean = Population Mean - (Last Value - Sample Mean)*(1/M)
1. Yes there are rules of thumb, but the LLN does not give them.
Furthermore, the strong law does not say that some probability
has probability mass only at 4 points to a normal distribution.
In any case, I think the current wording is not quite right.--
http://www.andrews.edu/~calkins/math/webtexts/prod01.htm#LLN
I have explicitly explained the misperception you refer to.
Hello Blue, There are multiple confusions in your last post.
it is, or parents having to deal with kid's homework. --
of July 24 improved the article tremendously in my view.
enough for CLT to be applied to a binomial distribution.
the variance.") But that might be a distraction anyway.
0, which means that they are practically impossible."
http://stat-www.berkeley.edu/~stark/Java/Html/lln.htm
So, in that context, let's see what other sources say:
1. The complement having probability 0, however, does
http://www.ruf.rice.edu/~lane/stat_sim/sampling_dist/
You are right. It is guided entirely by randomness.
As the sample size approaches infinity, the average
The 99/100 apples example works because of algebra.