
Talk:Bootstrapping (statistics)



Wiki Education Foundation-supported course assignment


This article was the subject of a Wiki Education Foundation-supported course assignment, between 27 August 2021 and 19 December 2021. Further details are available on the course page. Student editor(s): 0.25cm.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 16:06, 16 January 2022 (UTC)[reply]

Merging with Bagging


(see below for discussion of contents)

Yes, this page should be merged. Gpeilon 15:01, 10 January 2007 (UTC)[reply]

I agree. Tolstoy the Cat 19:11, 22 January 2007 (UTC)[reply]

I agree. --Bikestats 13:08, 9 February 2007 (UTC)[reply]

I also agree. Tom Joseph 20:53, 13 February 2007 (UTC)[reply]

I also agree.

I do not agree, because I was looking for an explanation of the word 'bootstrapping', not 'bootstrap'.

I agree Eagon 14:49, 13 March 2007 (UTC)[reply]

I agree.


I do not agree. Bagging is now one of the most famous ensemble methods in machine learning and has many unique properties of its own. The reasons why bagging works so well in various situations are still a mystery, and there are many theoretical explanations that try to account for it (click here for a survey).

IMO, merging bagging with Bootstrapping (statistics) would be rather like merging maximum entropy with information entropy, which would not be appropriate.

To sum up, bagging has its own unique place in the literature and should have its own page here. -- Jung dalglish 03:12, 7 May 2007 (UTC)[reply]

--- I agree completely with this point; bagging is one of the key approaches to ensemble-based machine learning, and it certainly has its own life entirely apart from the origins of bootstrapping in statistics. From a machine learning point of view, it would be meaningless to move it into a statistics-based article; machine learners would not find it, because they would not look there.

---

I do not agree that they should be merged. Bagging is a sufficiently unique and well-defined method that it warrants its own page. I was looking for bagging as a machine learning method, and would not have immediately thought to look under bootstrapping.

--

Bagging is a specific application of bootstrapping, which is different enough from the usual applications that it deserves its own page: You are using the bootstrap sample of estimators to create another estimator, rather than using it merely to estimate the distribution of estimators. --Olethros 15:10, 6 June 2007 (UTC)[reply]
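To make that distinction concrete, here is a minimal R sketch (the data, seed, and replicate count are arbitrary illustrative choices, not from the article): the same resampling mechanism serves two different purposes.

set.seed(1)
x <- rnorm(50, mean = 5)                 # an observed sample
medians <- replicate(2000, median(sample(x, replace = TRUE)))

# Bootstrapping proper: use the resampled medians to describe the
# sampling distribution of the median, e.g. its standard error.
se_median <- sd(medians)

# Bagging-style use: average the resampled estimates to form a NEW
# estimator, rather than to describe the distribution of the old one.
bagged_median <- mean(medians)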

--

I do not agree that they should be merged. This article provided a quick and readily absorbed reference for me today, and if it had been buried in a lengthy broad discussion I probably would not have found it and benefitted from the information.

--

I think they should not be merged as "bagging" seems a particular specific application that should not appear in a mainstream initial discussion of bootstrapping. A brief description with cross-reference would be more suitable. Melcombe 13:21, 16 July 2007 (UTC)[reply]


-- There seem to be two separate discussions on this page. The first relates to "bootstrap" and "bootstrapping", the second to merging "bagging" into the bootstrap article. Like others, I don't think bagging should be merged in. As others have said, it is one particular application. Tolstoy the Little Black Cat 16:50, 19 August 2007 (UTC)[reply]

--

I don't think Bootstrap aggregating (bagging) should be merged in with Bootstrapping. The current bootstrapping page is simple and general. To merge in a relatively large, highly specific, relatively atypical application (the page on bagging) would confuse those looking for a basic understanding of what statistical bootstrapping is, and the basic bootstrapping information would be mostly irrelevant to the typical person looking for bagging. Each article should certainly link to the other, but I think merging would drastically reduce their value. Glenbarnett 03:18, 27 September 2007 (UTC)[reply]

--

I also disagree about merging these. Bootstrap methods are great for inference, but bootstrap aggregation is a method for ensemble learning - i.e. for aggregating collections of models, for robust development using subsamples of the data. To fold bagging into bootstrapping is to misunderstand the use of bagging. —Preceding unsigned comment added by 71.132.132.11 (talk) 05:32, 27 September 2007 (UTC)[reply]

I also disagree about merging the Bootstrap and the Bootstrap Aggregating (Bagging) pages; the former is a resampling method for estimating the properties of an estimator, while the latter, although it uses bootstrap methodology, is an ensemble learning technique from statistical learning and/or data mining. In my opinion they are related only by the fact that Bagging uses a modified bootstrap technique to achieve its goal.

Gérald Jean —Preceding unsigned comment added by 206.47.217.67 (talk) 20:06, 22 November 2007 (UTC)[reply]

--

I disagree with merging these. The primary use of bootstrapping is in inferential statistics, providing information about the distribution of an estimator - its bias, standard error, confidence intervals, etc. It is not usually used in its own right as an estimation method. It is tempting for beginners to do so - to use the average of bootstrap statistics as an estimator in place of the statistic calculated on the original data. But this is dangerous, as it typically gives about double the bias.

In contrast, bootstrap aggregation is a randomization method, suitable for use with low-bias high-variability tools such as trees - by averaging across trees the variability is reduced. Yes, the mechanism is the same as what beginners often do, but I don't want to encourage that mistake. Yes, the randomization method happens to use the same sampling mechanism as the simple nonparametric bootstrap, but that is accidental. The intent is different - reducing variability by averaging across random draws, vs quantifying the sampling variation of an estimator.

Tim Hesterberg --Tim Hesterberg (talk) 05:30, 6 December 2007 (UTC)[reply]
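A minimal R sketch of the double-bias point (the skewed example and settings are illustrative, not from the comment): the bootstrap bias estimate is mean(theta_star) - theta_hat, so the bias-corrected estimator subtracts it, while the beginner's estimator mean(theta_star) effectively adds it back on.

set.seed(2)
x <- rexp(30)            # a skewed sample; the sample median is slightly biased
theta_hat  <- median(x)
theta_star <- replicate(5000, median(sample(x, replace = TRUE)))

bias_hat  <- mean(theta_star) - theta_hat  # bootstrap estimate of the bias
corrected <- theta_hat - bias_hat          # = 2*theta_hat - mean(theta_star)
naive     <- mean(theta_star)              # carries roughly double the bias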

Can we now agree that merging is not appropriate and remove this from the discussion, or at least from the top of the article page? Melcombe (talk) 11:35, 12 February 2008 (UTC)[reply]

Yeah, I agree with removing this discussion...what exactly are the rules for that? Doctorambient (talk) 00:59, 30 September 2011 (UTC)[reply]

Discussion of contents


mediation


I would like to raise an issue with the mention of "mediation" in the intro material. Should there be a minor subsection for this, explaining what "mediation" means, giving some brief details of how bootstrapping applies, and possibly showing its own example to contrast with the ordinary single-sample case? Melcombe 13:21, 16 July 2007 (UTC)[reply]


pivots


This page needs to mention pivotal statistics, which are critical to bootstrapping. —Preceding unsigned comment added by 129.2.18.171 (talk) 22:18, 11 February 2008 (UTC)[reply]

Now added a new section, but possibly there is a need for a much more technical description of bootstrapping overall in order to provide enough context/information. Such a more formal specification would perhaps also benefit other parts. Melcombe (talk) 11:31, 12 February 2008 (UTC)[reply]


Wild bootstrap


The definition of "wild bootstrap" is incomplete and does not describe the most commonly used method, see http://fmwww.bc.edu/RePEc/es2000/1413.pdf, page 7. —Preceding unsigned comment added by Arnehe (talkcontribs) 10:07, 4 May 2010 (UTC)[reply]


Thought I would mention an error here. The wild bootstrap does not impose symmetry assumptions. Most choices of the error distribution $\nu$ will end up imposing symmetry; however, classic examples do not. For instance, the derivation of Mammen's distribution is structured so as not to impose a symmetry requirement. 2601:240:C480:2E76:7D0F:9900:4A92:2481 (talk) J —Preceding undated comment added 17:37, 30 January 2019 (UTC)[reply]
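For reference, a minimal R sketch of Mammen's two-point distribution (the helper name rmammen is mine): its first three moments are 0, 1, and 1, so it is asymmetric by construction.

# v = -(sqrt(5)-1)/2 with probability (sqrt(5)+1)/(2*sqrt(5)),
# and (sqrt(5)+1)/2 otherwise.
rmammen <- function(n) {
  p <- (sqrt(5) + 1) / (2 * sqrt(5))
  ifelse(runif(n) < p, -(sqrt(5) - 1) / 2, (sqrt(5) + 1) / 2)
}
set.seed(3)
v <- rmammen(1e6)
c(mean(v), var(v), mean(v^3))  # approximately 0, 1, 1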

unclear


I was looking for a definition of the bootstrap method, and couldn't understand the definition given here, in the 2nd sentence: "Bootstrapping is the practice of estimating properties of an estimator (such as its variance) by measuring those properties when sampling from an approximating distribution. One standard choice for an approximating distribution is the empirical distribution of the observed data." Since I do not know what bootstrapping is, I cannot change it myself, so I wrote it here instead. Setreset (talk) 15:27, 8 April 2008 (UTC)[reply]

The lines are fine; they are somewhat complicated, but they mean what they intend to and what they need to. You could have looked up point estimation and point estimators before delving into this topic, but yes, math on Wikipedia seems to be getting crowded with technical jargon. --128.2.48.150 (talk) 13:55, 23 October 2009 (UTC)[reply]
The lines are better than fine. They're a great definition for someone who understands the topic already. Utterly useless for anyone else. —Preceding unsigned comment added by 194.171.7.39 (talk) 13:23, 6 April 2010 (UTC)[reply]
Um, bootstrapping is a fairly advanced topic; that is, it is not for complete novices. You need to already know what an estimator is before coming here, and likely what a distribution is. Ideally those topics should be linked in the sentences at the top of this article, but we can't start every article from a point of absolute zero knowledge. I am going to re-write the informal example to make it clearer. Maybe someone can look at that and then comment here on how we could make the sentences at the top of the article clearer? Doctorambient (talk) 01:04, 30 September 2011 (UTC)[reply]
It's a questionable approach to think that some topics are only for certain (educated) people. Based on this notion, the article on quantum mechanics would be next to unreadable for laymen (but it is not). WP strives to explain even the most complex things in terms which a "novice" would understand. It is okay not to dive too deeply and to leave questions open in the details, but the mere definition should be clear even to a barely educated person, along with at least a vague idea of what is being done when the method is used. Alfe (talk) 08:50, 29 June 2017 (UTC)[reply]

R code


The formatting of the R code is ugly. Besides fixing its formatting, another possible solution is to replace that code with Python code. —Preceding unsigned comment added by 79.43.56.205 (talk) 15:56, 19 July 2009 (UTC)[reply]

I heart Python and all, but R is the de facto standard computing language for statistical research. You could make a case for SAS code instead (bleah!) but what is the case for Python? I'll fix the R code. Doctorambient (talk) 01:06, 30 September 2011 (UTC)[reply]
Wait! What R code? I guess it was deleted. Doctorambient (talk) 01:08, 30 September 2011 (UTC)[reply]

Merger proposal


Bootstrapping (machine learning) seems to cover basically the same topic. I think the little content that article has should be merged here. Calimo (talk) 11:33, 8 September 2009 (UTC)[reply]

Bootstrapping (machine learning) already contains a link to Bootstrap aggregating and that article would be a better candidate for merging to (or from). The topic here is rather different as indicated by the "(statistics)" in the title. Melcombe (talk) 11:15, 24 September 2009 (UTC)[reply]


The techniques are very different: one is for classifier improvement, the other for inference testing. There is a slim link between the two in that the machine learning 'bootstrapping' was in principle derived from the statistical bootstrapping technique, but the connection is very thin. I believe merging would lead to a lot of confusion. So my vote is for keeping the topics separate while providing a link to each topic in the See also section, i.e. maintaining the status quo. --128.2.48.150 (talk) 13:48, 23 October 2009 (UTC)[reply]

Informal description changes


When the example refers to a sample of N heights, and then to using a computer to make a new sample (called a bootstrap sample) that is also of size N: we need to specify, as I understand it, (1) that the bootstrap sample is of size n (where n is not necessarily equal to N) and (2) that the sample is drawn with replacement.
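In R terms, those two points amount to something like the following sketch (the names and sizes are illustrative, not from the article):

set.seed(4)
heights <- rnorm(100, mean = 170, sd = 10)  # original sample, N = 100

# (2) with replacement: the same observation may appear more than once
resample_N <- sample(heights, size = length(heights), replace = TRUE)

# (1) a resample size n that need not equal N (the m-out-of-n bootstrap)
resample_n <- sample(heights, size = 50, replace = TRUE)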

Deriving Confidence intervals from the bootstrap distribution


I added a section on "Deriving confidence intervals from the bootstrap distribution", after finding this part missing from the article even though it is a very useful application of bootstrap methods. However, I left a lot to be extended, so whoever has the knowledge and time, please see to filling in the gaps. Cheers, Talgalili (talk) 10:46, 24 September 2009 (UTC)[reply]

Adèr et al. (2008)?


"Adèr et al.(2008)" is mentioned, but not explained. The same sentence is also in a document at dissertationrecipes.com, but it is hard to say who copies who...193.166.223.5 (talk) 13:02, 21 December 2011 (UTC)[reply]

Additionally, two of the three bullet points following this citation are incorrect. The second bullet, "When the sample size is insufficient for straightforward statistical inference," is misleading. "Straightforward statistical inference" is vague. It would be better to say "asymptotic statistical inference", or "inference based on asymptotic distributions." More importantly, the bootstrap itself only gives correct inference asymptotically. Generally, there is no theoretical reason to expect the bootstrap to have better performance in finite samples than standard procedures. There are some cases where the bootstrap provides an asymptotic refinement, and can be expected to lead to more accurate finite sample inference, but these cases are not described or alluded to. The third bullet point repeats the mistake of the second. — Preceding unsigned comment added by 24.84.24.221 (talk) 05:24, 23 March 2012 (UTC)[reply]

The intended reference might be: Adèr, H. J., Mellenbergh, G. J., & Hand, D.J. (2008). Advising on research methods: A consultant's companion. Huizen, The Netherlands: Johannes van Kessel Publishing.
I have no way of seeing this, so cannot confirm it is the right source. Melcombe (talk) 12:46, 11 June 2012 (UTC)[reply]
I have now found this in an old version of the reference list, so have felt justified in restoring it to the article. The text that is supposed to be supported by this source still needs to be checked. Melcombe (talk) 13:17, 11 June 2012 (UTC)[reply]

Congrats


Congratulations on this accessible but reasonably thorough article. Especially the section with the informal description and practical example should be part of any mathematical page, and possibly every Wikipedia page. — Preceding unsigned comment added by 145.18.30.3 (talk) 15:44, 20 March 2012 (UTC)[reply]

Newcomb's speed of light


The example section contains figures for Newcomb's speed of light measurements. Going to the referenced URL, one finds the following dataset:

Simon Newcomb's measurements of the speed of light, from Stigler
(1977).  The data are recorded as deviations from $24,\!800$
nanoseconds.  Table 3.1 of Bayesian Data Analysis.

28 26 33 24 34 -44 27 16 40 -2
29 22 24 21 25 30 23 29 31 19
24 20 36 32 36 28 25 21 28 29
37 25 28 26 30 32 36 26 30 22
36 23 27 27 28 27 31 27 26 33
26 32 32 24 39 28 24 25 32 25
29 27 28 29 16 23 

The two outliers are clearly -44 and -2 but the rest of the data ranges from 16 to 40. The bar charts accompanying the example don't reflect the above data at all -- the x-axis labels don't match this range. So WTF!? I slapped a disputed sticker onto the mess. linas (talk) 04:20, 11 April 2012 (UTC)[reply]

The plot appears to be the density of the resampled medians, not the raw data. The point is to obtain the sampling distribution of the median. — Preceding unsigned comment added by 108.75.137.21 (talk) 19:13, 5 June 2012 (UTC)[reply]
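That reading can be checked directly from the data quoted above; a minimal R sketch (the replicate count and plot call are my choices):

newcomb <- c(28, 26, 33, 24, 34, -44, 27, 16, 40, -2,
             29, 22, 24, 21, 25, 30, 23, 29, 31, 19,
             24, 20, 36, 32, 36, 28, 25, 21, 28, 29,
             37, 25, 28, 26, 30, 32, 36, 26, 30, 22,
             36, 23, 27, 27, 28, 27, 31, 27, 26, 33,
             26, 32, 32, 24, 39, 28, 24, 25, 32, 25,
             29, 27, 28, 29, 16, 23)
set.seed(5)
boot_medians <- replicate(10000, median(sample(newcomb, replace = TRUE)))
hist(boot_medians)  # tightly concentrated near the sample median,
                    # not spanning the range of the raw data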

Since there is no source for the example in the article it really should be deleted as WP:OR. Melcombe (talk) 12:49, 11 June 2012 (UTC)[reply]

Not so Simple


"very simple methods"

I have to disagree with this statement. See discussion here: http://stats.stackexchange.com/questions/26088/explaining-to-laypeople-why-bootstrapping-works

Compared to deriving an asymptotic sampling distribution, it is not difficult to apply bootstrapping (http://kurt.schmidheiny.name/teaching/bootstrap2up.pdf). However, to a non-statistician it could be fairly difficult.

The main reason I disagree with the statement is that Wikipedia departs from an expert-driven approach to encyclopaedias and the citation refers to a publication that targets experts. 81.226.179.21 (talk) 14:59, 23 September 2012 (UTC)[reply]


Two History Sections?


There's one history section at the top and one history section at the bottom (before "see also"). The bottom contains similar information to the top history section. Weaktofu (talk) 02:32, 7 January 2013 (UTC)[reply]

Semicolons in confidence intervals


Is the notation used in the section Methods for bootstrap confidence intervals standard? I've only seen semicolons used in statistics for parameter values (e.g. $f(x;\theta)$), so this confused me. Wouldn't a comma be more standard, e.g. $(l, u)$ rather than $(l; u)$?

--Quantum7 08:45, 24 September 2014 (UTC)[reply]

also of size N.


Sentence: "...involves taking the original data set of N heights, and, using a computer, sampling from it to form a new sample (called a 'resample' or bootstrap sample) that is also of size N." is unclear. Someone else mentioned it too. If population is 10,000 and you take sample of size N=100, then take samples also of size N (=100), you keep ending up with the original 100 from the original sample... 71.139.163.158 (talk) 05:41, 12 March 2015 (UTC)[reply]

Not if you are sampling with replacement. This means that each time you sample one of your 100 values, you put it back into the 'bag', so there's a chance it could be selected again. In fact, if you have 100 unique values in your original sample from which you perform sampling with replacement, the odds that a resample contains each value exactly once (i.e., reproduces the original sample) are incredibly small; thus so are the odds of computing the same statistic (e.g. the mean) on independent trials. Niubrad (talk) 05:39, 4 August 2015 (UTC)[reply]
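A quick R check of this point (the simulation settings are arbitrary): with replacement, a resample virtually never reproduces all 100 unique values, and on average only about 1 - 1/e (roughly 63%) of them appear at all.

set.seed(6)
n <- 100
distinct <- replicate(10000, length(unique(sample(n, replace = TRUE))))
mean(distinct == n)   # ~0: essentially never are all n values redrawn
mean(distinct) / n    # ~0.632, i.e. about 1 - (1 - 1/n)^n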

Dr. MacKinnon's comment on this article


Dr. MacKinnon has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:


This is an enormous topic, and it would be impossible for one Wikipedia article to do it justice. It does contain quite a bit of good material.

The introduction gives the impression that the bootstrap must involve random sampling with replacement. Of course, this is not true. Counter-examples include the parametric bootstrap and the wild bootstrap, both of which are discussed later.

The following statement in the introduction is either confusing or wrong: "Since we are sampling with replacement, we are likely to get one element repeated, and thus every unique element be used for each resampling." In fact, each resample fails to include a substantial number of the original observations.

The article cites Cameron, Gelbach, and Miller (2008), but it does not do so for the right reason. By far the most important contribution of that paper is the wild cluster bootstrap, which is not mentioned in section 4.7. Incidentally, I would cite Regina Liu (Annals of Statistics, 1988) as well as Wu (1986) for the wild bootstrap.

The smoothed bootstrap section seems rather odd. If you are going to smooth, why report a histogram rather than a kernel density? What I believe is the most common version of the smoothed bootstrap actually draws bootstrap samples from the kernel density.


We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

We believe Dr. MacKinnon has expertise on the topic of this article, since he has published relevant scholarly research:


  • Reference: James G. MacKinnon, 2014. "Wild cluster bootstrap confidence intervals," Working Papers 1329, Queen's University, Department of Economics.

ExpertIdeasBot (talk) 19:00, 30 August 2016 (UTC)[reply]

Small notation error, I think, in Resampling residuals


In section (2) of Resampling residuals, we have:

... add a randomly resampled residual, $\hat{\epsilon}_j$, to the response variable $y_i$. In other words, create synthetic response variables $y_i^* = \hat{y}_i + \hat{\epsilon}_j$ where ...

It looks to me like $y_i$ switches to $\hat{y}_i$. I'm not sure which is right. I can kind of guess, but I won't because I don't know it well enough. (Or I just can't read and it's right as it is...) 203.14.82.101 (talk) 23:53, 6 November 2018 (UTC)[reply]

The equation is correct, i.e. $y_i^* = \hat{y}_i + \hat{\epsilon}_j$. I'll change the $y_i$ to $\hat{y}_i$ in the text. Lucleon (talk) 00:02, 7 November 2018 (UTC)[reply]
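For anyone following along, a minimal R sketch of the procedure as stated (the model and data are illustrative, not from the article): synthetic responses are formed as $y_i^* = \hat{y}_i + \hat{\epsilon}_j$ with $j$ drawn at random.

set.seed(7)
x <- runif(50)
y <- 1 + 2 * x + rnorm(50, sd = 0.3)
fit <- lm(y ~ x)

slopes <- replicate(2000, {
  eps_star <- sample(resid(fit), replace = TRUE)  # resampled residuals
  y_star   <- fitted(fit) + eps_star              # y*_i = yhat_i + eps*_j
  coef(lm(y_star ~ x))[2]                         # refit on synthetic data
})
sd(slopes)  # bootstrap standard error of the slope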
External links

I'm not going to do it now, but I think it's worth flagging. Tal Galili (talk) 13:27, 18 November 2020 (UTC)[reply]

Yes, I noticed this too and have removed all the links. There's probably a place for what some of them were trying to accomplish (serve as tutorials), but there are plenty of peer-reviewed tutorials that can be linked, and there's really no reason not to include such (peer-reviewed) tutorials in the 'Further Reading' section or the article itself instead of an 'External Links'. Most were more or less blogs anyway, which per WP policy don't meet inclusion standards. - Forst8dits (talk) 17:49, 15 August 2024 (UTC)[reply]

Bayesian Bootstrap


Could you please add this: https://towardsdatascience.com/the-bayesian-bootstrap-6ca4a1d45148 2A02:3100:55C4:4900:31F3:8EA7:5768:BBDD (talk) 13:21, 2 April 2023 (UTC)[reply]
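For reference, the method that post describes is the Bayesian bootstrap in the sense of Rubin (1981); a minimal R sketch (the data and statistic are illustrative): each replicate reweights the observations with Dirichlet(1, ..., 1) weights instead of resampling them.

set.seed(8)
x <- rnorm(30)
draws <- replicate(4000, {
  g <- rexp(length(x))   # normalized i.i.d. exponentials give
  w <- g / sum(g)        # Dirichlet(1, ..., 1) weights
  sum(w * x)             # weighted mean for this posterior draw
})
quantile(draws, c(0.025, 0.975))  # posterior interval for the mean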

Bootstrap likelihood


Bootstrap likelihood is not discussed (https://www.jstor.org/stable/2337152). It is linked to empirical likelihood. Bootstrap likelihood uses a nested bootstrap and kernel density estimation. Biggerj1 (talk) 21:50, 18 July 2023 (UTC)[reply]

Asymptotic analysis


I added this section to give some description of the asymptotic properties of the bootstrap that a reader might come across, especially consistency. I will shortly list below some additional thoughts about this section. - Forst8dits (talk) 23:12, 17 August 2024 (UTC)[reply]

Potential future work:

  • Add a theorem like that from Giné and Zinn, or as refined in van der Vaart and Wellner: e.g., let $\mathcal{F}$ be a class of measurable functions with finite envelope function; then $\mathcal{F}$ being $P$-Donsker implies conditional weak convergence of the bootstrap empirical process (a tentative formal statement is sketched after this list). Just want to think a bit about how to describe the relationship between conditional weak convergence and consistency. Also probably add a bit on why showing weak convergence is important (continuous mapping theorem). I think it's important that the Donsker property get a mention here.
  • Probably will merge the last section that works through an example using the CLT into a one-sentence comment in the larger section on consistency. Will probably also try to cite a peer-reviewed text instead of lecture notes when I move it.
  • Maybe add a small paragraph that lists out situations where the empirical bootstrap is inconsistent (or cite references that have such lists).
  • Cases where the limit distribution of the pivot is not standard normal?
  • Maybe add a small subsection discussing what should be shown for confidence intervals. But any technical details I would put in the section that discusses the different ways to construct confidence intervals. — Preceding unsigned comment added by Forst8dits (talkcontribs) 00:33, 18 August 2024 (UTC)[reply]
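As a starting point for the first bullet, one tentative formulation in LaTeX (following van der Vaart and Wellner's treatment; the exact moment conditions on the envelope should be checked against the source before anything goes into the article):

% Tentative statement; hypotheses to be verified against the source.
\textbf{Theorem (Gin\'e--Zinn, sketch).}
Let $\mathcal{F}$ be a class of measurable functions with a finite
envelope function. Then $\mathcal{F}$ is $P$-Donsker if and only if
\[
  \sqrt{n}\,\bigl(\hat{\mathbb{P}}_n - \mathbb{P}_n\bigr)
  \rightsquigarrow \mathbb{G}_P
  \quad \text{in } \ell^\infty(\mathcal{F}),
  \text{ conditionally on the data, in outer probability,}
\]
where $\hat{\mathbb{P}}_n$ is the empirical measure of an i.i.d.
bootstrap sample drawn from the empirical measure $\mathbb{P}_n$, and
$\mathbb{G}_P$ is the $P$-Brownian bridge limit of
$\sqrt{n}\,(\mathbb{P}_n - P)$.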