This obviously implies that LAD strips out from its sum of residuals those that come from outliers. On what is the phrase "safely ignored" based? Who is saying that LAD ignores residuals from outliers? Reading Gorard's published, available academic papers (links below), it seems mistaken. In reading
Since LAD does not do that (it is called absolute deviation), it may be more robust, as the article rightly notes, but why? Who would claim that LAD is more robust because it ignores outliers? What reliable sources advocate removing the outliers in a process they are trying to call LAD? Gorard
It might be that some analytical communities use the term LAD to refer to a model of the median. However, I would not presume that this is the dominant use of the term, since there is much analysis in which LAD simply models the mean, that is, takes the deviations from the mean.
The article about Least absolute deviations (LAD), in the section "Solving methods", omits a simple transformation that casts LAD problems as Linear
Programs (LP), which can in turn be reliably and efficiently solved by general purpose LP packages (for the transformation, see p. 294 of Boyd and
quote: Least absolute deviations is robust in that it is resistant to outliers in the data. This may be helpful in studies where outliers may be safely and effectively ignored. If it is important to pay attention to any and all outliers, the method of least squares is a better choice.
I have never contributed to
Knowledge before, so I don't know the "adequate" way of doing this. I don't know if I have to ask permission from someone, so I decided to post here first and see if there was any feedback. I would be glad to write this myself.
From what I understand, least absolute deviation is equivalent to a special case of quantile regression (with 0.5-th quantile). At the moment, there is no reference to the "Quantile
Regression" article (nor is there a reference from the latter).
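A minimal sketch of that equivalence (the `pinball` helper and the sample values are my own illustration): the quantile-regression "check" loss at the 0.5-th quantile is exactly half the absolute loss, so the two criteria have the same minimizers.

```python
import numpy as np

def pinball(u, tau):
    # Quantile-regression "check" loss: rho_tau(u) = u * (tau - 1{u < 0}).
    return u * (tau - (u < 0))

u = np.linspace(-3, 3, 7)
# At tau = 0.5 the check loss equals half the absolute loss, so
# minimizing it is equivalent to least absolute deviations.
print(np.allclose(pinball(u, 0.5), 0.5 * np.abs(u)))  # True
```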
). Most people would be better served by doing this simple and intuitive transformation and then applying one of the many Linear Programming packages available, instead of trying to code their own solution based on Barrodale and Roberts' paper.
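For concreteness, here is a minimal sketch of the transformation using SciPy's `linprog` (the function and variable names are my own illustration; the epigraph formulation follows Boyd and Vandenberghe's book):

```python
# Hedged sketch: casting least absolute deviations (LAD) as a linear
# program via the standard epigraph trick: introduce t_i >= |y_i - X_i @ beta|
# and minimize sum_i t_i.
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    n, p = X.shape
    # Decision vector z = [beta (p entries), t (n entries)]; objective = sum of t.
    c = np.concatenate([np.zeros(p), np.ones(n)])
    I = np.eye(n)
    #  X beta - t <= y   and   -X beta - t <= -y   together encode |y - X beta| <= t.
    A_ub = np.block([[X, -I], [-X, -I]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n  # beta free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Intercept-only example (made-up data): the LAD fit is the sample
# median, so the outlier 100 does not drag the estimate the way it
# would under least squares.
X = np.ones((5, 1))
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
print(lad_fit(X, y))  # close to [3.] (the median)
```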
suggests LAD may be more robust because it gives outliers the same importance as all other observations. OLS might be less robust not because it gives outliers equal importance, but because it gives outliers inflated weight.
Gorard or any textbook, when OLS squares the errors, it is giving more weight to the observations that have greater errors (if the error is 3, the square is 9); that is, it is giving more weight to outliers.
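A toy numeric illustration of this point (the residual values here are made up):

```python
residuals = [1, 2, 3, 30]  # hypothetical residuals; 30 is an outlier
abs_sum = sum(abs(r) for r in residuals)  # LAD-style total: 36
sq_sum = sum(r * r for r in residuals)    # OLS-style total: 914
print(abs_sum, sq_sum)
# The outlier contributes 30/36 ≈ 83% of the absolute sum, but
# 900/914 ≈ 98% of the squared sum: squaring inflates its weight.
```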
Finally, while the reduction to linear programming is excellent (clean and simple), it is not clear why it is so different from the reduction to LP in the "Quantile
Regression" article.
Moreover, I am surprised that the article does not mention that what you are actually modelling with LAD is the median (as opposed to the mean, as in least squares). Am I missing something?
Based on one reading of Gorard (author of hundreds of papers and over a dozen books), it seems hard to believe that the following is reliable, educated and correct:
on
Knowledge. If you would like to participate, please visit the project page, where you can join
Please help fix the broken anchors. You can remove this template after fixing the problems.
into the section on "Variations, extensions, specializations". I hope that is o.k.
Does any reliable source claim that LAD "ignores" outliers? See Gorard's papers.
This article links to one or more target anchors that no longer exist.
Before reading this discussion I had already put the relation to
Vandenberghe's book "Convex
Optimization", freely available at
http://www.econ.uiuc.edu/~roger/research/rq/QRJEP.pdf
http://www.leeds.ac.uk/educol/documents/00003759.htm
, a collaborative effort to improve the coverage of
Dependent and independent variables#Statistics
Reference to quantile regression article?
http://www.stanford.edu/~boyd/cvxbook/
Knowledge talk:WikiProject Statistics
. The anchor (#Statistics) has been
This article is within the scope of
It is of interest to the following
This article has been rated as
links to a specific web page:
20:47, 21 November 2011 (UTC)
and see a list of open tasks.
(This contrib copied from
13:46, 13 July 2014 (UTC)
13:38, 13 July 2014 (UTC)
06:21, 9 March 2010 (UTC)
09:40, 9 March 2010 (UTC)
Least absolute deviations
15:58, 7 May 2021 (UTC)
deleted by other users
WikiProject Statistics
This article is rated
quantile regression
content assessment
in most browsers.
Reporting errors
importance scale
on Knowledge's
case-sensitive
Mid-importance
the discussion
84.52.37.109
Anchors are
WikiProjects
Start-class
Statistics
statistics
Gpfreitas
] Anchor
Kitpuppy
Kitpuppy
Melcombe
before.
on the
Delius
scale.
talk
talk
talk
talk
talk
talk
Tip:
)
)
)
)
)
--
)
)
(
(
(
(
(
(
.
: