will drop all incoming packets once it reaches that limit. Each queue is serviced according to how many packets it is allowed to send per turn. Once that limit is met, the network OS holds the remaining packets of the current queue and services the next queue until that queue is empty or reaches its packet limit. If a queue is empty, the network OS skips it and services the next queue.
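The per-queue service limit described above can be sketched with a small simulation. The function name, the packet format, and the per-queue budgets are illustrative assumptions, not part of any actual network OS:

```python
from collections import deque

def custom_queue_service(queues, byte_counts):
    """Serve each sub-queue in turn, sending packets until that queue's
    configured byte count is used up for the cycle; empty queues are
    skipped. Hypothetical sketch of custom-queuing behaviour."""
    sent = []
    while any(queues):                  # cycle until every queue is drained
        for q, limit in zip(queues, byte_counts):
            budget = limit
            while q and budget > 0:
                name, size = q.popleft()  # packet = (name, size_in_bytes)
                budget -= size
                sent.append(name)
    return sent
```

For example, with two queues given 100-byte budgets, two 60-byte packets in the first queue both go out in its turn (the budget is only checked before each send), then the second queue is serviced.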
lower-priority queue and processes the data in the higher-priority queue first. The network OS does not consider how long lower-priority queues must wait for their turn, because it always empties queues from the highest priority to the lowest before moving on. Within each queue, packets are forwarded on a First-In-First-Out basis.
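This strict-priority behaviour can be illustrated with a small event simulation; the function name and the one-packet-per-time-unit model are hypothetical simplifications:

```python
import heapq

def serve_strict_priority(arrivals):
    """arrivals: list of (arrival_time, priority, name); a lower priority
    number is more urgent. One packet is transmitted per time unit, and a
    newly arrived urgent packet jumps ahead of everything less urgent."""
    order = []
    pending = sorted(arrivals)          # process arrivals in time order
    heap, t, i = [], 0, 0
    while i < len(pending) or heap:
        while i < len(pending) and pending[i][0] <= t:
            a, p, name = pending[i]
            heapq.heappush(heap, (p, a, name))
            i += 1
        if heap:                        # always send the most urgent packet
            order.append(heapq.heappop(heap)[2])
        t += 1
    return order
```

Note that a low-priority packet already queued at time 0 is still sent last if more urgent traffic keeps arriving.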
of data at that moment, but it takes the resource back after the transfer completes. "Weighted" means the scheduler assigns a weight to each type of packet; based on the weight, it determines how to enqueue and serve packets. Usually, each packet is weighted according to the IP Precedence field in its IP header.
First-in, First-out processes are taken out of the queue in the order in which they were put into it. With this method, every process is treated equally. If two processes have different priorities and the lower-priority process enters the queue first, the lower-priority process is executed first. This
The weighted fair queue uses a max-min fair-share algorithm to distribute packets. The min fair share means the network OS distributes an equal minimum share of resources to each type of packet; the max fair share means the network OS provides more resources to packets that need to transfer a large amount
The round-robin scheduling method gives each process the same allotted time slice and cycles through the processes. The method depends heavily on the length of the time slice allotted to each process: too short a slice fragments the processes, while too long a slice increases the time each process must wait to be
The priority queue is divided into four sub-queues with different priorities. Data in each queue are served only when all higher-priority queues are empty. If data arrive in an empty higher-priority queue while the network OS is transferring data from a lower-priority queue, the network OS holds the data of the
The shortest remaining time method tries to predict the processing time of each process and places the processes into the queue from the smallest to the largest predicted processing time. It estimates each processing time from prior history records. As a result, its performance is not stable, but it generally achieves a shorter process
The custom queue is divided into 17 sub-queues. The first queue, queue 0, is reserved for the network OS to transmit system packets; the other 16 queues are for user-defined packets. Users can define various important packet types and assign them to the queues. Each queue has a limited size and it
In networking, packets are the basic unit of scheduling. Many different types of packets travel through the network core every day, and they are treated very differently; for example, voice and video packets have higher priority than ordinary data packets. In order to manage and distribute
In this mode, packets are taken out of the queue in the order in which they arrived in it. Every packet is treated with the same priority. If a large packet A arrives before a small packet B, B still has to wait until A is completely served. If a system treats every packet the same, users can
The multilevel queue scheduling method employs several queues, and each queue may have its own scheduling algorithm. Multilevel queue scheduling is more complex than other methods, but it gives the OS the flexibility to meet different response-time requirements in complicated situations.
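As a sketch of the idea, the example below uses three fixed levels (a hypothetical system/interactive/batch split) and drains them in priority order; in a real scheduler each level could run its own algorithm, such as round-robin for the interactive level:

```python
from collections import deque

def multilevel_schedule(system_q, interactive_q, batch_q):
    """Serve three fixed levels in priority order: system jobs first,
    then interactive, then batch. Each level is drained FIFO here,
    though each level may use its own scheduling algorithm."""
    order = []
    for level in (system_q, interactive_q, batch_q):
        while level:
            order.append(level.popleft())
    return order
```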
The fixed-priority pre-emptive scheduling method assigns a different priority to each process based on its processing time and arranges the processes in the queue in order of priority. The CPU serves processes from higher to lower priority, and processes with the same priority are served
Essentially, a queue is a collection in which data are added at the rear position and removed from the front position. There are many different types of queues, and the ways they operate may be totally different. Operating systems use First-Come, First-Served queues, Shortest remaining time,
(OS), but may also be applied to scheduling inside networking devices. The purpose of scheduling is to ensure that resources are distributed fairly and effectively; this, in turn, improves the performance of the system.
to run a program. Input queues are mainly used in
Operating System Scheduling, which is a technique for distributing resources among processes. Input queues apply not only to
First-Come, First-Served. The CPU temporarily stops serving a low-priority process when a higher-priority process enters the queue.
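The ordering rule above — higher priority first, ties broken First-Come, First-Served — can be sketched in a few lines; the function name and tuple layout are illustrative assumptions:

```python
def fixed_priority_order(processes):
    """processes: list of (name, priority, arrival_index). A higher
    priority value is served first; processes with equal priority keep
    their First-Come, First-Served arrival order."""
    return [name for name, _, _ in
            sorted(processes, key=lambda p: (-p[1], p[2]))]
```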
and multilevel queue scheduling. Network devices use First-In-First-Out queues, Weighted fair queues, Priority queues, and Custom queues.
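The rear-in, front-out discipline described above can be shown with a standard double-ended queue, used here purely as an illustration:

```python
from collections import deque

# A queue: items are added at the rear and removed from the front.
q = deque()
q.append("first")     # enqueue at the rear
q.append("second")
front = q.popleft()   # dequeue from the front: the oldest item leaves first
```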
approach may not be ideal if different processes have different priorities, especially when the processes are long-running.
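A minimal sketch of this behaviour (the function name and process tuples are hypothetical): a long job that arrives first delays every job behind it, regardless of priority.

```python
from collections import deque

def fifo_schedule(processes):
    """processes: list of (name, burst_time) in arrival order, all ready
    at time 0. Each process runs to completion before the next starts;
    returns each process's completion time."""
    q = deque(processes)
    t, completion = 0, {}
    while q:
        name, burst = q.popleft()
        t += burst
        completion[name] = t
    return completion
```

Here a 1-unit job queued behind a 10-unit job finishes at time 11 even though it needed almost no CPU time.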
(CPU). CPU scheduling manages process states and uses the input queue to decide when each process will next be executed.
packets effectively, network devices also use an input queue to determine which packet will be transmitted first.
This article is about queues of processes waiting for CPU time. For queues of batch jobs waiting to run, see
In operating systems, processes are loaded into memory and wait for their turn to be executed by the
Fair allocation = (resource capacity – resource already allocated) / number of packets
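Applied iteratively, this formula yields a max-min fair allocation: flows that demand less than the current fair share keep only what they need, and the surplus is redistributed among the rest. A minimal sketch (the function name and the demand values are illustrative):

```python
def max_min_fair_share(capacity, demands):
    """Iterative max-min fair allocation. Each round computes
    fair share = remaining capacity / number of unsatisfied flows;
    flows demanding no more than that share are granted their demand,
    and the leftover capacity is redistributed to the others."""
    alloc = [0.0] * len(demands)
    remaining = list(range(len(demands)))
    cap = float(capacity)
    while remaining:
        share = cap / len(remaining)
        satisfied = [i for i in remaining if demands[i] <= share]
        if not satisfied:               # everyone wants more: split evenly
            for i in remaining:
                alloc[i] = share
            break
        for i in satisfied:             # grant small demands in full
            alloc[i] = demands[i]
            cap -= demands[i]
        remaining = [i for i in remaining if demands[i] > share]
    return alloc
```

For instance, with capacity 10 and demands 2, 4, and 10, the small flows get exactly what they asked for and the large flow receives the remainder.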
executed. Choosing the right time slice is the foundation of this method.
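The cycling behaviour can be sketched as follows (function name and process tuples are illustrative); each process runs for at most one time slice, then rejoins the back of the queue if unfinished:

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst_time). Returns the execution
    timeline as (name, units_run) segments; unfinished processes go
    to the back of the queue with their remaining time."""
    q = deque(processes)
    timeline = []
    while q:
        name, left = q.popleft()
        run = min(quantum, left)
        timeline.append((name, run))
        if left > run:
            q.append((name, left - run))
    return timeline
```

With a slice of 2, a 5-unit process interleaves with a 2-unit one instead of monopolising the CPU.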
experience delays in delay-sensitive traffic, such as voice packets.
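This head-of-line delay can be made concrete with a small sketch (function name, packet sizes, and link rate are illustrative): a small voice packet queued behind a large transfer waits for the entire large packet.

```python
from collections import deque

def fifo_transmit(packets, rate):
    """packets: list of (name, size_in_bytes); rate: bytes per time unit.
    Packets leave strictly in arrival order; returns each packet's
    finish time on the link."""
    q, t, finish = deque(packets), 0.0, {}
    while q:
        name, size = q.popleft()
        t += size / rate              # the link is busy for the whole packet
        finish[name] = t
    return finish
```

At 100 bytes per time unit, a 100-byte voice packet behind a 1500-byte packet finishes at time 16 instead of time 1.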
waiting time than First-Come, First-Served.
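The history-based prediction can be sketched with the textbook exponential-average burst estimator; the parameter values and function names here are illustrative assumptions:

```python
def predict_burst(history, alpha=0.5, initial=10.0):
    """Exponential-average burst prediction:
    tau_next = alpha * last_actual + (1 - alpha) * tau_previous."""
    tau = initial
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
    return tau

def srt_order(processes, histories):
    """Order processes by predicted processing time, shortest first."""
    preds = {p: predict_burst(histories[p]) for p in processes}
    return sorted(processes, key=lambda p: preds[p])
```

A process whose recent bursts were short is predicted short and scheduled first, even if an older estimate was large.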
Fixed-priority pre-emptive scheduling,
Fixed-priority pre-emptive scheduling
that are waiting to be brought into
First in, first out queue (FIFO)
is a collection of processes in
Multilevel queue scheduling
Weighted fair queue (WFQ)
Shortest remaining time
central processing unit
Round-robin scheduling
round-robin scheduling
Priority queue (PQ)
First-in, First-out
Custom queue (CQ)
operating systems
Operating system
computer science
Message queue
input queue
Networking
Job queue.
See also
storage
memory
, an
In