United States Patent 7,158,964
Wolrich, et al.
January 2, 2007




Queue management



Abstract

A method of managing queue entries includes storing addresses in a first
     queue entry as a linked list, each of the stored addresses including a
     cell count, retrieving a first address from the first queue entry, and
     modifying the linked list of addresses of the first queue entry based on
     the cell count of the first address retrieved.


 
Inventors: Wolrich; Gilbert (Framingham, MA), Rosenbluth; Mark B. (Uxbridge, MA), Bernstein; Debra (Sudbury, MA), Hooper; Donald F. (Shrewsbury, MA)

Assignee: Intel Corporation (Santa Clara, CA)

Appl. No.: 10/020,815

Filed: December 12, 2001





  
Current U.S. Class: 1/1; 707/999.003; 707/E17.007; 707/E17.032

Current International Class: G06F 17/30 (20060101)

Field of Search: 707/1,2,6,8,10,3,200; 370/412,390,395.7,398; 709/226,224

References Cited [Referenced By]
U.S. Patent Documents
 
 
 
3373408
March 1968
Ling

3478322
November 1969
Evans

3792441
February 1974
Wymore et al.

3940745
February 1976
Sajeva

4130890
December 1978
Adam

4400770
August 1983
Chan et al.

4514807
April 1985
Nogi

4523272
June 1985
Fukunaga et al.

4745544
May 1988
Renner et al.

4866664
September 1989
Burkhardt, Jr. et al.

5140685
August 1992
Sipple et al.

5142683
August 1992
Burkhardt, Jr. et al.

5155831
October 1992
Emma et al.

5155854
October 1992
Flynn et al.

5168555
December 1992
Byers et al.

5173897
December 1992
Schrodi et al.

5185861
February 1993
Valencia

5255239
October 1993
Taborn et al.

5263169
November 1993
Genusov et al.

5268900
December 1993
Hluchyj et al.

5347648
September 1994
Stamm et al.

5367678
November 1994
Lee et al.

5390329
February 1995
Gaertner et al.

5392391
February 1995
Caulk, Jr. et al.

5392411
February 1995
Ozaki

5392412
February 1995
McKenna

5404464
April 1995
Bennett

5404482
April 1995
Stamm et al.

5432918
July 1995
Stamm

5448702
September 1995
Garcia, Jr. et al.

5450351
September 1995
Heddes

5452437
September 1995
Richey et al.

5459842
October 1995
Begun et al.

5463625
October 1995
Yasrebi

5467452
November 1995
Blum et al.

5517648
May 1996
Bertone et al.

5542070
July 1996
LeBlanc et al.

5542088
July 1996
Jennings, Jr. et al.

5544236
August 1996
Andruska et al.

5550816
August 1996
Hardwick et al.

5557766
September 1996
Takiguchi et al.

5568617
October 1996
Kametani

5574922
November 1996
James

5592622
January 1997
Isfeld et al.

5613071
March 1997
Rankin et al.

5613136
March 1997
Casavant et al.

5623489
April 1997
Cotton et al.

5627829
May 1997
Gleeson et al.

5630130
May 1997
Perotto et al.

5634015
May 1997
Chang et al.

5644623
July 1997
Gulledge

5649092
July 1997
Price et al.

5649157
July 1997
Williams

5659687
August 1997
Kim et al.

5671446
September 1997
Rakity et al.

5680641
October 1997
Sidman

5684962
November 1997
Black et al.

5689566
November 1997
Nguyen

5699537
December 1997
Sharangpani et al.

5717898
February 1998
Kagan et al.

5721870
February 1998
Matsumoto

5742587
April 1998
Zornig et al.

5742782
April 1998
Ito et al.

5742822
April 1998
Motomura

5745913
April 1998
Pattin et al.

5751987
May 1998
Mahanti-Shetti et al.

5761507
June 1998
Govett

5761522
June 1998
Hisanaga et al.

5781774
July 1998
Krick

5784649
July 1998
Begur et al.

5784712
July 1998
Byers et al.

5796413
August 1998
Shipp et al.

5797043
August 1998
Lewis et al.

5809235
September 1998
Sharma et al.

5809530
September 1998
Samra et al.

5812868
September 1998
Moyer et al.

5828746
October 1998
Ardon

5828863
October 1998
Barrett et al.

5832215
November 1998
Kato et al.

5835755
November 1998
Stellwagen, Jr.

5850395
December 1998
Hauser et al.

5854922
December 1998
Gravenstein et al.

5860158
January 1999
Pai et al.

5872769
February 1999
Caldara et al.

5873089
February 1999
Regache

5886992
March 1999
Raatikainen et al.

5887134
March 1999
Ebrahim

5890208
March 1999
Kwon

5892979
April 1999
Shiraki et al.

5893162
April 1999
Lau et al.

5905876
May 1999
Pawlowski et al.

5905889
May 1999
Wilhelm, Jr.

5915123
June 1999
Mirsky et al.

5937187
August 1999
Kosche et al.

5938736
August 1999
Muller et al.

5940612
August 1999
Brady et al.

5940866
August 1999
Chisholm et al.

5946487
August 1999
Dangelo

5948081
September 1999
Foster

5958031
September 1999
Kim

5961628
October 1999
Nguyen et al.

5970013
October 1999
Fischer et al.

5974518
October 1999
Nogradi

5978838
November 1999
Mohamed et al.

5983274
November 1999
Hyder et al.

6012151
January 2000
Mano

6014729
January 2000
Lannan et al.

6023742
February 2000
Ebeling et al.

6058168
May 2000
Braband

6067585
May 2000
Hoang

6070231
May 2000
Ottinger

6072781
June 2000
Feeney et al.

6073215
June 2000
Snyder

6079008
June 2000
Clery, III

6085215
July 2000
Ramakrishnan et al.

6085294
July 2000
Van Doren et al.

6092127
July 2000
Tausheck

6092158
July 2000
Harriman et al.

6112016
August 2000
MacWilliams et al.

6134665
October 2000
Klein et al.

6141689
October 2000
Yasrebi

6141765
October 2000
Sherman

6144669
November 2000
Williams et al.

6145054
November 2000
Mehrotra et al.

6157955
December 2000
Narad et al.

6160562
December 2000
Chin et al.

6182177
January 2001
Harriman

6195676
February 2001
Spix et al.

6199133
March 2001
Schnell

6201807
March 2001
Prasanna

6212542
April 2001
Kahle et al.

6212611
April 2001
Nizar et al.

6216220
April 2001
Hwang

6223207
April 2001
Lucovsky et al.

6223238
April 2001
Meyer et al.

6223279
April 2001
Nishimura et al.

6247025
June 2001
Bacon

6256713
July 2001
Audityan et al.

6272616
August 2001
Fernando et al.

6275505
August 2001
O'Loughlin et al.

6279113
August 2001
Vaidya

6289011
September 2001
Seo et al.

6298370
October 2001
Tang et al.

6307789
October 2001
Wolrich et al.

6320861
November 2001
Adam et al.

6324624
November 2001
Wolrich et al.

6345334
February 2002
Nakagawa et al.

6347341
February 2002
Glassen et al.

6347344
February 2002
Baker et al.

6351474
February 2002
Robinett et al.

6356962
March 2002
Kasper et al.

6359911
March 2002
Movshovich et al.

6360262
March 2002
Guenthner et al.

6373848
April 2002
Allison et al.

6385658
May 2002
Harter et al.

6389449
May 2002
Nemirovsky et al.

6393483
May 2002
Latif et al.

6393531
May 2002
Novak et al.

6415338
July 2002
Habot

6426940
July 2002
Seo et al.

6426957
July 2002
Hauser et al.

6427196
July 2002
Adiletta et al.

6430626
August 2002
Witkowski et al.

6434145
August 2002
Opsasnick et al.

6438651
August 2002
Slane

6463072
October 2002
Wolrich et al.

6522188
February 2003
Poole

6523060
February 2003
Kao

6532509
March 2003
Wolrich et al.

6539024
March 2003
Janoska et al.

6552826
April 2003
Adler et al.

6560667
May 2003
Wolrich et al.

6577542
June 2003
Wolrich et al.

6584522
June 2003
Wolrich et al.

6587906
July 2003
Wolrich et al.

6606704
August 2003
Adiletta et al.

6625654
September 2003
Wolrich et al.

6631430
October 2003
Wolrich et al.

6631462
October 2003
Wolrich et al.

6658546
December 2003
Calvignac et al.

6661794
December 2003
Wolrich et al.

6667920
December 2003
Wolrich et al.

6668317
December 2003
Bernstein et al.

6681300
January 2004
Wolrich et al.

6684303
January 2004
LaBerge

6687247
February 2004
Wilford et al.

6694380
February 2004
Wolrich et al.

6724721
April 2004
Cheriton

6728845
April 2004
Adiletta et al.

6731596
May 2004
Chiang et al.

6754223
June 2004
Lussier et al.

6757791
June 2004
O'Grady et al.

6768717
July 2004
Reynolds et al.

6779084
August 2004
Wolrich et al.

6791989
September 2004
Steinmetz et al.

6795447
September 2004
Kadambi et al.

6804239
October 2004
Lussier et al.

6810426
October 2004
Mysore et al.

6813249
November 2004
Lauffenburger et al.

6816498
November 2004
Viswanath

6822958
November 2004
Branth et al.

6822959
November 2004
Galbi et al.

6842457
January 2005
Malalur

6850999
February 2005
Mak et al.

6868087
March 2005
Agarwala et al.

6876561
April 2005
Adiletta et al.

6888830
May 2005
Snyder II et al.

6895457
May 2005
Wolrich et al.

6975637
December 2005
Lenell

2001/0014100
August 2001
Abe et al.

2002/0131443
September 2002
Robinett et al.

2002/0144006
October 2002
Cranston et al.

2002/0196778
December 2002
Colmant et al.

2003/0041216
February 2003
Rosenbluth et al.

2003/0046488
March 2003
Rosenbluth et al.

2003/0110166
June 2003
Wolrich et al.

2003/0115347
June 2003
Wolrich et al.

2003/0115426
June 2003
Rosenbluth et al.

2003/0131022
July 2003
Wolrich et al.

2003/0131198
July 2003
Wolrich et al.

2003/0140196
July 2003
Wolrich et al.

2003/0147409
August 2003
Wolrich et al.

2004/0039895
February 2004
Wolrich et al.

2004/0054880
March 2004
Bernstein et al.

2004/0071152
April 2004
Wolrich et al.

2004/0073778
April 2004
Adiletta et al.

2004/0098496
May 2004
Wolrich et al.

2004/0179533
September 2004
Donovan



 Foreign Patent Documents
 
 
 
0 379 709
Aug., 1990
EP

0 418 447
Mar., 1991
EP

0 464 715
Jan., 1992
EP

0 633 678
Jan., 1995
EP

0 745 933
Dec., 1996
EP

0 760 501
Mar., 1997
EP

0 809 180
Nov., 1997
EP

59111533
Jun., 1984
JP

WO 94/15287
Jul., 1994
WO

WO 97/38372
Oct., 1997
WO

WO 01/15718
Mar., 2001
WO

WO 01/16769
Mar., 2001
WO

WO 01/16770
Mar., 2001
WO

WO 01/16782
Mar., 2001
WO

WO 01/48596
Jul., 2001
WO

WO 01/48606
Jul., 2001
WO

WO 01/48619
Jul., 2001
WO

WO 01/50247
Jul., 2001
WO

WO 01/50679
Jul., 2001
WO



   
 Other References 

Byrd et al., "Multithread Processor Architectures," IEEE Spectrum, vol. 32, No. 8, New York, Aug. 1, 1995, pp. 38-46. cited by other
.
Doyle et al., Microsoft Press Computer Dictionary, 2nd ed., Microsoft Press, Redmond, Washington, USA, 1994, p. 326. cited by other
.
U.S. Appl. No. 09/475,614, filed Dec. 30, 1999, Wolrich et al. cited by other
.
U.S. Appl. No. 09/473,571, filed Dec. 28, 1999, Wolrich et al. cited by other
.
Fillo et al., "The M-Machine Multicomputer," IEEE Proceedings of MICRO-28, 1995, pp. 146-156. cited by other
.
Gomez et al., "Efficient Multithreaded User-Space Transport for Network Computing: Design and Test of the TRAP Protocol," Journal of Parallel and Distributed Computing, Academic Press, Duluth, Minnesota, USA, vol. 40, No. 1, Jan. 10, 1997, pp.
103-117. cited by other
.
Haug et al., "Reconfigurable hardware as shared resource for parallel threads," IEEE Symposium on FPGAs for Custom Computing Machines, 1998. cited by other
.
Hauser et al., "Garp: a MIPS processor with a reconfigurable coprocessor," Proceedings of the 5th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, 1997. cited by other
.
Hyde, R., "Overview of Memory Management," Byte, vol. 13, No. 4, 1998, pp. 219-225. cited by other
.
Litch et al., "StrongARMing Portable Communications," IEEE Micro, 1998, pp. 48-55. cited by other
.
Schmidt et al., "The Performance of Alternative Threading Architectures for Parallel Communication Subsystems," Internet Document, Online!, Nov. 13, 1998. cited by other
.
Thistle et al., "A Processor Architecture for Horizon," IEEE, 1998, pp. 35-41. cited by other
.
Tremblay et al., "A Three Dimensional Register File for Superscalar Processors," IEEE Proceedings of the 28th Annual Hawaii International Conference on System Sciences, 1995, pp. 191-201. cited by other
.
Trimberger et al., "A time-multiplexed FPGA," Proceedings of the 5th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, 1998. cited by other
.
Turner et al., "Design of a High Performance Active Router," Internet Document, Online, Mar. 18, 1999. cited by other
.
Vibhatavanijt et al., "Simultaneous Multithreading-Based Routers," Proceedings of the 2000 International Conference of Parallel Processing, Toronto, Ontario, Canada, Aug. 21-24, 2000, pp. 362-359. cited by other
.
Wazlowski et al., "PRISM-II computer and architecture," IEEE Proceedings, Workshop on FPGAs for Custom Computing Machines, 1993. cited by other
.
Adiletta, et al., "The next generation of the Intel IXP Network Processors", Intel Technology Journal, Network Processors, vol. 6, issue 3, published Aug. 15, 2002, pp. 6-18. cited by other
.
Brewer, et al., "Remote Queues: Exposing Message Queues for Optimization and Atomicity", SPAA '95 Santa Barbara, CA, pp. 1-13. cited by other
.
Buyuktosunoglu, A., et al., "Tradeoffs in Power-Efficient Issue Queue Design", ISLPED '02, ACM, Aug. 2002, 6 pages. cited by other
.
Dandamudi, S., "Multiprocessors", IEEE Computer, Mar. 1997, pp. 82-89. cited by other
.
Hendler, D., et al., "Work Dealing", SPAA '02, ACM, Aug. 2002, pp. 164-172. cited by other
.
Jonkers, H., "Queueing Models of Shared-Memory Parallel Applications", Computer and Telecommunications Systems Performance Engineering, Pentech Press, London, 1994, 13 pages. cited by other
.
Kornaros, et al., "A Fully-Programmable Memory Management System Optimizing Queue Handling at Multi Gigabit Rates", ACM, Jun. 2-6, 2003, pp. 54-59. cited by other
.
Kumar, S., et al., "A Scalable, Cache-Based Queue Management Subsystem for Network Processors", no date, pp. 1-7. cited by other
.
Lymar, T., et al., "Data Streams Organization in Query Executor for Parallel DBMS", no date, 4 pages. cited by other
.
McLuckie, L., et al., "Using the RapidIO Messaging Unit on PowerQUICC III", Freescale Semiconductor, Inc., 2004 Rev. 1, pp. 1-19. cited by other
.
Michael, M., "Scalable Lock-Free Dynamic Memory Allocation", PLDI '04, ACM, Jun. 2004, pp. 1-12. cited by other
.
Pan, H., et al., "Heads and Tails: A Variable-Length Instruction Format Supporting Parallel Fetch and Decode", CASES '01, Nov. 16-17, 2001, 8 pages. cited by other
.
Scott, M., "Non-Blocking Timeout in Scalable Queue-Based Spin Locks", PODC '02, ACM, Jul. 2002, pp. 31-40. cited by other.  
  Primary Examiner: LeRoux; E P


  Attorney, Agent or Firm: Fish & Richardson P.C.



Claims  

The invention claimed is:

 1.  A method of managing queues, comprising: storing addresses corresponding to a plurality of data buffers having a corresponding number of cells in a queue configured as a linked list comprising a plurality of Buffer Descriptor Address (BDA) entries, wherein a first BDA entry points to a subsequent BDA entry and to a final BDA entry, each of the plurality of BDA entries of the linked list includes one of the data buffer addresses and an associated cell count that indicates the corresponding number of cells contained in the corresponding data buffer;  retrieving a first address from the queue;  and modifying the linked list of addresses of the queue based on the cell count of the first address retrieved, including decrementing the cell count of the first address each time the first address is retrieved.


 2.  The method of claim 1, further comprising: determining the cell count is zero.


 3.  The method of claim 2, wherein storing addresses further comprises: setting the first address as the head address of the queue;  and linking a second address to the first address of the queue.


 4.  The method of claim 3, wherein linking the second address to the first address further comprises: setting the second address as a tail address of the queue.


 5.  The method of claim 4, further comprising: linking a third address to the queue by storing the third address in the location indicated by the tail address.


 6.  The method of claim 4, further comprising: incrementing a queue count indicating the number of BDA entries included in the queue each time an address is linked to the queue.


 7.  The method of claim 3, wherein the queue is stored as part of a queue array having a plurality of linked queues.


 8.  An article comprising a machine-readable medium that stores machine-executable instructions for managing a queue array, the instructions causing a machine to: store addresses corresponding to a plurality of data buffers having a
corresponding number of cells in a queue configured as a linked list comprising a plurality of Buffer Descriptor Address (BDA) entries, wherein a first BDA entry points to a subsequent BDA entry and to a final BDA entry, each of the plurality of BDA
entries of the linked list includes one of the data buffer addresses and an associated cell count that indicates the corresponding number of cells contained in the corresponding data buffer;  retrieve a first address from the queue;  and modify the
linked list of addresses of the queue based on the cell count of the first address retrieved, including decrementing the cell count of the first address each time the first address is retrieved.


 9.  The article of claim 8, further comprising instructions causing a machine to: determine the cell count is zero.


 10.  The article of claim 9, wherein storing further comprises instructions causing a machine to: set the first address as the head address of the queue;  and link a second address to the first address of the queue.


 11.  The article of claim 10, wherein linking comprises setting the second address as a tail address of the queue.


 12.  The article of claim 11, further comprising instructions causing a machine to: link a third address to the queue by storing the third address in the location indicated by the tail address.


 13.  The article of claim 11, further comprising instructions causing a machine to: increment a queue count indicating the number of BDA entries included in the queue each time an address is linked to the queue.


 14.  The article of claim 10, wherein the queue is stored as part of a queue array having a plurality of linked queues.


 15.  An apparatus, comprising: a first storage device for holding queues;  a second storage device for holding data packets;  a memory that stores executable instructions;  and a processor that executes the instructions to: store addresses
corresponding to a plurality of data buffers having a corresponding number of cells in a queue configured as a linked list comprising a plurality of Buffer Descriptor Address (BDA) entries, wherein a first BDA entry points to a subsequent BDA entry and
to a final BDA entry, each of the plurality of BDA entries of the linked list includes one of the data buffer addresses and an associated cell count that indicates the corresponding number of cells contained in the corresponding data buffer;  retrieve a
first address from the queue, and modify the linked list of addresses of the queue based on the cell count of the first address retrieved, including decrementing the cell count of the first address each time the first address is retrieved.


 16.  The apparatus of claim 15, wherein instructions to modify comprise instructions to: determine the cell count is zero.


 17.  The apparatus of claim 16, wherein instructions to store addresses comprise instructions to: set the first address as the head address of the queue;  and link a second address to the first address of the queue.


 18.  The apparatus of claim 17, wherein instructions to link comprise instructions to: set the second address as a tail address of the queue.


 19.  The apparatus of claim 18, further comprising instructions to: link a third address to the queue by storing the third address in the location indicated by the tail address.


 20.  The apparatus of claim 18, further comprising instructions to: increment a queue count indicating the number of BDA entries included in the queue each time an address is linked to the queue.


 21.  The apparatus of claim 17, further comprising: a storage medium, the queue being stored on the storage medium as part of a queue array having a plurality of linked queues.


 22.  A processing system for managing queues comprising: a processor;  a memory to store queues;  and a storage-medium accessible by the processor to store executable instructions, which when accessed by the processor cause the processor to:
store addresses corresponding to a plurality of data buffers having a corresponding number of cells in a queue configured as a linked list comprising a plurality of Buffer Descriptor Address (BDA) entries, wherein a first BDA entry points to a subsequent
BDA entry and to a final BDA entry, each of the plurality of BDA entries of the linked list includes one of the data buffer addresses and an associated cell count that indicates the corresponding number of cells contained in the corresponding data
buffer;  retrieve a first address from the queue;  and modify the linked list of addresses of the queue based on the cell count of the first address retrieved, including decrementing the cell count of the first address each time the first address is
retrieved.


 23.  The system of claim 22, further comprising instructions, which when accessed by the processor cause the processor to: determine the cell count is zero.


 24.  The system of claim 23, wherein storing addresses further comprises: setting the first address as the head address of the queue;  and linking a second address to the first address of the queue.


 25.  The system of claim 24, wherein linking the second address to the first address further comprises instructions, which when accessed by the processor cause the processor to: set the second address as a tail address of the queue.


 26.  The system of claim 25, further comprising instructions, which when accessed by the processor cause the processor to: link a third address to the queue by storing the third address in the location indicated by the tail address.


 27.  The system of claim 25, further comprising instructions, which when accessed by the processor cause the processor to: increment a queue count indicating the number of BDA entries in the queue each time an address is linked to the queue.


 28.  The method of claim 1, further comprising: receiving network packets;  and storing the packets in one or more of the plurality of data buffers.


 29.  The article of claim 8 further comprising instructions that cause the machine to: receive network packets;  and store the packets in one or more of the plurality of data buffers.


 30.  The apparatus of claim 15, wherein the processor executes further instructions to: receive network packets;  and store the packets in one or more of the plurality of data buffers.


 31.  The processing system of claim 22, wherein the storage medium further includes instructions that when accessed by the processor cause the processor to: receive network packets;  and store the packets in one or more of the plurality of data
buffers.

Description

TECHNICAL FIELD


This invention relates to managing a queue structure and more specifically to scheduling the transmission of packets on an electronic network.


BACKGROUND


Electronic networks perform data transfers using a variety of data packet sizes.  A packet size may be larger than the input or output capacity of a device connected to the network.  Therefore, a single packet may require multiple transfers of smaller "cells" before the entire packet is transferred.

DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of computer hardware on which a queue management process may be implemented.


FIG. 2A is a block diagram representing an exemplary linked queue array.


FIG. 2B is a block diagram representing addresses stored in a linked queue array being mapped to stored data packets.


FIG. 2C is a flowchart representing the transmission of data packets.


FIG. 3 is a flowchart showing a queue management process.


FIG. 3A is a flowchart showing an en-queuing process.


FIG. 3B is a flowchart showing a de-queuing process.


DESCRIPTION


Referring to FIG. 1, a network processing system 10 is shown operating as a data packet cross-bar device.  The network processing system 10 receives packets through I/O buses 14a-14n (from external devices, not shown), stores the packets temporarily, interprets the header (address) information contained in each packet, and transmits each packet to a destination indicated by its header when an appropriate I/O bus 14a-14n is available.  System 10 may include connections to thousands of I/O buses and may need to simultaneously store and track tens of thousands of data packets of various sizes before each packet is transmitted out of system 10.  The storage of data packets (packets) and their input and output to and from I/O buses 14a-14n are controlled by several processors 12a-12n.


System 10 includes a first memory 18 to store the received data packets in a set of data buffers 18a-18n.  The data buffers 18a-18n are not necessarily contiguously stored in the first memory 18.  Each data buffer 18a-18n is indexed by a buffer descriptor address (BDA) that indicates the location and size of the buffer.  As a packet is received from one of the I/O buses 14a-14n and stored by one of the processors 12a-12n in one of the buffers 18a-18n of the first memory 18, the processor, e.g., processor 12a, identifies one of a set of I/O buffers 16a-16n for transmitting the packet from the data buffer 18a-18n out of system 10.  Each of the I/O buffers 16a-16n is associated with one of the I/O buses 14a-14n.


Often, the I/O port chosen for transmitting a packet stored in an I/O buffer is busy receiving or sending packets for other I/O buffers.  In this case, the system 10 includes a second memory 20 for storing the packet.  The second memory 20 stores a queue array 24.  The queue array 24 has buffer descriptor addresses (BDAs) for packets that are stored in data buffers 18a-18n of the first memory 18 and are waiting for an assigned I/O buffer 16a-16n to become available.


Each received data packet may vary in size.  Therefore, the size of each data buffer 18a-18n may also vary.  Furthermore, each data buffer 18a-18n may be logically partitioned by a processor 12a-12n into one or more "cells".  Each cell represents the maximum amount of data that may be transmitted by an I/O buffer 16a-16n in a single transfer.  For example, data buffer 18a is partitioned into two cells, data buffer 18b includes one cell, and data buffer 18c includes three cells.
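As a rough illustration of this cell partitioning, the cell count for a buffer is simply the packet size rounded up to a whole number of cells.  The sketch below assumes a fixed cell size of 64 bytes; the patent does not specify a cell size, and the function name is illustrative.

```c
#include <stdio.h>

/* Assumed, illustrative cell size in bytes; not specified by the patent. */
#define CELL_SIZE 64u

/* Number of cells needed to carry a packet of the given size (ceiling division). */
static unsigned cell_count(unsigned packet_bytes)
{
    return (packet_bytes + CELL_SIZE - 1u) / CELL_SIZE;
}

int main(void)
{
    /* Mirrors FIG. 1: buffer 18a holds two cells, 18b one cell, 18c three cells. */
    printf("%u %u %u\n", cell_count(100u), cell_count(40u), cell_count(150u));
    return 0;
}
```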


System 10 also includes a queue manager 22 connected to processors 12a-12n and second memory 20.  Queue manager 22 includes the queue array 24, which includes several queue entries, with each queue entry corresponding to an I/O buffer 16a-16n.  Each queue entry in queue array 24 stores multiple BDAs, where one BDA links to another BDA in the same queue.  Queue array 24 is stored in second memory 20.  Alternatively, or in addition, the queue manager 22 may include a cache containing a subset of the contents of queue array 24.


Each BDA includes both an address of the stored data buffer in first memory 18 and a "cell count" that indicates the number of cells contained in the data buffer 18a-18n.  The BDA is, for example, 32 bits long, with the lower 24 bits being used for an address of the buffer descriptor and the upper 7 bits being used to indicate the cell count of the data buffer.
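A minimal sketch of that example layout in C follows: the BDA is treated as a 32-bit word with the buffer descriptor address in the lower 24 bits and the cell count in the bits above it.  The helper names are assumptions made for illustration; the patent only gives the field widths as an example.

```c
#include <stdint.h>

#define BDA_ADDR_BITS 24
#define BDA_ADDR_MASK ((1u << BDA_ADDR_BITS) - 1u)

/* Pack a buffer descriptor address and a cell count into a single 32-bit BDA. */
static inline uint32_t bda_pack(uint32_t addr, uint32_t cells)
{
    return (cells << BDA_ADDR_BITS) | (addr & BDA_ADDR_MASK);
}

/* Extract the buffer descriptor address (lower 24 bits). */
static inline uint32_t bda_addr(uint32_t bda)
{
    return bda & BDA_ADDR_MASK;
}

/* Extract the cell count (bits above the address field). */
static inline uint32_t bda_cells(uint32_t bda)
{
    return bda >> BDA_ADDR_BITS;
}
```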


Processors 12a-12n store and retrieve data buffers from queue array 24 by sending "En-queue" and "De-queue" commands to queue manager 22.  Each En-queue and De-queue command includes a queue entry number included in queue array 24.  Queue manager 22 responds to a De-queue command by returning the BDA stored at the "head" of the specified queue entry, i.e., the top entry of that queue entry, to the requesting processor.  Queue manager 22 also uses the cell count included in the head BDA being returned to determine whether all of the cells included in the corresponding data packet have been sent.  If the cell count is greater than zero, the queue manager 22 leaves the head BDA in the head location of the queue 24.  When the cell count for a De-queued BDA has reached zero, another linked BDA is moved to the head of the queue 24, as will be explained.


Referring to FIGS. 2A and 2B, a first queue entry, "Qa," of an exemplary queue array Qa-Qn is shown.  Each queue entry included in queue array Qa-Qn includes a three-block queue descriptor 51a-51n, and may also include additional BDAs that are linked to the same queue entry.  Each queue descriptor 51a-51n includes a first block 52a-52n that contains the head BDA for the queue entry, a second block 54a-54n that contains the "tail" address for the queue entry, and a third block 56a-56n that contains a "queue count" for the queue entry.


As an example of a queue entry that includes both a head BDA and a linked BDA, queue "Qa" is shown.  In this example, head block 52a has the BDA that will be returned in response to a first De-queue command specifying entry Qa.  Head BDA 52a links to a second BDA stored at address "a:" 57a.  "Tail" block 54a contains the address "b:" of the last linked address of entry Qa.  The address contained in tail block 54a points to the storage location where another BDA may be En-queued (and linked to) queue entry Qa.  Third block 56a contains a current value of the queue count that indicates the number of linked buffer descriptors included in the queue entry Qa.  In this example, queue count 56a equals two, indicating a first BDA in the "head" location 52a and a second linked BDA in block 57a.  It is noted that the BDA contained in the head block 52a-52n of each queue descriptor 51a-51n contains the BDA and cell count of the second linked BDA on the queue entry (57a-57n), unless the queue entry includes only a single BDA.
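One way to render the three-block queue descriptor of FIG. 2A in C is sketched below, together with a snapshot matching the Qa example: a queue count of two, a head BDA, a second BDA linked at address "a:", and the tail naming address "b:".  The structure, field names, and numeric values are illustrative assumptions, not taken from the patent or from any Intel interface.

```c
#include <stdint.h>

/* Illustrative rendering of one queue descriptor 51a-51n. */
struct queue_descriptor {
    uint32_t head_bda;     /* block 52: BDA (address + cell count) at the head   */
    uint32_t tail_addr;    /* block 54: address where the next BDA may be linked */
    uint32_t queue_count;  /* block 56: number of linked buffer descriptors      */
};

/* Snapshot of the Qa example above; the numeric values are made up. */
static const struct queue_descriptor qa_example = {
    .head_bda    = 0x02000100u,  /* cell count 2, descriptor address 0x000100   */
    .tail_addr   = 0x00000204u,  /* "b:" -- where another BDA may be En-queued  */
    .queue_count = 2u,           /* head BDA plus one linked BDA at "a:"        */
};
```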


Referring to FIG. 3, a process 80 is shown for En-queueing BDAs and linking the BDAs to subsequent BDAs using the queue array shown in FIGS. 2A and 2B.  Process 80 includes a sub-process 100 that depicts En-queueing a BDA onto a queue array
structure, and a sub-process 120 that depicts De-queueing a BDA from a queue array.


Referring to FIG. 3A, an example of the sub-process 100 receives (102) an En-queue command that specifies a queue entry in the queue array Qa-Qn and a BDA for a new data buffer.  Sub-process 100 stores (104) the new BDA in the location indicated by the tail address, updates (106) the tail address to point to the new BDA, and increments (108) the queue count by one (block 56a-56n).  Sub-process 100 may be repeated to store and link additional data buffers onto the "tail" of a queue entry, that is, to En-queue an additional BDA onto the linked address location at the tail of the queue entry, and so on, as sketched below.
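The following is a hedged sketch of sub-process 100 in C.  It assumes a simple link memory indexed by the descriptor address (the patent does not spell out how linked slots are organized) and reuses the descriptor layout and 24-bit address field from the sketches above; names and sizes are illustrative.

```c
#include <stdint.h>

#define BDA_ADDR_MASK 0x00FFFFFFu   /* lower 24 address bits, as sketched earlier  */
#define LINK_SLOTS    4096u         /* assumed, illustrative size of the link area */

struct queue_descriptor {
    uint32_t head_bda;              /* block 52 */
    uint32_t tail_addr;             /* block 54 */
    uint32_t queue_count;           /* block 56 */
};

/* Assumed second-memory area holding one link word per descriptor address,
 * used to chain BDAs that belong to the same queue entry. */
static uint32_t link_store[LINK_SLOTS];

/* Sub-process 100: (104) store the new BDA at the location the tail points to,
 * (106) update the tail to point at the new BDA, and (108) increment the queue
 * count.  Treating an empty queue entry as taking the new BDA directly as its
 * head is an assumption; the patent does not describe that case. */
static void enqueue(struct queue_descriptor *q, uint32_t new_bda)
{
    if (q->queue_count == 0u)
        q->head_bda = new_bda;
    else
        link_store[(q->tail_addr & BDA_ADDR_MASK) % LINK_SLOTS] = new_bda; /* 104 */
    q->tail_addr = new_bda;                                                /* 106 */
    q->queue_count++;                                                      /* 108 */
}
```

Repeated calls link further BDAs onto the tail, which is exactly the repetition the sub-process describes.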


Referring to FIG. 3B, an example of sub-process 120 depicts a process of De-queueing data buffers, i.e., BDAs, from a queue entry included in queue array Qa-Qn.  Sub-process 120 receives (122) a De-queue command that specifies a queue entry included in queue array Qa-Qn.  Sub-process 120 returns (124) the BDA from the head of the queue descriptor for the specified queue entry to the requesting processor.  Sub-process 120 determines (126) whether the cell count from the head BDA is greater than zero.  If the cell count is greater than zero, sub-process 120 decrements (128) the cell count included in the head BDA and exits (140).  If the cell count is not greater than zero, sub-process 120 determines (129) whether the queue count is greater than or equal to one.  If the queue count is not greater than or equal to one (indicating the queue entry is empty), sub-process 120 exits (140).  If the queue count is greater than or equal to one (indicating the queue entry contains another linked BDA), sub-process 120 sets (130) the next linked BDA to be the head buffer descriptor, decrements (132) the queue count, and exits (140).  Sub-process 120 may be repeated to De-queue the head BDA and subsequent linked BDAs stored in a queue entry in queue array 24, as sketched below.
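The matching sketch for sub-process 120 follows, under the same assumptions as the enqueue example: the head BDA is returned to the caller, a multi-cell buffer stays at the head while its cell count is decremented, and only once the count has reached zero is the next linked BDA promoted to the head.

```c
#include <stdint.h>

#define BDA_ADDR_BITS 24
#define BDA_ADDR_MASK ((1u << BDA_ADDR_BITS) - 1u)
#define LINK_SLOTS    4096u   /* assumed size, as in the enqueue sketch */

struct queue_descriptor {
    uint32_t head_bda;        /* block 52 */
    uint32_t tail_addr;       /* block 54 */
    uint32_t queue_count;     /* block 56 */
};

/* Stand-in for the link memory shared with the enqueue sketch. */
static uint32_t link_store[LINK_SLOTS];

/* Sub-process 120: (124) return the head BDA; (126)/(128) if cells remain,
 * decrement the cell count in place; otherwise (129)/(130)/(132) promote the
 * next linked BDA and decrement the queue count. */
static uint32_t dequeue(struct queue_descriptor *q)
{
    uint32_t bda = q->head_bda;                                        /* 124 */
    if ((bda >> BDA_ADDR_BITS) > 0u) {                                 /* 126 */
        q->head_bda = bda - (1u << BDA_ADDR_BITS);                     /* 128 */
    } else if (q->queue_count >= 1u) {                                 /* 129 */
        q->head_bda = link_store[(bda & BDA_ADDR_MASK) % LINK_SLOTS];  /* 130 */
        q->queue_count--;                                              /* 132 */
    }
    return bda;
}
```

Because the head BDA is only replaced once its cell count is exhausted, repeated De-queue commands yield all of one buffer's cells before the next buffer's, which is the transmission order shown in FIG. 2C.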


Performing process 80 on a system, such as system 10, enables the system to keep a multiple-cell data buffer that is being transmitted at the head of a queue entry.  This is an advantage when a relatively large number of I/O buffers are being
served concurrently, with one or more I/O buffers requiring cell-at-a-time data transmission.


Referring to FIG. 2C, a logical representation of the sequence of data packets that are transmitted by a system performing process 80 is shown.  The data buffers 18a-18n and cell numbers of FIG. 2C correspond to the same numbers shown in FIGS. 1 and 2B.  As shown in FIG. 2C, a system performing process 80 causes the two cells of data buffer 18a to be transmitted before the transmission of the first cell of data buffer 18b is begun.  Likewise, data buffer 18b completes transmission before the first cell of data buffer 18c begins transmission, and so forth.


Process 80 is not limited to use with the hardware and software of FIG. 1.  It may find applicability in any computing or processing environment.  Process 80 may be implemented in hardware, software, or a combination of the two.  Process 80 may
be implemented in computer programs executing on programmable computers or other machines that each include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage components), at least one input
device, and one or more output devices.  Program code may be applied to data entered using an input device (e.g., a mouse or keyboard) to perform process 80 and to generate output information.


Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system.  However, the programs can be implemented in assembly or machine language.  The language may be a
compiled or an interpreted language.


Each computer program may be stored on a storage medium/article (e.g., CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage
medium or device is read by the computer to perform process 80.  Process 80 may also be implemented as a machine-readable storage medium, configured with a computer program, where, upon execution, instructions in the computer program cause a machine to
operate in accordance with process 80.


The invention is not limited to the specific embodiments described above.  For example, a single memory may be used to store both data packets and buffer descriptors.  Also, the buffer descriptors and BDAs may be stored substantially
simultaneously in second memory 20 and queue array 24 (see FIG. 1).


Other embodiments not described herein are also within the scope of the following claims.


* * * * *























								
To top