Experience:
Feb 2014 to 2000: Software Engineer, Warehouse Distribution System
Apr 2013 to Feb 2014: Software Engineer, Denver International Airport (through Logplan LLC), Denver, CO
Aug 2008 to Apr 2013: Software Engineer, GE Aviation, Grand Rapids, MI
Aug 2004 to Jul 2008: Software Engineer, Siemens Dematic/Mannesmann Demag, Grand Rapids, MI
Aug 1995 to Aug 2004: Project Lead/Software Engineer, FAAC Incorporated, Ann Arbor, MI
Feb 1992 to Aug 1995: Software Engineer, Progressive Technologies, Grand Rapids, MI
Nov 1991 to Dec 1992: Software Engineering Manager, Smiths Aerospace Systems, Grand Rapids, MI
Dec 1979 to Nov 1991: Software Engineer, Chrysler Corporation, Park, MI, US
Apr 1974 to Nov 1979: Development Engineer, Ford Motor Company, Dearborn, MI
Apr 1973 to Mar 1974: Development Engineer
Education:
University of Michigan, Ann Arbor, MI, 1975: M.S. in Electrical Engineering
Hope College, Holland, MI, 1973: B.A. in Math/Physics
University of Michigan, Dearborn, MI, 1973: B.S.E. in Electrical Engineering
US Patents
Caching Method For Selecting Data Blocks For Removal From Cache Based On Recall Probability And Size
Richard J. Defouw - Boulder CO; Alan Sutton - Boulder CO; Ronald W. Korngiebel - Westminster CO
Assignee:
Storage Technology Corporation - Louisville CO
International Classification:
G06F 12/00
US Classification:
711/133, 711/134, 711/135, 711/136, 711/171
Abstract:
A caching method for selecting variable-size data blocks for replacement or removal from a cache includes determining the size and the unreferenced time interval of each block in the cache. The size of a block is the amount of cache space taken up by the block. The unreferenced time interval of a block is the time that has elapsed since the block was last accessed, and may be determined using a least recently used (LRU) algorithm. The recall probability of each block in the cache is then determined. The recall probability of a block is a function of its unreferenced time interval and possibly of its size and other auxiliary parameters. The caching method then determines a quality factor q for each block; the q of a block is a function of its recall probability and size. The caching method concludes by removing from the cache the block with the lowest q.
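The eviction rule in this abstract can be sketched in a few lines. The recall-probability model p(t) = 1/(1 + t) and the quality factor q = p/size below are illustrative assumptions; the patent leaves both as tunable functions.

```python
def recall_probability(unreferenced_interval):
    """Assumed model: the probability of recall decays with idle time."""
    return 1.0 / (1.0 + unreferenced_interval)

def quality_factor(block, now):
    """q is a function of recall probability and size; here, per-byte value."""
    interval = now - block["last_access"]
    return recall_probability(interval) / block["size"]

def select_victim(cache, now):
    """Return the key of the block with the lowest quality factor q."""
    return min(cache, key=lambda k: quality_factor(cache[k], now))

cache = {
    "A": {"size": 4, "last_access": 90.0},   # small, recently used
    "B": {"size": 64, "last_access": 10.0},  # large, long idle
    "C": {"size": 8, "last_access": 50.0},
}
victim = select_victim(cache, now=100.0)  # "B": largest and longest idle
```

Note how the rule differs from plain LRU: a large block that is only moderately stale can still be evicted before a small block that is older, because q weighs recall probability against the space a block occupies.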
Method And System For Improving Usable Life Of Memory Devices Using Vector Processing
Richard John Defouw - Boulder CO, US; Thai Nguyen - Thornton CO, US
Assignee:
Storage Technology Corporation - Louisville CO
International Classification:
G06F 12/00
US Classification:
711/103, 711/165, 365/185.33
Abstract:
A method, system and apparatus for improving the useful life of non-volatile memory devices such as flash memory. The present wear-leveling technique advantageously improves the overall useful life of a flash memory device by strategically moving inactive data (data that has been infrequently modified in the recent past) to the memory blocks that have experienced the most wear since the device began operation and by strategically moving active data to the memory blocks that have experienced the least wear. In order to efficiently process and track data activity and block wear, vectors of block-descriptor pointers are maintained. One vector is sorted in decreasing order of overall block erase/write activity (block-wear indicator), whereas the other vector is sorted in increasing order of the number of times a block has been erased since the last wear-leveling event occurred (activity indicator for the data stored in the block). The activity levels of the data and the wear levels of the blocks are then easily compared and otherwise processed using pointers into these vectors to allow for more efficient processing than previous techniques used for wear leveling.
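The two-vector pass described in the abstract can be sketched as follows, under simplifying assumptions: each block holds one unit of data, and a "wear-leveling event" simply swaps data between paired blocks. The field names and the swap guard are illustrative, not taken from the patent.

```python
def wear_level(blocks):
    """blocks: list of dicts with 'data', 'total_erases' (wear since the
    device began operation) and 'recent_erases' (erases since the last
    wear-leveling event, a proxy for how active the stored data is)."""
    # Vector 1: block descriptors in decreasing order of overall wear.
    by_wear = sorted(blocks, key=lambda b: b["total_erases"], reverse=True)
    # Vector 2: block descriptors in increasing order of recent activity.
    by_activity = sorted(blocks, key=lambda b: b["recent_erases"])
    # Walk both vectors in step: move inactive data onto worn blocks
    # (and, symmetrically, active data onto lightly worn blocks).
    for worn, idle in zip(by_wear, by_activity):
        if worn is not idle and worn["recent_erases"] > idle["recent_erases"]:
            worn["data"], idle["data"] = idle["data"], worn["data"]
    # Reset per-interval activity counters for the next interval.
    for b in blocks:
        b["recent_erases"] = 0

blocks = [
    {"data": "hot",  "total_erases": 100, "recent_erases": 50},  # worn, active
    {"data": "cold", "total_erases": 10,  "recent_erases": 1},   # fresh, idle
]
wear_level(blocks)  # the most-worn block now holds the inactive data
```

Sorting the two vectors of descriptors once per wear-leveling event is what makes the comparison cheap: matching the most-worn block with the least-active data is then a single linear walk with pointers, rather than a search per block.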
Thai Nguyen - Thornton CO, US; Michael L. Leonhardt - Longmont CO, US; Richard John Defouw - Boulder CO, US
Assignee:
Storage Technology Corporation - Louisville CO
International Classification:
G06F 12/08
US Classification:
711/120, 711/168, 710/32
Abstract:
For use in a storage area network (SAN), a virtualization layer includes at least one virtual engine, each having its own local cache; the local caches individually serve as a first cache layer, and coupled together they form a secondary cache layer. A data transfer command, the data corresponding to that command, or both are multicast to the secondary cache layer through an interconnection bus, which couples the virtual engines to at least one physical storage device.
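The abstract's multicast idea can be illustrated with a minimal sketch: each virtual engine owns a local cache, and multicasting a write over the interconnection bus makes the data visible in every local cache, i.e. in the secondary cache layer as a whole. The class names and the command set here are assumptions for illustration, not the patent's terminology.

```python
class VirtualEngine:
    """A virtual engine with its own local cache (the first cache layer)."""
    def __init__(self, name):
        self.name = name
        self.local_cache = {}

    def on_multicast(self, command, block_id, data):
        # Every engine observes the multicast, so the written data
        # becomes available across the secondary cache layer.
        if command == "WRITE":
            self.local_cache[block_id] = data

class Bus:
    """Interconnection bus coupling the virtual engines."""
    def __init__(self, engines):
        self.engines = engines

    def multicast(self, command, block_id, data=None):
        for engine in self.engines:
            engine.on_multicast(command, block_id, data)

engines = [VirtualEngine(f"ve{i}") for i in range(3)]
bus = Bus(engines)
bus.multicast("WRITE", block_id=42, data=b"payload")
# Block 42 is now cached by every engine in the secondary layer.
```

The design point is that a read hitting any virtual engine can then be served from the secondary cache layer without touching physical storage.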
Frederick Munro - Broomfield CO; Aaron Dailey - Boulder CO; Richard Defouw - Boulder CO; David Trachy - Louisville CO
International Classification:
G06F 17/00; G11B 15/68; B65G 1/00
US Classification:
364/478
Abstract:
An automated cartridge system optimizes the time it takes to execute a series of cartridge requests. After cartridge requests are received, the library controller calculates the approximate time it will take to execute each possible sequence of pending cartridge requests. The library controller then executes the first request in the sequence of pending requests that will take the shortest elapsed time. This series of calculations and executions continues until all requests are completed. The motion optimization is also carried out in a manner that ensures a particular cartridge request is not put off indefinitely: each time a hand or arm is allocated to a request, all other requests waiting for these mechanisms have their calculated execution times reduced, increasing their likelihood of execution.
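The shortest-time-first dispatch with anti-starvation aging described above can be sketched as follows. The per-request time estimates and the aging credit are illustrative numbers; the patent describes the mechanism, not these values.

```python
AGING_CREDIT = 1.0  # reduction applied to each waiting request per dispatch

def run_schedule(requests):
    """requests: dict mapping request id -> estimated execution time.
    Returns the order in which the library controller executes them."""
    pending = dict(requests)
    order = []
    while pending:
        # Execute the pending request with the shortest calculated time.
        nxt = min(pending, key=pending.get)
        order.append(nxt)
        del pending[nxt]
        # Age every request still waiting for the hand/arm, so that a
        # long request cannot be put off indefinitely.
        for rid in pending:
            pending[rid] = max(0.0, pending[rid] - AGING_CREDIT)
    return order

order = run_schedule({"r1": 5.0, "r2": 2.0, "r3": 9.0})
# → ["r2", "r1", "r3"]: shortest first, with r3 aging toward the front
```

With a larger aging credit, a long request overtakes shorter newcomers sooner; with a credit of zero, the scheme degenerates to pure shortest-job-first and the starvation guarantee is lost.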