Cache Memory patents



      
           
This page is updated frequently with new Cache Memory-related patent applications. Subscribe to the Cache Memory RSS feed to receive updates automatically.



List of recent Cache Memory-related patent applications (date / application number)
07/24/14
20140208142
Semiconductor device
Supply of power to a plurality of circuits is controlled efficiently depending on usage conditions and the like of the circuits. An address monitoring circuit monitors whether a cache memory and an input/output interface are in an access state or not, and performs power gating in accordance with the state of the cache memory and the input/output interface.
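The access-state-driven power gating described in this abstract can be sketched roughly as follows. This is an illustrative model only; the idle threshold, the circuit names, and the cycle-based interface are assumptions, not details from the application.

```python
# Hypothetical sketch: a monitor tracks whether each circuit (cache
# memory, I/O interface) has been accessed recently and gates power
# to circuits that have gone idle past a threshold.

class PowerGatingMonitor:
    def __init__(self, idle_threshold):
        self.idle_threshold = idle_threshold  # cycles of inactivity before gating
        self.idle_cycles = {}                 # circuit name -> cycles since last access
        self.powered = {}                     # circuit name -> power state

    def register(self, circuit):
        self.idle_cycles[circuit] = 0
        self.powered[circuit] = True

    def access(self, circuit):
        # An access wakes the circuit and resets its idle counter.
        self.idle_cycles[circuit] = 0
        self.powered[circuit] = True

    def tick(self):
        # Called once per cycle: gate any circuit idle past the threshold.
        for circuit in self.idle_cycles:
            self.idle_cycles[circuit] += 1
            if self.idle_cycles[circuit] >= self.idle_threshold:
                self.powered[circuit] = False

monitor = PowerGatingMonitor(idle_threshold=3)
monitor.register("cache_memory")
monitor.register("io_interface")
for _ in range(3):
    monitor.tick()              # io_interface sits idle long enough to be gated
monitor.access("cache_memory")  # a fresh access powers the cache back on
```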
07/24/14
20140208034
System and method for efficient paravirtualized OS process switching
The exemplary embodiments described herein relate to systems and methods for improved process switching of a paravirtualized guest with a software-based memory management unit (MMU). One embodiment relates to a non-transitory computer-readable storage medium including a set of instructions executable by a processor, the set of instructions, when executed, resulting in the performance of the following: create a plurality of new processes for each of a plurality of virtual environments, each of the virtual environments assigned one of a plurality of address space identifiers (ASIDs) stored in a cache memory; perform a process switch to one of the virtual environments, thereby designating it as the active virtual environment; determine whether the active virtual environment has exhausted each of the ASIDs; and flush a cache memory when it is determined that the active virtual environment has exhausted each of the ASIDs.
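The ASID-recycling idea can be sketched as below: each process switch draws an identifier from a small pool, and when the pool is exhausted the cache must be flushed before identifiers can be reused. The pool size and the counting allocator are assumptions for illustration, not the patented mechanism.

```python
# Minimal sketch of ASID allocation with flush-on-exhaustion.

class AsidAllocator:
    def __init__(self, num_asids):
        self.num_asids = num_asids
        self.next_free = 0
        self.flush_count = 0   # times the cache had to be flushed

    def switch_process(self):
        """Allocate an ASID for a process switch, flushing when exhausted."""
        if self.next_free == self.num_asids:
            # All ASIDs used: flush the virtually tagged cache so stale
            # translations cannot alias the recycled identifiers.
            self.flush_count += 1
            self.next_free = 0
        asid = self.next_free
        self.next_free += 1
        return asid

alloc = AsidAllocator(num_asids=4)
asids = [alloc.switch_process() for _ in range(6)]  # 6 switches, 4 ASIDs
```

With six switches against a pool of four, the fifth switch triggers exactly one flush and the identifiers wrap around.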
07/24/14
20140208005
System, method and computer-readable medium for providing selective protection and endurance improvements in flash-based cache
A cache controller includes a cache memory distributed across multiple solid-state storage units, in which cache line fill operations are applied sequentially in a defined manner and write operations are protected by a RAID-5 (striping plus parity) scheme upon a stripe reaching capacity. The cache store is responsive to data from a storage controller managing a primary data store.
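The parity step of a RAID-5 stripe is standard XOR arithmetic and can be illustrated as follows. This is not the patented controller; the stripe width and block layout are assumptions, and only the close-stripe parity computation and single-unit reconstruction are shown.

```python
# Illustrative RAID-5 parity: when a stripe across N units fills,
# an XOR parity block is written so any single unit can be rebuilt.

from functools import reduce

STRIPE_WIDTH = 3  # data units per stripe (assumption)

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def close_stripe(data_blocks):
    """Return the parity block protecting a full stripe."""
    assert len(data_blocks) == STRIPE_WIDTH
    return reduce(xor_blocks, data_blocks)

def reconstruct(surviving_blocks, parity):
    """Rebuild the missing data block from the survivors plus parity."""
    return reduce(xor_blocks, surviving_blocks, parity)

stripe = [b"\x01\x02", b"\x0f\x00", b"\x10\x20"]
parity = close_stripe(stripe)
rebuilt = reconstruct([stripe[0], stripe[2]], parity)  # unit 1 lost
```

Because XOR is its own inverse, XOR-ing the parity with the surviving blocks yields the lost block exactly.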
07/24/14
20140207987
Multiprocessor system with multiple concurrent modes of execution
A multiprocessor system supports multiple concurrent modes of speculative execution. Speculation identification numbers (IDs) are allocated to speculative threads from a pool of available numbers.
07/17/14
20140201458
Reducing cache memory requirements for recording statistics from testing with a multiplicity of flows
A method reduces cache memory requirements for testing a multiplicity of flows. The method includes receiving data corresponding to a frame in a particular flow among the multiplicity of flows.
07/17/14
20140201456
Control of processor cache memory occupancy
Techniques are described for controlling processor cache memory within a processor system. Cache occupancy values for each of a plurality of entities executing on the processor system can be calculated.
07/10/14
20140195722
Storage system which realizes asynchronous remote copy using cache memory composed of flash memory, and control method thereof
The first storage apparatus provides a primary logical volume, and the second storage apparatus has a secondary logical volume. When the first storage apparatus receives a write command to the primary logical volume, a package processor in a flash package allocates a first physical area in the flash memory chip to a first cache logical area for the write data and stores the write data to the allocated first physical area.
07/03/14
20140189245
Merging eviction and fill buffers for cache line transactions
A processor includes a first cache memory and a bus unit in some embodiments. The bus unit includes a plurality of buffers and is operable to allocate a selected buffer of the plurality of buffers for a fill request associated with a first cache line to be stored in the first cache memory, load fill data from the first cache line into the selected buffer, and transfer the fill data to the first cache memory in parallel with storing eviction data for an evicted cache line from the first cache memory in the selected buffer.
07/03/14
20140189204
Information processing apparatus and cache control method
An information processing apparatus comprises a plurality of types of cache memories having different characteristics, decides on a type of cache memory to be used as a data cache destination based on the access characteristics of cache-target data, and caches the data in the cache memory of the decided type.
07/03/14
20140189203
Storage apparatus and storage control method
A cache memory (CM), in which data accessed with respect to a storage device is temporarily stored, is coupled to a controller for accessing the storage device in accordance with an access command from a higher-level apparatus. The CM comprises a nonvolatile semiconductor memory (NVM) and provides a logical space to the controller.
06/26/14
20140181420
Distributed cache coherency directory with failure redundancy
A system includes a number of processors with each processor including a cache memory. The system also includes a number of directory controllers coupled to the processors.
06/26/14
20140181418
Managing global cache coherency in a distributed shared caching for clustered file systems
Systems, methods, and computer program products are provided for managing global cache coherency in distributed shared caching for clustered file systems (CFS).
06/26/14
20140181414
Mechanisms to bound the presence of cache blocks with specific properties in caches
A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache array and a corresponding cache controller.
06/26/14
20140181412
Mechanisms to bound the presence of cache blocks with specific properties in caches
A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache and one or more sources for memory requests.
06/26/14
20140181408
Managing global cache coherency in a distributed shared caching for clustered file systems
Systems, methods, and computer program products are provided for managing global cache coherency in distributed shared caching for clustered file systems (CFS).
06/26/14
20140181406
System, method and computer-readable medium for spool cache management
A system, method, and computer-readable medium that facilitate efficient use of cache memory in a massively parallel processing system are provided. A residency time of a data block to be stored in cache memory or a disk drive is estimated.
06/26/14
20140181402
Selective cache memory write-back and replacement policies
A method of managing cache memory includes assigning a caching priority designator to an address that addresses information stored in a memory system. The information is stored in a cacheline of a first level of cache memory in the memory system.
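The idea of a caching priority designator guiding replacement can be sketched as below. The numeric priority scale, the eviction rule (lowest priority first), and the class name are assumptions for illustration, not the method claimed above.

```python
# Minimal sketch: each cached address carries a caching priority
# designator; on a miss with a full cache, the lowest-priority line
# is chosen as the victim.

class PriorityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}  # address -> priority (higher = retain longer)

    def insert(self, address, priority):
        if address not in self.lines and len(self.lines) == self.capacity:
            # Evict the line with the lowest caching priority.
            victim = min(self.lines, key=self.lines.get)
            del self.lines[victim]
        self.lines[address] = priority

cache = PriorityCache(capacity=2)
cache.insert(0x100, priority=3)
cache.insert(0x200, priority=1)
cache.insert(0x300, priority=2)  # evicts 0x200, the lowest-priority line
```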
06/26/14
20140181388
Method and apparatus to implement lazy flush in a virtually tagged cache memory
A processor includes a processor core including an execution unit to execute instructions, and a cache memory. The cache memory includes a controller to update each of a plurality of stale indicators in response to a lazy flush instruction.
06/26/14
20140181369
Dynamic overprovisioning for data storage systems
Disclosed embodiments are directed to systems and methods for dynamic overprovisioning for data storage systems. In one embodiment, a data storage system can reserve a portion of memory, such as non-volatile solid-state memory, for overprovisioning.
06/26/14
20140181162
Managing global cache coherency in a distributed shared caching for clustered file systems
Systems, methods, and computer program products are provided for managing global cache coherency in distributed shared caching for clustered file systems (CFS).
06/19/14
20140173379
Dirty cacheline duplication
A method of managing memory includes installing a first cacheline at a first location in a cache memory and receiving a write request. In response to the write request, the first cacheline is modified in accordance with the write request and marked as dirty.
06/19/14
20140173378
Parity data management for a memory architecture
A processor system as presented herein includes a processor core, cache memory coupled to the processor core, a memory controller coupled to the cache memory, and a system memory component coupled to the memory controller. The system memory component includes a plurality of independent memory channels configured to store data blocks, wherein the memory controller controls the storing of parity bits in at least one of the plurality of independent memory channels.
06/19/14
20140173342
Debug access mechanism for duplicate tag storage
A coherence system includes a storage array that may store duplicate tag information associated with a cache memory of a processor. The system may also include a pipeline unit that includes a number of stages to control accesses to the storage array.
06/19/14
20140173330
Split brain detection and recovery system
The invention provides for split-brain detection and recovery in a DAS cluster data storage system through a secondary network interconnection, such as a SAS link, directly between the DAS controllers. In the event of a communication failure detected on the secondary network, the DAS controllers initiate communications over the primary network, such as an Ethernet used for clustering and failover operations, to diagnose the nature of the failure, which may include a crash of a data storage node or loss of a secondary network link.
06/19/14
20140173221
Cache management
The present disclosure provides techniques for cache management. A data block may be received from an I/O interface.
06/19/14
20140173216
Invalidation of dead transient data in caches
Embodiments include methods, systems, and articles of manufacture directed to identifying transient data upon storing the transient data in a cache memory, and invalidating the identified transient data in the cache memory.
06/19/14
20140173214
Retention priority based cache replacement policy
A data processing system includes a cache memory and cache control circuitry for applying a cache replacement policy based upon a retention priority value stored with each cache line within the cache memory. The initial retention priority value, set upon inserting a cache line into the cache memory, depends upon either or both of which of a plurality of sources issued the memory access request that resulted in the insertion and the privilege level of that request.
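A retention-priority replacement policy in the spirit of this abstract can be sketched as follows. The promotion-on-hit rule, the aging loop, and the numeric scale are assumptions (in the style of re-reference-interval prediction schemes), not the claimed circuitry.

```python
# Sketch of one cache set under retention-priority replacement:
# the insertion priority depends on the requester, a hit promotes a
# line to maximum retention, and when no line is immediately
# evictable every line is aged until a victim appears.

MAX_PRIORITY = 3

class RetentionPrioritySet:
    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.ways = []  # list of [tag, priority]

    def access(self, tag, insert_priority):
        for way in self.ways:
            if way[0] == tag:
                way[1] = MAX_PRIORITY       # hit: promote to maximum retention
                return "hit"
        if len(self.ways) == self.num_ways:
            while not any(w[1] == 0 for w in self.ways):
                for w in self.ways:         # no victim yet: age every line
                    w[1] -= 1
            victim = next(w for w in self.ways if w[1] == 0)
            self.ways.remove(victim)
        self.ways.append([tag, insert_priority])
        return "miss"

s = RetentionPrioritySet(num_ways=2)
s.access("A", insert_priority=2)   # e.g. a higher-privilege requester
s.access("B", insert_priority=1)
s.access("A", insert_priority=2)   # hit promotes A
s.access("C", insert_priority=1)   # B ages to zero first and is evicted
```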
06/19/14
20140173211
Partitioning caches for sub-entities in computing devices
Some embodiments include a partitioning mechanism that partitions a cache memory into sub-partitions for sub-entities. In the described embodiments, the cache memory is initially partitioned into two or more partitions for one or more corresponding entities.
06/19/14
20140173207
Power gating a portion of a cache memory
In an embodiment, a processor includes multiple tiles, each including a core and a tile cache hierarchy. This tile cache hierarchy includes a first level cache, a mid-level cache (MLC) and a last level cache (LLC), and each of these caches is private to the tile.
06/19/14
20140173206
Power gating a portion of a cache memory
In an embodiment, a processor includes multiple tiles, each including a core and a tile cache hierarchy. This tile cache hierarchy includes a first level cache, a mid-level cache (MLC) and a last level cache (LLC), and each of these caches is private to the tile.
06/19/14
20140173203
Block memory engine
In an embodiment, a processor is disclosed and includes a cache memory and a memory execution cluster coupled to the cache memory. The memory execution cluster includes a memory execution unit to execute instructions including non-block memory instructions, and block memory logic to execute one or more block memory operations.
06/19/14
20140173202
Information processing apparatus and scheduling method
An information processing apparatus includes: at least one access unit that issues a memory access request for a memory; an arbitration unit that arbitrates the memory access request issued from the access unit; a management unit that allows the access unit that is the issuance source of the memory access request, according to a result of the arbitration made by the arbitration unit, to perform a memory access to the memory; a processor that accesses the memory through at least one cache memory; and a timing adjusting unit that holds a process relating to the memory access request issued by the access unit for a holding time set in advance and cancels the holding of the process in a case where power of the at least one cache memory is turned off in the processor before the holding time expires.
06/19/14
20140172802
Information processor and backup method
An information processor coupled to a storage apparatus that stores information includes: a creation unit configured to create a snapshot of a file system that manages first information stored in the storage apparatus and to output the snapshot to the storage apparatus; a writing unit configured to write second information stored in cache memory onto the storage apparatus after the snapshot has been created; and a replication instruction unit configured to instruct the storage apparatus to create a replication of the first information stored in the storage apparatus after the second information has been written and the snapshot has been created.
06/12/14
20140164713
Bypassing memory requests to a main memory
Some embodiments include a computing device with a control circuit that handles memory requests. The control circuit checks one or more conditions to determine when a memory request should be bypassed to a main memory instead of sending the memory request to a cache memory.
06/12/14
20140164712
Data processing apparatus and control method thereof
A cache memory device includes a data array structure including a plurality of entries identified by indices and including, for each entry, data acquired by a fetch operation or prefetch operation and a reference count associated with the data. The reference count holds the number of times the entry has been referred to by the prefetch operation minus the number of times it has been referred to by the fetch operation.
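The prefetch-minus-fetch reference count can be sketched as below: prefetches count announced future uses up, fetches count them down, and a count of zero marks the entry as a safe replacement candidate. The class structure and method names are illustrative assumptions.

```python
# Sketch of per-entry reference counting: prefetch references
# increment the count, fetch references decrement it, and an entry
# whose count has returned to zero has served all announced uses.

class CountedEntry:
    def __init__(self):
        self.ref_count = 0

    def prefetch_ref(self):
        self.ref_count += 1   # a prefetch announces a future use

    def fetch_ref(self):
        self.ref_count -= 1   # a fetch consumes one announced use

    def replaceable(self):
        return self.ref_count <= 0

entry = CountedEntry()
entry.prefetch_ref()
entry.prefetch_ref()
entry.fetch_ref()
still_needed = not entry.replaceable()  # one announced use remains
entry.fetch_ref()
done = entry.replaceable()              # all announced uses consumed
```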
06/12/14
20140164704
Cache swizzle with inline transposition
A method and circuit arrangement selectively swizzle data in one or more levels of cache memory coupled to a processing unit based upon one or more swizzle-related page attributes stored in a memory address translation data structure such as an effective-to-real address translation (ERAT) table or translation lookaside buffer (TLB). A memory address translation data structure may be accessed, for example, in connection with a memory access request for data in a memory page, such that attributes associated with the memory page in the data structure may be used to control whether data is swizzled, and if so, how the data is to be formatted in association with handling the memory access request.
06/12/14
20140164703
Cache swizzle with inline transposition
A method and circuit arrangement selectively swizzle data in one or more levels of cache memory coupled to a processing unit based upon one or more swizzle-related page attributes stored in a memory address translation data structure such as an effective-to-real address translation (ERAT) table or translation lookaside buffer (TLB). A memory address translation data structure may be accessed, for example, in connection with a memory access request for data in a memory page, such that attributes associated with the memory page in the data structure may be used to control whether data is swizzled, and if so, how the data is to be formatted in association with handling the memory access request.
06/12/14
20140164702
Virtual address cache memory, processor and multiprocessor
An embodiment provides a virtual address cache memory including: a TLB virtual page memory configured to rewrite entry data when a rewrite to a TLB occurs; a data memory configured to hold cache data using a virtual page tag or a page offset as a cache index; a cache state memory configured to hold a cache state for the cache data stored in the data memory, in association with the cache index; a first physical address memory configured to rewrite a held physical address when the rewrite to the TLB occurs; and a second physical address memory configured to rewrite a held physical address when the cache data is written to the data memory after the occurrence of the rewrite to the TLB.
06/12/14
20140164698
Logical volume transfer method and storage network system
The present invention transfers replication logical volumes between and among storage control units in a storage system comprising a plurality of storage control units. To transfer replication logical volumes from one storage control unit to another, a virtualization device sets a path to that storage control unit.
06/12/14
20140164485
Caching of data requests in session-based environment
An embodiment of a method includes a client preparing a data request for a storage server in a session-based environment.
06/05/14
20140156950
Emulated message signaled interrupts in multiprocessor systems
A processor with coherency-leveraged support for low-latency message signaled interrupt handling includes multiple execution cores and their associated cache memories. A first cache memory, associated with a first of the execution cores, includes a plurality of cache lines.
06/05/14
20140156948
Apparatuses and methods for pre-fetching and write-back for a segmented cache memory
Apparatuses and methods for a cache memory are described. In an example method, a transaction history associated with a cache block is referenced, and requested information is read from memory.
06/05/14
20140156947
Method and apparatus for supporting a plurality of load accesses of a cache in a single cycle to maintain throughput
A method for supporting a plurality of requests for access to a data cache memory (“cache”) is disclosed. The method comprises accessing a first set of requests to access the cache, wherein the cache comprises a plurality of blocks.
06/05/14
20140156940
Mechanism for page replacement in cache memory
A mechanism for page replacement for cache memory is disclosed. A method of the disclosure includes referencing an entry of a data structure of a cache in memory to identify a stored value of an eviction counter, the stored value of the eviction counter placed in the entry when a page of a file previously stored in the cache was evicted from the cache; determining a refault distance of the page of the file based on a difference between the stored value of the eviction counter and a current value of the eviction counter; and adjusting a ratio of cache lists maintained by the processing device to track pages in the cache, the adjusting based on the determined refault distance.
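The refault-distance computation can be sketched as follows: a global eviction counter stamps each evicted page, and on refault the difference between the current counter and the stamp gives the distance. The shadow-entry dictionary and class name are illustrative assumptions; the cache-list ratio adjustment is omitted.

```python
# Sketch of refault-distance tracking with a global eviction counter.

class RefaultTracker:
    def __init__(self):
        self.eviction_counter = 0
        self.shadow = {}          # evicted page -> counter value at eviction

    def evict(self, page):
        # Stamp the evicted page with the counter, then advance it.
        self.shadow[page] = self.eviction_counter
        self.eviction_counter += 1

    def refault_distance(self, page):
        """Evictions that occurred between this page's eviction and its refault."""
        return self.eviction_counter - self.shadow.pop(page)

tracker = RefaultTracker()
tracker.evict("page_a")
tracker.evict("page_b")
tracker.evict("page_c")
distance = tracker.refault_distance("page_a")
```

A small distance suggests the page was evicted too eagerly and the list protecting recently used pages should grow; a large distance suggests the opposite.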
06/05/14
20140156939
Methodology for fast detection of false sharing in threaded scientific codes
A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region.
06/05/14
20140156934
Storage apparatus and module-to-module data transfer method
A storage apparatus includes controller modules, each configured to have a cache memory and to control a storage device, and communication channels that connect the controller modules in a mesh topology, with one controller module providing an instruction to perform data transfer in which that controller module is specified as a transfer source and another controller module is specified as a transfer destination. The instruction is provided to a controller module directly connected to the other controller modules using a corresponding one of the communication channels and configured to perform data transfer from the cache memory of the one controller module to the cache memory of the other controller module, in accordance with the instruction.
06/05/14
20140156929
Network-on-chip using request and reply trees for low-latency processor-memory communication
A network-on-chip (NoC) organization comprises a die having a cache area and a core area, a plurality of core tiles arranged in the core area in a plurality of subsets, and at least one cache memory bank arranged in the cache area, whereby the at least one cache memory bank is distinct from each of the plurality of core tiles. The NoC organization further comprises an interconnect fabric comprising a request tree to connect a first cache memory bank of the at least one cache memory bank to each core tile of a first one of the subsets (the first subset corresponding to the first cache memory bank), such that each core tile of the first subset is connected to the first cache memory bank only, and to allow guiding data packets from each core tile of the first subset to the first cache memory bank, and a reply tree to connect the first cache memory bank to each core tile of the first subset and allow guiding data packets from the first cache memory bank to a core tile of the first subset.
06/05/14
20140156909
Systems and methods for dynamic optimization of flash cache in storage devices
In various embodiments, a storage device includes a magnetic media, a cache memory, and a drive controller. In embodiments, the drive controller is configured to establish a portion of the cache memory as an archival zone having a cache policy to maximize write hits.
06/05/14
20140152664
Method of rendering a terrain stored in a massive database
A method of rendering a terrain stored in a massive database, the terrain rendering being displayed for an observer by a display device comprising at least one graphics card with a cache memory, comprises at least: a step of generating several regular grids of terrain patches at different resolution levels so as to represent the terrain data of the massive database; a step of extracting terrain data from the massive database for several resolution levels, the extracted terrain data forming an extraction pyramid composed of an extraction window for each level of detail, placed in cache memory, each window comprising an active zone intended to be displayed and a preloading zone which makes it possible to anticipate data transfers; a step of selecting the patches of the extraction pyramid which contribute to the image; and a step of plotting the rendering on the basis of the selected patches.
05/29/14
20140149827
Semiconductor memory device including non-volatile memory, cache memory, and computer system
In one embodiment, the memory device includes a data storage region and an error correction code (ECC) region. The data storage region is configured to store a first number of data blocks.
05/29/14
20140149698
Storage system capable of managing a plurality of snapshot families and method of operating thereof
There is provided a storage system and a method of identifying delta-data therein between two points-in-time. The method comprises: generating successive snapshots Si and Si+1 corresponding to the two points-in-time; upon generating the snapshot Si+1, searching the cache memory for data blocks associated with snap_version = i, thereby yielding cached delta-metadata; searching the SF mapping data structure for destaged data blocks associated with snap_version = i, thereby yielding destaged delta-metadata; and joining the cached delta-metadata and the destaged delta-metadata, thereby yielding delta-metadata indicative of the delta-data between the points-in-time corresponding to the successive snapshots with snap_id = i and snap_id = i+1.
05/29/14
20140149689
Coherent proxy for attached processor
A coherent attached processor proxy (CAPP) of a primary coherent system receives a memory access request from an attached processor (AP) and an expected coherence state of a target address of the memory access request with respect to a cache memory of the AP. In response, the CAPP determines a coherence state of the target address and whether or not the expected state matches the determined coherence state.
05/29/14
20140149685
Memory management using dynamically allocated dirty mask space
Systems and methods related to a memory system including a cache memory are disclosed. The cache memory system includes a cache memory including a plurality of cache memory lines and a dirty buffer including a plurality of dirty masks.
05/29/14
20140149681
Coherent proxy for attached processor
A coherent attached processor proxy (CAPP) of a primary coherent system receives a memory access request from an attached processor (AP) and an expected coherence state of a target address of the memory access request with respect to a cache memory of the AP. In response, the CAPP determines a coherence state of the target address and whether or not the expected state matches the determined coherence state.
05/29/14
20140149669
Cache memory and methods for managing data of an application processor including the cache memory
In one example embodiment of the inventive concepts, a cache memory system includes a main cache memory including a nonvolatile random access memory, the main cache memory configured to exchange data with an external device and store the exchanged data, each exchanged datum including least significant bit (LSB) data and most significant bit (MSB) data. The cache memory system further includes a sub-cache memory including a random access memory, the sub-cache memory configured to store LSB data of at least a portion of the data stored in the main cache memory, wherein the main cache memory and the sub-cache memory form a single-level cache memory.
05/29/14
20140149664
Storage system capable of managing a plurality of snapshot families and method of operating thereof
There is provided a storage system comprising a control layer operable to manage a plurality of snapshot families, each family constituted by snapshot family members having hierarchical relations therebetween. The method of operating the storage system comprises searching a cache memory for an addressed data block corresponding to an addressed lba and associated with an addressed snapshot family and an addressed sf member.
05/29/14
20140149651
Providing extended cache replacement state information
In an embodiment, a processor includes a decode logic to receive and decode a first memory access instruction to store data in a cache memory with a replacement state indicator of a first level, and to send the decoded first memory access instruction to a control logic. In turn, the control logic is to store the data in a first way of a first set of the cache memory and to store the replacement state indicator of the first level in a metadata field of the first way responsive to the decoded first memory access instruction.
05/29/14
20140146589
Semiconductor memory device with cache function in dram
A semiconductor memory device is provided which includes a dynamic random access memory including a memory cell array formed of dynamic random access memory cells; a cache memory formed on the same chip as the dynamic random access memory and configured to communicate with a processor or an external device; and a controller connected with the dynamic random access memory and the cache memory on the same chip and configured to control a dynamic random access function and a cache function.
05/22/14
20140143505
Dynamically configuring regions of a main memory in a write-back mode or a write-through mode
The described embodiments include a main memory and a cache memory (or “cache”) with a cache controller that includes a mode-setting mechanism. In some embodiments, the mode-setting mechanism is configured to dynamically determine an access pattern for the main memory.
05/22/14
20140143503
Cache and method for cache bypass functionality
A cache is provided for operatively coupling a processor with a main memory. The cache includes a cache memory and a cache controller operatively coupled with the cache memory.
05/22/14
20140143502
Predicting outcomes for memory requests in a cache memory
The described embodiments include a cache controller with a prediction mechanism in a cache. In the described embodiments, the prediction mechanism is configured to perform a lookup in each table in a hierarchy of lookup tables in parallel to determine if a memory request is predicted to be a hit in the cache, each table in the hierarchy comprising predictions of whether memory requests to corresponding regions of a main memory will hit the cache, the corresponding regions of the main memory being smaller for tables lower in the hierarchy.
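Region-granular hit prediction with a table hierarchy can be sketched as below: every level covers a region size, all levels are consulted for an address, and the most specific entry wins. The two-level depth, region sizes, and default-hit policy are illustrative assumptions.

```python
# Sketch of hierarchical hit prediction: tables lower in the
# hierarchy cover smaller main-memory regions; the most specific
# prediction for an address overrides coarser ones.

REGION_BITS = [16, 8]  # level 0: 64 KiB regions; level 1: 256 B regions

class HitPredictor:
    def __init__(self):
        self.tables = [{} for _ in REGION_BITS]

    def train(self, level, address, will_hit):
        region = address >> REGION_BITS[level]
        self.tables[level][region] = will_hit

    def predict(self, address):
        # Consult every level; later (finer) levels override earlier ones.
        prediction = True  # default: assume the cache will hit
        for level, bits in enumerate(REGION_BITS):
            region = address >> bits
            if region in self.tables[level]:
                prediction = self.tables[level][region]
        return prediction

p = HitPredictor()
p.train(level=0, address=0x12345, will_hit=False)  # coarse region tends to miss
p.train(level=1, address=0x12345, will_hit=True)   # fine region overrides
coarse_only = p.predict(0x12FFF)   # only the coarse table covers this address
fine = p.predict(0x12345)          # fine table wins here
```

In hardware the lookups would run in parallel, as the abstract describes; the sequential loop here simply models the priority of the finer tables.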
05/22/14
20140143498
Methods and apparatus for filtering stack data within a cache memory hierarchy
A method of storing stack data in a cache hierarchy is provided. The cache hierarchy comprises a data cache and a stack filter cache.
05/22/14
20140143491
Semiconductor apparatus and operating method thereof
A semiconductor apparatus includes stacked memory dies; a controller configured to control the memory dies; and a base die configured to electrically connect the memory dies and the controller. The base die includes a control unit configured to receive an external address, a request and external data from the controller; a memory input interface configured to receive a memory control signal for controlling the memory dies, from the control unit, and first cache data, and output an internal address, an internal command and internal data to the memory dies; a write cache memory configured to receive a cache control signal and transfer data from the control unit, output the first cache data to the memory input interface, and output second cache data to a memory output interface; and the memory output interface configured to output the second cache data and stored data inputted from the memory dies, to the controller.
05/22/14
20140142871
Vibration monitoring system
The invention is directed to a method for monitoring vibrations to detect distinct vibration events in an acceleration waveform converted into acceleration samples. The method comprises: storing the acceleration samples as a sequence of acceleration frames in a cache memory (s110); detecting the presence or absence of a distinct vibration event in each of the acceleration frames (s160); and, when a distinct vibration event is detected in an acceleration frame, forwarding that acceleration frame from the cache memory to a long-term storage device (s170).
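The frame-caching flow described in this abstract can be sketched in a few lines of Python. The frame size and the simple threshold-based event detector are assumptions for illustration; the application does not specify how events are detected.

```python
FRAME_SIZE = 8    # acceleration samples per frame (assumed)
THRESHOLD = 2.0   # |sample| above this counts as an event (assumed)

def monitor(samples, long_term_storage):
    """Cache samples as frames, detect events per frame, and forward
    event frames to long-term storage (steps s110, s160, s170)."""
    cache = []  # s110: store the samples as a sequence of frames
    for i in range(0, len(samples), FRAME_SIZE):
        frame = samples[i:i + FRAME_SIZE]
        cache.append(frame)
        # s160: detect presence/absence of a distinct vibration event
        if any(abs(s) > THRESHOLD for s in frame):
            # s170: forward the event frame to long-term storage
            long_term_storage.append(frame)
    return cache
```

Only frames containing a detected event leave the cache, so quiet portions of the waveform never reach the long-term storage device.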
05/15/14
20140136796
Arithmetic processing device and method for controlling the same
An arithmetic processing device includes a cache memory, a first controller configured to control the cache memory, and a second controller assigned a non-cache space to be accessed without use of the cache memory. When the condition that out-of-order processing of a first and a second access request for the non-cache space is possible and the access targets of the first and second access requests are the same is satisfied, the first controller issues the second access request to the second controller without waiting for a completion notification from the second controller for the previously issued first access request. When the condition is not satisfied, the first controller issues the second access request only after receiving the completion notification from the second controller for the previously issued first access request.
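The issue-ordering rule in this abstract reduces to a single predicate, sketched below. The function and field names are illustrative, not from the application; per the abstract, the second request may be issued without waiting only when out-of-order processing is possible and both requests share the same access target.

```python
def must_wait_for_completion(first, second, out_of_order_ok):
    """Return True if the first controller must hold the second
    non-cache request until the first request's completion
    notification arrives from the second controller."""
    same_target = first["target"] == second["target"]
    # Issue immediately only when both parts of the condition hold.
    return not (out_of_order_ok and same_target)
```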
05/15/14
20140136793
System and method for reduced cache mode
A system and method are described for dynamically changing the size of a computer memory, such as the level 2 cache used in a graphics processing unit. In an embodiment, a relatively large cache memory can be implemented in a computing system so as to meet the needs of memory-intensive applications.
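One common way to realize a reduced cache mode is to disable a subset of the ways in a set-associative cache; the sketch below illustrates that idea. The way counts and the way-masking approach are assumptions for illustration only, since the application abstract does not describe its resizing mechanism.

```python
class ResizableCache:
    """Sketch of a set-associative cache whose effective size can be
    reduced by disabling ways (parameters are assumed)."""

    def __init__(self, num_sets=4, num_ways=8):
        self.num_sets = num_sets
        self.num_ways = num_ways
        self.active_ways = num_ways  # full-size mode by default
        self.sets = [[None] * num_ways for _ in range(num_sets)]

    def set_active_ways(self, ways):
        # Reduced mode: only the first `ways` ways of each set are used.
        self.active_ways = ways
        for cache_set in self.sets:
            for w in range(ways, self.num_ways):
                cache_set[w] = None  # invalidate entries in disabled ways

    def capacity(self):
        # Effective capacity in lines under the current mode.
        return self.num_sets * self.active_ways
```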
05/15/14
20140136767
Memory system having memory controller with cache memory and nvram and method of operating same
In a memory system including a flash memory and a memory controller having a cache memory and a nonvolatile random access memory (NVRAM), a method of operating the memory system includes: receiving a write request specifying a write operation directed to a page of a designated active write block in the flash memory; storing a page mapping table for the active write block in the cache memory; generating update information for the cached page mapping table as a result of executing the write operation and storing the update information in the NVRAM; and storing an updated version of the page mapping table in the flash memory after execution of the write operation is complete.
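The write flow in this abstract can be sketched with plain Python containers standing in for the three memories. The class and field names are assumptions for illustration; the point is the ordering: the cached table is updated, the update is logged durably in NVRAM, and the table is persisted to flash only after the write completes.

```python
class MemoryController:
    """Sketch of the cache/NVRAM/flash write flow described above."""

    def __init__(self):
        self.cache = {}         # cached page mapping table per active block
        self.nvram_log = []     # durable update records (models NVRAM)
        self.flash_tables = {}  # page mapping tables persisted in flash

    def write_page(self, block, logical_page, physical_page):
        # Keep the active write block's page mapping table in cache memory.
        table = self.cache.setdefault(block, {})
        table[logical_page] = physical_page
        # Store the update information in NVRAM so it survives power loss.
        self.nvram_log.append((block, logical_page, physical_page))
        # After the write operation completes, store the updated table
        # in the flash memory.
        self.flash_tables[block] = dict(table)
```

Logging updates to NVRAM before persisting the full table lets the mapping be reconstructed after an unexpected power loss.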
05/15/14
20140133209
Memory architectures having wiring structures that enable different access patterns in multiple dimensions
Multi-dimensional memory architectures are provided having access wiring structures that enable different access patterns in multiple dimensions. Furthermore, three-dimensional multiprocessor systems are provided having multi-dimensional cache memory architectures with access wiring structures that enable different access patterns in multiple dimensions.
05/15/14
20140133208
Memory architectures having wiring structures that enable different access patterns in multiple dimensions
Multi-dimensional memory architectures are provided having access wiring structures that enable different access patterns in multiple dimensions. Furthermore, three-dimensional multiprocessor systems are provided having multi-dimensional cache memory architectures with access wiring structures that enable different access patterns in multiple dimensions.
05/08/14
20140129778
Multi-port shared cache apparatus
An apparatus for use in a telecommunications system comprises a cache memory shared by multiple clients and a controller for controlling the shared cache memory. A method of controlling cache operation in a shared cache memory apparatus is also disclosed.
05/08/14
20140129754
Customization of a bus adapter card
The present disclosure includes systems and techniques relating to customization of a bus adapter card. In some implementations, an apparatus includes a processor and a program memory, and a bus adapter card coupled with the computing apparatus and configured to connect with a storage device, the bus adapter card comprising a cache memory and a controller to cache, in the cache memory, data associated with the storage device. The program memory includes a driver to communicate with the bus adapter card responsive to requests corresponding to the storage device, and the driver is configured to modify its communications with the bus adapter card responsive to information provided separately from the requests.

This listing is a sample of recently filed patent applications related to Cache Memory; it is not a comprehensive history. There may be servicemarks and trademarks associated with these patents. Please check with a patent attorney if you need further assistance or plan to use them for business purposes. This patent data is also published by the USPTO and available for free on their website. Note that there may be alternative spellings for Cache Memory with additional patents listed. Browse our RSS directory or search for other possible listings.