FreshPatents.com


Cache Memory patents



      
           
This page is updated frequently with new Cache Memory-related patent applications. Subscribe to the Cache Memory RSS feed to receive updates automatically.



List of recent Cache Memory-related patents (date / application number):
08/28/14
20140244939
Texture cache memory system of non-blocking for texture mapping pipeline and operation method of texture cache memory
A non-blocking texture cache memory for a texture mapping pipeline and an operation method of the non-blocking texture cache memory may include: a retry buffer configured to temporarily store result data according to a hit pipeline or a miss pipeline; a retry buffer lookup unit configured to look up the retry buffer in response to a texture request transferred from a processor; a verification unit configured to verify whether result data corresponding to the texture request is stored in the retry buffer as the lookup result; and an output control unit configured to output the stored result data to the processor when the result data corresponding to the texture request is stored as the verification result.
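The retry-buffer flow described in this abstract can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the class and function names are assumptions, and the hit/miss pipelines are reduced to a simple `store` call.

```python
class RetryBuffer:
    """Temporarily stores pipeline result data keyed by texture request id."""
    def __init__(self):
        self._slots = {}

    def store(self, request_id, result_data):
        # The hit or miss pipeline deposits its result here.
        self._slots[request_id] = result_data

    def lookup(self, request_id):
        # Verification step: is the result for this request available yet?
        return self._slots.get(request_id)


def handle_texture_request(buf, request_id):
    """Output control: return the stored result, or None to signal a retry."""
    data = buf.lookup(request_id)
    if data is not None:
        del buf._slots[request_id]  # result consumed by the processor
    return data
```

A request that arrives before its pipeline result is present simply returns `None` and is retried later, which is what makes the cache non-blocking.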
08/28/14
20140244936
Maintaining cache coherency between storage controllers
Systems and methods maintain cache coherency between storage controllers utilizing bitmap data. In one embodiment, a storage controller processes an I/O request for a logical volume from a host and generates one or more cache entries in a cache memory based on the request.
08/28/14
20140244935
Storage system capable of managing a plurality of snapshot families and method of snapshot family based read
A method for a snapshot family based reading of data units from a storage system, the method comprising: receiving a read request for reading a requested data entity; searching in a cache memory of the storage system for a matching cached data entity; and, if the matching cached data entity is not found: searching for one or more relevant data entity candidates stored in the storage system; selecting, out of the one or more relevant data entity candidates, a selected relevant data entity whose content has the highest probability, out of the contents of the one or more relevant data entity candidates, of being equal to the content of the requested data entity; and responding to the read request by sending the selected relevant data entity.
08/28/14
20140244934
Storage apparatus
When data designated by a read request from a mainframe is stored in a cache memory, a transfer control unit refers to internal control information, identifies the length of a key area and the length of a data area of the data designated by the read request, calculates an address of the data designated by the read request in the cache memory based on the identified length of the key area, the identified length of the data area, and the length of a count area (which is a fixed length), and controls processing for collectively transferring the data stored at the calculated address from the cache memory to a channel adapter.
08/28/14
20140240517
Monitoring video waveforms
A video signal waveform monitor is shown, which receives an input video signal composed of video lines. A video signal digitizer samples the input video signal at video sample points to generate a sequence of video pixel data, which is written into an acquisition framestore organized into a video pixel array so as to represent a raster of the input video signal.
08/21/14
20140237187
Adaptive multilevel binning to improve hierarchical caching
A device driver calculates a tile size for a plurality of cache memories in a cache hierarchy. The device driver calculates a storage capacity of a first cache memory.
08/21/14
20140237174
Highly efficient design of storage array utilizing multiple cache lines for use in first and second cache spaces and memory subsystems
A method of operating a cache memory includes the step of storing a set of data in a first space in a cache memory, the set of data being associated with a set of tags. A subset of the set of data is stored in a second space in the cache memory, the subset of the set of data being associated with a tag of a subset of the set of tags.
08/21/14
20140237163
Reducing writes to solid state drive cache memories of storage controllers
Methods and structure are provided for reducing the number of writes to a cache of a storage controller. One exemplary embodiment includes a storage controller that has a non-volatile flash cache memory, a primary memory that is distinct from the cache memory, and a memory manager.
08/21/14
20140237160
Inter-set wear-leveling for caches with limited write endurance
A cache controller includes a first register that updates after every memory location swap operation on a number of cache sets in a cache memory and resets every N−1 memory location swap operations, where N is the number of cache sets in the cache memory.
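The counter behavior in this abstract is easy to model: increment on each swap and wrap modulo N−1. The sketch below is an assumed reading of the abstract, not the patented circuit; the class name is invented for illustration.

```python
class SwapCounter:
    """Models the first register from the abstract: it advances after every
    memory location swap operation across the cache sets, and resets after
    every N-1 swap operations, where N is the number of cache sets."""
    def __init__(self, num_sets):
        assert num_sets > 1
        self.n = num_sets
        self.value = 0

    def on_swap(self):
        # Wrap after N-1 swaps: the value cycles through 1, 2, ..., N-2, 0.
        self.value = (self.value + 1) % (self.n - 1)
        return self.value
```

With N = 4 cache sets the register cycles with period 3, which is what lets the controller rotate the swap target evenly across sets for wear-leveling.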
08/21/14
20140236998
Managing global cache coherency in a distributed shared caching for clustered file systems
Various embodiments are provided for managing a global cache coherency in a distributed shared caching for a clustered file system (CFS). The CFS manages access permissions to an entire space of data segments by using the DSM module.
08/14/14
20140229685
Coherent attached processor proxy supporting coherence state update in presence of dispatched master
A coherent attached processor proxy (CAPP) of a primary coherent system receives a memory access request specifying a target address in the primary coherent system from an attached processor (AP) external to the primary coherent system. The CAPP includes a CAPP directory of contents of a cache memory in the AP that holds copies of memory blocks belonging to a coherent address space of the primary coherent system.
08/14/14
20140229684
Coherent attached processor proxy supporting coherence state update in presence of dispatched master
A coherent attached processor proxy (CAPP) of a primary coherent system receives a memory access request specifying a target address in the primary coherent system from an attached processor (AP) external to the primary coherent system. The CAPP includes a CAPP directory of contents of a cache memory in the AP that holds copies of memory blocks belonging to a coherent address space of the primary coherent system.
08/14/14
20140229680
Aggregating cache eviction notifications to a directory
Technologies described herein generally relate to aggregation of cache eviction notifications to a directory. Some example technologies may be utilized to update an aggregation table to reflect evictions of a plurality of blocks from a plurality of block addresses of at least one cache memory.
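The aggregation idea above can be sketched as a table that accumulates evicted block addresses and notifies the directory once, in batch, rather than per block. This is a hypothetical illustration; the class names and the `remove_sharers` directory call are assumptions, not the patent's interface.

```python
class EvictionAggregator:
    """Accumulates cache eviction notifications and delivers them to the
    directory as a single aggregated update."""
    def __init__(self, directory):
        self.table = set()          # the aggregation table of block addresses
        self.directory = directory

    def record_eviction(self, block_addr):
        # Per-block evictions only touch the local table, not the network.
        self.table.add(block_addr)

    def flush(self):
        # One aggregated notification replaces many per-block messages.
        self.directory.remove_sharers(frozenset(self.table))
        count = len(self.table)
        self.table.clear()
        return count
```

The win is traffic reduction: duplicate evictions of the same address collapse into one table entry, and the directory sees a single message per flush.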
08/14/14
20140229658
Cache load balancing in storage controllers
Methods and structure are provided for cache load balancing in storage controllers that utilize solid state drive (SSD) caches. One embodiment is a storage controller of a storage system.
08/07/14
20140223122
Managing virtual machine placement in a virtualized computing environment
A method for determining that first and second virtual machines, which currently execute in first and second host computing systems, respectively, should both execute within the same host computing system. The method includes determining that the first and second virtual machines have accessed the same data more often than third and fourth virtual machines have accessed said same data.
08/07/14
20140223110
Active memory processor system
In general, the present invention relates to data cache processing. Specifically, the present invention relates to a system that provides reconfigurable dynamic cache which varies the operation strategy of cache memory based on the demand from the applications originating from different external general processor cores, along with functions of a virtualized hybrid core system.
08/07/14
20140223102
Flush control apparatus, flush control method and cache memory apparatus
A flush control apparatus 11 includes: a tag memory unit 14 capable of associating a tag identifier, identifying a tag which associates a plurality of cache lines, with tag information representing whether or not the tag is valid; a line memory unit 15; a way specification unit 12; and a flush unit 13 which directs flushing of the way specified by the way specification unit 12.
08/07/14
20140223094
Selective raid protection for cache memory
A RAID controller includes a cache memory in which write cache blocks (WCBs) are protected by a RAID-5 (striping plus parity) scheme while read cache blocks (RCBs) are not protected in such a manner. If a received cache block is an RCB, the RAID controller stores it in the cache memory without storing any corresponding parity information.
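The WCB/RCB distinction above can be illustrated with byte-wise XOR parity, the building block of RAID-5. This is a simplified sketch under assumed names, not the controller's actual stripe management: write cache blocks get parity, read cache blocks do not, since a lost RCB can simply be re-read from backing storage.

```python
def xor_parity(blocks):
    """RAID-5-style parity: byte-wise XOR across the data blocks of a stripe."""
    parity = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            parity[i] ^= b
    return bytes(parity)


def store_cache_block(cache, parity_store, block_id, data, is_wcb, stripe_peers):
    """Store a cache block; compute and store parity only for WCBs.

    `stripe_peers` holds the other data blocks of the stripe (assumption:
    the caller tracks stripe membership)."""
    cache[block_id] = data
    if is_wcb:
        parity_store[block_id] = xor_parity([data] + stripe_peers)
    # RCBs: no parity is stored, matching the selective-protection scheme.
```

Skipping parity for RCBs saves both cache capacity and the write bandwidth of the parity updates.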
07/31/14
20140215157
Monitoring multiple memory locations for targeted stores in a shared-memory multiprocessor
A system and method for supporting targeted stores in a shared-memory multiprocessor. A targeted store enables a first processor to push a cache line to be stored in a cache memory of a second processor.
07/31/14
20140215145
Tape drive cache memory
A system including a tape drive having cache memory to detect a smaller partition and a larger partition in a tape and load stored information from the smaller partition into the cache memory, so as to access the cache memory instead of the smaller partition on the tape.
07/24/14
20140208142
Semiconductor device
Supply of power to a plurality of circuits is controlled efficiently depending on usage conditions and the like of the circuits. An address monitoring circuit monitors whether a cache memory and an input/output interface are in an access state or not, and performs power gating in accordance with the state of the cache memory and the input/output interface.
07/24/14
20140208034
System and method for efficient paravirtualized OS process switching
The exemplary embodiments described herein relate to systems and methods for improved process switching of a paravirtualized guest with a software-based memory management unit ("MMU"). One embodiment relates to a non-transitory computer readable storage medium including a set of instructions executable by a processor, the set of instructions, when executed, resulting in a performance of the following: create a plurality of new processes for each of a plurality of virtual environments, each of the virtual environments assigned one of a plurality of address space identifiers ("ASIDs") stored in a cache memory; perform a process switch to one of the virtual environments, thereby designating that virtual environment as the active virtual environment; determine whether the active virtual environment has exhausted each of the ASIDs; and flush a cache memory when it is determined that the active virtual environment has exhausted each of the ASIDs.
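The exhaustion-then-flush logic above can be sketched as a small ASID pool: each process switch consumes an ASID, and once the pool is exhausted the cache is flushed and the pool reset. This is an assumed simplification of the abstract (names invented; real ASID allocation is per-environment and hardware-specific).

```python
class AsidPool:
    """Tracks ASID consumption across process switches and flushes the
    cache when every ASID in the pool has been used."""
    def __init__(self, num_asids):
        self.num_asids = num_asids
        self.used = set()
        self.flush_count = 0  # stands in for actual cache-flush hardware

    def process_switch(self, asid):
        if len(self.used) >= self.num_asids:
            # All ASIDs exhausted: flush the cache and recycle the pool.
            self._flush_cache()
            self.used.clear()
        self.used.add(asid)

    def _flush_cache(self):
        self.flush_count += 1
```

Deferring the flush until exhaustion is the point: switches between environments that still hold distinct ASIDs need no flush at all.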
07/24/14
20140208005
System, method and computer-readable medium for providing selective protection and endurance improvements in flash-based cache
A cache controller includes a cache memory distributed across multiple solid-state storage units in which cache line fill operations are applied sequentially in a defined manner and write operations are protected by a RAID-5 (striping plus parity) scheme upon a stripe reaching capacity. The cache store is responsive to data from a storage controller managing a primary data store.
07/24/14
20140207987
Multiprocessor system with multiple concurrent modes of execution
A multiprocessor system supports multiple concurrent modes of speculative execution. Speculation identification numbers (IDs) are allocated to speculative threads from a pool of available numbers.
07/17/14
20140201458
Reducing cache memory requirements for recording statistics from testing with a multiplicity of flows
A method reduces cache memory requirements for testing a multiplicity of flows. The method includes receiving data corresponding to a frame in a particular flow among the multiplicity of flows.
07/17/14
20140201456
Control of processor cache memory occupancy
Techniques are described for controlling processor cache memory within a processor system. Cache occupancy values for each of a plurality of entities executing in the processor system can be calculated.
07/10/14
20140195722
Storage system which realizes asynchronous remote copy using cache memory composed of flash memory, and control method thereof
The first storage apparatus provides a primary logical volume, and the second storage apparatus has a secondary logical volume. When the first storage apparatus receives a write command to the primary logical volume, a package processor in a flash package allocates first physical area in the flash memory chip to first cache logical area for write data and stores the write data to the allocated first physical area.
07/03/14
20140189245
Merging eviction and fill buffers for cache line transactions
A processor includes a first cache memory and a bus unit in some embodiments. The bus unit includes a plurality of buffers and is operable to allocate a selected buffer of the plurality of buffers for a fill request associated with a first cache line to be stored in the first cache memory, load fill data from the first cache line into the selected buffer, and transfer the fill data to the first cache memory in parallel with storing eviction data for an evicted cache line from the first cache memory in the selected buffer.
07/03/14
20140189204
Information processing apparatus and cache control method
An information processing apparatus comprises a plurality of types of cache memories having different characteristics, decides on a type of cache memory to be used as a data cache destination based on the access characteristics of the cache-target data, and caches the data in the cache memory of the decided type.
07/03/14
20140189203
Storage apparatus and storage control method
A cache memory (CM), in which data accessed with respect to a storage device is temporarily stored, is coupled to a controller for accessing the storage device in accordance with an access command from a higher-level apparatus. The CM comprises a nonvolatile semiconductor memory (NVM) and provides a logical space to the controller.
06/26/14
20140181420
Distributed cache coherency directory with failure redundancy
A system includes a number of processors with each processor including a cache memory. The system also includes a number of directory controllers coupled to the processors.
06/26/14
20140181418
Managing global cache coherency in a distributed shared caching for clustered file systems
Systems, methods, and computer program products are provided for managing a global cache coherency in a distributed shared caching for clustered file systems (CFS).
06/26/14
20140181414
Mechanisms to bound the presence of cache blocks with specific properties in caches
A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache array and a corresponding cache controller.
06/26/14
20140181412
Mechanisms to bound the presence of cache blocks with specific properties in caches
A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache and one or more sources for memory requests.
06/26/14
20140181408
Managing global cache coherency in a distributed shared caching for clustered file systems
Systems, methods, and computer program products are provided for managing a global cache coherency in a distributed shared caching for clustered file systems (CFS).
06/26/14
20140181406
System, method and computer-readable medium for spool cache management
A system, method, and computer-readable medium that facilitate efficient use of cache memory in a massively parallel processing system are provided. A residency time of a data block to be stored in cache memory or a disk drive is estimated.
06/26/14
20140181402
Selective cache memory write-back and replacement policies
A method of managing cache memory includes assigning a caching priority designator to an address that addresses information stored in a memory system. The information is stored in a cacheline of a first level of cache memory in the memory system.
06/26/14
20140181388
Method and apparatus to implement lazy flush in a virtually tagged cache memory
A processor includes a processor core including an execution unit to execute instructions, and a cache memory. The cache memory includes a controller to update each of a plurality of stale indicators in response to a lazy flush instruction.
06/26/14
20140181369
Dynamic overprovisioning for data storage systems
Disclosed embodiments are directed to systems and methods for dynamic overprovisioning for data storage systems. In one embodiment, a data storage system can reserve a portion of memory, such as non-volatile solid-state memory, for overprovisioning.
06/26/14
20140181162
Managing global cache coherency in a distributed shared caching for clustered file systems
Systems, methods, and computer program products are provided for managing a global cache coherency in a distributed shared caching for clustered file systems (CFS).
06/19/14
20140173379
Dirty cacheline duplication
A method of managing memory includes installing a first cacheline at a first location in a cache memory and receiving a write request. In response to the write request, the first cacheline is modified in accordance with the write request and marked as dirty.
06/19/14
20140173378
Parity data management for a memory architecture
A processor system as presented herein includes a processor core, cache memory coupled to the processor core, a memory controller coupled to the cache memory, and a system memory component coupled to the memory controller. The system memory component includes a plurality of independent memory channels configured to store data blocks, wherein the memory controller controls the storing of parity bits in at least one of the plurality of independent memory channels.
06/19/14
20140173342
Debug access mechanism for duplicate tag storage
A coherence system includes a storage array that may store duplicate tag information associated with a cache memory of a processor. The system may also include a pipeline unit that includes a number of stages to control accesses to the storage array.
06/19/14
20140173330
Split brain detection and recovery system
The invention provides for split brain detection and recovery in a DAS cluster data storage system through a secondary network interconnection, such as a SAS link, directly between the DAS controllers. In the event of a communication failure detected on the secondary network, the DAS controllers initiate communications over the primary network, such as an Ethernet used for clustering and failover operations, to diagnose the nature of the failure, which may include a crash of a data storage node or loss of a secondary network link.
06/19/14
20140173221
Cache management
The present disclosure provides techniques for cache management. A data block may be received from an I/O interface.
06/19/14
20140173216
Invalidation of dead transient data in caches
Embodiments include methods, systems, and articles of manufacture directed to identifying transient data upon storing the transient data in a cache memory, and invalidating the identified transient data in the cache memory.
06/19/14
20140173214
Retention priority based cache replacement policy
A data processing system includes a cache memory 58 and cache control circuitry 56 for applying a cache replacement policy based upon a retention priority value PV stored with each cache line 66 within the cache memory 58. The initial retention priority value, set upon inserting a cache line 66 into the cache memory 58, is dependent upon either or both of which of a plurality of sources issued the memory access request that resulted in the insertion and the privilege level of the memory access request resulting in the insertion.
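The policy above can be sketched as a set of lines each tagged with a priority value at insertion, with the lowest-priority line chosen as the victim. The source-to-priority mapping below is an assumption for illustration, not the patent's actual table.

```python
# Assumed mapping: which source issued the request determines the initial
# retention priority value (PV) of the inserted line.
INSERT_PV = {"prefetcher": 0, "demand": 2, "privileged": 3}


def insert_line(cache_set, capacity, addr, source):
    """Insert a line with a PV derived from the requesting source; when the
    set is full, evict the line with the lowest retention priority."""
    if len(cache_set) >= capacity:
        victim = min(cache_set, key=cache_set.get)  # lowest PV is evicted
        del cache_set[victim]
    cache_set[addr] = INSERT_PV[source]
```

The effect is that speculative prefetches, inserted with a low PV, are the first candidates for eviction, while demand and privileged accesses are retained longer.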
06/19/14
20140173211
Partitioning caches for sub-entities in computing devices
Some embodiments include a partitioning mechanism that partitions a cache memory into sub-partitions for sub-entities. In the described embodiments, the cache memory is initially partitioned into two or more partitions for one or more corresponding entities.
06/19/14
20140173207
Power gating a portion of a cache memory
In an embodiment, a processor includes multiple tiles, each including a core and a tile cache hierarchy. This tile cache hierarchy includes a first level cache, a mid-level cache (MLC) and a last level cache (LLC), and each of these caches is private to the tile.
06/19/14
20140173206
Power gating a portion of a cache memory
In an embodiment, a processor includes multiple tiles, each including a core and a tile cache hierarchy. This tile cache hierarchy includes a first level cache, a mid-level cache (MLC) and a last level cache (LLC), and each of these caches is private to the tile.
06/19/14
20140173203
Block memory engine
In an embodiment, a processor is disclosed and includes a cache memory and a memory execution cluster coupled to the cache memory. The memory execution cluster includes a memory execution unit to execute instructions including non-block memory instructions, and block memory logic to execute one or more block memory operations.
06/19/14
20140173202
Information processing apparatus and scheduling method
An information processing apparatus includes: at least one access unit that issues a memory access request for a memory; an arbitration unit that arbitrates the memory access request issued from the access unit; a management unit that allows the access unit that is the issuance source of the memory access request, according to a result of the arbitration made by the arbitration unit, to perform a memory access to the memory; a processor that accesses the memory through at least one cache memory; and a timing adjusting unit that holds a process relating to the memory access request issued by the access unit for a holding time set in advance and cancels the holding of the process relating to the memory access request in a case where power of the at least one cache memory is turned off in the processor before the holding time expires.
06/19/14
20140172802
Information processor and backup method
An information processor coupled to a storage apparatus that stores information, includes: a creation unit configured to create a snapshot of a file system that manages first information stored in the storage apparatus and to output the snapshot to the storage apparatus; a writing unit configured to write second information stored in cache memory onto the storage apparatus after the snapshot has been created; and a replication instruction unit configured to instruct the storage apparatus to create a replication of the first information stored in the storage apparatus after the second information has been written and the snapshot has been created.
06/12/14
20140164713
Bypassing memory requests to a main memory
Some embodiments include a computing device with a control circuit that handles memory requests. The control circuit checks one or more conditions to determine when a memory request should be bypassed to a main memory instead of sending the memory request to a cache memory.
06/12/14
20140164712
Data processing apparatus and control method thereof
A cache memory device includes a data array structure including a plurality of entries identified by indices and including, for each entry, data acquired by a fetch operation or prefetch operation and a reference count associated with the data. The reference count holds the value obtained by subtracting the count of times the entry has been referred to by the fetch operation from the count of times the entry has been referred to by the prefetch operation.
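The arithmetic in this abstract is simple enough to state directly: the reference count is prefetch references minus fetch references. The sketch below models one entry under assumed names; the original hardware tracks this per data-array entry.

```python
class PrefetchEntry:
    """Models one entry's reference count: prefetch references increment it,
    fetch (demand) references decrement it, so count = prefetches - fetches.
    A count of zero or below suggests the prefetched data has been fully
    consumed and the entry is a good replacement candidate."""
    def __init__(self):
        self.ref_count = 0

    def on_prefetch_ref(self):
        self.ref_count += 1

    def on_fetch_ref(self):
        self.ref_count -= 1

    def is_consumed(self):
        return self.ref_count <= 0
```

A usage pattern of two prefetch references followed by two demand fetches brings the count back to zero, signaling the entry can be replaced without losing useful prefetched data.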
06/12/14
20140164704
Cache swizzle with inline transposition
A method and circuit arrangement selectively swizzle data in one or more levels of cache memory coupled to a processing unit based upon one or more swizzle-related page attributes stored in a memory address translation data structure such as an effective-to-real translation (ERAT) or translation lookaside buffer (TLB). A memory address translation data structure may be accessed, for example, in connection with a memory access request for data in a memory page, such that attributes associated with the memory page in the data structure may be used to control whether data is swizzled, and if so, how the data is to be formatted in association with handling the memory access request.
06/12/14
20140164703
Cache swizzle with inline transposition
A method and circuit arrangement selectively swizzle data in one or more levels of cache memory coupled to a processing unit based upon one or more swizzle-related page attributes stored in a memory address translation data structure such as an effective-to-real translation (ERAT) or translation lookaside buffer (TLB). A memory address translation data structure may be accessed, for example, in connection with a memory access request for data in a memory page, such that attributes associated with the memory page in the data structure may be used to control whether data is swizzled, and if so, how the data is to be formatted in association with handling the memory access request.
06/12/14
20140164702
Virtual address cache memory, processor and multiprocessor
An embodiment provides a virtual address cache memory including: a TLB virtual page memory configured to, when a rewrite to a TLB occurs, rewrite entry data; a data memory configured to hold cache data using a virtual page tag or a page offset as a cache index; a cache state memory configured to hold a cache state for the cache data stored in the data memory, in association with the cache index; a first physical address memory configured to, when the rewrite to the TLB occurs, rewrite a held physical address; and a second physical address memory configured to, when the cache data is written to the data memory after the occurrence of the rewrite to the TLB, rewrite a held physical address.
06/12/14
20140164698
Logical volume transfer method and storage network system
The present invention transfers replication logical volumes between and among storage control units in a storage system comprising storage control units. To transfer replication logical volumes from a storage control unit to a storage control unit, a virtualization device sets a path to the storage control unit.
06/12/14
20140164485
Caching of data requests in session-based environment
Caching of data requests in session-based environment. An embodiment of a method includes a client preparing a data request for a storage server in a session-based environment.
06/05/14
20140156950
Emulated message signaled interrupts in multiprocessor systems
A processor with coherency-leveraged support for low latency message signaled interrupt handling includes multiple execution cores and their associated cache memories. A first cache memory associated with a first of the execution cores includes a plurality of cache lines.
06/05/14
20140156948
Apparatuses and methods for pre-fetching and write-back for a segmented cache memory
Apparatuses and methods for a cache memory are described. In an example method, a transaction history associated with a cache block is referenced, and requested information is read from memory.
06/05/14
20140156947
Method and apparatus for supporting a plurality of load accesses of a cache in a single cycle to maintain throughput
A method for supporting a plurality of requests for access to a data cache memory (“cache”) is disclosed. The method comprises accessing a first set of requests to access the cache, wherein the cache comprises a plurality of blocks.
06/05/14
20140156940
Mechanism for page replacement in cache memory
A mechanism for page replacement for cache memory is disclosed. A method of the disclosure includes referencing an entry of a data structure of a cache in memory to identify a stored value of an eviction counter, the stored value of the eviction counter having been placed in the entry when a page of a file previously stored in the cache was evicted from the cache; determining a refault distance of the page of the file based on a difference between the stored value of the eviction counter and a current value of the eviction counter; and adjusting a ratio of cache lists maintained by the processing device to track pages in the cache, the adjusting based on the determined refault distance.
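The refault-distance computation above reduces to a counter subtraction: the eviction counter's value at eviction time is stored in the page's ghost entry, and the distance at refault is the current value minus the stored value. The sketch below is an assumed reading of the abstract (function names invented); the adjustment heuristic in the second function is hypothetical.

```python
def refault_distance(stored_eviction_count, current_eviction_count):
    """Number of evictions that occurred between a page's eviction and its
    refault: current counter value minus the value stored at eviction."""
    return current_eviction_count - stored_eviction_count


def should_grow_inactive_list(active_list_size, distance):
    """Hypothetical adjustment rule: a refault distance smaller than the
    active list suggests the page would have survived in a larger inactive
    list, so the list ratio should shift toward the inactive list."""
    return distance < active_list_size
```

For example, a page evicted at counter value 100 that refaults at counter value 160 has a refault distance of 60; if the active list holds 128 pages, the ratio is adjusted in the page's favor.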
06/05/14
20140156939
Methodology for fast detection of false sharing in threaded scientific codes
A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region.
06/05/14
20140156934
Storage apparatus and module-to-module data transfer method
A storage apparatus includes controller modules configured to have a cache memory and to control a storage device, respectively, and communication channels that connect the controller modules in a mesh topology, where one controller module provides an instruction to perform data transfer in which that controller module is specified as a transfer source and another controller module is specified as a transfer destination. The instruction is provided to a controller module directly connected to the other controller modules using a corresponding one of the communication channels, and configured to perform data transfer from the cache memory of the one controller module to the cache memory of the other controller module, in accordance with the instruction.
06/05/14
20140156929
Network-on-chip using request and reply trees for low-latency processor-memory communication
A network-on-chip (NoC) organization comprises a die having a cache area and a core area, a plurality of core tiles arranged in the core area in a plurality of subsets, and at least one cache memory bank arranged in the cache area, whereby the at least one cache memory bank is distinct from each of the plurality of core tiles. The NoC organization further comprises an interconnect fabric comprising a request tree to connect to a first cache memory bank of the at least one cache memory bank each core tile of a first one of the subsets, the first subset corresponding to the first cache memory bank, such that each core tile of the first subset is connected to the first cache memory bank only, and to allow guiding data packets from each core tile of the first subset to the first cache memory bank, and a reply tree to connect the first cache memory bank to each core tile of the first subset and allow guiding data packets from the first cache memory bank to a core tile of the first subset.
06/05/14
20140156909
Systems and methods for dynamic optimization of flash cache in storage devices
In various embodiments, a storage device includes magnetic media, a cache memory, and a drive controller. In embodiments, the drive controller is configured to establish a portion of the cache memory as an archival zone having a cache policy to maximize write hits.
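One way such a write-hit-maximizing zone could behave is sketched below: entries are ordered by last write, so recently written blocks stay resident and the victim is the least-recently-written block. The capacity, interface, and eviction rule are illustrative assumptions, not details from the application.

```python
from collections import OrderedDict

class ArchivalZone:
    """Sketch of a cache zone tuned for write hits: eviction is by
    write recency rather than read recency."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # lba -> data, ordered by last write

    def write(self, lba, data):
        """Write a block; return True on a write hit."""
        hit = lba in self.blocks
        if hit:
            self.blocks.move_to_end(lba)        # refresh write recency
        elif len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)     # evict least-recently-written
        self.blocks[lba] = data
        return hit
```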
06/05/14
20140152664
Method of rendering a terrain stored in a massive database
A method of rendering a terrain stored in a massive database, the terrain rendering being displayed for an observer by a display device comprising at least one graphics card with a cache memory, comprises at least: a step of generating several regular grids of terrain patches at different resolution levels so as to represent the terrain data of the massive database; a step of extracting terrain data from the massive database for several resolution levels, the extracted terrain data forming an extraction pyramid composed of one extraction window per level of detail, placed in cache memory, each window comprising an active zone intended to be displayed and a preloading zone that makes it possible to anticipate data transfers; a step of selecting the patches of the extraction pyramid that contribute to the image; and a step of plotting the rendering on the basis of the selected patches.
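The active/preloading split of one extraction window can be sketched as below. The viewer-centered square window, tile indexing, and Chebyshev-distance test are illustrative assumptions about how such a window might be laid out.

```python
def extraction_window(center, active_radius, preload_margin):
    """Return (active, preload) tile-index sets for one pyramid level.
    Tiles within active_radius (Chebyshev distance) of the viewer are
    displayed; the surrounding margin is preloaded to hide transfers."""
    cx, cy = center
    active, preload = set(), set()
    r = active_radius + preload_margin
    for x in range(cx - r, cx + r + 1):
        for y in range(cy - r, cy + r + 1):
            if max(abs(x - cx), abs(y - cy)) <= active_radius:
                active.add((x, y))
            else:
                preload.add((x, y))
    return active, preload
```

As the observer moves, preloaded tiles slide into the active zone without a synchronous fetch from the massive database.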
05/29/14
20140149827
Semiconductor memory device including non-volatile memory, cache memory, and computer system
In one embodiment, the memory device includes a data storage region and an error correction (ECC) region. The data storage region is configured to store a first number of data blocks.
05/29/14
20140149698
Storage system capable of managing a plurality of snapshot families and method of operating thereof
There is provided a storage system and a method of identifying delta-data therein between two points-in-time. The method comprises: generating successive snapshots Si and Si+1 corresponding to the two points-in-time; upon generating the snapshot Si+1, searching the cache memory for data blocks associated with snap_version=i, thereby yielding cached delta-metadata; searching the SF mapping data structure for destaged data blocks associated with snap_version=i, thereby yielding destaged delta-metadata; and joining the cached delta-metadata and the destaged delta-metadata, thereby yielding delta-metadata indicative of the delta-data between the points-in-time corresponding to the successive snapshots with snap_id=i and snap_id=i+1.
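The join step can be sketched in a few lines. The dict-of-dicts layout and the rule that a cached copy shadows a destaged copy of the same block are illustrative assumptions about how the two metadata sets might be combined.

```python
def delta_metadata(cached, destaged, snap_version):
    """Join cached and destaged delta-metadata for one snap_version.
    Each input maps block address -> metadata dict with a 'snap_version'
    key; returns the combined delta-metadata for that version."""
    cached_delta = {lba: m for lba, m in cached.items()
                    if m["snap_version"] == snap_version}
    destaged_delta = {lba: m for lba, m in destaged.items()
                      if m["snap_version"] == snap_version}
    # Cached entry wins if the same block appears in both sets.
    return {**destaged_delta, **cached_delta}
```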
05/29/14
20140149689
Coherent proxy for attached processor
A coherent attached processor proxy (CAPP) of a primary coherent system receives a memory access request from an attached processor (AP) and an expected coherence state of a target address of the memory access request with respect to a cache memory of the AP. In response, the CAPP determines a coherence state of the target address and whether or not the expected state matches the determined coherence state.
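The check can be sketched as a comparison against a coherence directory. The MESI-style state letters and the dictionary directory are illustrative assumptions; the application does not prescribe this representation.

```python
def capp_check(expected, directory, addr):
    """Compare the AP's expected coherence state for a target address
    against the state the proxy determines from its directory.
    Returns (determined_state, match)."""
    determined = directory.get(addr, "I")   # absent -> Invalid
    return determined, expected == determined
```

On a mismatch, the proxy would have to reconcile the AP's stale view before servicing the request.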
05/29/14
20140149685
Memory management using dynamically allocated dirty mask space
Systems and methods related to a memory system including a cache memory are disclosed. The cache memory system includes a cache memory including a plurality of cache memory lines and a dirty buffer including a plurality of dirty masks.
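Dynamic allocation of dirty masks from a shared buffer, as the title suggests, can be sketched like this: a mask (one bit per byte of a line) is claimed from a fixed pool only when a line first becomes dirty. The pool size, line width, and integer bitmask encoding are illustrative assumptions.

```python
class DirtyBuffer:
    """Sketch: dirty masks are allocated on demand from a shared pool
    instead of being stored per cache line."""
    def __init__(self, pool_size, line_bytes=64):
        self.line_bytes = line_bytes
        self.free = list(range(pool_size))  # unallocated mask slots
        self.masks = {}                     # line_id -> bitmask int

    def mark_dirty(self, line_id, offset, length):
        """Mark bytes [offset, offset+length) of a line dirty."""
        if line_id not in self.masks:
            if not self.free:
                raise MemoryError("dirty-mask pool exhausted")
            self.free.pop()
            self.masks[line_id] = 0
        for b in range(offset, offset + length):
            self.masks[line_id] |= 1 << b
        return self.masks[line_id]
```

Lines that are never written consume no mask storage, which is the point of pooling.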
05/29/14
20140149681
Coherent proxy for attached processor
A coherent attached processor proxy (CAPP) of a primary coherent system receives a memory access request from an attached processor (AP) and an expected coherence state of a target address of the memory access request with respect to a cache memory of the AP. In response, the CAPP determines a coherence state of the target address and whether or not the expected state matches the determined coherence state.
05/29/14
20140149669
Cache memory and methods for managing data of an application processor including the cache memory
In one example embodiment of the inventive concepts, a cache memory system includes a main cache memory including a nonvolatile random access memory, the main cache memory configured to exchange data with an external device and store the exchanged data, each exchanged data item including least significant bit (LSB) data and most significant bit (MSB) data. The cache memory system further includes a sub-cache memory including a random access memory, the sub-cache memory configured to store LSB data of at least a portion of the data stored in the main cache memory, wherein the main cache memory and the sub-cache memory together form a single-level cache memory.
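A toy model of the main/sub split reads as follows. The 16-bit word, the byte-granular LSB/MSB split, and the fall-back read path are illustrative assumptions; in the embodiment the main cache is nonvolatile hardware, not a Python dict.

```python
class HybridCache:
    """Sketch: a nonvolatile main cache holds full words; a volatile
    sub-cache mirrors the LSB half of entries for fast access."""
    def __init__(self):
        self.main = {}   # addr -> 16-bit word (nonvolatile in hardware)
        self.sub = {}    # addr -> low byte only (volatile RAM)

    def store(self, addr, word):
        self.main[addr] = word & 0xFFFF
        self.sub[addr] = word & 0x00FF        # mirror LSB half

    def load(self, addr):
        msb = self.main[addr] & 0xFF00
        # Prefer the sub-cache copy of the LSBs; fall back to main.
        lsb = self.sub.get(addr, self.main[addr] & 0x00FF)
        return msb | lsb
```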
05/29/14
20140149664
Storage system capable of managing a plurality of snapshot families and method of operating thereof
There is provided a storage system comprising a control layer operable to manage a plurality of snapshot families, each family constituted by snapshot family members having hierarchical relations therebetween. The method of operating the storage system comprises searching a cache memory for an addressed data block corresponding to an addressed LBA and associated with an addressed snapshot family and an addressed SF member.
05/29/14
20140149651
Providing extended cache replacement state information
In an embodiment, a processor includes a decode logic to receive and decode a first memory access instruction to store data in a cache memory with a replacement state indicator of a first level, and to send the decoded first memory access instruction to a control logic. In turn, the control logic is to store the data in a first way of a first set of the cache memory and to store the replacement state indicator of the first level in a metadata field of the first way responsive to the decoded first memory access instruction.
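The replacement-state mechanism can be sketched for a single cache set: a store carries a replacement level that is written into the way's metadata, and the victim on a full set is the way with the lowest level. The way count, level encoding, and tie-breaking are illustrative assumptions.

```python
class SetWithState:
    """Sketch of per-way replacement-state metadata: a store can place
    data at a given replacement level, and eviction targets the way
    holding the lowest level."""
    def __init__(self, ways=4):
        self.ways = [None] * ways    # (tag, data) or None
        self.state = [0] * ways      # replacement-state level per way

    def store(self, tag, data, level):
        """Install data with a replacement level; return the way used."""
        if None in self.ways:
            w = self.ways.index(None)            # fill an empty way first
        else:
            w = min(range(len(self.ways)),       # evict lowest-state way
                    key=lambda i: self.state[i])
        self.ways[w] = (tag, data)
        self.state[w] = level
        return w
```

A high level effectively pins hot data; a low level marks streaming data as the preferred victim.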



This listing is a sample of patent applications related to Cache Memory; it is only meant as a recent sample of applications filed, not a comprehensive history. There may be associated service marks and trademarks related to these patents. Please check with a patent attorney if you need further assistance or plan to use them for business purposes. This patent data is also published to the public by the USPTO and available for free on their website. Note that there may be alternative spellings of Cache Memory with additional patents listed. Browse our RSS directory or search for other possible listings.

