FreshPatents.com
Cache Memory patents



      
           
This page is updated frequently with new Cache Memory-related patents.

Recent Cache Memory-related patent applications (date / application number)
04/03/14
20140095832
Method and apparatus for performance-efficient ISA virtualization using dynamic partial binary translation
Methods, apparatus and systems for virtualization of a native instruction set are disclosed. Embodiments include a processor core executing the native instructions and a second core, or alternatively only the second processor core, which consumes less power while executing a second instruction set that excludes portions of the native instruction set.
04/03/14
20140095797
Cache memory having enhanced performance and security features
Methods for accessing, storing and replacing data in a cache memory are provided, wherein a plurality of index bits and a plurality of tag bits at the cache memory are received. The plurality of index bits are processed to determine whether a matching index exists in the cache memory and the plurality of tag bits are processed to determine whether a matching tag exists in the cache memory, and a data line is retrieved from the cache memory if both a matching tag and a matching index exist in the cache memory.
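The index/tag split described above can be sketched as a small direct-mapped cache model; all class, method, and parameter names here are illustrative, not taken from the application. A data line is returned only when the index selects an occupied set and the stored tag also matches.

```python
# Illustrative direct-mapped cache: an address is split into index bits
# (selecting a set) and tag bits (which must match for a hit).

class SimpleCache:
    def __init__(self, num_sets=256, line_size=64):
        self.num_sets = num_sets
        self.line_size = line_size
        self.sets = [None] * num_sets     # each set holds (tag, data) or None

    def _split(self, address):
        line = address // self.line_size
        index = line % self.num_sets      # the "plurality of index bits"
        tag = line // self.num_sets       # the "plurality of tag bits"
        return index, tag

    def read(self, address):
        index, tag = self._split(address)
        entry = self.sets[index]
        if entry is not None and entry[0] == tag:
            return entry[1]               # matching index AND tag: retrieve line
        return None                       # miss

    def fill(self, address, data):
        index, tag = self._split(address)
        self.sets[index] = (tag, data)
```

Two addresses that differ only in their tag bits map to the same set, so the second lookup below misses even though the index matches.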
04/03/14
20140095793
Storage system
An object of the present invention is to provide a storage system shared by a plurality of application programs, in which optimum cache performance tuning can be performed for each individual application program. The storage system comprises a storage device providing a plurality of logical volumes accessible from a plurality of application programs, a controller for controlling input and output of data to and from the logical volumes in response to input/output requests from the application programs, and a cache memory for temporarily storing data input to and output from the logical volumes, wherein the cache memory is logically divided into a plurality of partitions that are exclusively assigned to the respective logical volumes.
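The partitioning idea can be sketched roughly as follows — a cache divided into per-volume partitions, each with its own size limit, so that eviction in one volume's partition never disturbs another's. The class and field names are invented for illustration.

```python
# Illustrative per-volume cache partitioning: each logical volume gets an
# exclusive partition with its own capacity, evicted LRU independently.

from collections import OrderedDict

class PartitionedCache:
    def __init__(self, partition_sizes):
        # partition_sizes: {volume_id: max number of cached blocks}
        self.partitions = {vol: OrderedDict() for vol in partition_sizes}
        self.limits = dict(partition_sizes)

    def put(self, volume, block, data):
        part = self.partitions[volume]
        part[block] = data
        part.move_to_end(block)
        if len(part) > self.limits[volume]:
            part.popitem(last=False)      # evict LRU within this volume only

    def get(self, volume, block):
        part = self.partitions[volume]
        if block in part:
            part.move_to_end(block)
            return part[block]
        return None
```

Because limits are per partition, filling one volume's partition cannot push out another volume's cached blocks.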
04/03/14
20140095792
Cache control device and pipeline control method
A cache control device includes an entering unit, a first searching unit, a reading unit, a second searching unit, and a rewriting unit. The entering unit alternately enters, into a pipeline, a load request for reading a directory received from a processor and a store request for rewriting a directory received from the processor.
04/03/14
20140095776
Storage system including a plurality of flash memory devices
A storage system including a storage device which includes media for storing data from a host computer, a medium controller for controlling the media, a plurality of channel controllers for connecting to the host computer through a channel, and a cache memory for temporarily storing data from the host computer, wherein the media have a restriction on the number of write operations. The storage device includes a bus for directly transferring data from the medium controller to the channel controllers.
04/03/14
20140095645
Method for caching data on client device to optimize server data persistence in building of an image-based project
A system for creating image- and/or text-based projects includes a server connected to a network, the server having access to at least one processor and a data repository, the server including a non-transitory physical medium, and software running from the non-transitory physical medium, the software providing a first function for establishing a client-server connection between the server and at least one user-operated computing appliance connected to the network, a second function for initiating and maintaining an active data session between one or more users involved in project creation and/or project editing through a graphical user interface (GUI), a third function for establishing a cache memory on the at least one user-operated computing appliance, the cache dedicated to caching user and server-side data, a fourth function for caching user actions in the cache memory, and a fifth function for persisting the cached data to the server.
03/27/14
20140089637
Optimizing system throughput by automatically altering thread co-execution based on operating system directives
A technique for optimizing program instruction execution throughput in a central processing unit core (CPU). The CPU implements a simultaneous multithreading (SMT) operational mode wherein program instructions associated with at least two software threads are executed in parallel as hardware threads while sharing one or more hardware resources used by the CPU, such as cache memory, translation lookaside buffers, functional execution units, etc.
03/27/14
20140089599
Processor and control method of processor
A processor includes a cache write queue configured to store write requests, based on store instructions directed to a cache memory issued by an instruction issuing unit, into entries provided with a stream_wait flag, and to output a write request on which no stream_wait flag is set, from among the stored write requests, to a pipeline operating unit that performs pipeline operations with respect to the cache memory. When a stream flag attached to a store instruction is set, the cache write queue determines that there will be a succeeding store instruction directed to the same data area as that accessed by the store instruction, sets the stream_wait flag when storing the write request into the entry, merges the write requests based on the store instructions directed to the same data area into a single write request, and then holds the merged write request.
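The write-merging behaviour can be sketched as a simple coalescing pass over a request queue: consecutive stream-flagged writes to the same data area collapse into one request. The field names (`area`, `data`, `stream`) are assumptions for illustration, not the patent's terminology.

```python
# Illustrative coalescing of stream writes: adjacent stream-flagged
# requests to the same data area are merged into a single write request.

def merge_stream_writes(requests):
    """requests: list of dicts with 'area', 'data', and 'stream' keys."""
    merged = []
    for req in requests:
        if (merged and req['stream'] and merged[-1]['stream']
                and merged[-1]['area'] == req['area']):
            # succeeding store to the same data area: coalesce into one write
            merged[-1]['data'].update(req['data'])
        else:
            merged.append({'area': req['area'],
                           'data': dict(req['data']),
                           'stream': req['stream']})
    return merged
```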
03/27/14
20140089595
Utility and lifetime based cache replacement policy
Embodiments of the invention describe an apparatus, system, and method for utilizing a utility and lifetime based cache replacement policy as described herein. For processors having one or more processor cores and a cache memory accessible via the processor core(s), embodiments of the invention describe a cache controller to determine, for a plurality of cache blocks in the cache memory, an estimated utility and lifetime of the contents of each cache block, the utility of a cache block indicating a likelihood of use of its contents, the lifetime of a cache block indicating a duration of use of its contents.
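One possible reading of a utility/lifetime victim choice is to prefer the block whose estimated utility and lifetime are jointly lowest; the product used as a score here is an invented illustration, not the patented formula.

```python
# Illustrative victim selection: each block carries an estimated utility
# (likelihood of reuse) and lifetime (duration of use); the block with the
# lowest combined score is evicted first.

def choose_victim(blocks):
    """blocks: list of (block_id, utility, lifetime); lowest score = victim."""
    return min(blocks, key=lambda b: b[1] * b[2])[0]
```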
03/27/14
20140089591
Supporting targeted stores in a shared-memory multiprocessor system
The present embodiments provide a system for supporting targeted stores in a shared-memory multiprocessor. A targeted store enables a first processor to push a cache line to be stored in a cache memory of a second processor in the shared-memory multiprocessor.
03/27/14
20140089586
Arithmetic processing unit, information processing device, and arithmetic processing unit control method
An L2 cache control unit searches the cache memory according to a memory access request provided from request storage unit 0 through a CPU core unit, and retains in request storage units 1 and 2 any memory access request for which a cache miss has occurred. A bank abort generation unit counts, for each bank, the number of memory access requests to the main storage device, and instructs the L2 cache control unit to interrupt access when the counted number of memory access requests for any bank exceeds a specified value.
03/27/14
20140089454
Method for managing content caching based on hop count and network entity thereof
Disclosed is hop-count based content caching. The present invention implements hop-count based content cache placement strategies that efficiently decrease network traffic: the routing node first judges whether to cache a content chunk by examining an attribute of the received chunk; it then makes a secondary judgment based on a caching probability of 1/(hop count); and it stores the content chunk and the hop count information in its cache memory when the secondary judgment determines that the chunk should be cached.
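The "1/hop count" secondary judgment can be sketched as a probabilistic decision at the routing node. This is a minimal illustration with invented names; the attribute-based primary judgment is omitted.

```python
# Illustrative hop-count based caching probability: a chunk that has
# travelled h hops is cached with probability 1/h, so nodes far from the
# content source cache more aggressively.

import random

def should_cache(hop_count, rng=random):
    """Secondary caching judgment: cache with probability 1/hop_count."""
    if hop_count <= 1:
        return True                      # adjacent node always caches
    return rng.random() < 1.0 / hop_count
```

A deterministic RNG can be passed in for testing; in a router this decision would be made per received chunk.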
03/20/14
20140082287
Cache memory prefetching
According to exemplary embodiments, a computer program product, system, and method for prefetching in memory include determining a missed access request for a first line in a first cache level and accessing an entry in a prefetch table, wherein the entry corresponds to a memory block, wherein the entry includes segments of the memory block. Further, the embodiment includes determining a demand segment of the segments in the entry, the demand segment corresponding to a segment of the memory block that includes the first line, reading a first field in the demand segment to determine if a second line in the demand segment is spatially related with respect to accesses of the demand segment, and reading a second field in the demand segment to determine if a second segment in the entry is temporally related to the demand segment.
03/20/14
20140082286
Prefetching method and apparatus
A method and apparatus for determining data to be prefetched based on previous cache miss history is disclosed. In one embodiment, a processor includes a first cache memory and a controller circuit.
03/20/14
20140078816
Signal processing circuit and method for driving the same
It is an object to provide a memory device for which a complex manufacturing process is not necessary and whose power consumption can be suppressed, and a signal processing circuit including the memory device. In a memory element including a phase-inversion element by which the phase of an input signal is inverted and the signal is output, such as an inverter or a clocked inverter, a capacitor which holds data and a switching element which controls storing and releasing of electric charge in the capacitor are provided.
03/13/14
20140075166
Swapping branch direction history(ies) in response to a branch prediction table swap instruction(s), and related systems and methods
Swapping branch direction history(ies) in response to a branch prediction table swap instruction(s), and related systems and methods are disclosed. In one embodiment, a branch history management circuit is configured to process a branch prediction table swap instruction.
03/13/14
20140075119
Hybrid active memory processor system
In general, the present invention relates to data cache processing. Specifically, the present invention relates to a system that provides reconfigurable dynamic cache which varies the operation strategy of cache memory based on the demand from the applications originating from different external general processor cores, along with functions of a virtualized hybrid core system.
03/13/14
20140074962
Browser device, browser program, browser system, image forming apparatus, and non-transitory storage medium
A browser device for obtaining web data of a specified URL. The browser device includes: a registration unit configured to register one or more URLs; a dedicated cache memory configured to, when first web data is obtained from a registered URL, store the first web data without deleting existing web data already stored therein; a general-purpose cache memory configured to, when second web data is obtained from an unregistered URL, delete part or all of the existing web data already stored therein, in accordance with the capacity of the general-purpose cache memory, the amount of existing web data, and the amount of second web data, and then store the second web data; and an obtaining unit configured to, when web data of the specified URL is stored in either the dedicated cache memory or the general-purpose cache memory, obtain the web data therefrom.
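A toy model of the dedicated versus general-purpose split might look like this — registered URLs land in a never-evicting dedicated store, unregistered ones in a capacity-bounded cache that drops its oldest entries. All class and attribute names are illustrative.

```python
# Illustrative two-tier browser cache: dedicated store for registered
# URLs (never deletes), capacity-limited FIFO store for everything else.

from collections import OrderedDict

class BrowserCache:
    def __init__(self, general_capacity=3):
        self.registered = set()
        self.dedicated = {}
        self.general = OrderedDict()
        self.general_capacity = general_capacity

    def register(self, url):
        self.registered.add(url)

    def put(self, url, data):
        if url in self.registered:
            self.dedicated[url] = data                 # stored, never deleted
        else:
            self.general[url] = data
            while len(self.general) > self.general_capacity:
                self.general.popitem(last=False)       # delete oldest entry

    def get(self, url):
        return self.dedicated.get(url, self.general.get(url))
```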
03/06/14
20140068196
Method and system for self-tuning cache management
Web objects, such as media files, are sent through an adaptation server which includes a transcoder for adapting forwarded objects according to profiles of the receiving destinations, and a cache memory for caching frequently requested objects, including their adapted versions. The probability of additional requests for the same object before the object expires is assessed by tracking hits.
03/06/14
20140068194
Processor, information processing apparatus, and control method of processor
A processor includes a cache memory; an arithmetic processing section that issues a load request for loading object data stored in a memory into the cache memory; a cache control part that performs a process corresponding to the received load request; a memory management part which requests from the memory the object data corresponding to the request from the cache control part, along with header information indicating whether or not the object data is the latest in the memory, and which receives the header information returned by the memory; and a data management part that manages write control of data to the cache memory and receives the object data returned by the memory based on the request. The requested data is transmitted from the memory to the data management part held by a CPU node without passing through the memory management part.
03/06/14
20140068192
Processor and control method of processor
A processor includes a plurality of CPU cores, each having an L1 cache memory, that execute processing and issue requests, and an L2 cache memory connected to the plurality of CPU cores. The L2 cache memory is configured, when a request for target data held by none of the L1 cache memories in the plurality of CPU cores is a load request that permits sharing with other CPU cores, to respond to the requesting CPU core with non-exclusive information indicating that the target data is non-exclusive, together with the target data; and, when the request is a load request that forbids sharing with other CPU cores, to respond to the requesting CPU core with exclusive information indicating that the target data is exclusive, together with the target data.
03/06/14
20140068191
Synchronous and asynchronous discard scans based on the type of cache memory
A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space.
03/06/14
20140068179
Processor, information processing apparatus, and control method
A processor includes a cache memory that holds data from a main storage device. The processor includes a first control unit that controls acquisition of data, and that outputs an input/output request that requests the transfer of the target data.
03/06/14
20140068140
Dynamic central cache memory
The specification and drawings present a new apparatus, method, and software-related product for using a cache/central cache module/device (instead of, e.g., system DRAM) which can serve multiple memory modules/devices. Each memory/IO module/device connected to the same memory network (e.g., via hub, bus, etc.) may utilize memory resources of this cache module/device either in a fixed manner using pre-set allocation of resources per memory module/device, or dynamically using run-time allocation of new resources to an existing module/device per its request or to a new module/device connecting to the memory network (e.g., comprised in a host device) and possibly requesting memory resources.
03/06/14
20140067920
Data analysis system
A data processing method includes obtaining a data access pattern of a client terminal with respect to a data storage unit, performing caching operations on the data storage unit according to a caching criterion to thereby obtain and store cache data in the cache memory, and sending the cache data to an analyst server via the data transmission interface so that the analyst server can analyze the cache data and thereby generate an analysis result.
02/20/14
20140053163
Thread processing method and thread processing system
A thread processing method executed by a multi-core processor includes supplying a command to execute a first thread to a first processor; judging a dependence relationship between the first thread and a second thread to be executed by a second processor; comparing a first threshold with the frequency at which the first thread accesses shared memory or shared cache memory; and changing a phase of a first operation clock of the first processor when the access frequency is greater than the first threshold and upon judging that no dependence relationship exists.
02/20/14
20140053012
System and detection mode
A system includes a CPU; a sensor that detects power of the CPU; a cache memory state monitoring circuit that monitors a state of a cache memory; and a detection circuit that, based on a sensor signal from the sensor and a state signal from the cache memory state monitoring circuit, detects a spin state of a program executed by the CPU.
02/20/14
20140052932
Method for reducing the overhead associated with a virtual machine exit when handling instructions related to descriptor tables
A computerized method for efficient handling of a privileged instruction executed by a virtual machine (VM). The method comprises identifying when the privileged instruction causes a VM executed on computing hardware to perform a VM exit; replacing a first virtual-to-physical address mapping with a second virtual-to-physical address mapping respective of a virtual pointer associated with the privileged instruction; and invalidating at least a cache entry in a cache memory allocated to the VM, thereby causing a new translation of the virtual pointer to the second virtual-to-physical address, wherein the second virtual-to-physical address provides a pointer to a physical address in a physical memory in the computing hardware allocated to the VM.
02/20/14
20140052931
Data type dependent memory scrubbing
A method for controlling a memory scrubbing rate based on the content of the status bits of a tag array of a cache memory. More specifically, the tag array of a cache memory is scrubbed at a smaller interval than the scrubbing interval of the storage arrays of the cache.
02/20/14
20140052924
Selective memory scrubbing based on data type
A method for minimizing soft error rates within caches by controlling a memory scrubbing rate selectively for a cache memory at an individual bank level. More specifically, the disclosure relates to maintaining a predetermined sequence and process of storing all modified information of a cache in a subset of ways of the cache, based upon, for example, a state of a modified indication within status information of a cache line.
02/20/14
20140052923
Processor and control method for processor
A processor includes a plurality of nodes arranged two dimensionally in the x-axis direction and in the y-axis direction, and each of the nodes includes a processor core and a distributed shared cache memory. The processor also includes a first connecting unit and a second connecting unit.
02/20/14
20140052921
Store-exclusive instruction conflict resolution
A data processing system includes a plurality of transaction masters (4, 6, 8, 10) each with an associated local cache memory (12, 14, 16, 18) and coupled to coherent interconnect circuitry (20). Monitoring circuitry (24) within the coherent interconnect circuitry (20) maintains a state variable (flag) in respect of each of the transaction masters to monitor whether an exclusive store access state is pending for that transaction master.
02/20/14
20140052915
Information processing apparatus, information processing method, and program
An information processing apparatus includes a plurality of cache memories, a plurality of processors configured to respectively access the plurality of cache memories, and a memory, in which each of the plurality of processors executes a program to function as a cache processing unit configured to perform cache processing including at least one of transfer to the memory and discard with respect to all the pieces of data stored in the cache memory.
02/13/14
20140046657
Vocoder processing method, semiconductor device, and electronic device
In a semiconductor device, a vocoder processing unit requests, after executing a first vocoder process being one of an encoding process and a decoding process and before executing a following second vocoder process being the other one of the encoding process and the decoding process, a cache memory to prefetch first program data to be used for the second vocoder process from an external memory.
02/06/14
20140040968
Reception apparatus, reception method, transmission apparatus, and transmission method
Appropriate caching of application programs executed in coordination with AV content is enabled. The cache memory temporarily stores a coordinated application that is executed in coordination with received AV content.
02/06/14
20140040670
Information processing device and processing method for information processing device
An information processing device includes a memory, and a plurality of processors coupled to the memory and including cache memories, and is configured to select the processor whose cache memory capacity is the smallest among the plurality of processors; the selected processor executes memory dump processing for the memory.
02/06/14
20140040560
All invalidate approach for memory management units
An input/output memory management unit (IOMMU) having an "invalidate all" command available to clear the contents of cache memory is presented. The cache memory provides fast access to address translation data that has been previously obtained by a process.
02/06/14
20140040558
Information processing apparatus, parallel computer system, and control method for arithmetic processing unit
An information processing apparatus included in a parallel computer system has a memory that holds data and a processor including a cache memory that holds a part of the data held on the memory and a processor core that performs arithmetic operations using the data held on the memory or the cache memory. Moreover, the information processing apparatus has a communication device that determines whether data received from a different information processing apparatus is data that the processor core waits for.
02/06/14
20140040357
Optimized key frame caching for remote interface rendering
Frames of user interface (UI) graphical data can be remotely rendered more efficiently at a client during a remote session with a server by utilizing graphical data cached at the client to prevent re-sending data to the client that was sent in previous payloads. By using cache memory to remember recurring frames of similar UI data and delta encoding to correct areas that are not similar, encoded payload sizes are greatly reduced.
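The cache-plus-delta idea can be sketched with flat frames: only changed positions are transmitted, and the client patches its cached copy. Real payloads would be encoded tile regions; this toy model uses lists of pixel values, and all function names are invented.

```python
# Illustrative delta encoding against a cached frame: transmit only the
# positions that differ, then patch the client's cached copy.

def delta_encode(prev_frame, new_frame):
    """Return [(index, value)] for positions that changed."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev_frame, new_frame))
            if p != v]

def delta_apply(cached_frame, delta):
    """Patch a cached frame with a received delta to rebuild the new frame."""
    frame = list(cached_frame)
    for i, v in delta:
        frame[i] = v
    return frame
```

When most of a frame recurs, the delta is far smaller than the frame itself, which is the payload-size reduction the abstract describes.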
01/30/14
20140032968
Cache self-testing technique to reduce cache test time
A method for identifying, based on instructions stored externally to a processor containing a cache memory, a functional portion of the cache memory, then loading cache test code into the functional portion of the cache memory from an external source, and executing the cache test code stored in the cache memory to test the cache memory on a cache-line-granular basis and store fault information.
01/30/14
20140032847
Semiconductor device
A semiconductor device which can hold an instruction configuring a loop in a cache memory is provided. A main memory stores an instruction.
01/23/14
20140025890
Methods and structure for improved flexibility in shared storage caching by multiple systems operating as multiple virtual machines
Methods and structure for improved flexibility in managing cache memory in a storage controller of a computing device on which multiple virtual machines (VMs) are operating in a VM computing environment. Embodiments hereof provide for the storage controller to receive configuration information from a VM management system coupled with the storage controller, where the configuration information comprises information regarding each VM presently operating on the computing device.
01/16/14
20140019681
High density disk drive performance enhancement system
The present invention provides an HDD performance enhancement system that utilizes excess disk capacity as cache memory to enhance the I/O performance of the drive. The cache memory is distributed throughout the disk, for example in alternating tracks, sectors dedicated to serving as cache, or other distributed cache track segments or segment groups.
01/09/14
20140012938
Preventing race condition from causing stale data items in cache
A data cache server may process requests from a data cache client to put, get, and delete data items into or from the data cache server. Each data item may be based on data in a data store.
01/09/14
20140011538
Predictive caching of IP data
Disclosed is a method of predictively caching IP content data for a mobile device. In the mobile device, a content request is sent to an intelligent cache server over an IP network, the content request indicative of recurring IP content data of interest to the mobile device.
01/09/14
20140010369
Methods and devices for handling encrypted communication
By allowing the option of placing a logical link control (LLC) entity within the base station subsystem (BSS), improved system performance, in the form of reduced information acquisition times for mobile stations, can be achieved by using a cache memory in the BSS. A method in the BSS can include receiving one or more LLC packet data units (PDUs) from a mobile station, extracting the sub-network (SN) PDUs contained in each LLC PDU, and reassembling the SN-PDUs to recover a single N-PDU.
01/02/14
20140006834
Control device, power supply device, and method for controlling power
There is provided a control device which includes a cache memory configured to temporarily store data, a nonvolatile memory configured to store a copy of the data stored in the cache memory, a battery configured to supply power to the cache memory in a case of a power failure, a data save processing unit configured to save data stored in a backup target region of the cache memory to the nonvolatile memory in the case of the power failure, and a charge control unit configured to charge the battery up to a target amount of charge which is determined on the basis of the size of the backup target region.
01/02/14
20140006722
Multiprocessor system, multiprocessor control method and processor
A multiprocessor system includes first through third processors and a memory storing addresses and data, all interconnected. The first processor includes an access control unit that receives the address and the data, and a cache memory that stores a cache line including the address, the data, and a validity flag.
01/02/14
20140006698
Hybrid cache state and filter tracking of memory operations during a transaction
In one embodiment, a cache memory can store a plurality of cache lines, each including a write-set field to store a write-set indicator to indicate whether data has been speculatively written during a transaction of a transactional memory, and a read-set field to store a plurality of read-set indicators each to indicate whether a corresponding thread has read the data before the transaction has committed. A compression filter associated with the cache memory includes a first filter storage to store a representation of a cache line address of a cache line read by a first thread of threads before the transaction has committed.
01/02/14
20140006668
Performing emulated message signaled interrupt handling
In an embodiment, a processor includes logic to store a write transaction, including an interrupt and data received from a device coupled to the processor, to a cache line of a cache memory based on an address in an address queue, and to forward an address of the cache line and assert an emulated message signaled interrupt (MSI) signal to an interrupt controller of the processor. Other embodiments are described and claimed.
01/02/14
20140002701
Pixel and method for feedback based resetting of a pixel
A storage system, a non-transitory computer-readable medium, and a method for pre-fetching. The method may include presenting, by a storage system and to at least one host computer, a logical address space; determining, by a fetch module, to fetch a certain data portion from a data storage device to a cache memory of the storage system; determining, by a pre-fetch module, whether to pre-fetch at least one additional data portion from at least one data storage device to the cache memory based upon at least one characteristic of a mapping tree that maps one or more contiguous ranges of addresses related to the logical address space to one or more contiguous ranges of addresses related to the physical address space; and pre-fetching the at least one additional data portion if it is determined to do so.
12/26/13
20130346975
Memory management method, information processing device, and computer-readable recording medium having stored therein memory management program
A computer that includes arithmetic processing units, a main memory, and a shared cache memory, and that allows virtual computers to operate, executes the following process. In other words, an instruction is given to arrange a program in a region of a virtualized virtual memory.
12/26/13
20130346730
Arithmetic processing apparatus, and cache memory control device and cache memory control method
An arithmetic processing apparatus includes a plurality of processors, each of the processors having an arithmetic unit and a cache memory. The processor includes an instruction port that holds a plurality of instructions accessing data in the cache memory; a first determination unit that validates a first flag when an invalidation request for data in the cache memory is received and the cache index and way ID of the target address of the received request match the cache index and way ID of the designated address of a load instruction; a second determination unit that validates a second flag when target data is transmitted due to a cache miss; and an instruction re-execution determination unit that instructs re-execution of an instruction subsequent to the load instruction when both the first flag and the second flag are validated at the time of completion of an instruction in the instruction port.
12/26/13
20130346705
Cache memory with write through, no allocate mode
In a particular embodiment, a method of managing a cache memory includes, responsive to a cache size change command, changing a mode of operation of the cache memory to a write through/no allocate mode. The method also includes processing instructions associated with the cache memory while executing a cache clean operation when the mode of operation of the cache memory is the write through/no allocate mode.
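"Write through/no allocate" has a standard meaning that makes the cache-clean use case above work: stores update backing memory directly and a store miss does not allocate a line, so no new dirty data can appear while the clean operation drains the cache. A hedged toy sketch of the mode switch (the class, field names, and direct-mapped geometry are illustrative assumptions, not taken from the patent):

```python
class SimpleCache:
    """Toy direct-mapped cache contrasting write-back/write-allocate with
    write-through/no-allocate mode. All names and sizes are illustrative."""
    def __init__(self, num_lines=4):
        self.lines = {}          # index -> (tag, data, dirty)
        self.num_lines = num_lines
        self.write_through_no_allocate = False
        self.memory = {}         # backing store, addr -> data

    def write(self, addr, data):
        index, tag = addr % self.num_lines, addr // self.num_lines
        hit = index in self.lines and self.lines[index][0] == tag
        if self.write_through_no_allocate:
            self.memory[addr] = data                 # always update memory
            if hit:
                self.lines[index] = (tag, data, False)  # line stays clean
            # on a miss: deliberately do NOT allocate a line
        else:
            # write-back/write-allocate: fill the line and mark it dirty
            self.lines[index] = (tag, data, True)

    def clean(self):
        """Flush dirty lines; in WT/NA mode nothing new becomes dirty."""
        for index, (tag, data, dirty) in self.lines.items():
            if dirty:
                self.memory[tag * self.num_lines + index] = data
                self.lines[index] = (tag, data, False)
```

The point of the mode change before a cache resize is visible here: once `write_through_no_allocate` is set, `clean()` can run to completion while stores continue, because stores no longer create dirty lines.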
12/26/13
20130346696
Method and apparatus for providing shared caches
A method and apparatus for providing shared caches. A cache memory system may be operated in a first mode or a second mode.
12/26/13
20130346689
Storage system and management method of control information therein
An embodiment of this invention divides a cache memory of a storage system into a plurality of partitions; one or more of the partitions holds data different from user data, including control information. The storage system dynamically swaps data between an LU storing control information and a cache partition.
12/26/13
20130346683
Cache sector dirty bits
A cache subsystem apparatus and method of operating therefor are disclosed. In one embodiment, a cache subsystem includes a cache memory divided into a plurality of sectors each having a corresponding plurality of cache lines.
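The usual motivation for per-sector dirty bits is that a write-back can then transfer only the modified sectors of a line rather than the whole line. A hedged Python sketch of that idea, with illustrative sizes and names (the patent's actual sector/line organization may differ):

```python
class SectoredLine:
    """Toy cache line split into sectors, each with its own dirty bit, so a
    write-back can transfer only the modified sectors. Sizes are illustrative."""
    def __init__(self, num_sectors=4, sector_bytes=16):
        self.sector_bytes = sector_bytes
        self.data = [bytes(sector_bytes) for _ in range(num_sectors)]
        self.dirty = [False] * num_sectors

    def write(self, offset, payload):
        """Write payload at a byte offset; mark only the touched sectors dirty."""
        first = offset // self.sector_bytes
        last = (offset + len(payload) - 1) // self.sector_bytes
        flat = bytearray(b"".join(self.data))
        flat[offset:offset + len(payload)] = payload
        n = self.sector_bytes
        self.data = [bytes(flat[i * n:(i + 1) * n]) for i in range(len(self.data))]
        for s in range(first, last + 1):
            self.dirty[s] = True

    def writeback_sectors(self):
        """Return indices of sectors needing write-back, then clear their bits."""
        dirty = [i for i, d in enumerate(self.dirty) if d]
        self.dirty = [False] * len(self.dirty)
        return dirty
```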
12/26/13
20130346682
System and method for supporting fast and deterministic execution and simulation in multi-core environments
The exemplary embodiments described herein relate to supporting fast and deterministic execution and simulation in multi-core environments. Specifically, the exemplary embodiments relate to systems and methods for implementing determinism in a memory system of a multithreaded computer.
12/26/13
20130346456
Device for caching a scalable original file
A device for caching a scalable original file having a first structure, which has a header and a plurality of information packets for different information levels, has a cache memory configured to cache a proxy file and/or information packets of the proxy file, and a proxy file generator configured to generate a proxy file such that it is transferable into, or directly has, a second structure corresponding to the first structure of the original file. The proxy file generator is further configured to read out a first information packet of a basic information level from the original file, insert it into the proxy file at a position specified by the second structure, and output the proxy file in the second structure, so that in the second structure at least one of the information packets of a non-basic information level is replaced with an empty information packet.
12/19/13
20130339625
Cache memory prefetching
According to exemplary embodiments, a computer program product, system, and method for prefetching in memory include determining a missed access request for a first line in a first cache level and accessing an entry in a prefetch table, wherein the entry corresponds to a memory block and includes segments of the memory block. Further, the embodiment includes determining a demand segment of the segments in the entry, the demand segment corresponding to the segment of the memory block that includes the first line; reading a first field in the demand segment to determine if a second line in the demand segment is spatially related with respect to accesses of the demand segment; and reading a second field in the demand segment to determine if a second segment in the entry is temporally related to the demand segment.
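One way to read this abstract is that, on a miss, the demand segment's two fields nominate spatially related lines within the same segment and temporally related sibling segments as prefetch targets. A hedged Python sketch of that lookup path; the block/segment geometry and the set-based fields are illustrative simplifications, not the patent's actual encoding:

```python
# Hedged sketch of the prefetch-table lookup described above. An entry covers
# a memory block split into segments; per-segment fields record which other
# lines/segments were historically accessed together.
LINES_PER_SEGMENT = 4
SEGMENTS_PER_BLOCK = 4
LINES_PER_BLOCK = LINES_PER_SEGMENT * SEGMENTS_PER_BLOCK

def prefetch_candidates(entry, line_addr):
    """entry: list of segments, each {'spatial': set of line offsets,
    'temporal': set of segment indices}. Returns line addresses to prefetch."""
    block_base = line_addr - (line_addr % LINES_PER_BLOCK)
    demand_seg = (line_addr - block_base) // LINES_PER_SEGMENT
    seg = entry[demand_seg]
    out = []
    # first field: spatially related lines inside the demand segment
    for off in sorted(seg['spatial']):
        out.append(block_base + demand_seg * LINES_PER_SEGMENT + off)
    # second field: temporally related segments, prefetched whole
    for s in sorted(seg['temporal']):
        for off in range(LINES_PER_SEGMENT):
            out.append(block_base + s * LINES_PER_SEGMENT + off)
    return out
```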
12/19/13
20130339624
Processor, information processing device, and control method for processor
A processor is connected to a main storage device and includes a cache memory unit, a tag memory unit, a main storage control unit, a cache control unit, a main storage access monitoring unit, a cache access monitoring unit, and a swap control unit. The cache memory unit includes a plurality of cache lines.
12/19/13
20130339620
Providing cache replacement notice using a cache miss request
A computing device has an interface and a processor. The interface is configured to receive a cache miss request from a cache memory, and the processor is configured to identify data that is being removed from the cache memory based at least in part on information obtained from the cache miss request.
12/19/13
20130339612
Apparatus and method for testing a cache memory
An apparatus generates test data for a cache memory that caches data in a cache line in accordance with a memory address. The apparatus generates a memory address to be accessed, data to be arranged in a storage area designated by the memory address, an access instruction for the memory address, and an expected value of the data that is to be cached in the cache memory when memory access is performed in accordance with the access instruction.
12/19/13
20130339576
Method for constructing address mapping table of solid state drive
A method for constructing an address mapping table of a solid state drive is provided. The address mapping table is stored in a non-volatile memory of the solid state drive.
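A common way such a logical-to-physical table is constructed (or reconstructed after power loss) is by scanning flash pages whose spare areas record the logical page number and a sequence number, letting the newest mapping win. A hedged Python sketch of that general FTL technique, which is an assumption here and not necessarily the patent's exact method:

```python
# Hedged sketch: rebuild a logical-to-physical address mapping table by
# scanning flash pages. Assumes (illustratively) each physical page's spare
# area stores its logical page number and a monotonically increasing sequence
# number, so the most recently written mapping for each logical page wins.
def build_mapping_table(pages):
    """pages: iterable of (physical_addr, logical_addr, seq_no) tuples.
    Returns dict logical_addr -> physical_addr, newest seq_no winning."""
    table, newest = {}, {}
    for phys, logical, seq in pages:
        if logical not in newest or seq > newest[logical]:
            newest[logical] = seq
            table[logical] = phys
    return table
```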
12/19/13
20130339569
Storage system and method for operating thereof
Storage system(s) for storing data in physical storage in a recurring manner, method(s) of operating thereof, and corresponding computer program product(s). For example, a possible method can include, for each recurrence: generating a snapshot of at least one logical volume; destaging all data corresponding to the snapshot which was accommodated in the cache memory prior to the time of generating the snapshot and which was dirty at that time, thus giving rise to a destaged data group; and after the destaged data group has been successfully destaged, registering an indication that the snapshot is associated with an order preservation consistency condition for the at least one logical volume, thus giving rise to a consistency snapshot.
12/12/13
20130332668
Methods and apparatuses for addressing memory caches
A cache memory includes cache lines to store information. The stored information is associated with physical addresses that include first, second, and third distinct portions.
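The three distinct address portions mentioned here correspond to the classic decomposition a set-associative lookup needs: a byte offset within the line, a set index, and a tag. A minimal sketch under an assumed geometry of 64-byte lines and 256 sets (illustrative values, not from the patent):

```python
# Hedged sketch: split a physical address into the three distinct portions of
# a cache lookup. The geometry below is an illustrative assumption.
LINE_BYTES = 64    # offset portion: low 6 bits
NUM_SETS = 256     # index portion: next 8 bits

def split_address(addr):
    """Return (tag, index, offset) for a physical address."""
    offset = addr % LINE_BYTES
    index = (addr // LINE_BYTES) % NUM_SETS
    tag = addr // (LINE_BYTES * NUM_SETS)
    return tag, index, offset
```

Recombining the portions (`tag * LINE_BYTES * NUM_SETS + index * LINE_BYTES + offset`) reproduces the original address, which is why the three fields together identify the data uniquely.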
12/12/13
20130332645
Synchronous and asynchronous discard scans based on the type of cache memory
A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space.
12/12/13
20130329879
Apparatus and methods for a scalable communications network
A method that incorporates teachings of the subject disclosure may include, for example, transmitting a first request for an identity of a first regional name authority pointer server of a plurality of regional name authority pointer servers to a national domain name system server responsive to determining that a name authority pointer associated with a telephone number is not stored in cache memory; transmitting a second request for the name authority pointer to the first regional name authority pointer server identified by the domain name system server, where the first regional name authority pointer server corresponds to a geographic region associated with the telephone number; and receiving the name authority pointer from the first regional name authority pointer server. Other embodiments are disclosed.
12/05/13
20130326272
Storage system and method of operating thereof
Storage system(s) for storing data in physical storage in a recurring manner, method(s) of operating thereof, and corresponding computer program product(s). For example, a possible method can include: upon start of a storage recurrence, destaging dirty data which had been accommodated in the cache memory prior to the start of said storage recurrence, thus giving rise to a destaged data group, wherein destaging is provided with no overwriting of at least superseded data destaged before starting said storage recurrence whilst enabling retaining metadata indicative of the location of said superseded data in the physical storage space; accommodating data obtained in said cache memory subsequent to the start of said storage recurrence whilst preventing said data from being destaged during said storage recurrence, thus giving rise to an accommodated data group; and registering a point-in-time indicative of successful destaging of the destaged data group, thereby providing an order-preservation consistency indication corresponding to said recurrence.
12/05/13
20130326157
Central processing unit and driving method thereof
A cache memory provided in the central processing unit is configured to include a data field which stores data from a main memory unit, a tag field which stores management information on the data stored in the data field, and a valid bit which stores information about whether the data stored in the data field and the management information stored in the tag field are valid or invalid. Nonvolatile memory cells are used as the memory cells which form the data field, the tag field, and the valid bit.
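The data field / tag field / valid bit triple described here is the standard cache-line organization, and the lookup rule it implies is simple: a hit requires both a set valid bit and a matching tag. A minimal Python sketch of that lookup path (the nonvolatile nature of the cells is a hardware property and is not modeled; all names are illustrative):

```python
# Hedged sketch of the lookup path for a cache line holding a data field,
# a tag field, and a valid bit, as in the abstract above.
class CacheLine:
    def __init__(self):
        self.valid = False   # valid bit: is the data/tag pair usable?
        self.tag = None      # tag field: management info identifying the data
        self.data = None     # data field: copy of main-memory data

def lookup(lines, index, tag):
    """Hit only when the line is valid AND its tag matches; else miss."""
    line = lines[index]
    if line.valid and line.tag == tag:
        return line.data
    return None

def fill(lines, index, tag, data):
    """Install data in a line, setting the valid bit."""
    line = lines[index]
    line.valid, line.tag, line.data = True, tag, data
```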
12/05/13
20130326127
Sub-block accessible nonvolatile memory cache
Subject matter disclosed herein relates to sub-block accessible cache memory.




This listing is a sample of recent patent applications related to Cache Memory and is not a comprehensive history. There may be associated servicemarks and trademarks related to these patents. Please check with a patent attorney if you need further assistance or plan to use them for business purposes. This patent data is also published by the USPTO and available for free on its website. Note that there may be alternative spellings for Cache Memory with additional patents listed. Browse our RSS directory or search for other possible listings.