|| List of recent Cache-related patents
|Client-server based interactive guide with server storage|
An interactive television program guide system is provided. The guide gives users an opportunity to select programs for recording on a remote media server.
|Process grouping for improved cache and memory affinity|
A computer program product for process allocation includes program code configured to determine a set of two or more processes, of a plurality of processes, that share at least one resource in a multi-node system, wherein each of the set of two or more processes runs on a different node of the multi-node system. The program code can be configured to calculate a value based on a weight of the resource and the frequency of access of the resource by each process.
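To make the scoring concrete, the sketch below (a Python toy, not the patent's actual method) computes a group's value as the resource weight times each member's access frequency, summed over the group; the function name and the exact formula are illustrative assumptions.

```python
# Hypothetical sketch of the weighted-affinity score described above:
# value = resource weight x per-process access frequency, summed per group.
# The formula and names are illustrative assumptions, not the patent's method.

def affinity_value(resource_weight, access_frequencies):
    """Score a candidate process group by how heavily its members
    use a shared resource."""
    return sum(resource_weight * freq for freq in access_frequencies)

# Processes A and B on different nodes both hit a shared lock (weight 3.0)
# 120 and 80 times per second; a high score suggests co-locating them.
print(affinity_value(3.0, [120, 80]))  # 600.0
```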
|Systems and methods for providing class loading for java applications|
A mechanism for providing class loading for a java application is disclosed. A method of the invention includes retrieving, by a processing device, a java class file.
|System and method for efficiently selecting data entities represented in a graphical user interface|
A system, method, and computer program product for selecting single or multiple data entities, based on selection of representative items in a graphical user interface via a user input gestural trajectory. Embodiments display items representing data entities, some of which may be selected by a user for further processing by crossing or surrounding the items with a pointing device, such as a mouse, or stylus or fingertip via a touchscreen device.
|Power management of multiple compute units sharing a cache|
We report methods, integrated circuit devices, and fabrication processes relating to power management transitions of multiple compute units sharing a cache. One method includes indicating that a first compute unit of a plurality of compute units of an integrated circuit device is attempting to enter a low power state, determining if the first compute unit is the only compute unit of the plurality in a normal power state, and in response to determining the first compute unit is the only compute unit in the normal power state: saving a state of a shared cache unit of the integrated circuit device, flushing at least a portion of a cache of the shared cache unit, repeating the flushing until either a second compute unit exits the low power state or the cache is completely flushed, and permitting the first compute unit to enter the low power state.
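The flush-and-retreat loop in the claim can be pictured with the following minimal software sketch; the chunked flush and the polled wake-up check are assumptions for illustration, whereas the patent describes hardware behavior.

```python
# Minimal sketch of the "flush until another unit wakes or the cache is
# empty" loop described above. Chunk size and the wake-up callback are
# illustrative assumptions; real hardware does this in a cache controller.

def flush_shared_cache(cache_lines, another_unit_awake, chunk=64):
    """Flush the shared cache in chunks, aborting early if a second
    compute unit leaves the low power state."""
    flushed = 0
    while flushed < len(cache_lines):
        if another_unit_awake():
            return False  # another unit woke up; stop flushing
        flushed += chunk  # flush the next chunk of lines
    return True  # cache completely flushed; low power state permitted

lines = list(range(512))
print(flush_shared_cache(lines, another_unit_awake=lambda: False))  # True
```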
|Scalable session management|
Scalable session management is achieved by generating a cookie that includes an encrypted session key and encrypted cookie data. The cookie data is encrypted using the session key.
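A minimal sketch of the two-layer cookie, using the third-party cryptography package (pip install cryptography); the field layout and key handling are assumptions for illustration, not the patent's format.

```python
# Cookie carries: (a) the session key encrypted under a server-held key,
# (b) the cookie data encrypted under the session key, as described above.
from cryptography.fernet import Fernet

server_key = Fernet.generate_key()   # held only by the servers
session_key = Fernet.generate_key()  # per-session key

cookie = {
    "enc_session_key": Fernet(server_key).encrypt(session_key),
    "enc_data": Fernet(session_key).encrypt(b"user=42;role=member"),
}

# Any server holding server_key can recover the session key, then the data,
# without shared session state -- which is what makes it scalable.
recovered_key = Fernet(server_key).decrypt(cookie["enc_session_key"])
print(Fernet(recovered_key).decrypt(cookie["enc_data"]))
```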
|Autonomic hotspot profiling using paired performance sampling|
A processor performance profiler identifies specific instructions causing performance issues within a program being executed by a microprocessor, using random sampling to find the worst-case offenders for a particular event type such as a cache miss or a branch mis-prediction. Tracking all instructions causing a particular event generates large data logs, creates performance penalties, and makes code analysis more difficult.
|Branch target buffer for emulation environments|
Branch instructions are managed in an emulation environment that is executing a program. A plurality of slots in a polymorphic inline cache is populated.
|Executing parallel operations to increase data access performance|
Techniques are described for increasing data access performance for a memory device. In various embodiments, a scheduler/controller is configured to manage data as it is read from or written to a memory.
In one embodiment, a method performed by one or more computing devices includes receiving at a host cache a first request for data comprising at least one snapshot of a cached logical unit number (lun), sending, by the host cache, the data comprising at least one snapshot of the cached lun in response to the first request, and in response to completing the sending of the data comprising at least one snapshot of the cached lun, sending, by the host cache, a first response indicating that sending the data is complete.
In one embodiment, a method performed by one or more computing devices includes receiving, at a host cache, a first request to prepare a volume of the host cache for creating a snapshot of a cached logical unit number (lun), the request indicating that a snapshot of the cached lun will be taken, preparing, in response to the first request, the volume of the host cache for creating the snapshot of the cached lun depending on a mode of the host cache, receiving, at the host cache, a second request to create the snapshot of the cached lun, and in response to the second request, creating, at the host cache, the snapshot of the cached lun.
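The prepare/create handshake in these two abstracts might look like the sketch below; the cache modes and their behaviors are illustrative assumptions.

```python
# Hedged sketch of the two-phase handshake: "prepare" readies the volume
# according to the cache mode, then "create" takes the snapshot.

class HostCache:
    def __init__(self, mode):
        self.mode = mode        # e.g. "write-back" or "write-through"
        self.prepared = set()
        self.snapshots = {}

    def prepare(self, lun):
        if self.mode == "write-back":
            pass  # would flush dirty data for the lun before a snapshot
        self.prepared.add(lun)

    def create_snapshot(self, lun, cached_data):
        assert lun in self.prepared, "prepare request must come first"
        self.snapshots[lun] = dict(cached_data)  # point-in-time copy
        return self.snapshots[lun]

hc = HostCache(mode="write-back")
hc.prepare("lun0")
print(hc.create_snapshot("lun0", {"blk0": b"data"}))
```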
|System cache with sticky allocation|
Methods and apparatuses for implementing a system cache within a memory controller. Multiple requesting agents may allocate cache lines in the system cache, and each line allocated in the system cache may be associated with a specific group id.
|Method for protecting a gpt cached disk's data integrity in an external operating system environment|
An invention is provided for protecting the data integrity of a cached storage device in an alternate operating system (os) environment. The invention includes replacing a globally unique identifiers partition table (gpt) for a cached disk with a modified globally unique identifiers partition table (mgpt).
|Transparent host-side caching of virtual disks located on shared storage|
Techniques for using a host-side cache to accelerate virtual machine (vm) i/o are provided. In one embodiment, the hypervisor of a host system can intercept an i/o request from a vm running on the host system, where the i/o request is directed to a virtual disk residing on a shared storage device.
|Method for protecting storage device data integrity in an external operating environment|
An invention is provided for protecting the data integrity of a cached storage device in an alternate operating system (os) environment. The invention includes replacing an actual partition table for a disk with a dummy partition table.
|Network traffic management using socket-specific syn request caches|
Methods, systems, and devices are described for managing network communications at a traffic manager module serving as a proxy to at least one network service for at least one client device. The traffic manager module may maintain a syn request cache for a socket implemented by the traffic manager module.
|Method and system for dynamic distributed data caching|
A method and system for dynamic distributed data caching is presented. The system includes one or more peer members and a master member.
|Predictive caching for content|
Disclosed are various embodiments for predictive caching of content to facilitate instantaneous use of the content. If a user is likely to commence use of a content item through a client, and if the client has available resources to facilitate instantaneous use, the client is configured to predictively cache the content item before the user commences use.
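The two-part admission test (likelihood of use plus available client resources) can be sketched as below; the threshold and the source of the probability are assumed for illustration.

```python
# Illustrative sketch of the test in the abstract: predictively cache only
# when use is likely AND the client has room. Values are assumptions.

def should_precache(p_use, free_bytes, item_bytes, p_threshold=0.7):
    """Cache ahead of time only when use is likely and space allows."""
    return p_use >= p_threshold and free_bytes >= item_bytes

print(should_precache(p_use=0.9, free_bytes=500_000_000,
                      item_bytes=200_000_000))  # True
print(should_precache(p_use=0.4, free_bytes=500_000_000,
                      item_bytes=200_000_000))  # False
```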
|Broker designation and selection in a publish-subscription environment|
Approaches for designating and/or selecting broker systems in a publication-subscription (pub-sub) messaging environment are provided. In one approach, a subscriber system may be designated as a broker system based on a capability of the subscriber system to function as a broker system for its peers.
|Methods, circuits, devices, systems and associated computer executable code for providing domain name resolution|
Disclosed are methods, circuits, devices, systems and associated computer executable code for providing domain name resolution functionality to data client devices accessing network data through an access point. According to some embodiments, an access point may be integral or otherwise functionally associated with a zone specific domain name system (zsdns), which zsdns may include a local cache of dns records, which local cache may be zone specific.
|System, method and computer program product for selectively caching domain name system information on a network gateway|
A system, method and computer program product is provided for selectively caching domain name system (dns) information on a network gateway. A cpe attached to the network gateway executes an application that searches files in cpe memory to identify frequently accessed domain names.
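A hedged sketch of frequency-based admission, where a name enters the gateway cache only after repeated lookups; the counter and threshold are illustrative assumptions rather than the patent's mechanism.

```python
# Sketch of "selectively cache frequently accessed domain names": count
# lookups per name and admit a name once it crosses a threshold.
from collections import Counter

class SelectiveDnsCache:
    def __init__(self, min_hits=3):
        self.hits = Counter()
        self.cache = {}       # domain -> ip
        self.min_hits = min_hits

    def resolve(self, domain, upstream_resolve):
        if domain in self.cache:
            return self.cache[domain]
        ip = upstream_resolve(domain)
        self.hits[domain] += 1
        if self.hits[domain] >= self.min_hits:   # frequent: worth caching
            self.cache[domain] = ip
        return ip

cache = SelectiveDnsCache(min_hits=2)
fake_upstream = lambda name: "93.184.216.34"  # stand-in resolver
for _ in range(3):
    cache.resolve("example.com", fake_upstream)
print("example.com" in cache.cache)  # True
```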
|Reduced disk space standby|
A method and system for replicating database data is provided. One or more standby database replicas can be used for servicing read-only queries, and the amount of storage required is scalable in the size of the primary database storage.
|Global indexing within an enterprise object store file system|
A file system is disclosed that includes an application wide name space instantiated in a global index (gindex) that is used for accessing objects related to an application. Using the gindex, a method for cache coherency includes establishing one or more appliances, each defining a storage cluster; establishing one or more tenants spanning across appliances, wherein an application stores objects in file systems associated with the appliances and tenants; establishing a gindex including metadata relating to objects stored in association with the application; replicating the gindex to a plurality of data centers supporting the tenants; storing an original object at a first data center; storing a cached copy of the object at a second data center; and aligning the cached copy using metadata for the object from a local copy of the gindex.
|Simulation system for simulating i/o performance of volume and simulation method|
An example is a simulation method for i/o performance of a volume of a first storage apparatus in a simulation target storage apparatus, including: obtaining an i/o history of a period regarding a first migration source volume in the first storage apparatus; obtaining first information indicating at least intra-volume addresses of cache data of the first migration source volume at start of the period; referring to the first information to determine cache data having addresses in a migration destination volume corresponding to at least some of the intra-volume addresses of the cache data of the first migration source volume and determining the determined cache data as cache data of a first simulation target volume of the simulation target storage apparatus; and issuing simulation-use i/o requests to the first simulation target volume according to the i/o history of the period to measure i/o performance of the first simulation target volume.
|Methods and apparatus for optimization of sim card initialization|
Methods and apparatus for initializing a sim card may include sending a request to read a file from the sim card. In addition, the methods and apparatus may include receiving a sim version identifier for the file from the sim card and determining whether the sim version identifier matches a cached version identifier in a cache.
|Server, server control method, program and recording medium|
A request receiver (101) receives, from a terminal, a request in which image id information and a parameter for image processing are specified. An image processor (102) acquires an image based on the id information specified in the received request, applies the image processing to the acquired image using the parameter specified in the request, and outputs information extracted from inside the image.
|Relay with efficient service change handling|
A relay device with efficient service change handling, and a method therefor, is provided. The relay comprises: a processor; a memory; a communication interface; and a plurality of connection objects, each of the plurality of connection objects comprising a respective queue of messages, each of the messages for relay in association with respective devices via the communication interface, the processor enabled to maintain, in the memory, a cache of associations between respective identifiers of the connection objects and identifiers associated with respective messages respectively queued therein; receive an indication of a service change to a given device; determine, from the cache, a subset of the plurality of connection objects comprising given messages associated with the given device; and, communicate only with the subset to apply an action associated with the service change to the given messages, while ignoring the remaining connection objects.
|Method and terminal for implementing display cache|
The embodiment of the disclosure discloses a method and terminal for implementing display cache, which comprise storing, in memory, texts to be displayed as text component objects, and creating a cache image object with the same size as a stored text component when displaying the text component on a screen. In the solution of display cache according to the embodiment of the disclosure, cache images are only created for text regions.
|Data service function|
A data service function for an intelligent television (tv) includes a source plugin that communicates with and receives data from an external content provider and processes the received data into a data model format. The data service function further includes a subservice corresponding to the source plugin; the subservice communicates with the intelligent tv and provides the converted data to it as content.
A thumbnail management system is provided for an intelligent tv. Thumbnails may accumulate over time and require removal from a data storage device.
|Epg data functions|
An epg data service for an intelligent tv includes various source plugins receiving epg information from various respective epg information sources, an epg subservice aggregating the epg information received by the source plugins, an epg database storing the aggregated epg information from the epg subservice, and an epg provider providing a relevant portion of the aggregated epg information to an application of the intelligent tv. The epg data service further includes a tag subservice receiving notification from a second application of the intelligent tv to set or unset tags for programs or channels, storing the tags in a database, and serving the epg subservice with information regarding tagged programs or channels.
|Thread processing method and thread processing system|
A thread processing method that is executed by a multi-core processor includes supplying a command to execute a first thread to a first processor; judging a dependence relationship between the first thread and a second thread to be executed by a second processor; comparing a first threshold and a frequency of access of any one among shared memory and shared cache memory by the first thread; and changing a phase of a first operation clock of the first processor when the access frequency is greater than the first threshold and upon judging that no dependence relationship exists.
|Using a buffer to replace failed memory cells in a memory component|
Methods and data processing systems for using a buffer to replace failed memory cells in a memory component are provided. Embodiments include determining that a first copy of data stored within a plurality of memory cells of a memory component contains one or more errors; in response to determining that the first copy contains one or more errors, determining whether a backup cache within the buffer contains a second copy of the data; and in response to determining that the backup cache contains the second copy of the data, transferring the second copy from the backup cache to a location within an error data queue (edq) within the buffer and updating the buffer controller to use the location within the edq instead of the plurality of memory cells within the memory component.
|System and detection mode|
A system includes a cpu; a sensor that detects power of the cpu; a cache memory state monitoring circuit that monitors a state of a cache memory; and a detection circuit that, based on a sensor signal from the sensor and a state signal from the cache memory state monitoring circuit, detects a spin state of a program executed by the cpu.
|Dma engine with stlb prefetch capabilities and tethered prefetching|
A system with a prefetch address generator coupled to a system translation look-aside buffer that comprises a translation cache. Prefetch requests are sent for page address translations for predicted future normal requests.
|Write transaction management within a memory interconnect|
A memory interconnect between transaction masters and a shared memory. A first snoop request is sent to the other transaction masters to trigger them to invalidate any local copy of the data they may hold and to return any cached line of data, corresponding to the written line of data, that is dirty.
|Method for reducing the overhead associated with a virtual machine exit when handling instructions related to descriptor tables|
A computerized method for efficient handling of a privileged instruction executed by a virtual machine (vm). The method comprises identifying when the privileged instruction causes a vm executed on a computing hardware to perform a vm exit; replacing a first virtual-to-physical address mapping with a second virtual-to-physical address mapping respective of a virtual pointer associated with the privileged instruction; and invalidating at least a cache entry in a cache memory allocated to the vm, thereby causing a new translation for the virtual pointer to the second virtual-to-physical address, wherein the second virtual-to-physical address provides a pointer to a physical address in a physical memory in the computing hardware allocated to the vm.
|Data type dependent memory scrubbing|
A method for controlling a memory scrubbing rate based on content of the status bit of a tag array of a cache memory. More specifically, the tag array of a cache memory is scrubbed at a smaller interval than the scrubbing rate of the storage arrays of the cache.
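One way to picture status-driven scrub rates is the sketch below; the concrete intervals and the dirty-line test are invented for illustration.

```python
# Illustrative sketch: the tag array is scrubbed on a short interval, and
# a data array's interval shrinks when its tags report modified lines.

def scrub_interval(is_tag_array, dirty_lines, base_s=60.0):
    if is_tag_array:
        return base_s / 4          # tags scrubbed at a smaller interval
    return base_s / 2 if dirty_lines else base_s

print(scrub_interval(is_tag_array=True, dirty_lines=0))    # 15.0
print(scrub_interval(is_tag_array=False, dirty_lines=12))  # 30.0
```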
|Efficient trace capture buffer management|
A system and method for efficiently storing traces of multiple components in an embedded system. A system-on-a-chip (soc) includes a trace unit for collecting and storing trace history, bus event statistics, or both.
|Programmable resources to track multiple buses|
A system and method for efficiently monitoring traces of multiple components in an embedded system. A system-on-a-chip (soc) includes a trace unit for collecting and storing trace history, bus event statistics, or both.
|Data cache prefetch hints|
The present invention provides a method and apparatus for using prefetch hints. One embodiment of the method includes bypassing, at a first prefetcher associated with a first cache, issuing requests to prefetch data from a number of memory addresses in a sequence of memory addresses determined by the first prefetcher.
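The bypass behavior can be sketched as a stride prefetcher that skips addresses a hint marks as already covered; the hint set and the stride logic are assumptions for illustration.

```python
# Minimal sketch of the bypass described above: skip prefetching addresses
# that a hint says another prefetcher will fetch.

def prefetch_candidates(last_addr, stride, depth, hinted_covered):
    """Yield prefetch addresses, bypassing any the hints say to skip."""
    for i in range(1, depth + 1):
        addr = last_addr + i * stride
        if addr in hinted_covered:
            continue  # hint: another prefetcher covers this address
        yield addr

hints = {0x1040, 0x1080}
print([hex(a) for a in prefetch_candidates(0x1000, 0x40, 4, hints)])
# ['0x10c0', '0x1100']
```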
|Selective memory scrubbing based on data type|
A method for minimizing soft error rates within caches by controlling a memory scrubbing rate selectively for a cache memory at an individual bank level. More specifically, the disclosure relates to maintaining a predetermined sequence and process of storing all modified information of a cache in a subset of ways of the cache, based upon, for example, a state of a modified indication within status information of a cache line.
|Processor and control method for processor|
A processor includes a plurality of nodes arranged two-dimensionally in the x-axis direction and in the y-axis direction, and each of the nodes includes a processor core and a distributed shared cache memory. The processor also includes a first connecting unit and a second connecting unit.
|Random access of a cache portion using an access module|
A data processing system having a first processor, a second processor, a local memory of the second processor, and a built-in self-test (bist) controller of the second processor which can be randomly enabled to perform memory accesses on the local memory of the second processor and which includes a random value generator is provided. The system can perform a method including executing a secure code sequence by the first processor and performing, by the bist controller of the second processor, bist memory accesses to the local memory of the second processor in response to the random value generator.
|Store-exclusive instruction conflict resolution|
A data processing system includes a plurality of transaction masters (4, 6, 8, 10) each with an associated local cache memory (12, 14, 16, 18) and coupled to coherent interconnect circuitry (20). Monitoring circuitry (24) within the coherent interconnect circuitry (20) maintains a state variable (flag) in respect of each of the transaction masters to monitor whether an exclusive store access state is pending for that transaction master.
Each tag entry may be associated with a cache line from a cache belonging to a first domain. The first domain may contain multiple caches.
|System translation look-aside buffer integrated in an interconnect|
System tlbs are integrated within an interconnect and share a transport network to connect to a shared walker port. Transactions are able to pass stlb allocation information through a second initiator side interconnect, in a way that interconnects can be cascaded, so as to allow initiators to control a shared stlb within the first interconnect.
|System, method, and computer program product for managing cache miss requests|
A system, method, and computer program product are provided for managing miss requests. In use, a miss request is received at a unified miss handler from one of a plurality of distributed local caches.
|Using a shared last-level tlb to reduce address-translation latency|
The disclosed embodiments provide techniques for reducing address-translation latency and the serialization latency of combined tlb and data cache misses in a coherent shared-memory system. For instance, the last-level tlb structures of two or more multiprocessor nodes can be configured to act together as either a distributed shared last-level tlb or a directory-based shared last-level tlb.
|Reduced scalable cache directory|
A processing network comprising a cache configured to store copies of memory data as a plurality of cache lines, a cache controller configured to receive data requests from a plurality of cache agents, and designate at least one of the cache agents as an owner of a first of the cache lines, and a directory configured to store cache ownership designations of the first cache line, and wherein the directory is encoded to support substantially simultaneous ownership of the first cache line by a plurality but less than all of the cache agents. Also disclosed is a method comprising receiving coherent transactions from a plurality of cache agents, and storing ownership designations of a plurality of cache lines by the cache agents in a directory, wherein the directory is configured to support storage of substantially simultaneous ownership designations for a plurality but less than all of the cache agents.
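A directory entry that encodes a bounded owner set might look like the sketch below; the fixed cap and the refusal fallback are assumptions, chosen to show why the encoding stays smaller than tracking every agent.

```python
# Sketch of a directory entry recording simultaneous owners of a cache
# line, capped below the total number of agents as the abstract describes.

class DirectoryEntry:
    def __init__(self, max_owners):
        self.owners = set()
        self.max_owners = max_owners  # strictly less than all agents

    def add_owner(self, agent_id):
        if len(self.owners) >= self.max_owners:
            return False  # encoding full: caller must downgrade/invalidate
        self.owners.add(agent_id)
        return True

entry = DirectoryEntry(max_owners=3)   # e.g. 8 agents, 3 tracked owners
for agent in ("a0", "a1", "a2", "a3"):
    print(agent, entry.add_owner(agent))
# a0..a2 succeed; a3 is refused because the directory encoding is full
```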
|Information processing apparatus, information processing method, and program|
An information processing apparatus includes a plurality of cache memories, a plurality of processors configured to respectively access the plurality of cache memories, and a memory, in which each of the plurality of processors executes a program to function as a cache processing unit configured to perform cache processing including at least one of transfer to the memory and discard with respect to all the pieces of data stored in the cache memory.
|Multi-ported memory with multiple access support|
A multi-ported memory that supports multiple read and write accesses is described. The multi-ported memory may include a number of read/write ports that is greater than the number of read/write ports of each memory bank of the multi-ported memory.
|Multi-ported memory with multiple access support|
A multi-ported memory that supports multiple read and write accesses is described herein. The multi-ported memory may include a number of read/write ports that is greater than the number of read/write ports of each memory bank of the multi-ported memory.
|Memory device with a logical-to-physical bank mapping cache|
A memory device with a logical-to-physical (ltp) bank mapping cache that supports multiple read and write accesses is described herein. The memory device allows for at least one read operation and one write operation to be received during the same clock cycle.
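The same-cycle read/write steering could be pictured as below; the bank-selection policy and table layout are assumptions, sketched only to show how a logical-to-physical mapping lets a read and a write land in different banks.

```python
# Illustrative sketch: a read and a write in the same clock cycle are
# steered to different physical banks, and the mapping is updated so later
# reads of the written logical address find the data.

class LtpBankCache:
    def __init__(self, n_banks):
        self.n_banks = n_banks
        self.ltp = {}  # logical address -> (bank, row)

    def same_cycle(self, read_laddr, write_laddr, write_row):
        read_bank, _ = self.ltp.get(read_laddr, (0, None))
        # Pick a physical bank not busy with the read this cycle.
        write_bank = (read_bank + 1) % self.n_banks
        self.ltp[write_laddr] = (write_bank, write_row)
        return read_bank, write_bank

mem = LtpBankCache(n_banks=4)
mem.ltp[0x10] = (2, 5)
print(mem.same_cycle(read_laddr=0x10, write_laddr=0x20, write_row=7))  # (2, 3)
```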
|Hybrid caching system|
A system operable to: receive a request for an application unit from a first device; generate a key for the application unit; look up segment cache indices corresponding to the application unit, according to the key; and determine whether the segment cache indices are available. Where the segment cache indices are available, the system may retrieve a segment cache using the segment cache indices, and then retrieve the application unit using the retrieved segment cache.
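The lookup path (key, then indices, then segments) might be sketched like this; the hashing scheme and the two tables are assumptions for illustration.

```python
# Hedged sketch: key the request, find segment cache indices, and rebuild
# the application unit from cached segments; fetch and populate on a miss.
import hashlib

segment_index = {}   # key -> list of segment ids
segment_store = {}   # segment id -> bytes

def get_application_unit(request_id, fetch_from_origin):
    key = hashlib.sha256(request_id.encode()).hexdigest()
    ids = segment_index.get(key)
    if ids is not None:  # indices available: assemble from segment cache
        return b"".join(segment_store[i] for i in ids)
    data = fetch_from_origin(request_id)  # miss: fetch and populate
    seg_ids = [f"{key}:0"]
    segment_store[seg_ids[0]] = data
    segment_index[key] = seg_ids
    return data

print(get_application_unit("app-unit-1", lambda _id: b"payload"))
print(get_application_unit("app-unit-1", lambda _id: b"SHOULD NOT FETCH"))
```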
|Cache coherent handshake protocol for in-order and out-of-order networks|
Disclosed herein is a processing network element (ne) comprising at least one receiver configured to receive a plurality of memory request messages from a plurality of memory nodes, wherein each memory request designates a source node, a destination node, and a memory location, and a plurality of response messages to the memory requests from the plurality of memory nodes, wherein each response message designates a source node, a destination node, and a memory location, at least one transmitter configured to transmit the memory requests and memory responses to the plurality of memory nodes, and a controller coupled to the receiver and the transmitter and configured to enforce ordering such that memory requests and memory responses designating the same memory location and the same source node/destination node pair are transmitted by the transmitter in the same order received by the receiver.
|Methods and apparatus for providing acceleration of virtual machines in virtual environments|
A host server computer system that includes a hypervisor within a virtual space architecture running at least one virtualization, acceleration and management server and at least one virtual machine, at least one virtual disk that is read from and written to by the virtual machine, a cache agent residing in the virtual machine, wherein the cache agent intercepts read or write commands made by the virtual machine to the virtual disk, and a solid state drive. The solid state drive includes a non-volatile memory storage device, a cache device and a memory device driver providing a cache primitives application programming interface to the cache agent and a control interface to the virtualization, acceleration and management server.
|Http-based content acquisition method and client|
Embodiments of the present invention provide an http-based content acquisition method and client. The method includes: acquiring, by a client according to an acquired content identifier, a first content corresponding to the content identifier and a validity period of the first content from a cache; displaying, by the client, the first content; requesting, by the client according to the validity period of the first content, a server to verify validity of the first content; requesting, by the client, to acquire a second content corresponding to the content identifier from the server if the validity verification performed by the server for the first content fails; and displaying, by the client, the second content.
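The client flow can be sketched as a TTL-gated revalidation: serve the cached first content immediately, then check validity with the server and fetch the second content on failure. The clock handling and the callbacks below are assumptions for illustration.

```python
# Minimal sketch of the flow above: display cached content, revalidate
# after its validity period, and fetch replacement content on failure.
import time

cache = {}  # content_id -> (content, expires_at)

def display(content):
    print("displaying:", content)

def acquire(content_id, verify_valid, fetch_second):
    content, expires_at = cache[content_id]  # first content assumed cached
    display(content)                         # show it immediately
    if time.time() > expires_at and not verify_valid(content_id):
        content = fetch_second(content_id)   # validity check failed
        display(content)
    return content

cache["page1"] = ("v1", time.time() - 1)  # already expired
acquire("page1", verify_valid=lambda _id: False,
        fetch_second=lambda _id: "v2")
```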
|Distributed cache system for optical networks|
Caching techniques are described. An example network device positioned between an optical line terminal (olt) and a service provider device includes a hot cache, a wide cache controller, and a control unit.
|Content distribution system, control apparatus, and content distribution method|
A control apparatus computes an access frequency for a content item stored in a plurality of cache servers that temporarily hold a content item based on a number of accesses to the content item, determines disposition of content items in the plurality of cache servers, using at least one of a load status of the plurality of cache servers, topology information of a mobile network, in-zone information of a terminal requesting a content item, and the access frequency to instruct the plurality of cache servers to obtain a content item according to the determined disposition, and, upon receipt of a request for a content item from the terminal, instructs a cache server that holds the content item among the plurality of cache servers to transmit the content item through a packet forwarding apparatus.
|Dynamic content assembly on edge-of network servers in a content delivery network|
Content is dynamically assembled at the edge of the internet, preferably on content delivery network (cdn) edge servers. A content provider leverages an “edge side include” (esi) markup language that is used to define web page fragments for dynamic assembly at the edge.
|Processing, storing, and delivering digital content|
Implementations of the present invention include a public cloud, one or more end-caches and optionally one or more edge-caches in a computerized architecture that provides digital content, such as entertainment services and/or informational content, to a guest display (e.g., end-cache connected to in-room tv, end-cache connected to personal portable device) or control of one or more devices (e.g., in-room tv and/or in-room control). Implementations of the present invention also include a content distribution architecture that uses the public internet to securely transmit digital content and data to all desired locations (e.g., end-caches).
|Updating cached database query results|
A data cache platform maintains pre-computed database query results computed by a computation platform based on data maintained in the computation platform and is configured to determine probabilities of the cached database query results being outdated, to automatically issue re-computation orders to the computation platform for updating cached database query results on the basis of the determined probabilities of the pre-computed database query results being outdated and to receive the updated pre-computed database query results as results of the re-computation orders. The probability determination depends on a probabilistic model and on the occurrence of asynchronous real-time events.
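The probability model could be, for instance, an exponential staleness curve over result age and real-time event rate; the model and threshold below are assumptions for illustration, not the platform's actual model.

```python
# Sketch of probability-driven refresh: estimate the chance a cached query
# result is outdated and issue re-computation orders for likely-stale ones.
import math

def p_outdated(age_s, event_rate_per_s):
    """Probability the cached result no longer matches the source data."""
    return 1.0 - math.exp(-event_rate_per_s * age_s)

entries = {"fare_q1": (3600, 0.0005), "fare_q2": (60, 0.0005)}  # (age, rate)
orders = [q for q, (age, rate) in entries.items()
          if p_outdated(age, rate) > 0.5]
print(orders)  # ['fare_q1'] -- only the old entry is worth re-computing
```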
|Hardware implementation of the aggregation/group by operation: filter method|
Techniques are described for performing grouping and aggregation operations. In an embodiment, a request is received to aggregate data grouped by a first column.
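A toy software analogue of the requested operation appears below; the hardware filter method itself is not shown, only the "aggregate data grouped by a first column" step it accelerates.

```python
# Group rows by the first column and sum a second column.
from collections import defaultdict

rows = [("east", 10), ("west", 5), ("east", 7), ("west", 3)]
sums = defaultdict(int)
for group_key, value in rows:
    sums[group_key] += value
print(dict(sums))  # {'east': 17, 'west': 8}
```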
|Virtual machine image access de-duplication|
A system and an article of manufacture for de-duplicating virtual machine image accesses include identifying one or more identical blocks in two or more images in a virtual machine image repository, generating a block map for mapping different blocks with identical content into a same block, deploying a virtual machine image by reconstituting an image from the block map and fetching any unique blocks remotely on-demand, and de-duplicating virtual machine image accesses by storing the deployed virtual machine image in a local disk cache.
|Legal text distribution and processing in mobile broadcasting|
Systems and methods for processing and distributing legal text information allow content providers to distribute legal text to terminals receiving broadcast content. The legal text may include terms and conditions associated with content that a user may want to purchase or subscribe to.