FreshPatents.com

Cache patents



      
           
This page is updated frequently with new Cache-related patent applications.



List of recent Cache-related patent applications (date / application number / title / abstract)
07/24/14
20140208326
File presenting method and apparatus for a smart terminal
A file presenting method and apparatus for a smart terminal are provided. A user interface thread determines, from the type of a file, whether a thumbnail of the file should be presented and, if so, places the file's loading information in a loading queue. A loading thread acquires the loading information from the queue and determines whether a cache of the smart terminal already stores the thumbnail: if not, it generates the thumbnail in accordance with the loading information and stores it in the cache; if so, it acquires the thumbnail from the cache. The thumbnail is then presented as the icon of the file.
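The queue-and-cache flow in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the patented implementation; the class and names (`ThumbnailCache`, `request`, `load_next`) are invented here:

```python
# Minimal sketch of the described flow: a UI thread enqueues loading
# information, and a loading thread checks a cache before generating
# a thumbnail. All names here are illustrative, not from the patent.

from collections import deque

class ThumbnailCache:
    def __init__(self):
        self.cache = {}            # path -> thumbnail (stand-in value)
        self.loading_queue = deque()

    def request(self, path, file_type):
        # UI thread: only thumbnail-worthy file types are enqueued.
        if file_type in ("jpg", "png", "mp4"):
            self.loading_queue.append(path)

    def load_next(self):
        # Loading thread: serve from cache, or generate and store.
        path = self.loading_queue.popleft()
        if path not in self.cache:
            self.cache[path] = f"thumb:{path}"  # stand-in for real generation
        return self.cache[path]

cache = ThumbnailCache()
cache.request("a.jpg", "jpg")
cache.request("b.txt", "txt")   # not a thumbnail type: ignored
first = cache.load_next()       # generated and cached
cache.request("a.jpg", "jpg")
second = cache.load_next()      # now served from the cache
```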
07/24/14
20140208295
Method and system for creating and managing a dynamic route topography for service oriented software environments
A system, method, and computer-readable medium are provided for managing a route topography in a software environment. The system includes a dashboard user interface for allowing a user to manage the services that are part of a software application.
07/24/14
20140208142
Semiconductor device
Supply of power to a plurality of circuits is controlled efficiently depending on usage conditions and the like of the circuits. An address monitoring circuit monitors whether a cache memory and an input/output interface are in an access state or not, and performs power gating in accordance with the state of the cache memory and the input/output interface.
07/24/14
20140208040
Creating a program product or system for executing an instruction for pre-fetching data and releasing cache lines
Systems and program products are created to execute a prefetch-data machine instruction having an M field, which performs a function on a cache line of data at an address specified by an operand. The operation comprises prefetching a cache line of data from memory into a cache, reducing the access ownership of the cache line in the cache to store-and-fetch or fetch-only, or a combination thereof.
07/24/14
20140208039
Methods and apparatus to reduce cache pollution caused by data prefetching
Efficient techniques are described for reducing cache pollution by use of a prefetch logic that recognizes exits from software loops or function returns to cancel any pending prefetch request operations. The prefetch logic includes a loop data address monitor to determine a data access stride based on repeated execution of a memory access instruction in a program loop.
07/24/14
20140208038
Sectored cache replacement algorithm for reducing memory writebacks
A sectored cache replacement algorithm is implemented via a method and computer program product. The method and computer program product select a cache sector among a plurality of cache sectors for replacement in a computer system.
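One plausible reading of such an algorithm, preferring clean (unmodified) sectors as eviction victims so that fewer memory writebacks occur, can be sketched as below. The policy, field names, and tie-breaking rule are assumptions for illustration, not the patented method:

```python
# Illustrative victim selection for a sectored cache: prefer a clean
# (not dirty) sector so eviction needs no memory writeback. This is a
# sketch of the general idea, not the patented algorithm.

def pick_victim(sectors):
    """sectors: list of dicts with 'id', 'dirty', 'last_used' fields."""
    clean = [s for s in sectors if not s["dirty"]]
    pool = clean if clean else sectors
    # Among candidates, evict the least recently used one.
    return min(pool, key=lambda s: s["last_used"])["id"]

sectors = [
    {"id": 0, "dirty": True,  "last_used": 1},
    {"id": 1, "dirty": False, "last_used": 5},
    {"id": 2, "dirty": False, "last_used": 3},
]
victim = pick_victim(sectors)   # clean sector 2 beats older-but-dirty 0
```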
07/24/14
20140208037
Expiring virtual content from a cache in a virtual universe
Approaches for expiring cached virtual content in a virtual universe are provided. In one approach, there is an expiration tool, including an identification component configured to identify virtual content associated with an avatar in the virtual universe, an analysis component configured to analyze a behavior of the avatar in a region of the virtual universe, the behavior indicating a likely future location of the avatar, and an expiration component configured to expire cached virtual content associated with the avatar based on the behavior of the avatar in the region of the virtual universe, wherein the cached virtual content associated with the avatar in the future location is maintained in the cache longer than cached virtual content associated with the avatar in another region of the virtual universe.
07/24/14
20140208036
Performing staging or destaging based on the number of waiting discard scans
A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether more than a threshold number of discard scans are waiting to be performed.
07/24/14
20140208035
Cache circuit having a tag array with smaller latency than a data array
A method is described that includes alternating cache requests sent to a tag array between data requests and dataless requests.
07/24/14
20140208034
System and method for efficient paravirtualized OS process switching
The exemplary embodiments described herein relate to systems and methods for improved process switching of a paravirtualized guest with a software-based memory management unit (“MMU”). One embodiment relates to a non-transitory computer readable storage medium including a set of instructions executable by a processor, the set of instructions, when executed, resulting in a performance of the following: create a plurality of new processes for each of a plurality of virtual environments, each of the virtual environments assigned one of a plurality of address space identifiers (“ASIDs”) stored in a cache memory, perform a process switch to one of the virtual environments thereby designating the one of the virtual environments as the active virtual environment, determine whether the active virtual environment has exhausted each of the ASIDs, and flush a cache memory when it is determined that the active virtual environment has exhausted each of the ASIDs.
07/24/14
20140208032
Use of flash cache to improve tiered migration performance
For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, and at a time in which at least one data segment is to be migrated from one level to another level of the tiered levels of storage, a data migration mechanism is initiated by copying data resident in the lower-speed cache corresponding to the at least one data segment to be migrated to a target on the another level, and reading remaining data, not previously copied from the lower-speed cache, from a source on the one level, and writing the remaining data to the target.
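The migration mechanism described above, copying whatever already sits in the lower-speed cache directly to the target tier and reading only the remainder from the source tier, can be sketched as follows. Function and variable names are invented for illustration:

```python
# Sketch of the described migration: data already in the lower-speed
# cache is copied straight to the target tier, and only the remainder
# is read from the source tier. Names are illustrative.

def migrate_segment(segment_blocks, lower_speed_cache, source_tier):
    target = {}
    remaining = []
    for b in segment_blocks:
        if b in lower_speed_cache:
            target[b] = lower_speed_cache[b]   # copy from the cache
        else:
            remaining.append(b)
    for b in remaining:
        target[b] = source_tier[b]             # read the rest from the source
    return target

cache = {"b0": "x"}
source = {"b0": "x", "b1": "y"}
migrated = migrate_segment(["b0", "b1"], cache, source)
```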
07/24/14
20140208031
Apparatus and method for memory-hierarchy aware producer-consumer instructions
An apparatus and method are described for efficiently transferring data from a producer core to a consumer core within a central processing unit (CPU). For example, one embodiment of a method comprises: writing data to a fill buffer within the producer core of the CPU until a designated amount of data has been written; upon detecting that the designated amount of data has been written, responsively generating an eviction cycle, the eviction cycle causing the data to be transferred from the fill buffer to a cache accessible by both the producer core and the consumer core; and upon the consumer core detecting that data is available in the cache, providing the data to the consumer core from the cache upon receipt of a read signal from the consumer core.
07/24/14
20140208029
Use of flash cache to improve tiered migration performance
For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, and at a time in which at least one data segment is to be migrated from one level to another level of the tiered levels of storage, a data migration mechanism is initiated by copying data resident in the lower-speed cache corresponding to the at least one data segment to be migrated to a target on the another level, and reading remaining data, not previously copied from the lower-speed cache, from a source on the one level, and writing the remaining data to the target.
07/24/14
20140208027
Configurable cache and method to configure same
A method includes receiving an address at a tag state array of a cache, wherein the cache is configurable to have a first size and a second size that is smaller than the first size. The method further includes identifying a first portion of the address as a set index, wherein the first portion has a same number of bits when the cache has the first size as when the cache has the second size.
07/24/14
20140208021
Thinly provisioned flash cache with shared storage pool
For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, a solid state device (ssd) tier is variably shared between the lower-speed cache and the managed tiered levels of storage such that the managed tiered levels of storage are operational on large data segments, and the lower-speed cache is allocated with the large data segments, yet operates with data segments of a smaller size than the large data segments and within the large data segments.
07/24/14
20140208020
Use of differing granularity heat maps for caching and migration
For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to utilize a solid state drive (ssd) portion of the tiered levels of storage, while sparsely hot ones of the groups of data segments are migrated to utilize the lower-speed cache.
07/24/14
20140208019
Caching method and caching system using dual disks
A caching method and a caching system using dual disks, adapted to an electronic apparatus having a first storage unit and a second storage unit, are provided, in which an access speed of the second storage unit is higher than that of the first storage unit. In the method, a data access to the first storage unit is monitored, a data category of the data in an access address of the data access is identified and whether the data category belongs to a cache category is determined.
07/24/14
20140208018
Tiered caching and migration in differing granularities
For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to use a solid state drive (ssd) portion of the tiered levels of storage, clumped hot ones of the groups of data segments are migrated to use the ssd portion while using the lower-speed cache for a remaining portion of the clumped hot ones, and sparsely hot ones of the groups of data segments are migrated to use the lower-speed cache while using a lower one of the tiered levels of storage for a remaining portion of the sparsely hot ones.
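The three-way placement described above (uniformly hot, clumped hot, sparsely hot) can be sketched with a per-group heat map, one heat value per small segment. The thresholds and return labels below are invented for illustration; the abstract does not specify them:

```python
# Sketch of the three-way placement: classify a group of data segments
# by what fraction of its per-segment heat values are "hot". Thresholds
# are illustrative assumptions, not from the patent.

def classify_group(heats, hot=0.5, uniform_frac=0.9, clumped_frac=0.3):
    hot_frac = sum(h >= hot for h in heats) / len(heats)
    if hot_frac >= uniform_frac:
        return "ssd"                 # uniformly hot: whole group to SSD
    if hot_frac >= clumped_frac:
        return "ssd+cache"           # clumped hot: SSD plus lower-speed cache
    return "cache+lower-tier"        # sparsely hot: cache plus a lower tier

uniform = classify_group([0.9] * 10)
clumped = classify_group([0.9] * 4 + [0.1] * 6)
sparse  = classify_group([0.9] + [0.1] * 9)
```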
07/24/14
20140208017
Thinly provisioned flash cache with shared storage pool
For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, a solid state device (ssd) tier is variably shared between the lower-speed cache and the managed tiered levels of storage such that the managed tiered levels of storage are operational on large data segments, and the lower-speed cache is allocated with the large data segments, yet operates with data segments of a smaller size than the large data segments and within the large data segments.
07/24/14
20140208009
Storage system
A storage system in an embodiment of this invention comprises a non-volatile storage area for storing write data from a host, a cache area capable of temporarily storing the write data before storing the write data in the non-volatile storage area, and a controller that determines whether to store the write data in the cache area or to store the write data in the non-volatile storage area without storing the write data in the cache area, and stores the write data in the determined area.
07/24/14
20140208005
System, method and computer-readable medium for providing selective protection and endurance improvements in flash-based cache
A cache controller includes a cache memory distributed across multiple solid-state storage units in which cache line fill operations are applied sequentially in a defined manner and write operations are protected by a RAID-5 (striping plus parity) scheme upon a stripe reaching capacity. The cache store is responsive to data from a storage controller managing a primary data store.
07/24/14
20140208001
Techniques for achieving crash consistency when performing write-behind caching using a flash storage-based cache
Techniques for achieving crash consistency when performing write-behind caching using a flash storage-based cache are provided. In one embodiment, a computer system receives from a virtual machine a write request that includes data to be written to a virtual disk and caches the data in a flash storage-based cache.
07/24/14
20140207999
Performing staging or destaging based on the number of waiting discard scans
A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether more than a threshold number of discard scans are waiting to be performed.
07/24/14
20140207995
Use of differing granularity heat maps for caching and migration
For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to utilize a solid state drive (ssd) portion of the tiered levels of storage, while sparsely hot ones of the groups of data segments are migrated to utilize the lower-speed cache.
07/24/14
20140207987
Multiprocessor system with multiple concurrent modes of execution
A multiprocessor system supports multiple concurrent modes of speculative execution. Speculation identification numbers (ids) are allocated to speculative threads from a pool of available numbers.
07/24/14
20140207981
Cached PHY register data access
Ethernet physical sublayer (PHY) devices each provide PHY register data. One or more of the Ethernet PHY devices are connected to each of one or more management data input/output (MDIO)/management data clock (MDC) interfaces to which a number of MDIO/MDC controllers are connected.
07/24/14
20140207835
Configuring a cached website file removal using a pulled data list
An exemplary method generates a data list of at least one website and configures a server computer to clear a cache for that website. The method may comprise the steps of the server computer requesting a data list generated from one or more job records, identifying one or more websites within the data list whose files are to be removed from a cache on another server, removing the website file(s) from the cache, and transmitting instructions to write a job check-in record to a database on the other server.
07/24/14
20140207825
System, method and computer program product for efficient caching of hierarchical items
Embodiments disclosed herein provide a “lazy” approach in caching a hierarchical navigation tree with one or more associated permission trees. In one embodiment, only a portion of a cached permission tree is updated.
07/24/14
20140207566
Device session identification system
Methods and systems for the identification of electronic devices between web browser and native application sessions are disclosed. The identification process can function even with interrupted internet connectivity.
07/24/14
20140206738
Lipase inhibitors
A compound of formula (I) is useful in the treatment and prevention of disorders such as cachexia, stroke, atherosclerosis, coronary artery disease, and diabetes, as are pharmaceutical compositions of the same. Also provided is a method of screening for lipase inhibitors using a compound of formula (I) and determining its lipase inhibitory activity.
07/24/14
20140205012
Method and apparatus using software engine and hardware engine collaborated with each other to achieve hybrid video encoding
One video encoding method includes: performing a first part of a video encoding operation by a software engine with instructions, wherein the first part of the video encoding operation comprises at least a motion estimation function; delivering a motion estimation result generated by the motion estimation function to a hardware engine; and performing a second part of the video encoding operation by the hardware engine. Another video encoding method includes: performing a first part of a video encoding operation by a software engine with instructions and a cache buffer; performing a second part of the video encoding operation by a hardware engine; performing data transfer between the software engine and the hardware engine through the cache buffer; and performing address synchronization to ensure that a same entry of the cache buffer is correctly addressed and accessed by both of the software engine and the hardware engine..
07/24/14
20140204108
Pixel cache, method of operating pixel cache, and image processing device including pixel cache
A method of operating a pixel cache having a plurality of linefill units and configured to fetch an image stored in a main memory includes receiving a request for data of one or more image planes from a processor and, if the request for at least one image plane is determined to be a “hit”, outputting the requested data of that image plane and fetching from main memory the requested data of any other image plane determined not to be a “hit”. A “hit” is determined for each of the one or more image planes based on whether data of the image plane is stored in one of the plurality of linefill units.
07/24/14
20140204098
System, method, and computer program product for graphics processing unit (GPU) demand paging
A system, method, and computer program product are provided for GPU demand paging. In operation, input data is addressed in terms of a virtual address space.
07/17/14
20140201802
Preemptive preloading of television program data
Digital television channels are preemptively cached based on a modeling of a user to reduce delays while switching channels. A current television channel is selected using a first tuner.
07/17/14
20140201761
Context switching with offload processors
A method for context switching of multiple offload processors is disclosed. The method can include receiving network packets for processing through a memory bus connected socket, organizing the network packets into multiple sessions for processing, suspending processing of at least one session by reading a cache state of at least one of the offload processors into a context memory by operation of a scheduling circuit, with virtual memory locations and physical cache locations being aligned, and subsequently directing transfer of the cache state to at least one of the offload processors for processing by operation of the scheduling circuit.
07/17/14
20140201741
Workload interference estimation and performance optimization
Architecture that facilitates the estimation of interference among workloads (e.g., virtual machines) due to sharing of a shared resource (e.g., a shared cache of a computer processor), and optimization of a desired performance objective such as power or energy use in the presence of the interference. The extent of interference is estimated by characterizing the nature of shared resource usage and its effect on performance.
07/17/14
20140201606
Error protection for a data bus
A system for providing error detection or correction on a data bus includes one or more caches coupled to a central processing unit and to a hub by one or more buses. The system also includes a plurality of arrays, each array disposed on one of the buses.
07/17/14
20140201468
Accelerated recovery for snooped addresses in a coherent attached processor proxy
A coherent attached processor proxy (capp) that participates in coherence communication in a primary coherent system on behalf of an external attached processor maintains, in each of a plurality of entries of a capp directory, information regarding a respective associated cache line of data from the primary coherent system cached by the attached processor. In response to initiation of recovery operations, the capp transmits, in a generally sequential order with respect to the capp directory, multiple memory access requests indicating an error for addresses indicated by the plurality of entries.
07/17/14
20140201465
Accelerated recovery for snooped addresses in a coherent attached processor proxy
A coherent attached processor proxy (capp) that participates in coherence communication in a primary coherent system on behalf of an external attached processor maintains, in each of a plurality of entries of a capp directory, information regarding a respective associated cache line of data from the primary coherent system cached by the attached processor. In response to initiation of recovery operations, the capp transmits, in a generally sequential order with respect to the capp directory, multiple memory access requests indicating an error for addresses indicated by the plurality of entries.
07/17/14
20140201463
High performance interconnect coherence protocol
A request is received that is to reference a first agent and to request a particular line of memory to be cached in an exclusive state. A snoop request is sent intended for one or more other agents.
07/17/14
20140201462
Subtractive validation of cache lines for virtual machines
A method and system for managing a cache for a host machine is disclosed. The method includes: indicating each cache line in the cache as being in a transitional meta-state when any virtual machine hosted on the host machine moves out of the host machine; each time a particular cache line is accessed, indicating that particular cache line as no longer in the transitional meta-state; and marking the cache lines still in the transitional meta-state as invalid when a virtual machine moves back to the host machine..
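The transitional meta-state protocol described above can be sketched as follows: when a virtual machine migrates off the host, all lines become transitional; lines accessed afterwards leave the state; when the VM returns, lines still transitional are invalidated. The class and method names are invented for illustration:

```python
# Sketch of the described meta-state protocol for a host cache.
# Structures and names are illustrative, not from the patent.

class HostCache:
    def __init__(self, lines):
        self.valid = {ln: True for ln in lines}
        self.transitional = set()

    def vm_moved_out(self):
        # Every valid line enters the transitional meta-state.
        self.transitional = {ln for ln, v in self.valid.items() if v}

    def access(self, line):
        # Lines touched while the VM is away leave the meta-state.
        self.transitional.discard(line)

    def vm_moved_back(self):
        # Lines never touched in the interim are marked invalid.
        for ln in self.transitional:
            self.valid[ln] = False
        self.transitional.clear()

c = HostCache(["a", "b", "c"])
c.vm_moved_out()
c.access("a")        # "a" was used while the VM was away
c.vm_moved_back()    # "b" and "c" are now invalid
```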
07/17/14
20140201461
Context switching with offload processors
A method for context switching of multiple offload processors coupled to receive data for processing over a memory bus is disclosed. The method can include directing storage of a cache state, via a bulk read from a cache of at least one of a plurality of offload processors into a context memory, by operation of a scheduling circuit, with any virtual and physical memory locations of the cache state being aligned, and subsequently directing transfer of the cache state to at least one of the offload processors for processing, by operation of the scheduling circuit.
07/17/14
20140201458
Reducing cache memory requirements for recording statistics from testing with a multiplicity of flows
A method reduces cache memory requirements for testing a multiplicity of flows. The method includes receiving data corresponding to a frame in a particular flow among the multiplicity of flows.
07/17/14
20140201457
Identifying and resolving cache poisoning
According to some embodiments, a method and apparatus are provided to receive, at a cache entity, a refresh request associated with a resource. A determination is made, via a processor, and based on the refresh request, to reload the resource from a server.
07/17/14
20140201456
Control of processor cache memory occupancy
Techniques are described for controlling processor cache memory within a processor system. Cache occupancy values can be calculated for each of a plurality of entities executing on the processor system.
07/17/14
20140201455
Method for increasing cache size
A method for increasing storage space in a system containing a block data storage device, a memory, and a processor is provided. Generally, the processor is configured by the memory to tag metadata of a data block of the block storage device indicating the block as free, used, or semifree.
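The free/used/semifree tagging idea above can be sketched as follows: blocks freed by the filesystem but still holding cached data are tagged "semifree", reclaimable on demand yet usable as cache until then. State names and functions here are illustrative assumptions:

```python
# Sketch of free/used/semifree block tagging: a free block can hold
# cache data as "semifree" and be reclaimed at any time. Illustrative.

blocks = {0: "used", 1: "free", 2: "used"}

def cache_into_free_block(blocks, block_id):
    if blocks[block_id] == "free":
        blocks[block_id] = "semifree"   # holds cache data, still reclaimable
        return True
    return False

def reclaim(blocks, block_id):
    # A semifree block is handed back to the allocator on demand.
    if blocks[block_id] == "semifree":
        blocks[block_id] = "used"

cached = cache_into_free_block(blocks, 1)
reclaim(blocks, 1)
```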
07/17/14
20140201453
Context switching with offload processors
A context switching cache system is disclosed. The system can include a plurality of offload processors connected to a memory bus, each offload processor having a cache with an associated cache state, a context memory coupled to the offload processors, and a scheduling circuit configured to direct transfer of a cache state between at least one of the offload processors and the context memory.
07/17/14
20140201452
Fill partitioning of a shared cache
Fill partitioning of a shared cache is described. In an embodiment, all threads running in a processor are able to access any data stored in the shared cache; however, in the event of a cache miss, a thread may be restricted such that it can only store data in a portion of the shared cache.
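The restriction described above, where any thread may read the whole shared cache but a restricted thread may only allocate (fill) into its own portion on a miss, can be sketched with a static partition of cache sets. The set count and partitioning scheme are illustrative assumptions:

```python
# Sketch of fill partitioning: reads may hit anywhere in the shared
# cache, but fills from a restricted thread go only to its own sets.
# The static equal split below is an illustrative choice.

NUM_SETS = 8

def allowed_fill_sets(thread_id, num_threads=2):
    """Return the cache sets this thread may allocate into on a miss."""
    per = NUM_SETS // num_threads
    start = thread_id * per
    return set(range(start, start + per))

t0 = allowed_fill_sets(0)   # thread 0 fills the first half of the sets
t1 = allowed_fill_sets(1)   # thread 1 fills the second half
```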
07/17/14
20140201451
Method, apparatus and computer programs providing cluster-wide page management
A data processing system includes a plurality of virtual machines each having associated memory pages; a shared memory page cache that is accessible by each of the plurality of virtual machines; and a global hash map that is accessible by each of the plurality of virtual machines. The data processing system is configured such that, for a particular memory page stored in the shared memory page cache that is associated with two or more of the plurality of virtual machines, there is a single key stored in the global hash map that identifies at least a storage location in the shared memory page cache of the particular memory page.
07/17/14
20140201450
Optimized matrix and vector operations in instruction limited algorithms that perform eos calculations
There is provided a system and method for optimizing matrix and vector calculations in instruction limited algorithms that perform eos calculations. The method includes dividing each matrix associated with an eos stability equation and an eos phase split equation into a number of tiles, wherein the tile size is heterogeneous or homogenous.
07/17/14
20140201449
Data cache way prediction
In a particular embodiment, a method includes identifying one or more way prediction characteristics of an instruction. The method also includes selectively reading, based on identification of the one or more way prediction characteristics, a table to identify an entry of the table associated with the instruction that identifies a way of a data cache.
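The general shape of such a prediction table, mapping an instruction to the cache way it last hit in so only that way need be read first, can be sketched as below. The class, default-way choice, and update rule are illustrative assumptions, not the patented mechanism:

```python
# Sketch of way prediction: a small table, indexed by instruction
# address, predicts which way of a set-associative data cache will
# hit. All names and policies here are illustrative.

class WayPredictor:
    def __init__(self):
        self.table = {}                 # instruction address -> predicted way

    def predict(self, pc):
        return self.table.get(pc, 0)    # default prediction: way 0

    def update(self, pc, actual_way):
        self.table[pc] = actual_way     # remember the way that actually hit

wp = WayPredictor()
guess_before = wp.predict(0x400)
wp.update(0x400, 3)                     # the access actually hit in way 3
guess_after = wp.predict(0x400)
```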
07/17/14
20140201448
Management of partial data segments in dual cache systems
For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a most recently used (MRU) portion of a demotion queue of the higher level of cache.
07/17/14
20140201447
Data processing apparatus and method for handling performance of a cache maintenance operation
A data processing apparatus has data processing circuitry for performing data processing operations on data, and a hierarchical cache structure for storing at least a subset of the data for access by the data processing circuitry. The hierarchical cache structure has first and second level caches, and data evicted from the first level cache is routed to the second level cache under the control of second level cache access control circuitry.
07/17/14
20140201446
High bandwidth full-block write commands
A micro-architecture may provide hardware and software support for a high bandwidth write command. The micro-architecture may invoke a method to perform the high bandwidth write command.
07/17/14
20140201442
Cache based storage controller
Systems and techniques for continuously writing to a secondary storage cache are described. A data storage region of a secondary storage cache is divided into a first cache region and a second cache region.
07/17/14
20140201441
Surviving write errors by using copy-on-write to another system
In one embodiment, a method may include performing a copy-on-write in response to a write error from a first system, where the copy-on-write copies to a second system. The method may further include receiving a write request at the first system from a third system.
07/17/14
20140201404
Offload processor modules for connection to system memory, and corresponding methods and systems
A system can include at least one offload processor having a data cache, the offload processor including a slave interface configured to receive write data and provide read data over a memory bus; an offload processor module including context memory and a bus controller connected to the slave interface; and logic coupled to the offload processor and context memory and configured to detect predetermined write operations over the memory bus; wherein the offload processor is configured to execute operations on data received over the memory bus, and to output context data to the context memory, and read context data from the context memory.
07/17/14
20140201402
Context switching with offload processors
A memory bus connected module with context switching capability is described. The module can include a memory bus connection compatible with a memory bus socket, a plurality of offload processors attached to the module and connected to a memory bus, with each offload processor having a cache with an associated cache state, a context memory attached to the module and connected to the offload processors, and a scheduling circuit configured to direct a transfer of a cache state between at least one of the offload processors and the context memory.
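The scheduler-directed transfer of cache state to and from context memory can be modeled in a few lines. This is a toy sketch under invented names; a real scheduling circuit would move hardware cache state, not Python dictionaries.

```python
class OffloadProcessor:
    """Toy stand-in for an offload processor with a cache state."""
    def __init__(self):
        self.cache_state = {}

# context id -> saved cache state (stand-in for the context memory)
context_memory = {}

def switch_context(proc, old_ctx, new_ctx):
    """Save the processor's cache state for the outgoing context,
    then restore the incoming context's saved state (empty if new)."""
    context_memory[old_ctx] = dict(proc.cache_state)
    proc.cache_state = dict(context_memory.get(new_ctx, {}))
```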
07/17/14
20140201385
Method for optimizing wan traffic with deduplicated storage
A local proxy caches, in one or more transmitted data files (TDFs) in a deduplicated manner, chunks of one or more streams that have been transmitted to a remote proxy, each of the streams being identified by a stream identifier (ID). For each of the streams, the local proxy maintains a stream object having one or more TDF references, each TDF reference corresponding to at least a segment of the stream, wherein each TDF reference includes information identifying a file location within one of the TDFs at which the segment of the stream is located.
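The stream-object-over-deduplicated-chunks idea can be sketched as a chunk store plus per-stream reference lists. This is a minimal illustration, not the application's implementation; the single flat TDF and SHA-256 chunk keys are assumptions made for the example.

```python
import hashlib

class DedupStore:
    """Sketch of deduplicated stream caching: each unique chunk is stored
    once in a transmitted data file (TDF), and each stream keeps a list
    of references to chunk locations instead of the bytes themselves."""

    def __init__(self):
        self.tdf = []      # flat list of unique chunks (stand-in TDF)
        self.index = {}    # chunk hash -> offset within the TDF
        self.streams = {}  # stream ID -> list of TDF offsets

    def add_stream(self, stream_id, chunks):
        refs = []
        for chunk in chunks:
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.index:
                # New chunk: append it to the TDF and index its location.
                self.index[digest] = len(self.tdf)
                self.tdf.append(chunk)
            refs.append(self.index[digest])
        self.streams[stream_id] = refs

    def read_stream(self, stream_id):
        # Reassemble the stream by following its TDF references.
        return b"".join(self.tdf[off] for off in self.streams[stream_id])
```

Because the two streams below share chunks, the store holds only three unique chunks for five logical ones.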
07/17/14
20140201384
Method for optimizing wan traffic with efficient indexing scheme
According to one embodiment, a local proxy caches in a local stream store one or more streams of data transmitted over the WAN to a remote proxy. In response to a flow of data received from one of the clients of the local LAN, the local proxy chunks the flow into a sequence of chunks using a predetermined chunking algorithm, and selectively indexes the chunks in a chunk index maintained by the local proxy based on the locations of the chunks in the flow, where the number of chunks indexed in a first region of the flow differs from the number indexed in a second region.
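Region-dependent indexing density might look like the sketch below: every chunk in an early region is indexed, while later chunks are sampled. All parameters (chunk size, dense-region size, sampling step) are invented for illustration; the application does not specify them.

```python
import hashlib

def index_flow(flow, chunk_size=4, dense_region=16, sparse_step=2):
    """Hypothetical sketch of selective indexing: index every chunk
    whose offset falls inside the first `dense_region` bytes, but only
    every `sparse_step`-th chunk after that, trading lookup coverage
    for a smaller chunk index."""
    index = {}
    chunks = [flow[i:i + chunk_size] for i in range(0, len(flow), chunk_size)]
    for n, chunk in enumerate(chunks):
        offset = n * chunk_size
        if offset < dense_region or n % sparse_step == 0:
            index[hashlib.sha256(chunk).hexdigest()] = offset
    return index
```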
07/17/14
20140201357
Framework and method for monitoring performance of virtualized systems based on hardware base tool
The disclosed invention involves a framework and method, based on a hardware-based tool, for monitoring the performance of virtualized systems. The framework comprises at least one master host, and each master host comprises user-space components, guest-space components, kernel-space components, and hardware. The user-space components comprise a policy manager, a workload mediator, a monitor library, and a host performance monitor.
07/17/14
20140201311
Cache-induced opportunistic mimo cooperation for wireless networks
Cooperative caching systems incorporating plug-and-play base stations are described herein. Plug-and-play base stations with large caching capacities are employed in a wireless network to perform cooperative transmission with macro base stations.
07/17/14
20140201308
Method for optimizing wan traffic
A local stream store of a local proxy caches one or more streams of data transmitted over the WAN to a remote proxy, where each stream is stored in a continuous manner and identified by a unique stream identifier (ID). In response to a flow of data received from a client, the local proxy examines the flow of data to determine whether at least a portion of the flow has been previously transmitted to the remote proxy via one of the streams currently stored in the local stream store.
07/17/14
20140201307
Caching of look-up rules based on flow heuristics to enable high speed look-up
According to one embodiment, a system includes a plurality of ports adapted for connecting to external devices and a switching processor. The switching processor includes a packet processor comprising a look-up interface, fetch and refresh logic (LIFRL) module and a packet processor logic (PPL) module adapted to operate in parallel; an internal look-up table cache including a plurality of look-up entries, each relating to a traffic flow that has been or is anticipated to be received by the switching processor; and a traffic manager module including a buffer memory connected to the plurality of ports.
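A small look-up cache in front of a full rule table captures the fetch-and-refresh flavor of this design. This is an assumed software analogy only; the LRU policy, capacity, and "drop" default are invented, and real LIFRL/PPL modules operate in parallel hardware paths.

```python
from collections import OrderedDict

class LookupCache:
    """Illustrative sketch: a small internal cache of look-up entries in
    front of a full rule table; a miss fetches from the full table and
    refreshes the cache with the new flow's entry."""

    def __init__(self, full_table, capacity=4):
        self.full_table = full_table  # complete rule table (slow path)
        self.cache = OrderedDict()    # hot look-up entries (fast path)
        self.capacity = capacity
        self.hits = 0
        self.misses = 0

    def lookup(self, flow_key):
        if flow_key in self.cache:
            self.hits += 1
            self.cache.move_to_end(flow_key)
            return self.cache[flow_key]
        self.misses += 1
        action = self.full_table.get(flow_key, "drop")
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict the LRU entry
        self.cache[flow_key] = action       # refresh cache with this flow
        return action
```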
07/17/14
20140201302
Method, apparatus and computer programs providing cluster-wide page management
An exemplary method in accordance with embodiments of this invention includes, at a virtual machine that forms a part of a cluster of virtual machines, computing a key for an instance of a memory page that is to be swapped out to a shared memory cache accessible by all virtual machines of the cluster; determining whether the computed key is already present in a global hash map accessible by all virtual machines of the cluster; and, only if the computed key is not already present in the global hash map, storing the computed key in the global hash map and the instance of the memory page in the shared memory cache.
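The compute-key / check-map / store-if-absent sequence can be sketched directly. This is a minimal sketch under assumptions: SHA-256 as the page key and a Python list standing in for the shared memory cache are illustrative choices, not details from the application.

```python
import hashlib

def swap_out_page(page, global_hash_map, shared_cache):
    """Sketch of cluster-wide page deduplication: compute a key for the
    page, and store the page in the shared cache only when no VM in the
    cluster has already stored an identical page under that key."""
    key = hashlib.sha256(page).hexdigest()
    if key not in global_hash_map:
        location = len(shared_cache)
        shared_cache.append(page)          # store the page instance
        global_hash_map[key] = location    # publish its location
    return key
```

Swapping out two identical pages, even from different virtual machines, stores only one copy in the shared cache.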
07/17/14
20140201258
System and method of browsing offline and queried content
Embodiments of systems and methods for browsing offline and queried content are presented herein. Specifically, embodiments may receive a request for content from a mobile application.



This listing is a sample of recently filed patent applications related to Cache and is not a comprehensive history. There may be servicemarks and trademarks associated with these patents. Please check with a patent attorney if you need further assistance or plan to use this information for business purposes. This patent data is also published by the USPTO and available for free on their website. Note that there may be alternative spellings of Cache with additional patents listed. Browse our RSS directory or search for other possible listings.