FreshPatents.com

Cache patents



      
           
This page is updated frequently with new Cache-related patents.

Date / App# : List of recent Cache-related patents
04/17/14
20140109101
Effective scheduling of producer-consumer processes in a multi-processor system
A novel technique for improving throughput in a multi-core system in which data is processed according to a producer-consumer relationship by eliminating latencies caused by compulsory cache misses. The producer and consumer entities run as multiple slices of execution.
04/17/14
20140108917
Systems and/or methods for performing atomic updates on large XML information sets
Certain example embodiments described herein relate to techniques for processing XML documents of potentially very large sizes. For instance, certain example embodiments parse a potentially large XML document, store the parsed data and some associated metadata in multiple independent blocks or partitions, and instantiate only the particular object model object requested by a program.
04/17/14
20140108909
Graceful degradation of level-of-detail in document rendering
In the present invention, a combination of asynchronous rendering and synchronous rendering is utilized to render an electronic document on the screen of a computing device. Particularly, a document-rendering application may be configured to draw asynchronously a high-detail version of the document to a rendering cache.
04/17/14
20140108825
System and method for hardware based security
An asset management system is provided, which includes a hardware module operating as an asset control core. The asset control core generally includes a small hardware core embedded in a target system on chip that establishes a hardware-based point of trust on the silicon die.
04/17/14
20140108743
Store data forwarding with no memory model restrictions
Embodiments relate to loading data in a pipelined microprocessor. An aspect includes issuing a load request that comprises a load address requiring at least one block of data the same size as a largest contiguous granularity of data returned from a cache.
04/17/14
20140108740
Prefetch throttling
A processing system monitors memory bandwidth available to transfer data from memory to a cache. In addition, the processing system monitors a prefetching accuracy for prefetched data.
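The control idea in application 20140108740 (back off prefetching when memory bandwidth is scarce or prefetch accuracy is low) can be sketched as a small decision function. The thresholds and the three-level output are illustrative assumptions, not details from the filing:

```python
def throttle_level(bandwidth_free_pct, prefetch_accuracy,
                   bw_floor=0.25, acc_floor=0.5):
    """Return a prefetch aggressiveness level from 0 (off) to 2 (full).

    Backs off when memory bandwidth is scarce or prefetched lines are
    rarely used. All thresholds here are illustrative assumptions.
    """
    if bandwidth_free_pct < bw_floor and prefetch_accuracy < acc_floor:
        return 0  # scarce bandwidth and inaccurate prefetches: stop prefetching
    if bandwidth_free_pct < bw_floor or prefetch_accuracy < acc_floor:
        return 1  # one resource under pressure: throttle the prefetch streams
    return 2      # plenty of bandwidth and accurate prefetches: full speed
```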
04/17/14
20140108738
Apparatus and method for detecting large flow
An apparatus and method for detecting a large flow are provided. The method includes: storing flow information for a received flow in a cache entry; determining whether the flow whose information is stored in an entry about to be deleted from the cache might itself be a large flow; restoring the entry to be deleted in the cache according to the result of that determination; inspecting the packet count of the entry in which the flow information is stored; and determining that the corresponding flow is a large flow if the packet count is greater than or equal to a preset threshold value.
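A minimal sketch of the cache-entry scheme this abstract describes, with per-flow counting and a threshold check. The fixed capacity, the evict-smallest policy, and the half-threshold "possibility" test used to restore a victim entry are assumptions made for brevity:

```python
class LargeFlowDetector:
    """Sketch of the cache-entry packet-count scheme in 20140108738.

    A small fixed-size cache maps flow IDs to packet counts; a flow whose
    count reaches `threshold` is reported as a large flow.
    """
    def __init__(self, capacity=4, threshold=10):
        self.capacity = capacity
        self.threshold = threshold
        self.entries = {}  # flow_id -> packet count

    def packet(self, flow_id):
        """Record one packet; return True when the flow is judged large."""
        if flow_id not in self.entries and len(self.entries) >= self.capacity:
            victim = min(self.entries, key=self.entries.get)
            # "possibility" check: restore (keep) a victim that might be large
            if self.entries[victim] >= self.threshold // 2:
                return False  # drop the new flow's entry instead of the victim
            del self.entries[victim]
        self.entries[flow_id] = self.entries.get(flow_id, 0) + 1
        return self.entries[flow_id] >= self.threshold
```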
04/17/14
20140108737
Zero cycle clock invalidate operation
A method to eliminate the delay of a block invalidate operation in a multi-CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. A range check is performed on each CPU access while a block invalidate operation is in progress, and an access that maps to within the address range of the block invalidate operation will be treated as a cache miss to ensure that the requesting CPU receives valid data.
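The range check described above can be sketched in a few lines; the half-open address range and the hit/miss return convention are illustrative choices, not details from the filing:

```python
def handle_access(addr, bi_start, bi_end, cache_contains):
    """Range check during an in-progress block invalidate (20140108737 idea).

    While lines in [bi_start, bi_end) are being invalidated in the
    background, any CPU access falling inside that range is treated as a
    cache miss so the requester always receives valid data from memory.
    """
    if bi_start <= addr < bi_end:
        return "miss"  # inside the range being invalidated: force a miss
    return "hit" if cache_contains(addr) else "miss"
```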
04/17/14
20140108736
System and method for removing data from processor caches in a distributed multi-processor computer system
A processor (600) in a distributed shared memory multi-processor computer system (10) may initiate a flush request to remove data from its cache. A processor interface (24) receives the flush request and performs a snoop operation to determine whether the data is maintained in one of the local processors (601) and whether the data has been modified.
04/17/14
20140108735
Managing a cache for storing one or more intermediate products of a computer program
A method, program product, and system are provided for managing a cache. The method includes analyzing at least an intermediate product of a computer program.
04/17/14
20140108734
Method and apparatus for saving processor architectural state in cache hierarchy
A processor includes a first processing unit and a first level cache associated with the first processing unit and operable to store data used by the first processing unit during normal operation. The first processing unit is operable to store first architectural state data for the first processing unit in the first level cache responsive to receiving a power down signal.
04/17/14
20140108733
Disabling cache portions during low voltage operations
Methods and apparatus relating to disabling one or more cache portions during low voltage operations are described. In some embodiments, one or more extra bits may be used for a portion of a cache to indicate whether that portion of the cache is capable of operating at or below Vccmin levels.
04/17/14
20140108732
Cache layer optimizations for virtualized environments
Embodiments of the invention relate to optimizing the storage of data in a multi-cache level environment. In one aspect, data is classified into primary and secondary cache sections.
04/17/14
20140108731
Energy optimized cache memory architecture exploiting spatial locality
Aspects of the present invention provide a "supertag" cache that manages cache at three granularities: (i) coarse grain, multi-block "super blocks," (ii) single cache blocks, and (iii) fine grain, fractional block "data segments." Since contiguous blocks have the same tag address, by tracking multi-block super blocks the supertag cache inherently increases per-block tag space, allowing higher compressibility without incurring high area overheads. To improve compression ratio, the supertag cache uses variable-packing compression, allowing variable-size compressed blocks without requiring costly compactions.
04/17/14
20140108730
Systems and methods for non-blocking implementation of cache flush instructions
Systems and methods for non-blocking implementation of cache flush instructions are disclosed. As a part of a method, data is accessed that is received in a write-back data holding buffer from a cache flushing operation, the data is flagged with a processor identifier and a serialization flag, and responsive to the flagging, the cache is notified that the cache flush is completed.
04/17/14
20140108729
Systems and methods for load canceling in a processor that is connected to an external interconnect fabric
Systems and methods for load canceling in a processor that is connected to an external interconnect fabric are disclosed. As a part of a method for load canceling in a processor that is connected to an external bus, and responsive to a flush request and a corresponding cancellation of pending speculative loads from a load queue, a type of one or more of the pending speculative loads that are positioned in the instruction pipeline external to the processor, is converted from load to prefetch.
04/17/14
20140108727
Storage apparatus and data processing method
To raise the CPU cache hit rate and improve I/O processing, a controller is configured from a CPU core and a CPU cache, wherein the CPU selects memory bus optimization execution processing or cache poisoning optimization execution processing according to an attribute of the access target volume on the basis of an access request.
04/17/14
20140108723
Reducing metadata in a write-anywhere storage system
Systems and methods for reducing metadata in a write-anywhere storage system are disclosed herein. The system includes a plurality of clients coupled with a plurality of storage nodes, each storage node having a plurality of primary storage devices coupled thereto.
04/17/14
20140108722
Virtual machine installation image caching
The subject matter of this specification can be implemented in, among other things, a computer-implemented method including sending, from a virtual desktop server manager at a data center and over a network, at least one request to a virtual machine storage domain for virtual machine installation images. The virtual machine storage domain stores the virtual machine installation images separate from the data center.
04/17/14
20140108705
Use of high endurance non-volatile memory for read acceleration
A high endurance, short retention NAND memory is used as a read cache for a memory of a higher level of non-volatility, such as standard NAND flash memory or a hard drive. The combined memory system identifies frequently read logical addresses of the main non-volatile memory, or specific read sequences, and stores the corresponding data in the cache NAND to accelerate host reads.
04/17/14
20140108672
Content delivery network routing method, system and user terminal
The present invention provides a content delivery network routing method, system, and user terminal. The method includes: receiving, by a CDN routing device, a first service request sent by a user terminal, where the first service request carries a first uniform resource locator (URL) and a domain name; returning, by the CDN routing device, a redirection response message to the user terminal, where the redirection response message carries a second URL and the domain name; and receiving, by the cache node, a second service request sent by the user terminal, and returning a header field indication to the user terminal.
04/17/14
20140108671
Partitioning streaming media files on multiple content distribution networks
Techniques are disclosed for generating preference rankings in response to requests for streaming media content received from client devices. The preference rankings are used to indirectly partition streaming media content across different content distribution networks (CDNs).
04/17/14
20140108586
Method, device and system for delivering live content
The present invention provides a method, a device, and a system for delivering live content. A pre-delivery request with respect to live content is sent to a CDN cache device, and the CDN cache device caches the live content according to the pre-delivery request before a user views the live content. This solves the prior-art problems of long delay and poor user experience when playing live content that has not been cached (because a part of live content cannot be cached), ensures the play quality of all live content, and improves user experience.
04/17/14
20140108585
Multimedia content management system
A system allows a user to select multimedia content items from sources that include, but are not limited to, any of: internet, network, or local. Selected multimedia content items may be stored in user specific caches residing in at least one cloud based storage device.
04/17/14
20140108548
Fake check-in entries posted on behalf of social network users
An approach is provided in which a fake check-in event is received at a software application corresponding to a user of the software application. Fake check-ins are initiated on behalf of the user in response to the fake check-in event.
04/17/14
20140108512
Method and device for accessing web pages
A method and a device for accessing web pages are disclosed. The method includes: preloading a corresponding uniform resource locator (URL) according to URL information which needs to be preloaded and is configured in a configuration file, and caching the obtained web page information; and judging whether a client device has cached the web page information corresponding to the URL carried in a web page access request when a user sends the web page access request through the client device.
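A toy version of the preload-then-serve flow this abstract describes, assuming a generic `fetch` callable standing in for a real HTTP client; the class name and return convention are illustrative:

```python
class PreloadCache:
    """Sketch of the preload-then-serve flow in 20140108512.

    URLs listed in a configuration file are fetched ahead of time and the
    page data cached; a later access request is served from cache when
    possible.
    """
    def __init__(self, fetch):
        self.fetch = fetch
        self.cache = {}

    def preload(self, config_urls):
        """Fetch and cache every URL named in the configuration."""
        for url in config_urls:
            self.cache[url] = self.fetch(url)

    def access(self, url):
        """Return (page, was_cached); fetch on demand on a cache miss."""
        if url in self.cache:           # judged cached: no network round trip
            return self.cache[url], True
        page = self.fetch(url)
        self.cache[url] = page
        return page, False
```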
04/17/14
20140108485
Dynamically allocated computing method and system for distributed node-based interactive workflows
A system and method for leveraging grid computing for node based interactive workflows is disclosed. A server system spawns a server process that receives node graph data and input attributes from a computing device, processes the data, caches the processed data, and transmits the processed data over a network to a computing device.
04/17/14
20140108362
Data compression apparatus, data compression method, and memory system including the data compression apparatus
Provided are a data compression method, a data compression apparatus, and a memory system. The data compression method includes: receiving input data and generating a hash key for the input data; searching a hash table with the generated hash key and, if the input data is determined to be a hash hit, compressing the input data using the hash table; and searching a cache memory with the input data and, if the input data is determined to be a cache hit, compressing the input data using the cache memory.
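A toy compressor in the spirit of the hash-table half of this method (the cache-memory search path is omitted); the token format, window size, and two-byte hash keys are assumptions made for brevity:

```python
def compress(data, window=8):
    """Toy hash-table match compressor in the spirit of 20140108362.

    Each two-byte sequence is hashed into a table of last-seen positions;
    on a hash hit within `window` bytes, the pair is emitted as a
    back-reference token, otherwise a literal.
    """
    table = {}   # two-char key -> last position seen
    out = []
    i = 0
    while i < len(data):
        key = data[i:i + 2]
        pos = table.get(key)
        if len(key) == 2 and pos is not None and i - pos <= window:
            out.append(("ref", i - pos, 2))   # hash hit: back-reference
            table[key] = i
            i += 2
        else:
            out.append(("lit", data[i]))      # miss: literal character
            if len(key) == 2:
                table[key] = i
            i += 1
    return out


def decompress(tokens):
    """Inverse of compress(), used to check round-trips."""
    out = []
    for tok in tokens:
        if tok[0] == "lit":
            out.append(tok[1])
        else:
            _, offset, length = tok
            for _ in range(length):
                out.append(out[-offset])
    return "".join(out)
```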
04/17/14
20140108337
System and method for the synchronization of a file in a cache
The present invention provides a system and method for file synchronization. One embodiment of the system of this invention includes a software program stored on a computer readable medium.
04/17/14
20140108335
Cloud based file system surpassing device storage limits
Technology is disclosed herein for a cloud based file system that can surpass physical storage limit. According to at least one embodiment, a computing device includes a file system having multiple storage objects.
04/17/14
20140107020
Fibronectin based scaffold domain proteins that bind to myostatin
The present invention relates to fibronectin-based scaffold domain proteins that bind to myostatin. The invention also relates to the use of these proteins in therapeutic applications to treat muscular dystrophy, cachexia, sarcopenia, osteoarthritis, osteoporosis, diabetes, obesity, COPD, chronic kidney disease, heart failure, myocardial infarction, and fibrosis.
04/17/14
20140105896
Fibronectin based scaffold domain proteins that bind to myostatin
The present invention relates to fibronectin-based scaffold domain proteins that bind to myostatin. The invention also relates to the use of these proteins in therapeutic applications to treat muscular dystrophy, cachexia, sarcopenia, osteoarthritis, osteoporosis, diabetes, obesity, COPD, chronic kidney disease, heart failure, myocardial infarction, and fibrosis.
04/17/14
20140105575
Method and apparatus for navigating video content
Methods and apparatus for navigating video content. Digital markers may be placed at desired locations within recorded or cached video content.
04/17/14
20140105305

A motion compensation module includes a memory having a cache that stores a portion of an image of a sequence of images, the portion having a horizontal dimension corresponding to the width of the image of the sequence of images and having a vertical dimension corresponding to the height of a search range. A motion search module generates a plurality of motion search motion vectors based on the search range and the portion of the image of the sequence of images..
04/17/14
20140104956
Sensing operations in a memory device
Methods for sensing, methods for programming, memory devices, and memory systems are disclosed. In one such method for sensing, a counting circuit generates a count output and a translated count output.
04/17/14
20140104645
Print image processing system and non-transitory computer readable medium
A print image processing system includes plural logical page interpretation units, a caching interpretation unit, and a print image data generation unit. The plural logical page interpretation units interpret different logical pages in print data in parallel to obtain interpretation results, and output the interpretation results.
04/17/14
20140104644
Print image processing system and non-transitory computer readable medium
A print image processing system includes plural logical page interpretation units, a dual interpretation unit, a cache memory, an assignment unit, and a print image data generation unit. The logical page interpretation units interpret assigned logical pages in input print data in parallel.
04/10/14
20140101760
DAD-NS triggered address resolution for DoS attack protection
A first network element receives an appropriation message from a second network element that indicates a target address which the second network element intends to appropriate for its use. In response to the appropriation message, the first network element broadcasts a discovery message to a plurality of network elements on the network to request a link-layer address in association with the first target address.
04/10/14
20140101607
Displaying quantitative trending of pegged data from cache
Methods and systems of displaying response data provide for identifying a pegged area of display content during a first retrieval of the display content by a client device at a first moment in time. Additionally, first data associated with the pegged area may be stored, wherein a comparison can be conducted between the first data and additional data associated with the pegged area at one or more subsequent moments in time.
04/10/14
20140101538
Systems and/or methods for delayed encoding of xml information sets
Certain example embodiments described herein relate to techniques for processing XML documents of potentially very large sizes. For instance, certain example embodiments parse a potentially large XML document, store the parsed data and some associated metadata in multiple independent blocks or partitions, and instantiate only the particular object model object requested by a program.
04/10/14
20140101441
Systems and methods for flash crowd control and batching OCSP requests via online certificate status protocol
The present invention is directed towards systems and methods for batching OCSP requests and caching corresponding responses. An intermediary between a plurality of clients and one or more servers receives a first client certificate during a first SSL handshake with a first client and a second client certificate during a second SSL handshake with a second client.
04/10/14
20140101403
Application-managed translation cache
Mechanisms are provided, in a data processing system, for accessing a memory location in a physical memory of the data processing system. With these mechanisms, a request is received from an application to access a memory location specified by an effective address in an application address space.
04/10/14
20140101391
Conditional write processing for a cache structure of a coupling facility
A method for managing a cache structure of a coupling facility includes receiving a conditional write command from a computing system and determining whether data associated with the conditional write command is part of a working set of data of the cache structure. If the data associated with the conditional write command is part of the working set of data of the cache structure, the conditional write command is processed as an unconditional write command.
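The conditional-write decision described above reduces to a membership test against the working set; the return-value convention and the plain-dict cache below are illustrative stand-ins for the coupling-facility structures:

```python
def handle_conditional_write(cache, working_set, key, value):
    """Sketch of conditional-write processing for a coupling-facility
    cache structure (the 20140101391 idea).

    A conditional write is promoted to an unconditional write only when
    the target is already part of the cache's working set; otherwise it
    is rejected so cold data does not displace the working set.
    """
    if key in working_set:
        cache[key] = value      # part of working set: process unconditionally
        return "written"
    return "rejected"           # not in working set: conditional write fails
```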
04/10/14
20140101390
Computer cache system providing multi-line invalidation messages
A computer cache system delays cache coherence invalidation messages related to cache lines of a common memory region to collect these messages into a combined message that can be transmitted more efficiently. This delay may be coordinated with a detection of whether the processor is executing a data-race free portion of the program, so that the delay system may be used for a variety of types of programs which may have data-race and data-race free sections.
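The delay-and-combine idea can be sketched with a per-region pending set that is drained into multi-line messages; the region size and message format are assumptions, not details from the filing:

```python
class InvalidationCombiner:
    """Sketch of combining invalidations per memory region (20140101390).

    Invalidation messages for lines in the same region are delayed and
    coalesced; `flush` emits one combined message per region.
    """
    def __init__(self, region_size=64 * 16):   # e.g. 16 lines of 64 bytes
        self.region_size = region_size
        self.pending = {}   # region base address -> set of line addresses

    def invalidate(self, line_addr):
        """Delay an invalidation, filing it under its region base."""
        base = line_addr - (line_addr % self.region_size)
        self.pending.setdefault(base, set()).add(line_addr)

    def flush(self):
        """Emit the combined multi-line messages and clear the queue."""
        msgs = [(base, sorted(lines))
                for base, lines in sorted(self.pending.items())]
        self.pending.clear()
        return msgs
```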
04/10/14
20140101389
Cache management
A system includes a data store and a memory cache subsystem. A method for pre-fetching data from the data store for the cache includes determining a performance characteristic of a data store.
04/10/14
20140101388
Controlling prefetch aggressiveness based on thrash events
A method and apparatus for controlling the aggressiveness of a prefetcher based on thrash events is presented. An aggressiveness of a prefetcher for a cache is controlled based upon a number of thrashed cache lines that are replaced by a prefetched cache line and subsequently written back into the cache before the prefetched cache line has been accessed.
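One way to picture this is an interval-based control loop over thrash counts; the halving and doubling of prefetch distance and the specific limits below are assumptions made for illustration:

```python
class ThrashGovernor:
    """Sketch of the thrash-count control loop in 20140101388.

    A "thrash" is a line replaced by a prefetch and written back before
    the prefetched line was ever accessed. When thrashes in an interval
    exceed a limit, aggressiveness (modeled here as prefetch distance)
    is reduced; quiet intervals let it ramp back up.
    """
    def __init__(self, max_distance=8, thrash_limit=4):
        self.distance = max_distance
        self.max_distance = max_distance
        self.thrash_limit = thrash_limit
        self.thrashes = 0

    def record_thrash(self):
        self.thrashes += 1

    def end_interval(self):
        """Close the interval and return the new prefetch distance."""
        if self.thrashes > self.thrash_limit and self.distance > 1:
            self.distance //= 2          # too much thrashing: back off
        elif self.thrashes == 0 and self.distance < self.max_distance:
            self.distance *= 2           # quiet interval: ramp back up
        self.thrashes = 0
        return self.distance
```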
04/10/14
20140101387
Opportunistic cache replacement policy
A cache management system employs a replacement policy in a manner that manages concurrent accesses to cache. The cache management system comprises a cache, a replacement policy storage for storing replacement statuses of cache lines of the cache, and an update module.
04/10/14
20140101321
Redirecting of network traffic for application of stateful services
Techniques are presented herein for redirection between any number of network devices that are distributed to any number of sites. A first message of a flow is received from a network endpoint at a first network device.
04/10/14
20140101287
Real-time information feed
A computer-implemented method for updating a web user interface on a client device is provided. A router backboned to the Internet communicates to the client device web-user-interface data defined in markup language to dynamically update the web user interface on the client device.
04/10/14
20140101164
Efficient selection of queries matching a record using a cache
A method is provided for constructing a cache for storing results of previously evaluated queries in a binary tree based on a cache key. The cache is searched, by a processing device, for a node representing a set of previously evaluated queries that match a given record using an instance of the cache key.
04/10/14
20140101132
Swapping expected and candidate affinities in a query plan cache
In an embodiment, a hit percentage of an expected affinity for a first query is calculated, where the expected affinity comprises a first address range in a query plan cache. A hit percentage of a candidate affinity for the first query is also calculated, where the candidate affinity comprises a second address range in the query plan cache. If the hit percentage of the candidate affinity is greater than the hit percentage of the expected affinity by more than a threshold amount, query plans in the candidate affinity are swapped with query plans in the expected affinity.
04/10/14
20140101131
Swapping expected and candidate affinities in a query plan cache
In an embodiment, a hit percentage of an expected affinity for a first query is calculated, where the expected affinity comprises a first address range in a query plan cache. A hit percentage of a candidate affinity for the first query is also calculated, where the candidate affinity comprises a second address range in the query plan cache. If the hit percentage of the candidate affinity is greater than the hit percentage of the expected affinity by more than a threshold amount, query plans in the candidate affinity are swapped with query plans in the expected affinity.
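The swap test described in these two related applications reduces to a hit-percentage comparison against a threshold; the 5% default below is an illustrative assumption:

```python
def should_swap(expected_hits, expected_total,
                candidate_hits, candidate_total, threshold=0.05):
    """Hit-percentage comparison from 20140101131/20140101132, sketched.

    Returns True when the candidate affinity's hit percentage exceeds
    the expected affinity's by more than `threshold`, i.e. when the
    query plans in the two address ranges should be swapped.
    """
    expected_pct = expected_hits / expected_total if expected_total else 0.0
    candidate_pct = candidate_hits / candidate_total if candidate_total else 0.0
    return candidate_pct - expected_pct > threshold
```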
04/10/14
20140101113
Locality aware, two-level fingerprint caching
The present disclosure provides for implementing a two-level fingerprint caching scheme for a client cache and a server cache. The client cache hit ratio can be improved by pre-populating the client cache with fingerprints that are relevant to the client.
04/10/14
20140100679
Efficient sharing of intermediate computations in a multimedia graph processing framework
An audio processing system includes: filters configured to process audio buffers, to retrieve auxiliary data from audio buffers, and to store auxiliary data in audio buffers; concatenators configured to transmit audio buffers from one filter to another, to retrieve audio buffers from a shared buffer cache, and to store audio buffers in the shared buffer cache; a processing graph configured to transmit audio buffers processed by filters in the graph from one filter to another in accordance with the concatenators; and a graph processor for applying the processing graph to audio buffers extracted from an incoming audio stream, for storing intermediate processing results of the filters as auxiliary data in audio buffers, and for storing the audio buffers that include auxiliary data in the buffer cache that is shared among the filters.



This listing is only a recent sample of Cache-related patent applications filed, not a comprehensive history. There may be associated servicemarks and trademarks related to these patents. Please check with a patent attorney if you need further assistance or plan to use this information for business purposes. This patent data is also published to the public by the USPTO and available for free on their website. Note that there may be alternative spellings for Cache with additional patents listed. Browse our RSS directory or search for other possible listings.