Latency patents



      
           
This page is updated frequently with new Latency-related patent applications. Subscribe to the Latency RSS feed to receive updates automatically.



List of recent Latency-related patents (date, application number, title, abstract):
07/17/14
20140201772
Systems and methods for addressing a media database using distance associative hashing
A system, method and computer program utilize a distance associative hashing algorithmic means to provide a highly efficient means to rapidly address a large database. The indexing means can be readily subdivided into a plurality of independently-addressable segments where each such segment can address a portion of related data of the database where the sub-divided indexes of said portions reside entirely in the main memory of each of a multiplicity of server means.
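
As a rough illustration of the segmented, memory-resident indexing described in this abstract, the sketch below shards a simple hash index across a handful of in-memory segments, one per server. The segment count, hash function, and key format are assumptions for illustration, not details from the filing.

```python
# Illustrative sketch only: an index split into independently addressable
# segments, each meant to live entirely in one server's main memory.
# Segment count, hashing, and key format are assumed, not from the filing.
import hashlib

NUM_SEGMENTS = 4
segments = [{} for _ in range(NUM_SEGMENTS)]  # stand-ins for per-server indexes

def segment_for(key: str) -> int:
    # Map a key to one segment so each segment can be addressed on its own.
    return hashlib.sha1(key.encode()).digest()[0] % NUM_SEGMENTS

def put(key: str, value) -> None:
    segments[segment_for(key)][key] = value

def get(key: str):
    return segments[segment_for(key)].get(key)

put("clip:0001", {"offset": 1024, "length": 512})
print(get("clip:0001"))
```
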
07/17/14
20140201603
Systems, methods, apparatus, and computer program products for providing forward error correction with low latency
Systems, methods, apparatus, and computer program products for providing forward error correction with low latency to live streams in networks are provided. One example method includes receiving source data at a first rate, outputting the source data at a rate less than the first rate, collecting the source data in a buffer, FEC decoding the source data, thereby generating decoded data; and outputting the decoded data at a rate equal to the first rate, either after collecting the source data in the buffer for a predetermined time duration or after collecting a predetermined amount of the source data in the buffer.
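
The buffering policy in this abstract (collect source data, decode, and resume output once either a time limit or a data limit is reached) can be sketched roughly as below; the thresholds and the decode placeholder are assumptions, not the claimed FEC scheme.

```python
# Rough sketch of the described buffering policy, with assumed thresholds and a
# placeholder decode step; not the claimed FEC scheme itself.
import time

BUFFER_SECONDS = 0.5   # assumed "predetermined time duration"
BUFFER_PACKETS = 4     # assumed "predetermined amount of source data"

def fec_decode(packets):
    # Placeholder: a real decoder would use parity packets to repair losses.
    return [p for p in packets if not p.get("parity", False)]

def collect_and_decode(source):
    buf, start = [], time.monotonic()
    for packet in source:
        buf.append(packet)
        if len(buf) >= BUFFER_PACKETS or time.monotonic() - start >= BUFFER_SECONDS:
            yield from fec_decode(buf)   # output resumes at the input rate
            buf, start = [], time.monotonic()
    yield from fec_decode(buf)           # flush whatever is left at end of stream

stream = [{"seq": i, "parity": i % 5 == 4} for i in range(10)]
print(list(collect_and_decode(stream)))
```
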
07/17/14
20140201586
Latency
The invention relates to an apparatus including at least one processor and at least one memory including a computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: prepare a transmission of a no-acknowledgement message to be conveyed to a node, when a part of a transport block has been received erroneously or an indication of inadequate quality of at least part of the transport block has been obtained.
07/17/14
20140201471
Arbitrating memory accesses via a shared memory fabric
In an embodiment, a shared memory fabric is configured to receive memory requests from multiple agents, where at least some of the requests have an associated deadline value to indicate a maximum latency prior to completion of the memory request. Responsive to the requests, the fabric is to arbitrate between the requests based at least in part on the deadline values.
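
One plausible reading of this deadline-based arbitration is an earliest-deadline-first queue, sketched below with assumed field names and agent identifiers; the actual fabric logic in the filing may differ.

```python
# Illustrative earliest-deadline-first arbiter; field names and the tie-breaking
# sequence number are assumptions, not from the filing.
import heapq
import itertools

class DeadlineArbiter:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker for equal deadlines

    def submit(self, agent, deadline, request):
        heapq.heappush(self._heap, (deadline, next(self._seq), agent, request))

    def grant(self):
        if not self._heap:
            return None
        deadline, _, agent, request = heapq.heappop(self._heap)
        return agent, deadline, request

arb = DeadlineArbiter()
arb.submit("display", deadline=100, request="read 0x1000")
arb.submit("cpu",     deadline=500, request="write 0x2000")
print(arb.grant())   # the display request, whose deadline is nearer
```
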
07/17/14
20140201306
Remote direct memory access with reduced latency
The present disclosure provides systems and methods for remote direct memory access (RDMA) with reduced latency. RDMA allows information to be transferred directly between memory buffers in networked devices without the need for substantial processing.
07/17/14
20140201259
Method and apparatus of using separate reverse channel for user input in mobile device display replication
A method of controlling media content between a portable device and a head control unit. A first link is initiated for transmitting control signals between a control client and a control server.
07/17/14
20140199072
System and method for network synchronization and frequency dissemination
Distribution of reference frequency and timing information in a network involves determining latency between a first and second node from time delay between transmission of a reference frequency and timing signal and reception of an optical return timing signal in response. In a network with pairs of first and second optical fibers in optical fiber connections between network nodes, for transmission of optical data signals separately in mutually opposite directions between the network nodes respectively, provisions are made to transmit the reference frequency and timing signal and the resulting optical return signal via the same fiber, one in the same direction as the unidirectional data signal over that fiber and the other upstream.
07/17/14
20140198839
Low latency sub-frame level video decoding
A method includes transmitting encoded video data related to video frames of a video stream from a source to a client device through a network such that a packet of the encoded video data is limited to including data associated with one portion of a video frame. The video frame includes a number of portions including the one portion.
07/17/14
20140198837
Methods and systems for chip-to-chip communication with reduced simultaneous switching noise
Systems and methods are described for transmitting data over physical channels to provide a high speed, low latency interface such as between a memory controller and memory devices with significantly reduced or eliminated simultaneous switching output noise. Controller-side and memory-side embodiments of such channel interfaces are disclosed which do not require additional pin count or data transfer cycles, have low power utilization, and introduce minimal additional latency.
07/17/14
20140198789
Low latency in-line data compression for packet transmission systems
Deep packet inspection (DPI) techniques are utilized to provide data compression, particularly necessary in many bandwidth-limited communication systems. A separate processor is initially used within a transmission source to scan, in real time, a data packet stream and recognize repetitive patterns that are occurring in the data.
07/17/14
20140198692
Securing transmit openings by the requester
Techniques for securing transmit openings help enhance the operation of a station that employs them. The techniques may facilitate low latency response to a protocol data requester, for instance.
07/17/14
20140198638
Low-latency lossless switch fabric for use in a data center
In one embodiment, a system includes a switch configured for communicating with a low-latency switch and a buffered switch, the switch having a processor adapted for executing logic, logic adapted for receiving a packet at an ingress port of a switch, logic adapted for receiving congestion information, logic adapted for determining that at least one congestion condition is met based on at least the congestion information, logic adapted for applying a packet forwarding policy to the packet when the at least one congestion condition is met, logic adapted for forwarding the packet to a buffered switch when the packet satisfies the packet forwarding policy, and logic adapted for forwarding the packet to a low-latency switch when the at least one congestion condition is not met.
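
A simplified view of the forwarding decision described in this claim is sketched below: traffic normally takes the low-latency switch, and a policy diverts qualifying packets to the buffered switch when congestion is detected. Function and field names are assumptions.

```python
# Simplified sketch of the forwarding decision; names and the example policy are
# assumptions, not from the filing.
def forward(packet, congestion_detected, policy):
    if not congestion_detected:
        return "low_latency_switch"
    if policy(packet):
        return "buffered_switch"       # lossless path absorbs the congested burst
    return "low_latency_switch"

def lossless_policy(packet):
    return packet.get("traffic_class") == "lossless"

print(forward({"traffic_class": "lossless"}, congestion_detected=True, policy=lossless_policy))
print(forward({"traffic_class": "best_effort"}, congestion_detected=False, policy=lossless_policy))
```
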
07/10/14
20140195872
Simultaneous data transfer and error control to reduce latency and improve throughput to a host
The disclosed embodiments provide a system that transfers data from a storage device to a host. The system includes a communication mechanism that receives a request to read a set of blocks from the host.
07/10/14
20140195834
High throughput low latency user mode drivers implemented in managed code
Implementing a safe driver that can support high throughput and low latency devices. The method includes receiving a hardware message from a hardware device.
07/10/14
20140195833
Adaptive low-power link-state entry policy for active interconnect link power management
Methods and apparatus for implementing active interconnect link power management using an adaptive low-power link-state entry policy. The power state of an interconnect link or fabric is changed in response to applicable conditions determined by low-power link-state entry policy logic in view of runtime traffic on the interconnect link or fabric.
07/10/14
20140195792
Hiding boot latency from system users
Methods and systems may provide for identifying a proximity condition between a system and a potential user of the system. In addition, one or more boot components of the system can be activated in response to the proximity condition, wherein one or more peripheral devices associated with the system are maintained in an inactive state.
07/10/14
20140195788
Reducing instruction miss penalties in applications
Embodiments include systems and methods for reducing instruction cache miss penalties during application execution. Application code is profiled to determine “hot” code regions likely to experience instruction cache miss penalties.
07/10/14
20140195558
System and method for distributed database query engines
Techniques for a system capable of performing low-latency database query processing are disclosed herein. The system includes a gateway server and a plurality of worker nodes.
07/10/14
20140195366
Incremental valuation based network capacity allocation
A bid-based network sells network capacity on a transaction-by-transaction basis in accordance with bids placed on transactions. A transaction is the transmission of a quantum of data across at least some portion of the network, where the quantum of data can be as small as a single packet.
07/10/14
20140193066
Contrast enhancement spiking neuron network sensory processing apparatus and methods
Apparatus and methods for contrast enhancement and feature identification. In one implementation, an image processing apparatus utilizes latency coding and a spiking neuron network to encode image brightness into spike latency.
07/10/14
20140192779
System for efficient recovery of Node-B buffered data following MAC layer reset
A method and system for the UE and RNC to reduce transmission latency and potentially prevent loss of PDUs upon a MAC-hs layer reset. The RNC generates a radio resource control (RRC) message with a MAC-hs reset indication.
07/10/14
20140192767
System and method for small traffic transmissions
A grant-free transmission mode may be used to communicate small traffic transmissions to reduce overhead and latency. The grant-free transmission mode may be used in downlink and uplink data channels of a wireless network.
07/03/14
20140189633
Semiconductor integrated circuit design supporting apparatus, method, and program
A latency adjusting part calculates a necessary delay based on the number of FFs that are required to be inserted between respective modules through high-level synthesis of a behavioral description. An input FF stage number acquiring part extracts a pin having an input that receives an FF, and acquires the number of stages of input FFs of FF reception.
07/03/14
20140189446
Forward error correction with configurable latency
A method of performing forward error correction with configurable latency, where a configurable latency algorithm evaluates a target bit error rate (BER) against an actual BER and adjusts the size of a configurable buffer such that the target BER may be achieved when utilizing the smallest buffer size possible. When errors are corrected without the utilization of each of the configurable buffer locations, the algorithm reduces the size of the buffer by y buffer locations; the algorithm may continue to successively reduce the size of said buffer until the minimum number of buffer locations are utilized to achieve the target BER.
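
The buffer-sizing loop this abstract outlines (grow the buffer when the measured BER exceeds the target, shrink it while the target still holds) might look roughly like the sketch below; the step size and bounds are assumed values, with the claim's "y buffer locations" modelled as a fixed step.

```python
# Rough sketch of the described buffer adjustment; step size and bounds are
# assumed, and 'y' in the claim is modelled here as a fixed step.
def adjust_buffer(size, actual_ber, target_ber, step=1, min_size=1, max_size=1024):
    if actual_ber > target_ber:
        return min(size + step, max_size)   # need more correction depth
    return max(size - step, min_size)       # target met: try a smaller buffer

size = 64
for observed_ber in (1e-7, 1e-7, 1e-5, 1e-7):
    size = adjust_buffer(size, observed_ber, target_ber=1e-6)
    print(size)   # 63, 62, 63, 62
```
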
07/03/14
20140189421
Non-volatile memory program failure recovery via redundant arrays
Non-volatile memory program failure recovery via redundant arrays enables higher programming bandwidth and/or reduced latency in some storage subsystem implementations, e.g., a solid-state disk.
07/03/14
20140189409
System and method for providing universal serial bus link power management policies in a processor environment
One particular example implementation may include an apparatus that includes logic, at least a portion of which is in hardware, the logic configured to: determine that a first device maintains a link to a platform in a selective suspend state; assign a first latency value to the first device; identify at least one user detectable artifact when a second device exits the selective suspend state; and assign, to the second device, a second latency value that is different from the first value.
07/03/14
20140189403
Periodic activity alignment
Methods and systems may provide for determining a latency constraint associated with a platform and determining an idle window based on the latency constraint. In addition, a plurality of devices on the platform may be instructed to cease one or more activities during the idle window.
07/03/14
20140189391
System and method for conveying service latency requirements for devices connected to low power input/output sub-systems
In at least one embodiment described herein, an apparatus is provided that can include means for communicating a latency tolerance value for a device connected to a platform from a software latency register if a software latency tolerance register mode is active. The apparatus may also include means for communicating the latency tolerance value from a hardware latency register if a host controller is active.
07/03/14
20140189385
Intelligent receive buffer management to optimize idle state residency
Methods and systems may provide for determining a plurality of buffer-related settings for a corresponding plurality of idle states and outputting the plurality of buffer-related settings to a device on a platform. The device may determine an observed bandwidth for a channel associated with a receive buffer and identify a selection of a buffer-related setting from the plurality of buffer-related settings based at least in part on the observed bandwidth.
07/03/14
20140189332
Apparatus and method for low-latency invocation of accelerators
An apparatus and method are described for providing low-latency invocation of accelerators. For example, a processor according to one embodiment comprises: a command register for storing command data identifying a command to be executed; a result register to store a result of the command or data indicating a reason why the command could not be executed; execution logic to execute a plurality of instructions including an accelerator invocation instruction to invoke one or more accelerator commands; and one or more accelerators to read the command data from the command register and responsively attempt to execute the command identified by the command data.
07/03/14
20140189327
Acknowledgement forwarding
A method for processing data packets in a pipeline and executed by a network processor. The pipeline includes a plurality of logical blocks, each logical block configured to process one stage of the pipeline.
07/03/14
20140189317
Apparatus and method for a hybrid latency-throughput processor
An apparatus and method are described for executing both latency-optimized execution logic and throughput-optimized execution logic on a processing device. For example, a processor according to one embodiment comprises: latency-optimized execution logic to execute a first type of program code; throughput-optimized execution logic to execute a second type of program code, wherein the first type of program code and the second type of program code are designed for the same instruction set architecture; logic to identify the first type of program code and the second type of program code within a process and to distribute the first type of program code for execution on the latency-optimized execution logic and the second type of program code for execution on the throughput-optimized execution logic.
07/03/14
20140189212
Presentation of direct accessed storage under a logical drive model
In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for presentation of direct accessed storage under a logical drive model; for implementing a distributed architecture for cooperative NVM data protection; data mirroring for consistent SSD latency; for boosting a controller's performance and RAS with DIF support via concurrent RAID processing; for implementing arbitration and resource schemes of a doorbell mechanism, including doorbell arbitration for fairness and prevention of attack congestion; and for implementing multiple interrupt generation using a messaging unit and NTB in a controller through use of an interrupt coalescing scheme.
07/03/14
20140189113
Tag latency monitoring and control system for enhanced web page performance
Embodiments are directed towards employing a plurality of tag states to control tag suspension based on an asynchronous process that proactively monitors tag performance, response times, and latency. Tags may be in one of multiple states.
07/03/14
20140189091
Network adaptive latency reduction through frame rate control
Novel solutions are provided for consistent quality of service in a cloud gaming system that adaptively and dynamically compensates for poor network conditions by moderating rendered frame rates, using frame rate capping to optimize for network latency savings (or surplus). In further embodiments, the frame rate encoded and sent to the client can also be managed in addition to, or as an alternative to, capping the rendered frame rates.
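
The frame-rate capping idea in this abstract can be pictured as a small feedback loop: lower the rendered-frame cap when measured network latency worsens, raise it back when conditions improve. The sketch below uses assumed thresholds and step sizes, not values from the filing.

```python
# Illustrative frame-rate cap controller; the RTT target, bounds, and step are
# assumed values, not parameters from the filing.
def adjust_frame_cap(cap_fps, measured_rtt_ms, target_rtt_ms=50,
                     min_cap=20, max_cap=60, step=5):
    if measured_rtt_ms > target_rtt_ms:
        return max(cap_fps - step, min_cap)   # render fewer frames to claw back latency
    return min(cap_fps + step, max_cap)       # network has headroom: raise the cap

cap = 60
for rtt in (40, 80, 90, 45):
    cap = adjust_frame_cap(cap, rtt)
    print(cap)   # 60, 55, 50, 55
```
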
07/03/14
20140188966
Floating-point multiply-add unit using cascade design
A floating-point fused multiply-add (FMA) unit embodied in an integrated circuit includes a multiplier circuit cascaded with an adder circuit to produce a result a*c+b. To decrease latency, the FMA includes accumulation bypass circuits forwarding an unrounded result of the adder to inputs of the close path and the far path circuits of the adder, and forwarding an exponent result in carry save format to an input of the exponent difference circuit.
07/03/14
20140188870
LSM cache
A variety of methods for improving efficiency in a database system are provided. In one embodiment, a method may comprise: generating multiple levels of data according to how recently the data have been updated, whereby most recently updated data are assigned to the newest level; storing each level of data in a specific storage tier; splitting data stored in a particular storage tier into two or more groups according to access statistics of each specific data; during compaction, storing data from different groups in separate data blocks of the particular storage tier; and when a particular data in a specific data block is requested, reading the specific data block into a low-latency storage tier.
07/03/14
20140187331
Latency reduction by sub-frame encoding and transmission
A cloud gaming system includes a cloud gaming server that provides rendering for a video frame employed in cloud gaming. The cloud gaming system also includes a video frame latency reduction pipeline coupled to the cloud gaming server, having a slice generator that provides a set of separately-rendered video frame slices required for a video frame, a slice encoder that encodes each of the set of separately-rendered video frame slices into corresponding separately-encoded video frame slices of the video frame and a slice packetizer that packages each separately-encoded video frame slice into slice transmission packets.
07/03/14
20140185496
Low latency ARQ/HARQ operating in carrier aggregation for backhaul link
Described embodiments reduce ARQ/HARQ latency using carrier aggregation and cross-carrier ARQ/HARQ signaling. In embodiments, a wireless backhaul transmission link uses multiple paired carriers with complementary TDD frame timing.
06/26/14
20140181777
Automatic clock tree routing rule generation
Systems and techniques are described for automatically generating a set of non-default routing rules for routing a net in a clock tree based on one or more metrics. The metrics can include a congestion metric, a latency metric, a crosstalk metric, an electromigration metric, and a clock tree level.
06/26/14
20140181768
Automated performance verification for integrated circuit design
A method and apparatus for automated performance verification for integrated circuit design is described herein. The method includes test preparation and automated verification stages.
06/26/14
20140181563
System and method for determination of latency tolerance
Particular embodiments described herein can offer a method that includes determining that a first reported latency tolerance associated with at least one first device has not been received, and causing determination of a platform latency tolerance based, at least in part, on a first predefined latency tolerance, which is to serve as a substitute for the first reported latency tolerance.
06/26/14
20140181458
Die-stacked memory device providing data translation
A die-stacked memory device incorporates a data translation controller at one or more logic dies of the device to provide data translation services for data to be stored at, or retrieved from, the die-stacked memory device. The data translation operations implemented by the data translation controller can include compression/decompression operations, encryption/decryption operations, format translations, wear-leveling translations, data ordering operations, and the like.
06/26/14
20140181417
Cache coherency using die-stacked memory device with logic die
A die-stacked memory device implements an integrated coherency manager to offload cache coherency protocol operations for the devices of a processing system. The die-stacked memory device includes a set of one or more stacked memory dies and a set of one or more logic dies.
06/26/14
20140181355
Configurable communications controller
A communications controller includes a physical interface and an internal transmit and receive circuit. The physical interface has a port for connection to a communication medium, an input, and an output, and operates to receive a first sequence of data bits from the input and to transmit the first sequence of data bits to the port, and to receive a second sequence of data bits from the port and to conduct said second sequence of data bits to the output.
06/26/14
20140181338
System and method for audio pass-through between multiple host computing devices
A digital audio pass-through device capable of connecting multiple host computing devices is described. The digital audio pass-through device allows computing devices such as personal computers (Mac or PC), tablets, and smart phones to share high quality digital audio data streams with one another via USB connections.
06/26/14
20140181334
System and method for determination of latency tolerance
Particular embodiments described herein can offer a method that includes receiving first link state information associated with a first device, determining, by a processor, an upward latency tolerance based, at least in part, on the first link state information, and providing the upward latency tolerance to a power management controller.
06/26/14
20140181327
I/O device and computing host interoperation
An I/O device is coupled to a computing host. In some embodiments, the device is enabled to utilize memory of the computing host not directly coupled to the device to store information such as a shadow copy of a map of the device and/or state of the device.
06/26/14
20140181179
Systems and methods for transmitting data in real time
Systems and methods described herein facilitate the transmission of data in real time by using TCP connections such that the latency issues incurred from packet loss are prevented. A server is in communication with a client, wherein the server is configured to facilitate forming a plurality of TCP connections with the client.
06/26/14
20140180957
Cost and latency reductions through dynamic updates of order movement through a transportation network
A method, system, and computer program product for shipping management. The computer implemented method commences upon receiving a first set of orders to be shipped to a destination region in accordance with a first set of timing constraints, then building a first set of multi-stop shipments, the first set of multi-stop shipments comprising a first multi-stop carrier schedule that satisfies the set of timing constraints.
06/26/14
20140180952
Cost and latency reductions through dynamic updates of order movement through a transportation network
A method, system, and computer program product for shipping management. The computer implemented method commences upon identifying a set of orders to be shipped from a source region to a destination region using a transportation network, and determining candidate options for performing stops over possible routes between the source region and the destination region.
06/26/14
20140180889
Systems and methods for routing trade orders based on exchange latency
Systems and methods for routing trade orders based on exchange latency are disclosed. An example method includes measuring a first latency associated with a first exchange based on a processing time of a first trade order; and routing a second trade order from a trading device to one of the first exchange and a second exchange based on the first latency.
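
Read narrowly, the routing rule here is: time how long the first exchange takes to process an order, then pick the venue for the next order accordingly. Below is a toy sketch with assumed venue names and a sleep standing in for exchange processing time.

```python
# Toy sketch of latency-measured routing; venue names and the sleep-based
# stand-in for exchange processing time are assumptions.
import time

def measure_latency(send_order):
    start = time.monotonic()
    send_order()
    return time.monotonic() - start

latencies = {
    "exchange_a": measure_latency(lambda: time.sleep(0.004)),
    "exchange_b": measure_latency(lambda: time.sleep(0.009)),
}

def route(order, latencies):
    return min(latencies, key=latencies.get), order

print(route({"symbol": "XYZ", "qty": 100}, latencies))   # routes to exchange_a
```
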
06/26/14
20140180862
Managing operational throughput for shared resources
Usage of shared resources can be managed by enabling users to obtain different types of guarantees at different times for various types and/or levels of resource capacity. A user can select to have an amount or rate of capacity dedicated to that user.
06/26/14
20140179421
Client rendering of latency sensitive game features
Embodiments of the present invention split game processing and rendering between a client and a game server. A rendered video game image is received from a game server and combined with a rendered image generated by the game client to form a single video game image that is presented to a user.
06/26/14
20140179318
Call setup latency optimization for LTE to 1xRTT circuit switched fall back
A method and apparatus for reducing call setup latency in an LTE network is disclosed. Services in a 1xRTT network are provided for both single receiver (SRX) and dual receiver (DRX) user equipment using CSFB (circuit switched fall back).
06/26/14
20140177680
Wireless link to transmit digital audio data between devices in a manner controlled dynamically to adapt to variable wireless error rates
A communication system including a host transceiver, one or many device transceivers, and a wireless or wired link, in which encoded digital audio data and optionally also other auxiliary data are transmitted and received between the host transceiver and one or many device transceivers. The wireless link can but need not be a certified wireless USB ("CWUSB") link, which utilizes WiMedia ultra-wideband ("UWB") radio technology.
06/26/14
20140177654
Method, a computer program product, and a carrier for indicating one-way latency in a data network
Disclosed herein is a method, a computer program product, and a carrier for indicating one-way latency in a data network (N) between a first node (A) and a second node (B), wherein the data network (N) lacks continuous clock synchronization, comprising: a pre-synchronisation step, a measuring step, a post-synchronisation step, an interpolation step, and generating a latency profile. The present invention also relates to a computer program product incorporating the method, a carrier comprising the computer program product, and a method for indicating server functionality based on the first aspect.
06/26/14
20140177640
Intelligent host route distribution for low latency forwarding and ubiquitous virtual machine mobility in interconnected data centers
Techniques are presented for distributing host route information of virtual machines to routing bridges (RBridges). A first RBridge receives a routing message that is associated with a virtual machine and is sent by a second RBridge.
06/26/14
20140177591
Systems and methods for reduced latency circuit switched fallback
Systems, methods, and devices for wireless communication by a wireless communication device are described. A circuit switched fallback from an LTE cell to a GERAN cell is initiated.
06/26/14
20140176658
Communication method, communication terminal, supervisor terminal and related computer programmes
A communication method in a communications network is provided. The method includes, in a communications network in which a communication link has been allocated by a resource manager module in accordance with a first value of a characteristic of data rate, latency or jitter required for providing a first service on the said link between a first and a second communication terminal, the steps of: transmitting a request to replace the said first communication service by a second communication service between at least the first and second terminals; further to receipt of the said request by the resource manager module of the network, replacing by the said resource manager module the first value of the said characteristic by a second value required for providing the said second service; and providing the second communication service instead of the first communication service between the first and second terminals on the said communication link.
06/26/14
20140176591
Low-latency fusing of color image data
A system and method are disclosed for fusing virtual content with real content to provide a mixed reality experience for one or more users. The system includes a mobile display device communicating with a hub computing system.
06/26/14
20140176353
Compression format for high bandwidth dictionary compression
Method, apparatus, and systems employing dictionary-based high-bandwidth lossless compression. A pair of dictionaries having entries that are synchronized and encoded to support compression and decompression operations are implemented via logic at a compressor and decompressor.
06/26/14
20140176215
Method of implementing clock skew and integrated circuit adopting the same
To implement a clock skew in an integrated circuit, end-point circuits are grouped into a push group and a pull group based on target latencies of local clock signals respectively driving the end-point circuits. The push group is driven by slow clock gates, and the pull group is driven by fast clock gates.
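
The grouping step described here splits end-point circuits by the target latencies of their local clocks. One plausible rule, sketched below with assumed numbers, is to put endpoints at or above the median target latency in the push group (driven by slow clock gates) and the rest in the pull group (driven by fast clock gates); the median split itself is an assumption, not the claimed method.

```python
# Illustrative grouping rule only; the median split and the example latencies
# are assumptions, not the method claimed in the filing.
def group_endpoints(target_latency_ps):
    ordered = sorted(target_latency_ps.values())
    median = ordered[len(ordered) // 2]
    push = {ep for ep, lat in target_latency_ps.items() if lat >= median}
    pull = set(target_latency_ps) - push
    return push, pull   # push -> slow clock gates, pull -> fast clock gates

push, pull = group_endpoints({"ff_a": 120, "ff_b": 80, "ff_c": 150, "ff_d": 60})
print(sorted(push), sorted(pull))   # ['ff_a', 'ff_c'] ['ff_b', 'ff_d']
```
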
06/26/14
20140176187
Die-stacked memory device with reconfigurable logic
A die-stacked memory device incorporates a reconfigurable logic device to provide implementation flexibility in performing various data manipulation operations and other memory operations that use data stored in the die-stacked memory device or that result in data that is to be stored in the die-stacked memory device. One or more configuration files representing corresponding logic configurations for the reconfigurable logic device can be stored in a configuration store at the die-stacked memory device, and a configuration controller can program a reconfigurable logic fabric of the reconfigurable logic device using a selected one of the configuration files.
06/19/14
20140173680
Full-frame buffer to improve video performance in low-latency video communication systems
Embodiments of apparatuses and methods to decrease a size of a memory in a low-latency video communication system are described. A control unit is configured to monitor a condition associated with the communication link.
06/19/14
20140173621
Conserving power through work load estimation for a portable computing device using scheduled resource set transitions
A start time to begin transitioning resources to states indicated in the second resource state set is scheduled based upon an estimated amount of processing time to complete transitioning the resources. At a scheduled start time, a process starts in which the states of one or more resources are switched from states indicated by the first resource state set to states indicated by the second resource state set.
06/19/14
20140173322
Packet data id generation for serially interconnected devices
Various memory devices (e.g., DRAMs, flash memories) are serially interconnected. The memory devices need their identifiers (IDs).
06/19/14
20140173248
Performing frequency coordination in a multiprocessor system based on response timing optimization
In an embodiment, a processor includes a core to execute instructions and a logic to receive memory access requests from the core and to route the memory access requests to a local memory and to route snoop requests corresponding to the memory access requests to a remote processor. The logic is configured to maintain latency information regarding a difference between receipt of responses to the snoop requests from the remote processor and receipt of responses to the memory access requests from the local memory.
06/19/14
20140173235
Resilient distributed replicated data storage system
A resilient distributed replicated data storage system is described herein. The storage system includes zones that are independent and autonomous from each other.
06/19/14
20140173232
Method and apparatus for automated migration of data among storage centers
A method for controlling the storage of data among multiple regional storage centers coupled through a network in a global storage system is provided. The method includes steps of: defining at least one dataset comprising at least a subset of the data stored in the global storage system; defining at least one ruleset for determining where to store the dataset; obtaining information regarding a demand for the dataset through one or more data requesting entities operating in the global storage system; and determining, as a function of the ruleset, information regarding a location for storing the dataset among regional storage centers having available resources that reduces the total distance traversed by the dataset in serving at least a given one of the data requesting entities and/or reduces the latency of delivery of the dataset to the given one of the data requesting entities..
06/19/14
20140173229
Method and apparatus for automated migration of data among storage centers
A method for controlling the storage of data among multiple regional storage centers coupled through a network in a global storage system is provided. The method includes steps of: defining at least one dataset comprising at least a subset of the data stored in the global storage system; defining at least one ruleset for determining where to store the dataset; obtaining information regarding a demand for the dataset through one or more data requesting entities operating in the global storage system; and determining, as a function of the ruleset, information regarding a location for storing the dataset among regional storage centers having available resources that reduces the total distance traversed by the dataset in serving at least a given one of the data requesting entities and/or reduces the latency of delivery of the dataset to the given one of the data requesting entities..
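
Both filings above describe choosing a storage location that reduces the distance (and thus delivery latency) between a dataset and the entities requesting it, subject to available resources. A minimal placement sketch under assumed center names, capacities, and distances follows.

```python
# Minimal placement sketch; center names, capacities, and the distance table are
# assumptions used only to illustrate the selection rule.
def place_dataset(centers, requesters, distance):
    candidates = [c for c in centers if c["free_capacity"] > 0]
    return min(candidates,
               key=lambda c: sum(distance[(c["name"], r)] for r in requesters))

distance = {("us-east", "nyc"): 1, ("us-west", "nyc"): 5,
            ("us-east", "bos"): 2, ("us-west", "bos"): 6}
centers = [{"name": "us-east", "free_capacity": 10},
           {"name": "us-west", "free_capacity": 4}]
print(place_dataset(centers, ["nyc", "bos"], distance)["name"])   # us-east
```
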
06/19/14
20140173090
Method and system for detecting network topology change
A method for detecting a topology change in a communication network. The method includes measuring a minimum latency value of a communication between two devices in the communication network for each of a plurality of time cycles, identifying an increase in the minimum latency values among the plurality of time cycles, and detecting a topology change in response to a determination that the increase in minimum latency values is maintained for more than a predetermined number of time cycles.
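
The detection rule in this abstract, flag a topology change only when a rise in the per-cycle minimum latency persists for several cycles, could be approximated as below; the rise threshold and persistence count are assumed values, not from the filing.

```python
# Approximate sketch of the persistence test; the rise threshold and the number
# of cycles it must hold for are assumptions, not values from the filing.
def topology_changed(min_latencies, rise=0.001, persist_cycles=3):
    baseline = min_latencies[0]
    elevated = 0
    for value in min_latencies[1:]:
        if value - baseline > rise:
            elevated += 1
            if elevated >= persist_cycles:
                return True
        else:
            baseline, elevated = value, 0
    return False

print(topology_changed([0.010, 0.010, 0.013, 0.013, 0.013, 0.013]))   # True
print(topology_changed([0.010, 0.013, 0.010, 0.013, 0.010, 0.013]))   # False
```
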
06/19/14
20140173004
Low latency messaging chat server
A low latency messaging chat service may provide for receiving from a chat client a connection request via a network; authenticating the chat client by a registration process; allocating to the chat client at least a first topic corresponding to a first message queue and a second topic corresponding to a second message queue, the first topic assigned a first format and the second topic assigned a second format; enabling the chat client to post messages in the first topic; and enabling the chat client to receive messages in the second topic.


Latency topics: Interactive, Transactions, Precedence, Notification, Virtual Channel, Bottleneck, Congestion, Delta Frame, Cable Modem Termination System, Interleave, Error Rate, Downstream, Cable Modem, Mobile Data, Data Transfer


###

This listing is a sample of patent applications related to Latency and is only meant as a recent sample of applications filed, not a comprehensive history. There may be associated servicemarks and trademarks related to these patents. Please check with a patent attorney if you need further assistance or plan to use them for business purposes. This patent data is also published to the public by the USPTO and available for free on their website. Note that there may be alternative spellings for Latency with additional patents listed. Browse our RSS directory or search for other possible listings.