FreshPatents.com
Latency patents



      
           
List of recent Latency-related patent applications (date / application number)
08/14/14
20140229741
Dual composite field advanced encryption standard memory encryption engine
A different set of polynomials may be selected for encryption and decryption accelerators. That is, different sets of polynomials are used for encryption and decryption, each set being chosen to use less area and deliver more power for a memory encryption engine.
08/14/14
20140229700
Systems and methods for accommodating end of transfer request in a data storage device
Systems and methods for data processing, particularly for addressing latency concerns in data processing.
08/14/14
20140229677
Hiding instruction cache miss latency by running tag lookups ahead of the instruction accesses
This disclosure provides techniques and apparatuses to enable early, run-ahead handling of IC and ITLB misses by decoupling the ITLB and IC tag lookups from the IC data (instruction bytes) accesses, and making ITLB and IC tag lookups run ahead of the IC data accesses. This allows overlapping the ITLB and IC miss stall cycles with older instruction byte reads or older IC misses, resulting in fewer stalls than previous implementations and improved performance.
08/14/14
20140229645
Credit-based low-latency arbitration with data transfer
An apparatus includes multiple data sources and arbitration circuitry. The data sources are configured to send to a common destination data items and respective arbitration requests, such that the data items are sent to the destination regardless of receiving any indication that the data items were served to the destination in response to the respective arbitration requests.
08/14/14
20140229641
Method and apparatus for latency reduction
Aspects of the disclosure provide an integrated circuit that includes a plurality of input/output (IO) circuits, an instruction receiving circuit and control circuits. The IO circuits are configured to receive a plurality of bit streams corresponding to an instruction to the integrated circuit.
08/14/14
20140229608
Parsimonious monitoring of service latency characteristics
Various exemplary embodiments relate to a method of evaluating cloud network performance. The method includes: measuring a latency of a plurality of service requests in a cloud network; determining a mean latency; determining a variance of the plurality of service requests; comparing the mean latency to a first threshold; comparing the variance to a second threshold; and determining that the cloud network is deficient if either the mean latency exceeds the first threshold or the variance exceeds the second threshold.
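As a concrete illustration of the deficiency test this abstract describes, here is a minimal Python sketch; the function name and threshold values are illustrative, not taken from the patent:

```python
from statistics import mean, pvariance

def cloud_network_deficient(latencies_ms, mean_threshold_ms, variance_threshold):
    """Flag the network as deficient if either the mean latency or the
    latency variance of the sampled service requests exceeds its threshold."""
    m = mean(latencies_ms)
    v = pvariance(latencies_ms)
    return m > mean_threshold_ms or v > variance_threshold

# A stable service: low mean, low variance
print(cloud_network_deficient([10, 11, 9, 10], mean_threshold_ms=50, variance_threshold=100))   # False
# A jittery service: the mean is acceptable but the variance blows the threshold
print(cloud_network_deficient([5, 200, 5, 190], mean_threshold_ms=150, variance_threshold=100)) # True
```

The second call shows why the abstract checks variance separately: a path can have an acceptable average while still delivering unusable jitter.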
08/14/14
20140229524
Network communication latency
Embodiments disclosed herein may relate to at least partially addressing latency, such as network communication latency, as may occur between a client and server, for example.
08/14/14
20140229446
Method and system for selecting amongst a plurality of processes to send a message
In accordance with embodiments, there are provided mechanisms and methods for selecting amongst a plurality of processes to send a message (e.g., a message for updating an endpoint system, etc.).
08/14/14
20140226980
Low latency multiplexing for optical transport networks
Techniques for multiplexing and demultiplexing signals for optical transport networks are presented. A network component comprises a multiplexer component that multiplexes a plurality of signals having a first signal format to produce a multiplexed signal in accordance with a second signal format, while maintaining error correction code (ECC) of such signals and without decoding such signals and associated ECC.
08/14/14
20140226830
Method for operating a hearing device and hearing device
A hearing device comprises a receiver, an input buffer and a sample processor, the receiver being adapted to receive samples of a digital audio signal and feed received samples as a digital input signal to the input buffer, the sample processor being adapted to process the buffered samples to provide samples of a digital output signal such that the digital output signal is a sample-rate converted representation of the digital input signal with a predetermined target sample rate. The hearing device further comprises a latency controller adapted to estimate the quality of reception of the digital audio signal and to control the processing of the buffered samples in dependence on the estimated quality of reception.
08/14/14
20140226734
Methods and systems for high bandwidth chip-to-chip communications interface
Systems and methods are described for transmitting data over physical channels to provide a high bandwidth, low latency interface between integrated circuit chips with low power utilization. Communication is performed using group signaling over multiple wires using a vector signaling code, where each wire carries a low-swing signal that may take on more than two signal values.
08/14/14
20140226660
Reducing the maximum latency of reserved streams
An embodiment may include circuitry that may facilitate, at least in part, assignment, at least in part, of at least one bandwidth reservation for at least one packet stream and/or at least one stream reservation class. The at least one bandwidth reservation may be greater than an expected communication bandwidth of the at least one packet stream.
08/14/14
20140226024
Camera control in presence of latency
In various implementations, systems and processes may be utilized to reduce the effect of latency in teleoperated cameras. For example, various processes may be utilized to determine latency periods and generate user interfaces based on the latency periods and/or inputs provided to the systems.
08/14/14
20140225561
Charging device
A charging device is provided with a charger, a charging time setting unit and a usage latency calculation unit. The charger is configured to charge a vehicle battery.
08/07/14
20140223254
QC-LDPC convolutional codes enabling low power trellis-based decoders
A low-density parity check (LDPC) encoding method for increasing constraint length includes determining an LDPC code block H-matrix including a systematic submatrix (Hsys) of input systematic data and a parity check submatrix (Hpar) of parity check bits. The method includes diagonalizing the parity check submatrix (Hpar).
08/07/14
20140223210
Tunable sector buffer for wide bandwidth resonant global clock distribution
A wide bandwidth resonant clock distribution comprises a clock grid configured to distribute a clock signal to a plurality of components of an integrated circuit and a tunable sector buffer configured to receive the clock signal and provide an output to the clock grid. The tunable sector buffer is configured to set latency and slew rate of the clock signal based on an identified resonant or non-resonant mode.
08/07/14
20140223144
Load latency speculation in an out-of-order computer processor
Load latency speculation in an out-of-order computer processor, including: issuing a load instruction for execution, wherein the load instruction has a predetermined expected execution latency; issuing a dependent instruction wakeup signal on an instruction wakeup bus, wherein the dependent instruction wakeup signal indicates that the load instruction will be completed upon the expiration of the expected execution latency; determining, upon the expiration of the expected execution latency, whether the load instruction has completed; and responsive to determining that the load instruction has not completed upon the expiration of the expected execution latency, issuing a negative dependent instruction wakeup signal on the instruction wakeup bus, wherein the negative dependent instruction wakeup signal indicates that the load instruction has not completed upon the expiration of the expected execution latency.
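The speculation protocol described in this abstract can be sketched as a toy timeline model. The event names and the single-load scope below are assumptions made purely for illustration; real schedulers track this per issue-queue entry in hardware:

```python
def speculate_load(expected_latency, actual_latency):
    """Toy model of load latency speculation: a wakeup for dependent
    instructions is broadcast optimistically when the load's expected
    latency expires; if the load has not actually completed by then,
    a negative wakeup retracts the speculation, and dependents are
    re-woken when the data is really ready."""
    events = []
    events.append((expected_latency, "dependent_wakeup"))      # optimistic broadcast
    if actual_latency > expected_latency:                      # load missed its expected latency
        events.append((expected_latency, "negative_wakeup"))   # retract the speculation
        events.append((actual_latency, "dependent_wakeup"))    # re-wake when data arrives
    return events

print(speculate_load(expected_latency=4, actual_latency=4))
# [(4, 'dependent_wakeup')]
print(speculate_load(expected_latency=4, actual_latency=12))
# [(4, 'dependent_wakeup'), (4, 'negative_wakeup'), (12, 'dependent_wakeup')]
```

The payoff of speculating is the common case (first call): dependents issue back-to-back with the load instead of waiting for a completion signal to propagate.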
08/07/14
20140223143
Load latency speculation in an out-of-order computer processor
Load latency speculation in an out-of-order computer processor, including: issuing a load instruction for execution, wherein the load instruction has a predetermined expected execution latency; issuing a dependent instruction wakeup signal on an instruction wakeup bus, wherein the dependent instruction wakeup signal indicates that the load instruction will be completed upon the expiration of the expected execution latency; determining, upon the expiration of the expected execution latency, whether the load instruction has completed; and responsive to determining that the load instruction has not completed upon the expiration of the expected execution latency, issuing a negative dependent instruction wakeup signal on the instruction wakeup bus, wherein the negative dependent instruction wakeup signal indicates that the load instruction has not completed upon the expiration of the expected execution latency.
08/07/14
20140223105
Method and apparatus for cutting senior store latency using store prefetching
In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for cutting senior store latency using store prefetching. For example, in one embodiment, such means may include an integrated circuit or an out of order processor means that processes out of order instructions and enforces in-order requirements for a cache.
08/07/14
20140223098
Dynamic management of heterogenous memory
A method of operating a computing device includes dynamically managing at least two types of memory based on workloads, or requests from different types of applications. A first type of memory may be high performance memory that may have a higher bandwidth, lower memory latency and/or lower power consumption than a second type of memory in the computing device.
08/07/14
20140223097
Data storage system and data storage control device
A storage system has a plurality of control modules for controlling a plurality of storage devices, which makes mounting easier while maintaining low-latency response even if the number of control modules increases. A plurality of storage devices are connected to the second interface of each control module using back-end routers, so that redundancy for all the control modules to access all the storage devices is maintained.
08/07/14
20140223091
System and method for management of unique alpha-numeric order message identifiers within DDR memory space
An embedded hardware-based risk system is provided that has an apparatus and method for the management of unique alpha-numeric order message identifiers within DDR memory space restrictions. The apparatus provides a new design for assigning orders (clorid) to memory, and the method thereof is specifically intended not to impact latency until memory is over 90% full.
08/07/14
20140223071
Method and system for reducing write latency in a data storage system by using a command-push model
A data storage system is provided that implements a command-push model that reduces latencies. The host system has access to a nonvolatile memory (NVM) device of the memory controller to allow the host system to push commands into a command queue located in the NVM device.
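A minimal sketch of the command-push idea described above, with a plain ring buffer standing in for the controller-resident NVM command queue; the class, field names, and queue depth are all illustrative assumptions, not the patent's design:

```python
class PushCommandQueue:
    """Toy model of a command-push queue: the host writes commands
    straight into controller-resident memory instead of waiting for
    the controller to fetch each command, removing a round trip."""
    def __init__(self, depth=8):
        self.slots = [None] * depth
        self.head = 0   # controller consumes here
        self.tail = 0   # host pushes here
        self.count = 0

    def host_push(self, cmd):
        if self.count == len(self.slots):
            return False                     # queue full: host must retry
        self.slots[self.tail] = cmd
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1
        return True

    def controller_pop(self):
        if self.count == 0:
            return None                      # nothing pending
        cmd = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return cmd

q = PushCommandQueue(depth=2)
q.host_push({"op": "write", "lba": 0})
q.host_push({"op": "write", "lba": 8})
print(q.host_push({"op": "read", "lba": 0}))  # False: queue is full
print(q.controller_pop()["op"])               # write
```

The latency saving in the command-push model comes from the host doing the write directly; the controller only ever reads from memory it already owns.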
08/07/14
20140223054
Memory buffering system that improves read/write performance and provides low latency for mobile systems
A memory buffering system is disclosed that arbitrates bus ownership through an arbitration scheme for memory elements in chain architecture. A unified host memory controller arbitrates bus ownership for transfer to a unified memory buffer and other buffers within the chain architecture.
08/07/14
20140222397
Front-end signal generator for hardware in-the-loop simulation
A front-end signal generator for hardware-in-the-loop simulators of a simulated missile is disclosed. The front-end signal generator is driven by the digital scene and reticle simulation hardware-in-the-loop (DSARS-HITL) simulator.
08/07/14
20140222101
Determination of sleep quality for neurological disorders
A device determines values for one or more metrics that indicate the quality of a patient's sleep based on sensed physiological parameter values. Sleep efficiency, sleep latency, and time spent in deeper sleep states are example sleep quality metrics for which values may be determined.
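Two of the sleep-quality metrics this abstract names can be computed with the standard clinical definitions; the formulas below are those conventional definitions, not taken from the patent, and the function name and minute-based units are assumptions:

```python
def sleep_metrics(lights_out_min, sleep_onset_min, wake_min, awake_during_night_min):
    """Standard clinical definitions: sleep latency is the time from
    lights-out to sleep onset; sleep efficiency is total sleep time
    divided by time in bed."""
    sleep_latency = sleep_onset_min - lights_out_min
    time_in_bed = wake_min - lights_out_min
    total_sleep = wake_min - sleep_onset_min - awake_during_night_min
    sleep_efficiency = total_sleep / time_in_bed
    return sleep_latency, round(sleep_efficiency, 2)

# Lights out at minute 0, asleep at minute 30, awake at minute 480,
# with 30 minutes of nighttime awakenings:
print(sleep_metrics(0, 30, 480, 30))  # (30, 0.88)
```

A device like the one described would derive the onset and awakening timestamps from sensed physiological parameters rather than taking them as inputs.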
08/07/14
20140220976
Resource allocation method and apparatus for use in wireless communication system
A resource allocation method and apparatus are provided for allocating resources for handover between sectors of a cell with handover latency. A resource allocation method of a base station in a wireless communication system includes receiving a handover request message from a terminal for handover from a first sector to a second sector within a cell and transmitting an allocation message for allocating a mobile allocation index offset (MAIO) to the terminal, the MAIO being identical to an MAIO used in the first sector.
08/07/14
20140219497
Temporal winner takes all spiking neuron network sensory processing apparatus and methods
Apparatus and methods for contrast enhancement and feature identification. In one implementation, an image processing apparatus utilizes latency coding and a spiking neuron network to encode image brightness into spike latency.
08/07/14
20140219284
Method and system for reduction of time variance of packets received from bonded communication links
Method and system for reduction of time variance of packets received from bonded communication links. Embodiments of the present invention can be applied to bonded communication links, including wireless connections, Ethernet connections, Internet Protocol connections, asynchronous transfer mode, virtual private networks, WiFi, high-speed downlink packet access, GPRS, LTE, X.25, etc.
08/07/14
20140219196
Method and apparatus for detecting inconsistent control information in wireless communication systems
In order to support low latency and bursty internet data traffic, the 3GPP LTE wireless communication system uses dynamic allocation. To keep the allocation overhead lower, the system is designed such that the client terminal must perform a number of decoding attempts to detect resource allocations.
08/07/14
20140218350
Power management of display controller
In general, in one aspect, a display controller has non-essential portions powered off for a portion of vertical blanking interval (VBI) periods to conserve power. The portion takes into account overhead for housekeeping functions and memory latency for receiving a first packet of pixels for a frame to be decoded during the next active period.
08/07/14
20140218087
Wide bandwidth resonant global clock distribution
A wide bandwidth resonant clock distribution comprises a clock grid configured to distribute a clock signal to a plurality of components of an integrated circuit, a tunable sector buffer configured to receive the clock signal and provide an output to the clock grid, at least one inductor, at least one tunable resistance switch, and a capacitor network. The tunable sector buffer is programmable to set latency and slew rate of the clock signal.
07/31/14
20140215539
Apparatus and methods for catalog data distribution
Methods and apparatus for providing enhanced catalog data capacity and functionality within a content distribution network. In one embodiment, the apparatus includes an architecture configured to utilize an upstream out-of-band (OOB) channel for upstream catalog data request carriage, while a downstream in-band (IB) QAM is used for carriage of the catalog data issued by the network in response to the request.
07/31/14
20140215475
System and method for supporting work sharing muxing in a cluster
A system and method can provide efficient low-latency muxing between servers in the cluster. One such system can include a cluster of one or more high performance computing systems, each including one or more processors and a high performance memory.
07/31/14
20140215461
Low-latency fault-tolerant virtual machines
A system and method are disclosed for managing a plurality of virtual machines (VMs) in a fault-tolerant and low-latency manner. In accordance with one example, a computer system executes a first VM and a second VM, and creates a first live snapshot of the first VM and a second live snapshot of the second VM.
07/31/14
20140215446
Automated porting of application to mobile infrastructures
Techniques to automatically port applications to a mobile infrastructure using code separation with semantic guarantees are disclosed. Porting enterprise applications from one architecture to another is effected via separating code into constituent layers of the target architecture.
07/31/14
20140215177
Methods and systems for managing heterogeneous memories
A system includes a processor and first and second memories coupled to the processor. The first and second memories have a hardware attribute, such as bandwidth, latency and/or power consumption, wherein a first value of the hardware attribute of the first memory is different from a second value of the hardware attribute of the second memory.
07/31/14
20140215146
Apparatus, method and program product for determining the data recall order
To provide a technique for optimizing the processing order of recall requests so that the average latency time of a host apparatus is minimized. A storage manager accepts a request of the host apparatus for recalling data from a tape library, and stores the request in a queue table.
07/31/14
20140215111
Variable read latency on a serial memory bus
Systems and/or methods are provided that facilitate employing a variable read latency on a serial memory bus. In an aspect, a memory can utilize an undefined amount of time to obtain data from a memory array and prepare the data for transfer on the serial memory bus.
07/31/14
20140215108
Reducing write i/o latency using asynchronous fibre channel exchange
An FCP initiator sends an FCP write command to an FCP target within a second FC exchange, and the target sends one or more FC write control IUs to the initiator within a first FC exchange to request a transfer of data associated with the write command. The first and second FC exchanges are distinct from one another.
07/31/14
20140215044
Quality of service management using host specific values
In one embodiment, a latency value is determined for an input/output (IO) request in a host computer of a plurality of host computers based on an amount of time the IO request spent in the host computer's issue queue. The issue queue of the host computer is used to transmit IO requests to a storage system shared by the plurality of host computers.
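The queue-dwell-time measurement this abstract describes can be sketched simply: timestamp each request on enqueue and compute the elapsed time when it is transmitted. The clock source, class name, and request shape below are assumptions for illustration:

```python
import time
from collections import deque

class IssueQueue:
    """Sketch of deriving a per-request latency value from the time an
    IO request spends in the host's issue queue before transmission."""
    def __init__(self):
        self.q = deque()

    def enqueue(self, io_request):
        # Record when the request entered the host's issue queue.
        self.q.append((io_request, time.monotonic()))

    def transmit(self):
        io_request, enqueued_at = self.q.popleft()
        latency = time.monotonic() - enqueued_at   # time spent queued in the host
        return io_request, latency

iq = IssueQueue()
iq.enqueue({"op": "read", "lba": 42})
time.sleep(0.05)                       # simulate the request waiting in the queue
req, latency = iq.transmit()
print(req["op"], latency >= 0.05)      # read True
```

A monotonic clock is used deliberately: wall-clock adjustments must not distort a latency measurement.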
07/31/14
20140215007
Multi-level data staging for low latency data access
Techniques for facilitating and accelerating log data processing are disclosed herein. The front-end clusters generate a large amount of log data in real time and transfer the log data to an aggregating cluster.
07/31/14
20140214752
Data stream splitting for low-latency data access
Techniques for facilitating and accelerating log data processing by splitting data streams are disclosed herein. The front-end clusters generate a large amount of log data in real time and transfer the log data to an aggregating cluster.
07/31/14
20140213367
Methods and apparatus for hiding latency in network multiplayer games
Aspects of the present disclosure describe methods and apparatuses that hide latency during an interaction between an attacking client device platform and a defending client device platform in a multiplayer game played over a network. The attacking client device platform predicts a successful attack will be made and delivers a hit event to the defending client device platform.
07/31/14
20140213362
System for streaming databases serving real-time applications used through streaming interactive video
An apparatus comprising one or more servers of a hosting service server center and a RAID that stores geometry for objects of a complex scene. The RAID is coupled to the one or more application or game servers and is operable to interactively stream the geometry on-the-fly during real-time animation associated with running of a game or application on the one or more servers.
07/31/14
20140213273
Localized dynamic channel time allocation
Techniques for localized dynamic channel allocation help meet the challenges of latency, memory size, and channel time optimization for wireless communication systems. As examples, advanced communication standards, such as the WiGig standard, may support wireless docking station capability and wireless streaming of high definition video content between transmitting and receiving stations, or engage in other very high throughput tasks.
07/31/14
20140211889
Orthogonal frequency division multiplex (OFDM) receiver with phase noise mitigation and reduced latency
A system according to one embodiment includes a demodulator configured to receive an orthogonal frequency division multiplexed (OFDM) modulated signal comprising a current symbol and a sequence of previous symbols, each of the symbols comprising one or more pilot sub-carriers and one or more data sub-carriers; a phase angle computation circuit coupled to the demodulator, the phase angle computation circuit configured to compute a first mean, the first mean computed from the phase angle of one or more of the pilot sub-carriers of a predetermined number of the previous symbols; a predictive filter circuit coupled to the phase angle computation circuit, the predictive filter circuit configured to compute a second mean, the second mean estimating the phase angle of one or more sub-carriers of the current symbol, the estimation based on the first mean; and a phase noise cancelling circuit coupled to the predictive filter circuit, the phase noise cancelling circuit configured to correct the phase of one or more sub-carriers of the current symbol based on the second mean.
07/31/14
20140211755
Radio bearer dependent forwarding for handover
This invention exploits an inherent tradeoff in a radio bearer dependent data handling method for intra-E-UTRA handoffs. For user equipment using real time data, the source node forwards to the target node not-yet-acknowledged real time service data units and disconnects.
07/31/14
20140211649
Reducing latency of at least one stream that is associated with at least one bandwidth reservation
An embodiment may include circuitry to determine, at least in part, whether to delay transmission, at least in part, of at least one frame in favor of transmitting at least one other frame. The at least one other frame may belong to at least one packet stream that is associated with at least one bandwidth reservation.
07/24/14
20140208146
Time protocol latency correction based on forward error correction status
One embodiment provides a method for time protocol latency correction based on forward error correction (FEC) status. The method includes determining, by a network node element, if a forward error correction (FEC) decoding mode is enabled or disabled for a packet received from a link partner in communication with the network node element.
07/24/14
20140208075
Systems and methods for unblocking a pipeline with spontaneous load deferral and conversion to prefetch
Apparatuses, systems, and a method for providing a processor architecture with a control speculative load are described. In one embodiment, a computer-implemented method includes determining whether a speculative load instruction encounters a long latency condition, spontaneously deferring the speculative load instruction if the speculative load instruction encounters the long latency condition, and initiating a prefetch of a translation or of data that requires long latency access when the speculative load instruction encounters the long latency condition.
07/24/14
20140208064
Virtual memory management system with reduced latency
A computer system using virtual memory provides hybrid memory access either through a conventional translation between virtual memory and physical memory using a page table possibly with a translation lookaside buffer, or a high-speed translation using a fixed offset value between virtual memory and physical memory. Selection between these modes of access may be encoded into the address space of virtual memory, eliminating the need for a separate tagging operation of specific memory addresses.
07/24/14
20140208035
Cache circuit having a tag array with smaller latency than a data array
A method is described that includes alternating cache requests sent to a tag array between data requests and dataless requests.
07/24/14
20140207896
Continuous information transfer with reduced latency
The present disclosure provides systems and methods for remote direct memory access (RDMA) with reduced latency. RDMA allows information to be transferred directly between memory buffers in networked devices without the need for substantial processing.
07/24/14
20140206446
Methodology for equalizing systemic latencies in television reception in connection with games of skill played in connection with live television programming
A method of and system for handling latency issues encountered in producing real-time entertainment such as games of skill synchronized with live or taped televised events is described herein. There are multiple situations that are dealt with regarding latencies in receiving a television signal with respect to real-time entertainment based on the unfolding games played along with the telecasts.
07/24/14
20140205047
Low latency synchronizer circuit
An apparatus for synchronizing an incoming signal with a clock signal comprises two or more synchronizer circuits, wherein each synchronizer circuit receives the incoming signal and the clock signal. Each synchronizer circuit generates a synchronized signal, wherein the state of each synchronized signal changes on a different phase of said clock signal in response to a change of the state of said incoming signal.
07/24/14
20140204794
System and method for minimizing network load imbalance and latency
A system and method for improving a network topology to minimize load imbalance and latency between network nodes is provided. A network performance and configuration tool receives performance data from payload handling nodes, including radio stations, transport nodes and payload gateways to calculate a current network condition related to the current network topology.
07/24/14
20140204743
Network latency optimization
A network may provide latency optimization by configuring respective latency values of one or more network components. A latency manager may receive a request indicative of a maximum latency value of a communications path between two devices, and may determine a particular network latency value.
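One way a latency manager like the one described could check whether a requested maximum latency for a path between two devices can be met is to compute the minimum achievable path latency, for example with Dijkstra's algorithm. The topology, names, and units below are purely illustrative assumptions:

```python
import heapq

def min_path_latency(graph, src, dst):
    """Dijkstra over per-link latencies: return the smallest total
    latency achievable between src and dst, or None if unreachable."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue                          # stale heap entry
        for nbr, lat in graph.get(node, []):
            nd = d + lat
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return None

# Hypothetical topology: link latencies in milliseconds.
graph = {"a": [("b", 5), ("c", 20)], "b": [("d", 5)], "c": [("d", 1)]}
best = min_path_latency(graph, "a", "d")
print(best)        # 10
print(best <= 15)  # True: a 15 ms maximum-latency request can be satisfied
```

If the minimum achievable latency already exceeds the requested maximum, the manager knows no configuration of that path can satisfy the request.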
07/24/14
20140204102
Using graphics processing units in control and/or data processing systems
A graphics processing unit (GPU) can be used in control and/or data processing systems that require high speed data processing with low input/output latency (i.e., fast transfers into and out of the GPU). Data and/or control information can be transferred directly to and/or from the GPU without involvement of a central processing unit (CPU) or a host memory.
07/24/14
20140204002
Virtual interaction with image projection
Embodiments that relate to providing a low-latency interaction in a virtual environment are provided. In one embodiment an initial image of a hand and initial depth information representing an initial actual position are received.
07/17/14
20140201772
Systems and methods for addressing a media database using distance associative hashing
A system, method and computer program utilize a distance associative hashing algorithm to provide a highly efficient means to rapidly address a large database. The indexing means can be readily subdivided into a plurality of independently-addressable segments, where each such segment can address a portion of related data of the database and the sub-divided indexes of said portions reside entirely in the main memory of each of a multiplicity of server means.
07/17/14
20140201603
Systems, methods, apparatus, and computer program products for providing forward error correction with low latency
Systems, methods, apparatus, and computer program products for providing forward error correction with low latency to live streams in networks are provided. One example method includes receiving source data at a first rate, outputting the source data at a rate less than the first rate, collecting the source data in a buffer, FEC decoding the source data to generate decoded data, and outputting the decoded data at a rate equal to the first rate, either after collecting the source data in the buffer for a predetermined time duration or after collecting a predetermined amount of the source data in the buffer.
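The startup arithmetic implied by this method (receive at a first rate, output at a reduced rate while the buffer fills) can be sketched as a back-of-the-envelope calculation; the rates and duration are hypothetical, and real FEC decoding is omitted:

```python
def startup_buffer_kbits(first_rate_kbps, reduced_rate_kbps, buffer_ms):
    """While output runs at the reduced rate for buffer_ms, the amount
    of source data retained for FEC decoding is the rate difference
    times the buffering duration. Purely illustrative bookkeeping."""
    return (first_rate_kbps - reduced_rate_kbps) * buffer_ms / 1000

# Receiving at 1000 kbps while outputting at 800 kbps for 500 ms
# leaves 100 kbits of source data buffered for FEC decoding.
print(startup_buffer_kbits(1000, 800, 500))  # 100.0
```

The point of the reduced-rate startup is that the buffer fills without ever stalling output, so the viewer sees continuous playback while FEC protection builds up.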
07/17/14
20140201586
Latency
The invention relates to an apparatus including at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: prepare a transmission of a no-acknowledgement message to be conveyed to a node, when a part of a transport block has been received erroneously or an indication of inadequate quality of at least part of the transport block has been obtained.
07/17/14
20140201471
Arbitrating memory accesses via a shared memory fabric
In an embodiment, a shared memory fabric is configured to receive memory requests from multiple agents, where at least some of the requests have an associated deadline value to indicate a maximum latency prior to completion of the memory request. Responsive to the requests, the fabric is to arbitrate between the requests based at least in part on the deadline values.
07/17/14
20140201306
Remote direct memory access with reduced latency
The present disclosure provides systems and methods for remote direct memory access (RDMA) with reduced latency. RDMA allows information to be transferred directly between memory buffers in networked devices without the need for substantial processing.
07/17/14
20140201259
Method and apparatus of using separate reverse channel for user input in mobile device display replication
A method of controlling media content between a portable device and a head control unit. A first link is initiated for transmitting control signals between a control client and a control server.
07/17/14
20140199072
System and method for network synchronization and frequency dissemination
Distribution of reference frequency and timing information in a network involves determining latency between a first and second node from time delay between transmission of a reference frequency and timing signal and reception of an optical return timing signal in response. In a network with pairs of first and second optical fibers in optical fiber connections between network nodes, for transmission of optical data signals separately in mutually opposite directions between the network nodes respectively, provisions are made to transmit the reference frequency and timing signal and the resulting optical return signal via the same fiber, one in the same direction as the unidirectional data signal over that fiber and the other upstream.
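Because the timing signal and its optical return travel over the same fiber, the path is symmetric and the one-way latency is simply half the measured round trip. A minimal sketch (hypothetical function name, times in seconds):

```python
def one_way_latency(t_sent, t_return_received):
    """Toy sketch: one-way latency from a round-trip measurement
    over a symmetric path (same fiber in both directions)."""
    round_trip = t_return_received - t_sent
    return round_trip / 2.0
```

The same-fiber provision matters precisely because it justifies the divide-by-two: with separate fibers of unequal length, the two directions could have different delays.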
07/17/14
20140198839
Low latency sub-frame level video decoding
A method includes transmitting encoded video data related to video frames of a video stream from a source to a client device through a network such that a packet of the encoded video data is limited to including data associated with one portion of a video frame. The video frame includes a number of portions including the one portion.
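The one-portion-per-packet constraint can be sketched as below (hypothetical names): each packet carries data for exactly one portion of one frame, so the client can start decoding a portion as soon as its packet arrives, without waiting for the whole frame.

```python
def packetize(frame_portions, frame_id):
    """Toy sketch: emit one packet per frame portion (e.g. a slice),
    tagged with the frame and portion indices so the receiver can
    decode portions independently."""
    return [
        {"frame": frame_id, "portion": i, "data": d}
        for i, d in enumerate(frame_portions)
    ]


pkts = packetize([b"slice0", b"slice1", b"slice2"], frame_id=7)
print(len(pkts))
```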
07/17/14
20140198837
Methods and systems for chip-to-chip communication with reduced simultaneous switching noise
Systems and methods are described for transmitting data over physical channels to provide a high speed, low latency interface such as between a memory controller and memory devices with significantly reduced or eliminated simultaneous switching output noise. Controller-side and memory-side embodiments of such channel interfaces are disclosed which do not require additional pin count or data transfer cycles, have low power utilization, and introduce minimal additional latency.
07/17/14
20140198789
Low latency in-line data compression for packet transmission systems
Deep packet inspection (DPI) techniques are utilized to provide data compression, which is particularly necessary in many bandwidth-limited communication systems. A separate processor is initially used within a transmission source to scan, in real time, a data packet stream and recognize repetitive patterns occurring in the data.
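A minimal sketch of the substitution step, assuming the repetitive patterns have already been identified by the inspection pass and shared with the receiver (all names hypothetical): each pattern is replaced in-line by a short token, and the receiver reverses the mapping.

```python
def compress(stream, patterns):
    """Toy sketch: replace known repetitive patterns with short
    tokens. `patterns` maps token -> pattern and must be shared
    out-of-band with the receiver."""
    out = stream
    for token, pattern in patterns.items():
        out = out.replace(pattern, token)
    return out


def decompress(stream, patterns):
    """Reverse the substitution using the same shared table."""
    out = stream
    for token, pattern in patterns.items():
        out = out.replace(token, pattern)
    return out
```

This naive version assumes tokens never occur naturally in the stream; a real in-line compressor would use an escaping scheme to keep the mapping unambiguous.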
07/17/14
20140198692
Securing transmit openings by the requester
Techniques for securing transmit opening help enhance the operation of a station that employs the technique. The techniques may facilitate low latency response to a protocol data requester, for instance.
07/17/14
20140198638
Low-latency lossless switch fabric for use in a data center
In one embodiment, a system includes a switch configured for communicating with a low-latency switch and a buffered switch, the switch having a processor adapted for executing logic, logic adapted for receiving a packet at an ingress port of a switch, logic adapted for receiving congestion information, logic adapted for determining that at least one congestion condition is met based on at least the congestion information, logic adapted for applying a packet forwarding policy to the packet when the at least one congestion condition is met, logic adapted for forwarding the packet to a buffered switch when the packet satisfies the packet forwarding policy, and logic adapted for forwarding the packet to a low-latency switch when the at least one congestion condition is not met.. .
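The forwarding decision above reduces to a small rule, sketched here with hypothetical names: packets take the low-latency path unless a congestion condition is met and the forwarding policy matches, in which case they divert to the buffered switch.

```python
def choose_path(congestion_met, matches_policy):
    """Toy sketch of the forwarding decision: divert to the buffered
    switch only when a congestion condition is met AND the packet
    satisfies the forwarding policy; otherwise stay low-latency."""
    if congestion_met and matches_policy:
        return "buffered"
    return "low-latency"
```

Keeping the common (uncongested) case on the cut-through path is what makes the fabric both lossless and low-latency.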



This listing is a sample of recent patent applications related to Latency and is not a comprehensive history. There may be servicemarks and trademarks associated with these patents. Please check with a patent attorney if you need further assistance or plan to use them for business purposes. This patent data is also published to the public by the USPTO and is available for free on its website. Note that there may be alternative spellings of Latency with additional patents listed. Browse our RSS directory or search for other possible listings.