
Latency patents



      
           
This page is updated frequently with new latency-related patent applications. Subscribe to the Latency RSS feed to receive updates automatically.



List of recent latency-related patents (date / application number)
08/28/14
20140245295
Providing dynamic topology information in virtualized computing environments
Systems and methods for providing dynamic processor topology information to a virtual machine hosted by a multi-processor computer system supporting non-uniform memory access (NUMA). An example method may comprise assigning a unique identifier to a virtual processor, determining that the virtual processor has been moved from a first physical processor to a second physical processor, determining a memory access latency value for the second physical processor, and updating an element of a data structure storing memory access latency information with the memory access latency value of the second physical processor, the element identified by the unique identifier of the virtual processor.
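As an illustration of the per-vCPU latency table this abstract describes, here is a minimal sketch in Python; all class and method names are invented for illustration and do not come from the patent:

```python
# Hypothetical sketch of the latency table described in the abstract above.
# Each virtual processor's unique ID indexes one element of the table; the
# element is refreshed whenever the vCPU moves to a new physical processor.

class TopologyTable:
    def __init__(self):
        self._latency_ns = {}  # vCPU unique ID -> memory access latency (ns)

    def on_vcpu_migrated(self, vcpu_id, new_physical_cpu, measure):
        # Determine the latency on the new physical processor and update
        # the element identified by the vCPU's unique identifier.
        self._latency_ns[vcpu_id] = measure(new_physical_cpu)

    def latency(self, vcpu_id):
        return self._latency_ns[vcpu_id]
```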
08/28/14
20140245219
Predictive pre-decoding of encoded media item
Displaying a plurality of encoded media items on a device includes: detecting that a first scrolling action has been completed; determining a predicted next encoded media item to be displayed; obtaining the predicted next encoded media item from a first memory; pre-decoding the predicted next encoded media item to generate a pre-decoded media item; storing the pre-decoded media item in a second memory, the second memory having lower latency than the first memory; receiving an indication that a second scrolling action has begun; and in response to the second scrolling action, displaying the pre-decoded media item via a display interface.
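A hedged sketch of the two-tier flow described above, with invented names (`PreDecoder`, `on_scroll_completed`) and a trivial decode function; the patent itself specifies no API:

```python
# Illustrative sketch: encoded items live in a slower first memory, while
# predicted items are decoded ahead of time into a lower-latency second
# memory so the display path finds them ready.

class PreDecoder:
    def __init__(self, encoded_store, decode):
        self.encoded_store = encoded_store   # first (slower) memory
        self.fast_cache = {}                 # second (lower-latency) memory
        self.decode = decode

    def on_scroll_completed(self, predicted_next_id):
        # Obtain the predicted next item and pre-decode it.
        if predicted_next_id not in self.fast_cache:
            encoded = self.encoded_store[predicted_next_id]
            self.fast_cache[predicted_next_id] = self.decode(encoded)

    def on_scroll_begun(self, item_id):
        # Display path: already decoded if the prediction was right.
        return self.fast_cache.get(item_id)
```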
08/28/14
20140245114
Encapsulation for link layer preemption
Devices implement encapsulation to support link layer preemption. The device may include encapsulation logic that encapsulates data, such as an Ethernet frame, to produce an encapsulated frame.
08/28/14
20140245104
Latency reduced error correction scheme with error indication function for burst error correction codes
The present disclosure provides a decoding method, decoding apparatus and decoder for correcting burst errors. In particular, the decoding method for correcting burst errors comprises: computing an initial syndrome of a received data frame, wherein the data frame is encoded according to cyclic codes for correcting burst errors; determining error correctability of a burst error contained in the data frame based on the computed initial syndrome; and processing the burst error in the data frame and outputting the processed data frame based on the determined error correctability.
08/28/14
20140244938
Method and apparatus for returning reads in the presence of partial data unavailability
Techniques are disclosed for reducing perceived read latency. Upon receiving a read request with a scatter-gather array from a guest operating system running on a virtual machine (VM), an early read return virtualization (ERRV) component of a virtual machine monitor fills the scatter-gather array with data from a cache and data retrieved via input-output requests (IOs) to media.
08/28/14
20140244924
Load reduction dual in-line memory module (LRDIMM) and method for programming the same
A method is disclosed for providing memory bus timing of a load reduction dual inline memory module (LRDIMM). The method includes: determining a latency value of a dynamic random access memory (DRAM) of the LRDIMM; determining a modified latency value of the DRAM that accounts for a delay caused by a load reduction buffer (LRB) that is deployed between the DRAM and a memory bus; storing the modified latency value in a serial presence detect (SPD) of the LRDIMM; and providing memory bus timing for the LRDIMM based on the modified latency value, wherein the memory bus timing is compatible with a registered dual inline memory module (RDIMM).
08/28/14
20140244914
Mitigate flash write latency and bandwidth limitation
A method of operating a memory system is provided. The method includes a controller that regulates read and write access to one or more flash memory devices that are employed for random access memory applications.
08/28/14
20140244891
Providing dynamic topology information in virtualized computing environments
Systems and methods for providing dynamic topology information to virtual machines hosted by a multi-processor computer system supporting non-uniform memory access (NUMA). An example method may comprise assigning, by a hypervisor executing on a computer system, unique identifiers to a plurality of memory blocks residing on a plurality of physical nodes; determining that a memory block has been moved from a first physical node to a second physical node; determining memory access latency values to the second physical node by a plurality of virtual processors of the computer system; and updating, using a unique identifier of the memory block, a data structure storing memory access latency information with the memory access latency values for the memory block.
08/28/14
20140244879
SAS latency-based routing
Techniques for operating a Serial Attached SCSI (SAS) expander that includes a latency table comprising entries of outbound PHYs with latency values associated with connections between inbound PHYs and outbound PHYs. A storage management module, in response to receipt of a command from an initiator device associated with an inbound PHY to route data to a target device associated with an outbound PHY, selects from the latency table a random outbound PHY from among a plurality of outbound PHYs, wherein the random selection is based on a weighted average of latency values of the outbound PHY entries of the latency table.
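One plausible reading of the weighted-random selection above is to weight each outbound PHY by the inverse of its latency, so faster paths are favored without always starving the slower ones; the exact weighting in the patent may differ. A sketch with invented names:

```python
import random

# Hypothetical sketch of latency-weighted random PHY selection. The latency
# table maps outbound PHY IDs to latency values; lower latency yields a
# proportionally higher selection weight (an assumption, not the patent's
# exact formula).

def pick_outbound_phy(latency_table, rng=random):
    phys = list(latency_table)
    weights = [1.0 / latency_table[p] for p in phys]
    return rng.choices(phys, weights=weights, k=1)[0]
```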
08/28/14
20140244648
Geographical data storage assignment based on ontological relevancy
System and method for distributing channelized content to a plurality of cohesive local networks. The channelized content is aggregated at a remote central processor with associated storage according to ontological relevancy, and then distributed over a network such as the internet to the local networks.
08/28/14
20140244619
Intelligent data caching for typeahead search
Techniques for providing low latency incremental search results are disclosed herein. According to one embodiment, a method for incremental search includes receiving a first search query from a user, obtaining a plurality of first search results in response to the first search query from an index server, determining whether the plurality of first search results are a substantially exhausted list of results for the first search query, and caching the plurality of first search results in a cache storage if the plurality of first search results are the substantially exhausted list of results for the first search query.
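The caching rule above can be sketched as follows: only an exhausted result list is cached, because longer queries extending that prefix can then be answered by filtering it locally instead of hitting the index server. All names here are illustrative assumptions:

```python
# Hypothetical sketch of exhausted-list caching for typeahead search.
# index_search is assumed to return at most page_size results; fewer than
# page_size results is treated as an exhausted list.

class TypeaheadCache:
    def __init__(self, index_search, page_size):
        self.index_search = index_search
        self.page_size = page_size
        self.cache = {}  # prefix -> exhausted result list

    def search(self, query):
        # Serve longer queries from a cached exhausted prefix if possible.
        for prefix, results in self.cache.items():
            if query.startswith(prefix):
                return [r for r in results if query in r]
        results = self.index_search(query)
        if len(results) < self.page_size:  # exhausted list of results
            self.cache[query] = results
        return results
```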
08/28/14
20140241711
Low latency data transmission network
A communications network having reduced transmission latency and improved reliability is described. To reduce signal transmission latency, network management data is removed from a data stream to prioritize the transmission of payload data at higher transmission rates.
08/28/14
20140241443
Bi-modal arbitration nodes for a low-latency adaptive asynchronous interconnection network and methods for using the same
A dynamically reconfigurable asynchronous arbitration node for use in an adaptive asynchronous interconnection network is provided. The arbitration node includes a circuit, an output channel and two input channels—a first input channel and a second input channel.
08/28/14
20140241366
Comprehensive multipath routing for congestion and quality-of-service in communication networks
A packet routing method includes computing, for each source node in the data network and each destination node in the data network, a set of multiple routes providing a full range of performance from the source node to the destination node. The multiple routes are preferably precomputed and stored.
08/28/14
20140241267
Systems and methods for reduced latency when establishing communication with a wireless communication system
Systems and methods reduce latency associated with establishing communication on a wireless network. In one aspect, an access point determines interface identifiers for associated stations.
08/28/14
20140241266
Systems and methods for reduced latency when establishing communication with a wireless communication system
Systems and methods reduce latency associated with establishing communication on a wireless network. In one aspect, an access point determines interface identifiers for associated stations.
08/28/14
20140241160
Scalable, low latency, deep buffered switch architecture
A switch architecture includes an ingress module, ingress fabric interface module, and a switch fabric. The switch fabric communicates with egress fabric interface modules and egress modules.
08/28/14
20140241103
Semiconductor device having CAL latency function
A method for accessing a semiconductor device having a memory array includes receiving a mode register command to set a command latency value in a mode register, receiving a chip select signal, activating a command receiver in response to the chip select signal, receiving, with the command receiver, an access command with a first latency from the chip select signal equal to the command latency value, accessing the memory array in response to the access command, and deactivating the command receiver with a second latency from the chip select signal equal to a deactivation latency value.
08/28/14
20140240328
Techniques for low energy computation in graphics processing
Techniques and architecture are disclosed for using a latency first-in/first-out (FIFO) buffer to modally enable and disable a compute block in a graphics pipeline. In some example embodiments, the latency FIFO collects valid accesses for a downstream compute and integrates invalid inputs (e.g., bubbles) while the compute is in an off state (e.g., sleep).
08/28/14
20140240326
Method, apparatus, system for representing, specifying and using deadlines
In an embodiment, a shared memory fabric is configured to receive memory requests from multiple agents, where at least some of the requests have an associated order identifier and a deadline value to indicate a maximum latency prior to completion of the memory request. Responsive to the requests, the fabric is to arbitrate between the requests based at least in part on the deadline values.
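A minimal sketch of deadline-based arbitration as described above, serving whichever pending request has the nearest deadline; the field names and the arrival-order tie-breaking rule are assumptions, not taken from the patent:

```python
import heapq

# Hypothetical sketch of a deadline-aware arbiter: each request carries a
# deadline (maximum latency before completion) plus an order identifier,
# and the fabric grants the most urgent request first.

class DeadlineArbiter:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving arrival order

    def submit(self, deadline, order_id, request):
        heapq.heappush(self._heap, (deadline, self._seq, order_id, request))
        self._seq += 1

    def grant(self):
        # Serve the request whose deadline expires soonest.
        deadline, _, order_id, request = heapq.heappop(self._heap)
        return order_id, request
```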
08/21/14
20140237215
Methods and apparatus for scalable array processor interrupt detection and response
Hardware and software techniques for interrupt detection and response in a scalable pipelined array processor environment are described. Utilizing these techniques, a sequential program execution model with interrupts can be maintained in a highly parallel scalable pipelined array processor containing multiple processing elements and distributed memories and register files.
08/21/14
20140237113
Decentralized input/output resource management
A shared input/output (IO) resource is managed in a decentralized manner. Each of multiple hosts having IO access to the shared resource computes an average latency value that is normalized with respect to average IO request sizes, and stores the computed normalized latency value for later use.
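The normalization described above can be sketched as average latency divided by average IO request size, so hosts issuing different request sizes remain comparable; units and field names are assumptions:

```python
# Hypothetical sketch of size-normalized latency for one host.

def normalized_latency(samples):
    # samples: list of (latency_us, request_size_kb) pairs observed locally
    avg_latency = sum(l for l, _ in samples) / len(samples)
    avg_size = sum(s for _, s in samples) / len(samples)
    # Normalize so a host issuing large IOs is not penalized for the
    # correspondingly longer transfers.
    return avg_latency / avg_size
```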
08/21/14
20140236518
Systems and methods for low latency 3-axis accelerometer calibration
Systems and methods for low-latency calibration of the alignment of 3-axis accelerometers in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, a telematics system includes a processor, an acceleration sensor, a velocity sensor, and a memory configured to store an acceleration alignment application, wherein the acceleration alignment application configures the processor to determine vehicular forward acceleration information and vehicular lateral acceleration information, and to calculate a lateral acceleration vector, a forward acceleration vector, and a vertical acceleration vector using a forward incline vector and a lateral incline vector determined using the vehicular forward acceleration information and vehicular lateral acceleration information.
08/21/14
20140235990
Machine-based patient-specific seizure classification system
This disclosure is directed to a machine-based patient-specific seizure classification system. In general, an example system may comprise a non-linear SVM seizure classification system-on-chip (SoC) with multichannel EEG data acquisition and storage for epileptic patients.
08/21/14
20140233680
MAP decoder having low latency and operation method of the same
Provided is a maximum a posteriori (MAP) decoder having a low latency and an operation method of the MAP decoder, including a branch metric calculation block to calculate a branch metric based on a received signal, a processor control block to demultiplex a received signal in a certain trellis section, an extrinsic vector, and the calculated branch metric value, and a processor to calculate a path metric entering each state node in a certain trellis section, compensate for the calculated path metric, and calculate a state metric to be applied to a next trellis section based on the compensated path metric.
08/21/14
20140233673
Apparatus and method for communicating data over a communication channel
For some applications such as high-speed communication over short-reach links, the complexity and associated high latency provided by existing modulators may be unsuitable. According to an aspect, the present disclosure provides a modulator that can reduce latency for applications such as 400/1000 communication over copper cables or SMF.
08/21/14
20140233583
Packet processing with reduced latency
Generally, this disclosure provides devices, methods and computer readable media for packet processing with reduced latency. The device may include a data queue to store data descriptors associated with data packets, the data packets to be transferred between a network and a driver circuit.
08/21/14
20140233481
Search space reconfiguration for enhanced-pdcch
The present invention relates to rapid search space reconfiguration for E-PDCCH (enhanced physical downlink control channel) in a wireless communication system to avoid flashlight interference from neighbouring cells and to allocate the E-PDCCH on the best physical resource blocks (PRBs) in frequency fluctuation dominated scenarios. To this end, a method for providing low-latency feedback on a reconfiguration attempt of a search space for an enhanced-PDCCH, and a corresponding apparatus, are provided.
08/21/14
20140233402
Wireless network message prioritization technique
A computer-implemented character-string analysis method and associated device are provided for determining reliability of a communication network. The procedures include: assigning a priority to a message for sending by a signal carrier at the transmitting node; determining throughput of the signal carrier based on the priority; establishing latency of the message; calculating a packet error rate of the message; calculating dropout time between the transmitting and receiving nodes; and calculating a reliability index of the message.
08/14/14
20140229741
Dual composite field advanced encryption standard memory encryption engine
A different set of polynomials may be selected for encryption and decryption accelerators. That is, different sets of polynomials are used for encryption and decryption, each set being chosen to use less area and deliver more power for a memory encryption engine.
08/14/14
20140229700
Systems and methods for accommodating end of transfer request in a data storage device
Systems and methods for data processing, particularly related to addressing latency concerns in relation to data processing.
08/14/14
20140229677
Hiding instruction cache miss latency by running tag lookups ahead of the instruction accesses
This disclosure provides techniques and apparatuses to enable early, run-ahead handling of IC and ITLB misses by decoupling the ITLB and IC tag lookups from the IC data (instruction bytes) accesses, and making ITLB and IC tag lookups run ahead of the IC data accesses. This allows overlapping the ITLB and IC miss stall cycles with older instruction byte reads or older IC misses, resulting in fewer stalls than previous implementations and improved performance.
08/14/14
20140229645
Credit-based low-latency arbitration with data transfer
An apparatus includes multiple data sources and arbitration circuitry. The data sources are configured to send to a common destination data items and respective arbitration requests, such that the data items are sent to the destination regardless of receiving any indication that the data items were served to the destination in response to the respective arbitration requests.
08/14/14
20140229641
Method and apparatus for latency reduction
Aspects of the disclosure provide an integrated circuit that includes a plurality of input/output (IO) circuits, an instruction receiving circuit and control circuits. The IO circuits are configured to receive a plurality of bit streams corresponding to an instruction to the integrated circuit.
08/14/14
20140229608
Parsimonious monitoring of service latency characteristics
Various exemplary embodiments relate to a method of evaluating cloud network performance. The method includes: measuring a latency of a plurality of service requests in a cloud-network; determining a mean latency; and determining a variance of the plurality of service requests; comparing the mean latency to a first threshold; comparing the variance to a second threshold; and determining that the cloud-network is deficient if either the mean latency exceeds the first threshold or the variance exceeds the second threshold..
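The two-threshold test described above is straightforward to sketch; the threshold values and the use of population variance here are illustrative assumptions:

```python
import statistics

# Hypothetical sketch of the deficiency check: the cloud network is flagged
# deficient if either the mean latency or the variance of the sampled
# service-request latencies exceeds its threshold.

def is_deficient(latencies, mean_threshold, variance_threshold):
    return (statistics.mean(latencies) > mean_threshold
            or statistics.pvariance(latencies) > variance_threshold)
```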
08/14/14
20140229524
Network communication latency
Embodiments disclosed herein may relate to at least partially addressing latency, such as network communication latency, as may occur between a client and server, for example.
08/14/14
20140229446
Method and system for selecting amongst a plurality of processes to send a message
In accordance with embodiments, there are provided mechanisms and methods for selecting amongst a plurality of processes to send a message (e.g., a message for updating an endpoint system).
08/14/14
20140226980
Low latency multiplexing for optical transport networks
Techniques for multiplexing and demultiplexing signals for optical transport networks are presented. A network component comprises a multiplexer component that multiplexes a plurality of signals having a first signal format to produce a multiplexed signal in accordance with a second signal format, while maintaining error correction code (ECC) of such signals and without decoding such signals and associated ECC.
08/14/14
20140226830
Method for operating a hearing device and hearing device
A hearing device comprises a receiver, an input buffer and a sample processor, the receiver being adapted to receive samples of a digital audio signal and feed received samples as a digital input signal to the input buffer, the sample processor being adapted to process the buffered samples to provide samples of a digital output signal such that the digital output signal is a sample-rate converted representation of the digital input signal with a predetermined target sample rate. The hearing device further comprises a latency controller adapted to estimate the quality of reception of the digital audio signal and to control the processing of the buffered samples in dependence on the estimated quality of reception..
08/14/14
20140226734
Methods and systems for high bandwidth chip-to-chip communications interface
Systems and methods are described for transmitting data over physical channels to provide a high bandwidth, low latency interface between integrated circuit chips with low power utilization. Communication is performed using group signaling over multiple wires using a vector signaling code, where each wire carries a low-swing signal that may take on more than two signal values.
08/14/14
20140226660
Reducing the maximum latency of reserved streams
An embodiment may include circuitry that may facilitate, at least in part, assignment, at least in part, of at least one bandwidth reservation for at least one packet stream and/or at least one stream reservation class. The at least one bandwidth reservation may be greater than an expected communication bandwidth of the at least one packet stream.
08/14/14
20140226024
Camera control in presence of latency
In various implementations, systems and processes may be utilized to reduce the effect of latency in teleoperated cameras. For example, various processes may be utilized to determine latency periods and generate user interfaces based on the latency periods and/or inputs provided to the systems.
08/14/14
20140225561
Charging device
A charging device is provided with a charger, a charging time setting unit and a usage latency calculation unit. The charger is configured to charge a vehicle battery.
08/07/14
20140223254
QC-LDPC convolutional codes enabling low power trellis-based decoders
A low-density parity check (LDPC) encoding method for increasing constraint length includes determining an LDPC code block H-matrix including a systematic submatrix (Hsys) of input systematic data and a parity check submatrix (Hpar) of parity check bits. The method includes diagonalizing the parity check submatrix (Hpar).
08/07/14
20140223210
Tunable sector buffer for wide bandwidth resonant global clock distribution
A wide bandwidth resonant clock distribution comprises a clock grid configured to distribute a clock signal to a plurality of components of an integrated circuit and a tunable sector buffer configured to receive the clock signal and provide an output to the clock grid. The tunable sector buffer is configured to set latency and slew rate of the clock signal based on an identified resonant or non-resonant mode.
08/07/14
20140223144
Load latency speculation in an out-of-order computer processor
Load latency speculation in an out-of-order computer processor, including: issuing a load instruction for execution, wherein the load instruction has a predetermined expected execution latency; issuing a dependent instruction wakeup signal on an instruction wakeup bus, wherein the dependent instruction wakeup signal indicates that the load instruction will be completed upon the expiration of the expected execution latency; determining, upon the expiration of the expected execution latency, whether the load instruction has completed; and responsive to determining that the load instruction has not completed upon the expiration of the expected execution latency, issuing a negative dependent instruction wakeup signal on the instruction wakeup bus, wherein the negative dependent instruction wakeup signal indicates that the load instruction has not completed upon the expiration of the expected execution latency.
08/07/14
20140223143
Load latency speculation in an out-of-order computer processor
Load latency speculation in an out-of-order computer processor, including: issuing a load instruction for execution, wherein the load instruction has a predetermined expected execution latency; issuing a dependent instruction wakeup signal on an instruction wakeup bus, wherein the dependent instruction wakeup signal indicates that the load instruction will be completed upon the expiration of the expected execution latency; determining, upon the expiration of the expected execution latency, whether the load instruction has completed; and responsive to determining that the load instruction has not completed upon the expiration of the expected execution latency, issuing a negative dependent instruction wakeup signal on the instruction wakeup bus, wherein the negative dependent instruction wakeup signal indicates that the load instruction has not completed upon the expiration of the expected execution latency.
08/07/14
20140223105
Method and apparatus for cutting senior store latency using store prefetching
In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for cutting senior store latency using store prefetching. For example, in one embodiment, such means may include an integrated circuit or an out of order processor means that processes out of order instructions and enforces in-order requirements for a cache.
08/07/14
20140223098
Dynamic management of heterogenous memory
A method of operating a computing device includes dynamically managing at least two types of memory based on workloads, or requests from different types of applications. A first type of memory may be high performance memory that may have a higher bandwidth, lower memory latency and/or lower power consumption than a second type of memory in the computing device.
08/07/14
20140223097
Data storage system and data storage control device
A storage system has a plurality of control modules for controlling a plurality of storage devices, which makes mounting easier while maintaining low-latency response even if the number of control modules increases. A plurality of storage devices are connected to the second interface of each control module using back-end routers, so that redundancy for all the control modules to access all the storage devices is maintained.
08/07/14
20140223091
System and method for management of unique alpha-numeric order message identifiers within DDR memory space
An embedded hardware-based risk system is provided that has an apparatus and method for the management of unique alpha-numeric order message identifiers within DDR memory space restrictions. The apparatus provides a new design for assigning orders (clorid) to memory, and the method thereof, specifically with the intention of not impacting latency until memory is over 90% full.
08/07/14
20140223071
Method and system for reducing write latency in a data storage system by using a command-push model
A data storage system is provided that implements a command-push model that reduces latencies. The host system has access to a nonvolatile memory (NVM) device of the memory controller to allow the host system to push commands into a command queue located in the NVM device.
08/07/14
20140223054
Memory buffering system that improves read/write performance and provides low latency for mobile systems
A memory buffering system is disclosed that arbitrates bus ownership through an arbitration scheme for memory elements in chain architecture. A unified host memory controller arbitrates bus ownership for transfer to a unified memory buffer and other buffers within the chain architecture.
08/07/14
20140222397
Front-end signal generator for hardware in-the-loop simulation
A front-end signal generator for hardware-in-the-loop simulators of a simulated missile is disclosed. The front-end signal generator is driven by the digital scene and reticle simulation-hardware in the loop (DSARS-HITL) simulator.
08/07/14
20140222101
Determination of sleep quality for neurological disorders
A device determines values for one or more metrics that indicate the quality of a patient's sleep based on sensed physiological parameter values. Sleep efficiency, sleep latency, and time spent in deeper sleep states are example sleep quality metrics for which values may be determined.
08/07/14
20140220976
Resource allocation method and apparatus for use in wireless communication system
A resource allocation method and apparatus are provided for allocating resources for handover between sectors of a cell with handover latency. A resource allocation method of a base station in a wireless communication system includes receiving a handover request message from a terminal for handover from a first sector to a second sector within a cell and transmitting an allocation message for allocating a mobile allocation index offset (MAIO) to the terminal, the MAIO being identical to a MAIO used in the first sector.
08/07/14
20140219497
Temporal winner takes all spiking neuron network sensory processing apparatus and methods
Apparatus and methods for contrast enhancement and feature identification. In one implementation, an image processing apparatus utilizes latency coding and a spiking neuron network to encode image brightness into spike latency.
08/07/14
20140219284
Method and system for reduction of time variance of packets received from bonded communication links
Method and system for reduction of time variance of packets received from bonded communication links. Embodiments of the present invention can be applied to bonded communication links, including wireless connections, Ethernet connections, Internet Protocol connections, Asynchronous Transfer Mode, virtual private networks, WiFi, High-Speed Downlink Packet Access, GPRS, LTE, X.25, etc.
08/07/14
20140219196
Method and apparatus for detecting inconsistent control information in wireless communication systems
In order to support low latency and bursty internet data traffic, the 3GPP LTE wireless communication system uses dynamic allocation. To keep the allocation overhead lower, the system is designed such that the client terminal must perform a number of decoding attempts to detect resource allocations.
08/07/14
20140218350
Power management of display controller
In general, in one aspect, a display controller has non-essential portions powered off for a portion of vertical blanking interval (VBI) periods to conserve power. The portion takes into account overhead for housekeeping functions and memory latency for receiving a first packet of pixels for a frame to be decoded during a next active period.
08/07/14
20140218087
Wide bandwidth resonant global clock distribution
A wide bandwidth resonant clock distribution comprises a clock grid configured to distribute a clock signal to a plurality of components of an integrated circuit, a tunable sector buffer configured to receive the clock signal and provide an output to the clock grid, at least one inductor, at least one tunable resistance switch, and a capacitor network. The tunable sector buffer is programmable to set latency and slew rate of the clock signal.
07/31/14
20140215539
Apparatus and methods for catalog data distribution
Methods and apparatus for providing enhanced catalog data capacity and functionality within a content distribution network. In one embodiment, the apparatus includes an architecture configured to utilize an upstream out-of-band (oob) channel for upstream catalog data request carriage, while a downstream in-band (ib) qam is used for carriage of the catalog data issued by the network in response to the request.
07/31/14
20140215475
System and method for supporting work sharing muxing in a cluster
A system and method can provide efficient low-latency muxing between servers in the cluster. One such system can include a cluster of one or more high performance computing systems, each including one or more processors and a high performance memory.
07/31/14
20140215461
Low-latency fault-tolerant virtual machines
A system and method are disclosed for managing a plurality of virtual machines (vms) in a fault-tolerant and low-latency manner. In accordance with one example, a computer system executes a first vm and a second vm, and creates a first live snapshot of the first vm and a second live snapshot of the second vm.
07/31/14
20140215446
Automated porting of application to mobile infrastructures
Techniques to automatically port applications to a mobile infrastructure using code separation with semantic guarantees are disclosed. Porting enterprise applications from one architecture to a target architecture is effected via separating code into constituent layers of the target architecture.
07/31/14
20140215177
Methods and systems for managing heterogeneous memories
A system includes a processor and first and second memories coupled to the processor. The first and second memories have a hardware attribute, such as bandwidth, latency and/or power consumption, wherein a first value of the hardware attribute of the first memory is different from a second value of the hardware attribute of the second memory.
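Managing heterogeneous memories of this kind comes down to choosing a placement by hardware attribute. A minimal sketch, assuming each memory is described by a dictionary of attribute values (the attribute names are illustrative, not from the patent):

```python
def pick_memory(memories, attribute, prefer="min"):
    """Select the memory with the smallest (or largest) value of a
    hardware attribute such as latency, bandwidth, or power draw.

    memories: list of dicts, e.g. {"name": "DRAM", "latency_ns": 80}.
    """
    if prefer not in ("min", "max"):
        raise ValueError("prefer must be 'min' or 'max'")
    chooser = min if prefer == "min" else max
    return chooser(memories, key=lambda m: m[attribute])

mems = [
    {"name": "DRAM", "latency_ns": 80, "bandwidth_gbps": 25},
    {"name": "NVM", "latency_ns": 300, "bandwidth_gbps": 6},
]
low_latency = pick_memory(mems, "latency_ns")           # DRAM
high_bandwidth = pick_memory(mems, "bandwidth_gbps", "max")  # DRAM
```

A real allocator would weigh several attributes at once; this shows only the single-attribute case.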
07/31/14
20140215146
Apparatus, method and program product for determining the data recall order
To provide a technique for optimizing the processing order of recall requests so that the average latency of a host apparatus is minimized. A storage manager accepts a request from the host apparatus for recalling data from a tape library, and stores the request in a queue table.
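The optimization hinted at here is a classic scheduling result: serving queued recall requests shortest-estimated-time-first minimizes the average completion latency. A hedged sketch of that reordering (the tuple layout is an assumption, not the patent's data model):

```python
def order_recalls(requests):
    """Reorder queued recall requests shortest-first.

    requests: list of (request_id, estimated_service_seconds) tuples.
    Shortest-job-first ordering minimizes average completion time.
    """
    return sorted(requests, key=lambda r: r[1])

def average_latency(requests):
    """Average completion time if requests are served in list order."""
    elapsed, total = 0.0, 0.0
    for _, service_time in requests:
        elapsed += service_time
        total += elapsed
    return total / len(requests)

queue = [("a", 30), ("b", 5), ("c", 10)]
ordered = order_recalls(queue)  # [("b", 5), ("c", 10), ("a", 30)]
```

Here the reordered queue completes requests at 5, 15, and 45 seconds (average ~21.7 s) versus 30, 35, and 45 seconds (average ~36.7 s) in arrival order.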
07/31/14
20140215111
Variable read latency on a serial memory bus
Systems and/or methods are provided that facilitate employing a variable read latency on a serial memory bus. In an aspect, a memory can utilize an undefined amount of time to obtain data from a memory array and prepare the data for transfer on the serial memory bus.
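With an undefined preparation time on the memory side, the bus master cannot assume a fixed read latency; one natural host-side pattern is to poll a ready indication before clocking out the data. A minimal sketch under that assumption — `poll_status` and `read_data` are hypothetical bus-access callables, not an API from the patent:

```python
import time

def read_with_variable_latency(poll_status, read_data,
                               timeout_s=1.0, interval_s=0.001):
    """Poll a ready flag until the memory has staged the data on the
    serial bus, then perform the read. Raises TimeoutError if the
    memory never signals ready within timeout_s."""
    deadline = time.monotonic() + timeout_s
    while not poll_status():
        if time.monotonic() > deadline:
            raise TimeoutError("memory did not become ready")
        time.sleep(interval_s)
    return read_data()
```

Real serial memory buses often signal readiness in-band (e.g. via a status byte); the polling loop above only illustrates the host's side of tolerating a variable latency.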
07/31/14
20140215108
Reducing write i/o latency using asynchronous fibre channel exchange
A fcp initiator sends a fcp write command to a fcp target within a second fc exchange, and the target sends one or more fc write control ius to the initiator within a first fc exchange to request a transfer of data associated with the write command. The first and second fc exchanges are distinct from one another.
07/31/14
20140215044
Quality of service management using host specific values
In one embodiment, a latency value is determined for an input/output (io) request in a host computer of a plurality of host computers, based on the amount of time the io request spent in the host computer's issue queue. The issue queue of the host computer is used to transmit io requests to a storage system shared by the plurality of host computers.
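The latency value described here is just the residence time of a request in the host's issue queue. A minimal sketch, assuming (as an illustration, not the patent's design) that the queue records a timestamp at enqueue and reports the elapsed time at dequeue:

```python
import time

class IssueQueue:
    """Record enqueue time per io request and report a latency value
    (queue residence time in seconds) when the request is dequeued
    for transmission to the shared storage system."""

    def __init__(self):
        self._enqueued = {}

    def enqueue(self, io_id, now=None):
        self._enqueued[io_id] = time.monotonic() if now is None else now

    def dequeue(self, io_id, now=None):
        t = time.monotonic() if now is None else now
        return t - self._enqueued.pop(io_id)
```

A QoS manager could then throttle or prioritize hosts whose per-request latency values drift above a target.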
07/31/14
20140215007
Multi-level data staging for low latency data access
Techniques for facilitating and accelerating log data processing are disclosed herein. The front-end clusters generate a large amount of log data in real time and transfer the log data to an aggregating cluster.


###

This listing is a sample of recent patent applications related to Latency; it is only meant as a recent sample of applications filed, not a comprehensive history. There may be associated servicemarks and trademarks related to these patents. Please check with a patent attorney if you need further assistance or plan to use this information for business purposes. This patent data is also published to the public by the USPTO and available for free on their website. Note that there may be alternative spellings of Latency with additional patents listed. Browse our RSS directory or search for other possible listings.