|List of recent Graphics Processing Unit-related patents|
|Graphics processing unit and management method thereof|
A graphics processing unit (GPU) and a management method of the GPU are provided. The GPU includes at least one graphics engine and an engine manager.
|Method and apparatus for unifying graphics processing unit computation languages|
A method and apparatus for unifying graphics processing unit (GPU) computation languages is disclosed. The method comprises identifying a GPU of a computer system; accessing a plurality of macros representing a difference in source code between a first GPU computation language and a second GPU computation language; expanding each macro in the plurality of macros based on the identified GPU; and executing a kernel on the computer system using the expanded macros.
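The macro-expansion idea can be sketched in plain Python. All names and macro contents below are illustrative assumptions, not taken from the patent: a per-GPU table maps each macro to the source text that differs between two computation languages.

```python
# Hypothetical per-GPU macro tables papering over source-level differences
# between two GPU computation languages (loosely CUDA-like vs. OpenCL-like).
MACRO_TABLES = {
    "vendor_a": {
        "KERNEL_QUALIFIER": "__global__",
        "GLOBAL_ID": "blockIdx.x * blockDim.x + threadIdx.x",
    },
    "vendor_b": {
        "KERNEL_QUALIFIER": "__kernel",
        "GLOBAL_ID": "get_global_id(0)",
    },
}

def expand_macros(source: str, gpu: str) -> str:
    """Expand each macro for the identified GPU by textual substitution."""
    for macro, replacement in MACRO_TABLES[gpu].items():
        source = source.replace(macro, replacement)
    return source

unified = "KERNEL_QUALIFIER void add(float *a) { int i = GLOBAL_ID; a[i] += 1.0f; }"
print(expand_macros(unified, "vendor_b"))
```

A real implementation would drive the toolchain's own preprocessor rather than string substitution; the sketch only shows how one unified source can target either language once the GPU is identified.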
|Computing device and method for adjusting bus bandwidth of computing device|
In a method for adjusting bus bandwidth applied on a computing device, the computing device includes a bus controller and several graphics processing units (GPUs). The bus controller establishes a data flow of each signal channel of the peripheral component interconnect express (PCI-E) bus connected to each GPU, and obtains a total data flow of the PCI-E bus connected to each GPU according to the data flow of each of the signal channels.
|Display system for electronic device and display module thereof|
A display module of an electronic device is for showing an operation interface of a first screen on a second screen of the electronic device, and the first screen is disposed on an opposite side of the second screen. The display module includes a graphics processing unit and a screen control unit.
|Virtualized graphics processing for remote display|
User inputs are received from end user devices. The user inputs are associated with applications executing in parallel on a computer system.
|Method and system for processing nested stream events|
One embodiment of the present disclosure sets forth a technique for enforcing cross stream dependencies in a parallel processing subsystem such as a graphics processing unit. The technique involves queuing waiting events to create cross stream dependencies and signaling events to indicate completion to the waiting events.
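A CPU-side analogue of the queued wait/signal pattern can be sketched with the standard library, with threads standing in for GPU streams; this illustrates the dependency mechanism only, not the patented implementation:

```python
import threading

# Stream B queues a wait on an event; stream A signals it on completion,
# creating a cross-stream dependency: B's work cannot start before A's.
event = threading.Event()
order = []

def stream_a():
    order.append("A: produce")
    event.set()            # signal completion to any waiting streams

def stream_b():
    event.wait()           # queued wait: blocks until stream A signals
    order.append("B: consume")

tb = threading.Thread(target=stream_b); tb.start()
ta = threading.Thread(target=stream_a); ta.start()
ta.join(); tb.join()
print(order)
```

The ordering is deterministic because B appends only after the event is set, and A sets it only after appending.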
|Timing controller capable of switching between graphics processing units|
A display system is disclosed that is capable of switching between graphics processing units (GPUs). Some embodiments may include a display system, including a display, a timing controller (T-CON) coupled to the display, the T-CON including a plurality of receivers, and a plurality of GPUs, where each GPU is coupled to at least one of the plurality of receivers, and where the T-CON selectively couples only one of the plurality of GPUs to the display at a time.
|Graphics processing unit sharing between many applications|
A technique for executing a plurality of applications on a GPU. The technique involves establishing a first connection to a first application and a second connection to a second application, establishing a universal processing context that is shared by the first application and the second application, transmitting a first workload pointer to a first queue allocated to the first application, the first workload pointer pointing to a first workload generated by the first application, transmitting a second workload pointer to a second queue allocated to the second application, the second workload pointer pointing to a second workload generated by the second application, transmitting the first workload pointer to a first GPU queue in the GPU, and transmitting the second workload pointer to a second GPU queue in the GPU, wherein the GPU is configured to execute the first workload and the second workload in accordance with the universal processing context.
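A rough sketch of this queuing scheme, with Python queues standing in for the per-application GPU queues and a dict for the shared ("universal") context; all names here are illustrative assumptions:

```python
from queue import Queue

# One shared context used when executing workloads from either application.
universal_context = {"executed": []}

def submit(app_queue: Queue, workload: str) -> None:
    """Transmit a workload pointer to the application's allocated queue."""
    app_queue.put(workload)

def gpu_drain(app_queue: Queue, context: dict) -> None:
    """GPU-side consumer: execute queued workloads under the shared context."""
    while not app_queue.empty():
        context["executed"].append(app_queue.get())

q1, q2 = Queue(), Queue()          # one queue per application
submit(q1, "workload-from-app-1")
submit(q2, "workload-from-app-2")
gpu_drain(q1, universal_context)
gpu_drain(q2, universal_context)
print(universal_context["executed"])
```

The point of the shared context is that the consumer never switches contexts between the two applications' workloads, which is what makes concurrent sharing cheap.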
|Graphic card for collaborative computing through wireless technologies|
A graphics card is provided. The graphics card comprises: a graphics processing unit (GPU) for data computing; and a wireless controller for wirelessly receiving data from other graphics cards or sending data to the other graphics cards, and communicating with the GPU by bus.
|Color buffer and depth buffer compression|
In an example, a method of coding graphics data comprising a plurality of pixels includes performing, by a graphics processing unit (GPU), multi-sample anti-aliasing to generate one or more sample values for each pixel of the plurality of pixels. The method may also include determining whether pixels comprise edge pixels, where the determination comprises identifying, for each pixel, differing sample values.
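The edge-pixel determination reduces to checking whether a pixel's multi-sample values all agree; a minimal sketch with illustrative 4x-MSAA sample data (RGB tuples are assumptions for the example):

```python
# A pixel whose MSAA samples are not all equal straddles a geometric edge,
# so differing sample values identify edge pixels.
def is_edge_pixel(samples) -> bool:
    return len(set(samples)) > 1

interior = [(255, 0, 0)] * 4                        # all 4 samples agree
boundary = [(255, 0, 0)] * 2 + [(0, 0, 255)] * 2    # samples differ
print(is_edge_pixel(interior), is_edge_pixel(boundary))
```

This distinction matters for compression: interior pixels can be stored as a single sample value, while edge pixels need all samples retained.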
|Apparatus and methods for processing of media signals|
Methods and apparatus for processing media signals. In one embodiment, a data processing device processes fixed and variable rate data using a first and second processing unit.
|Scheduling thread execution based on thread affinity|
In accordance with some embodiments, spatial and temporal locality between threads executing on graphics processing units may be analyzed and tracked in order to improve performance. In applications where a large number of threads execute and share common resources such as common data, affinity tracking may improve performance by reducing the cache miss rate and making more effective use of relatively small caches.
|Power-efficient interaction between multiple processors|
A technique for processing instructions in an electronic system is provided. In one embodiment, a processor of the electronic system may submit a unit of work to a queue accessible by a coprocessor, such as a graphics processing unit.
|Methods and systems for power management in a data processing system|
Methods and systems for managing power consumption in data processing systems are described. In one embodiment, a data processing system includes a general purpose processing unit, a graphics processing unit (GPU), at least one peripheral interface controller, at least one bus coupled to the general purpose processing unit, and a power controller coupled to at least the general purpose processing unit and the GPU.
|Secondary graphics processor control system|
A secondary graphics processor control system includes a secondary graphics processor. A controller is coupled to the secondary graphics processor.
|Computer device and method for dissipating heat from a discrete graphics processing unit in the same|
A computer device and a method for dissipating heat from a discrete graphics processing unit therein are provided. The method includes determining whether the discrete graphics processing unit is in an operating state.
|Transmission of video utilizing static content information from video source|
Methods for removing redundancies in a video stream based on detection of static portions of the video stream prior to encoding of the video stream for wireless transmission. In various embodiments, the generation and buffering of a video stream having a series of video frames is monitored to detect static portions of the video stream.
|Patched shading in graphics processing|
Aspects of this disclosure generally relate to a process for rendering graphics that includes performing, with a hardware shading unit of a graphics processing unit (GPU) designated for vertex shading, vertex shading operations to shade input vertices so as to output vertex shaded vertices, wherein the hardware unit is configured to receive a single vertex as an input and generate a single vertex as an output. The process also includes performing, with the hardware shading unit of the GPU, a geometry shading operation to generate one or more new vertices based on one or more of the vertex shaded vertices, wherein the geometry shading operation operates on at least one of the one or more vertex shaded vertices to output the one or more new vertices.
|Patched shading in graphics processing|
Aspects of this disclosure relate to a process for rendering graphics that includes performing, with a hardware unit of a graphics processing unit (GPU) designated for vertex shading, a vertex shading operation to shade input vertices so as to output vertex shaded vertices, wherein the hardware unit adheres to an interface that receives a single vertex as an input and generates a single vertex as an output. The process also includes performing, with the hardware unit of the GPU designated for vertex shading, a hull shading operation to generate one or more control points based on one or more of the vertex shaded vertices, wherein the hull shading operation operates on at least one of the one or more vertex shaded vertices to output the one or more control points.
|Patched shading in graphics processing|
Aspects of this disclosure relate to a process for rendering graphics that includes designating a hardware shading unit of a graphics processing unit (gpu) to perform first shading operations associated with a first shader stage of a rendering pipeline. The process also includes switching operational modes of the hardware shading unit upon completion of the first shading operations.
|GPU-based RIP architecture|
A method of printing document data in page description language format using a plurality of graphics processing units. A plurality of tiles representing the document are rendered in parallel with one another by the assigned graphics processing units, and the rendered tiles are transmitted, bypassing the central processing units, from each of the graphics processing units to a corresponding one of a plurality of print head controllers, with the rendered tiles transmitted at a higher frequency than the frequency at which the plurality of tiles is output from each print head controller.
|GPU compute optimization via wavefront reforming|
Methods and systems are provided for graphics processing unit optimization via wavefront reforming including queuing one or more work-items of a wavefront into a plurality of queues of a compute unit. Each queue is associated with a particular processor within the compute unit.
|GPU distributed work-item queuing|
Methods and systems are provided for graphics processing unit distributed work-item queuing. One or more work-items of a wavefront are queued into a first level queue of a compute unit.
|Real-time camera tracking using depth maps|
Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved.
|Methods and apparatus for interactive debugging on a non-pre-emptible graphics processing unit|
Systems and methods are disclosed for performing interactive debugging of shader programs using a non-preemptible graphics processing unit (GPU). An iterative process is employed to repeatedly re-launch a workload for processing by the shader program on the GPU.
|System and method for improving the graphics performance of hosted applications|
A system and method for efficiently performing graphics operations on a video game/application hosting service. One embodiment of a system comprises: an application/game server comprising a central processing unit and a graphics processing unit generating a series of video frames; buffer management logic to manage the series of video frames; a shared buffer managed by the buffer management logic to store the generated video frames; wherein the buffer management logic continually monitors a signal indicating when a video display or video compression unit is ready to receive a next video frame and, responsive to detecting that the display or video compression unit is about to be ready, transfers the most recently completed video frame from the shared buffer to a back buffer; and responsive to detecting the signal, transfers the video frame from the back buffer to a front buffer.
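The shared/back/front buffer handoff can be sketched as follows; the class and method names are assumptions for illustration, not from the patent:

```python
# Three-stage handoff: the newest completed frame moves from the shared
# buffer to the back buffer when the consumer is about to be ready, then
# to the front buffer on the ready signal itself.
class FrameBuffers:
    def __init__(self):
        self.shared = []     # completed frames, newest last
        self.back = None
        self.front = None

    def complete_frame(self, frame: str) -> None:
        self.shared.append(frame)

    def on_almost_ready(self) -> None:
        if self.shared:
            self.back = self.shared[-1]   # most recently completed frame

    def on_ready_signal(self) -> None:
        self.front = self.back            # present to display/compressor

bufs = FrameBuffers()
bufs.complete_frame("frame-1")
bufs.complete_frame("frame-2")
bufs.on_almost_ready()
bufs.on_ready_signal()
print(bufs.front)
```

Selecting the most recently completed frame at the last possible moment is what keeps displayed latency low: older frames in the shared buffer are simply skipped.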
|Execution of graphics and non-graphics applications on a graphics processing unit|
The techniques described in this disclosure are directed to efficient parallel execution of graphics and non-graphics applications on a graphics processing unit (GPU). The GPU may include a plurality of shader cores within a shader processor.
|Fully parallel construction of k-d trees, octrees, and quadtrees in a graphics processing unit|
A non-transitory computer-readable storage medium having computer-executable instructions for causing a computer system to perform a method for constructing k-d trees, octrees, and quadtrees from radix trees is disclosed. The method includes assigning a Morton code for each of a plurality of primitives corresponding to leaf nodes of a binary radix tree, and sorting the plurality of Morton codes.
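The Morton-code step can be sketched in Python using the common 30-bit interleaving scheme for points in the unit cube; the helper names and sample centroids are illustrative:

```python
# Interleave the bits of quantized x/y/z coordinates so that sorting the
# resulting Morton codes groups spatially nearby primitives together.
def expand_bits(v: int) -> int:
    """Spread the low 10 bits of v so there are two zero bits between each."""
    v = (v | (v << 16)) & 0x030000FF
    v = (v | (v << 8)) & 0x0300F00F
    v = (v | (v << 4)) & 0x030C30C3
    v = (v | (v << 2)) & 0x09249249
    return v

def morton3d(x: float, y: float, z: float) -> int:
    """30-bit Morton code for a point with coordinates in [0, 1)."""
    scale = lambda f: min(max(int(f * 1024.0), 0), 1023)
    return expand_bits(scale(x)) * 4 + expand_bits(scale(y)) * 2 + expand_bits(scale(z))

centroids = [(0.9, 0.9, 0.9), (0.1, 0.1, 0.1), (0.5, 0.5, 0.5)]
codes = sorted(morton3d(*c) for c in centroids)
```

On a GPU each primitive's code would be computed by one thread and the codes sorted with a parallel radix sort; here a sequential sort stands in.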
|Fully parallel in-place construction of 3d acceleration structures and bounding volume hierarchies in a graphics processing unit|
A non-transitory computer-readable storage medium having computer-executable instructions for causing a computer system to perform a method for constructing bounding volume hierarchies from binary trees is disclosed. The method includes providing a binary tree including a plurality of leaf nodes and a plurality of internal nodes.
|Fully parallel in-place construction of 3d acceleration structures in a graphics processing unit|
A system and method for constructing binary radix trees in parallel, which are used as a building block for constructing secondary trees. A non-transitory computer-readable storage medium having computer-executable instructions for causing a computer system to perform a method is disclosed.
|Conversion of contiguous interleaved image data for CPU readback|
A method, system, and computer-readable storage medium are disclosed for conversion of contiguous interleaved image data. Image data in a contiguous interleaved format is received at a graphics processing unit (GPU).
|Validation of applications for graphics processing unit|
The techniques described in this disclosure are directed to validating an application that is to be executed on a graphics processing unit (GPU). For example, a validation server device may receive code of the application.
|Execution model for heterogeneous computing|
The techniques are generally related to implementing a pipeline topology of a data processing algorithm on a graphics processing unit (GPU). A developer may define the pipeline topology in a platform-independent manner.
|First and second software stacks and discrete and integrated graphics processing units|
A first software stack and a second software stack are run in a virtual environment. The virtual environment may be created by a hardware virtualizer.
|Video stream management for remote graphical user interfaces|
Embodiments enable display updates other than a video stream in a graphical user interface (GUI) to be rendered, encoded, and transmitted exclusive of the video stream. A virtual machine generates a GUI that includes an encoded video stream and other display updates.
|Screen compression for mobile applications|
One embodiment of the invention sets forth a technique for compressing and storing display data and optionally compressing and storing cursor data in a memory that is local to a graphics processing unit to reduce the power consumed by a mobile computing device when refreshing the screen. Compressing the display data and optionally the cursor data also reduces the relative cost of the invention by reducing the size of the local memory relative to the size that would be necessary if the display data were stored locally in uncompressed form.
|Method for compiling a parallel thread execution program for general execution|
A technique is disclosed for executing a compiled parallel application on a general purpose processor. The compiled parallel application comprises parallel thread execution code, which includes single-instruction multiple-data (SIMD) constructs, as well as references to intrinsic functions conventionally available in a graphics processing unit.
|Graphics processing unit buffer management|
The techniques are generally related to management of buffers with a management unit that resides within an integrated circuit that includes a graphics processing unit (GPU). The management unit may ensure proper access to the buffers by the programmable compute units of the GPU to allow the GPU to execute kernels on the programmable compute units in a pipeline fashion.
|Optimizing texture commands for graphics processing unit|
Aspects of this disclosure relate to a method of compiling high-level software instructions to generate low-level software instructions. In an example, the method includes identifying, with a computing device, a set of high-level (HL) control flow (CF) instructions having one or more associated texture load instructions, wherein the set of HL CF instructions comprises one or more branches.
|Initialization of GPU using ROM-based initialization unit and programmable microcontroller|
An approach is disclosed for performing initialization operations for a graphics processing unit (GPU). The approach includes detecting errors while performing one or more initialization operations.
|Server, arithmetic processing method, and arithmetic processing system|
A computing method is provided which includes calling a general purpose graphics processing subroutine for execution of a target program by a client; sending a program code and resource data for execution of the target program to a server by the client; and executing the program code using a general purpose graphics processing unit by the server.