|| List of recent Gesture-related patents
| Transitioning between access states of a computing device|
Aspects of this disclosure are directed to outputting, for display at a presence-sensitive display, a first set of two or more selectable objects in a first arrangement of locations while the computing device operates in a first instance of a limited access state. At least one of the selectable objects includes an element of a predetermined passcode.
| Method and system for using a second screen device to tune a set top box to display content playing on the second screen device|
A system and method for operating a receiving device include a second screen device in communication with the receiving device. The second screen device has a touch screen and displays streamed content having an identifier associated therewith; in response to a gesture on the touch screen, it forms a tune command comprising the identifier and a receiving device identifier and communicates the tune command to the receiving device.
| Systems, methods, and computer program products for capturing natural responses to advertisements|
Systems, methods, and computer program products described herein may allow for the capture of a user's reaction to an advertisement. The reaction may be verbal or may take the form of a gesture.
| Systems and methods for a voice- and gesture-controlled mobile application development and deployment platform|
Systems and methods for developing, customizing, and deploying mobile device applications through voice and/or gesture interactions are provided through a mobile application development and deployment platform. Preferably, these systems and methods are implemented in an internet-based environment that allows non-technical users to build sophisticated, highly customizable cross-platform mobile applications.
| Assigning gesture dictionaries|
Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space.
| Method for operating a gesture-controlled graphical user interface|
A method for operating a gesture-controlled graphical user interface for a device with a display screen interface (106), the method comprising the steps of receiving (301) an input gesture (g1-g8) via the gesture-controlled input interface (101) and generating a signal (303) comprising a graphical object (400) corresponding to the device functionality associated with the received input gesture (g1-g8). The generated graphical object (400) has at least part of its border (401-408) resembling the shape of the received input gesture (g1-g8).
| Method for generating a graphical user interface|
A method for generating a graphical user interface object, the method comprising the steps of awaiting a user's gesture input from a gesture input interface; providing information listing at least one gesture type wherein at least two threshold values of at least one parameter are assigned to the given gesture type; verifying whether the input gesture matches a parameterized gesture type; in case the verification confirms that the input gesture matches a parameterized gesture type, extracting the gesture type and the gesture parameter's value from a gesture event notification; identifying an associated action based on the gesture type and parameter; and generating an output signal with a differently configured graphical user interface object content dependent on the gesture type and the gesture parameter.
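The threshold-band matching described above can be sketched as follows. The gesture name, pixel bands, and panel configurations below are hypothetical illustrations, not taken from the patent:

```python
# Hypothetical parameterized gesture types: each maps threshold bands of
# one parameter (e.g. swipe length in pixels) to a GUI-object configuration.
GESTURE_TYPES = {
    "swipe": [(0, 100, "small_panel"),
              (100, 300, "medium_panel"),
              (300, float("inf"), "full_panel")],
}

def configure_object(gesture_type, parameter_value):
    """Pick the GUI-object configuration for a recognized gesture whose
    parameter value falls inside one of the type's threshold bands."""
    for low, high, config in GESTURE_TYPES.get(gesture_type, []):
        if low <= parameter_value < high:
            return config
    return None  # gesture type unknown or parameter out of range
```

A longer swipe thus yields a differently configured object than a short one, which matches the abstract's dependence on both gesture type and parameter.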
| Electronic device including touch-sensitive display and method of controlling same|
A method includes displaying a plurality of display elements on a touch-sensitive display of an electronic device; displaying a selection tool on the touch-sensitive display; in response to detecting a first gesture, selecting a first portion of the plurality of display elements, the first portion comprising at least one display element; in response to detecting a second gesture, moving the selection tool without selecting the display elements; and in response to detecting a third gesture, selecting a second portion of the plurality of display elements, the second portion being non-contiguous with the first portion.
| Gesture entry techniques|
Techniques are provided for entering, verifying, and saving a gesture on a touch-sensitive display device. In one embodiment, the device displays a gesture entry screen where a user enters a gesture.
| Gesture-based cursor control|
In general, this disclosure describes techniques for enabling gesture-based cursor control on gesture keyboards. For example, a computing device outputs a graphical keyboard and a text display region, including a cursor at a first cursor location.
| Thumbnail and document map based navigation in a document|
An overview mode is used to navigate content. While in the overview mode, content is displayed as thumbnails such that a user may more easily locate content.
| Gesture entry techniques|
Techniques are provided for entering and saving a gesture on a touch-sensitive display device. In one embodiment, the device displays an array of visible graphical elements and may detect a gesture based on a user's touch of the visible graphical elements as well as on hidden areas not displayed to the user.
| Column organization of content|
Column organization of content is described. In an implementation, a mobile communications device configures a user interface to include a plurality of representations of content arranged according to a plurality of columns that permits navigation between first and second said columns upon detection of a gesture input via a touchscreen of the mobile communications device.
| Multiple seesawing panels|
A computing device is described that receives an indication of a first gesture received at an input device. Responsive to receiving the indication of the first gesture, the computing device outputs, for display, a first information panel having a size.
| Gesture keyboard with gesture cancellation|
In one example, a method includes outputting, for display at a presence-sensitive display, a graphical user interface comprising a graphical keyboard that includes a group of keys, wherein each key in the group of keys is associated with a respective, different region of the presence-sensitive display. The method further includes receiving an indication of a gesture to select a sequence of one or more keys in the group of keys of the graphical keyboard.
| Partial gesture text entry|
A graphical keyboard including a number of keys is output for display at a display device. The computing device receives an indication of a gesture to select at least two of the keys based at least in part on detecting an input unit at locations of a presence-sensitive input device.
| Incremental multi-word recognition|
In one example, a computing device includes at least one processor that is operatively coupled to a presence-sensitive display and a gesture module operable by the at least one processor. The gesture module may be operable by the at least one processor to output, for display at the presence-sensitive display, a graphical keyboard comprising a plurality of keys and receive an indication of a continuous gesture detected at the presence-sensitive display, the continuous gesture to select a group of keys of the plurality of keys.
| Contextually-specific automatic separators|
Aspects of the present disclosure are directed to techniques for outputting a graphical keyboard comprising a group of keys, wherein each key in the group of keys is associated with a respective, different display region, receiving an indication of a gesture to select a sequence of keys that are each included in the group of keys of the graphical keyboard, determining that the selected sequence of keys corresponds to a character string that is identifiable by at least one format source, wherein the format source is associated with a syntax, determining, based at least in part on the syntax, that at least one separator character is associated with the character string, and in response to determining that the at least one separator character is associated with the character string, outputting the character string and the at least one separator character at a location proximal to the character string.
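The syntax-driven separator lookup described above might be sketched as follows. The format sources (regexes) and the separator chosen for each are illustrative assumptions, not taken from the disclosure:

```python
import re

# Hypothetical format sources: each pairs a syntax (a regex) with the
# separator character that typically follows a matching string.
FORMAT_SOURCES = [
    (re.compile(r"^\d{1,2}$"), "/"),   # date component -> "/"
    (re.compile(r"^www$"), "."),       # URL prefix -> "."
]

def append_separator(char_string):
    """Return the string plus its syntax-derived separator, if any
    format source identifies the string; otherwise return it unchanged."""
    for syntax, separator in FORMAT_SOURCES:
        if syntax.match(char_string):
            return char_string + separator
    return char_string
```

Typing a gesture that resolves to "www" would then automatically place the "." separator proximal to it, in the spirit of the abstract.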
| Character deletion during keyboard gesture|
Techniques are described for character deletion on a computing device that utilizes a gesture-based keyboard. The computing device includes a processor and at least one module operable by the processor to output, for display at a presence-sensitive display, a graphical keyboard comprising a plurality of keys and a text editor field.
| Image display apparatus and method for operating the same|
An image display apparatus and a method for operating the same are disclosed. The image display apparatus operating method includes receiving a touch input or a gesture input in a first direction, outputting a first sound corresponding to the first direction, receiving a touch input or a gesture input in a second direction, and outputting a second sound corresponding to the second direction.
| Method and apparatus for displaying data in terminal|
A data display method and apparatus of a terminal that displays predetermined data on a region of a size designated by a user are provided. The data display apparatus includes a display unit upon which a predetermined gesture occurs, and a controller configured to detect a region designated by the predetermined gesture as a screen display region when the predetermined gesture occurs, and to detect data selected by the predetermined gesture, and to control displaying of the data on the screen display region, wherein the controller simultaneously detects the region designated by the predetermined gesture and detects the data selected by the predetermined gesture.
| Multi-gesture media recording system|
A computer implemented method and system for recording media data such as audio data in one or more communication modes based on gestures on a graphical user interface (GUI) of an electronic device is provided. A gesture based media recording application (GBMRA) provided on the electronic device defines multiple interface regions on the GUI.
| Gesture based context-sensitive functionality|
Gesture based, context sensitive help and search functionalities are provided for application programs on a mobile device. In response to detecting movement at a mobile device in the form of a gesture, it is determined whether the movement corresponds to a predetermined gesture for invoking a help or search functionality.
| Advertisement campaign system using socially collaborative filtering|
In one embodiment, a method comprises identifying, in a network, user selection preferences of an identified user having accessed the network, the identifying based on an accumulation of user selection inputs executed by the identified user, the user selection inputs accumulated relative to input options presented to the user and identifying respective available network items; classifying, by an apparatus in the network, the identified user into one of multiple user affinity categories relative to an advertisement campaign for a targeted product, the classifying based on determining whether one of the user selection inputs represents a view gesture of the user having viewed the targeted product; and selecting an advertisement asset for delivery to the identified user based on the classifying of the identified user into the one user affinity category.
| Strategically located touch sensors in smartphone casing|
A wireless or handheld device or phone is equipped with corner sensors which control functioning of the device. The corner sensors are configured based on how a user holds the handheld device while utilizing the corner sensors.
| Gesture identification with natural images|
A method for gesture identification with natural images includes generating a series of variant images by using each two or more successive ones of the natural images, extracting an image feature from each of the variant images, and comparing the varying pattern of the image feature with a gesture definition to identify a gesture. The method is inherently insensitive to indistinctness of images, and supports motion estimation along the x, y, and z axes without requiring the detected object to maintain a fixed gesture.
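One plausible reading of the variant-image pipeline, sketched under the assumptions that a variant image is the difference of successive frames, the image feature is the centroid of changed pixels, and the gesture definition is the drift direction of that centroid:

```python
import numpy as np

def identify_gesture(frames, threshold=30):
    """Classify a swipe from the drift of the motion centroid across
    difference ("variant") images of successive natural-image frames."""
    centroids = []
    for prev, curr in zip(frames, frames[1:]):
        variant = np.abs(curr.astype(int) - prev.astype(int))  # variant image
        ys, xs = np.nonzero(variant > threshold)               # changed pixels
        if len(xs):
            centroids.append((xs.mean(), ys.mean()))           # image feature
    if len(centroids) < 2:
        return "none"
    dx = centroids[-1][0] - centroids[0][0]
    dy = centroids[-1][1] - centroids[0][1]
    # Compare the varying pattern of the feature with a gesture definition.
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"
```

Because only frame differences are used, a blurred but moving object still produces a usable centroid track, consistent with the claimed insensitivity to indistinct images.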
| Housing with touch-through membrane|
A camera system includes a camera and a housing that is structured to at least partially enclose the camera. The camera includes a touch-sensitive surface that can be used to receive user input.
| Sensor array touchscreen recognizing finger flick gesture|
Touchscreen user interfaces configured to detect a finger flick touch gesture for controlling software applications, computers, devices, machinery, and process environments. Such user interfaces can be manipulated by users and provide a wide range of uses with computer applications, assistance to the disabled, and control of electronic devices, machines, and processes.
| Sensor array touchscreen recognizing touch gestures|
Touchscreen user interfaces for controlling software applications, computers, devices, machinery, and process environments with touch gestures. Such user interfaces can be manipulated by users and provide a wide range of uses with computer applications, assistance to the disabled, and control of electronic devices, machines, and processes.
| System and method for reducing power consumption in an electronic device having a touch-sensitive display|
A system and method for reducing power consumption in an electronic device by controlling the transition of the electronic device from a sleep mode to a full power mode. The electronic device comprises a main processor a touch-sensitive overlay, and an overlay controller.
| Natural media painting using proximity-based tablet stylus gestures|
Techniques for natural media painting using proximity-based tablet stylus gestures are described. A stylus is implemented for user manipulation to simulate a brush stroke of a paint brush, where the stylus includes an application tip formed from individual virtual bristles that simulate the paint brush.
| Creation of three-dimensional graphics using gestures|
Three-dimensional virtual objects are created using gestures. In one example, a selection of a shape is received.
| Multi-modal user expressions and user intensity as interactions with an application|
Architecture that enables single and multi-modal interaction with computing devices, as well as interpreting user intensity (or liveliness) in the gesture or gestures. In a geospatial implementation, a multi-touch interaction can involve the detection and processing of tactile pressure (touch sensitive) to facilitate general navigation between two geographical points.
| Multi-gesture text input prediction|
A computing device outputs a keyboard for display, receives an indication of a first gesture to select a first sequence of one or more keys, determines a set of candidate strings based in part on the first sequence of keys, and outputs for display at least one of the set of candidate strings. The computing device receives an indication of a second gesture to select a second sequence of one or more keys, and determines that characters associated with the second sequence of keys are included in a first candidate word based at least in part on the set of candidate strings, or are included in a second candidate word not based on the first sequence of keys.
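The continuation-versus-new-word decision described above might be sketched like this, with a toy lexicon standing in for the device's dictionary:

```python
# Toy lexicon standing in for the device's dictionary (an assumption).
LEXICON = ["hello", "help", "held", "world", "word"]

def candidates(prefix):
    """Candidate strings whose characters start with the gestured prefix."""
    return [w for w in LEXICON if w.startswith(prefix)]

def classify_second_gesture(first_seq, second_seq):
    """Decide whether the second key sequence continues a candidate word
    from the first gesture, or begins a word not based on the first one."""
    combined = first_seq + second_seq
    if candidates(combined):              # e.g. "hel" + "lo" -> "hello"
        return "continuation", candidates(combined)
    return "new_word", candidates(second_seq)
```

So gesturing "hel" then "lo" extends the first candidate set, while "hel" then "wo" starts a fresh word prediction.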
| Gesture control device and method for setting and cancelling gesture operating region in gesture control device|
A method for setting and cancelling a gesture operating region in a gesture control device includes steps of capturing at least one image; detecting whether there is a palm in the at least one image; if there is a palm in the at least one image, detecting whether there is a face in the at least one image; if there is a face in the at least one image, setting the gesture operating region according to the palm and the face; and cancelling the gesture operating region when the palm is at rest over a first time period or the palm disappears from the gesture operating region over a second time period.
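The two cancellation rules (palm at rest too long, palm absent too long) can be sketched as a small state tracker. The event tuple layout and the time limits are assumptions for illustration:

```python
def region_state(events, rest_limit=2.0, absent_limit=3.0):
    """Decide whether the gesture operating region stays active, given a
    time-ordered list of (timestamp, palm_moving, palm_present) samples."""
    rest_since = absent_since = None
    for t, moving, present in events:
        if not present:                       # palm disappeared from region
            absent_since = t if absent_since is None else absent_since
            if t - absent_since >= absent_limit:
                return "cancelled"
            rest_since = None
        elif not moving:                      # palm present but at rest
            absent_since = None
            rest_since = t if rest_since is None else rest_since
            if t - rest_since >= rest_limit:
                return "cancelled"
        else:                                 # palm moving: reset both timers
            rest_since = absent_since = None
    return "active"
```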
| Removable protective cover with embedded proximity sensors|
A removable cover for a handheld electronic device, including a protective cover that at least partially covers rear and side surfaces of a handheld electronic device, a plurality of proximity sensors mounted in the cover for detecting user gestures performed outside of the electronic device, a battery, wireless communication circuitry, and a processor configured to operate the proximity sensors, and to operate the wireless communication circuitry to transmit commands to the electronic device based on gestures detected by the proximity sensors.
Aspects of the disclosure provide a system that includes a protected module, an input module and a gesture engine. The protected module is configured to be accessible based on a specific gesture of a user predetermined to have a right to access the protected module.
|Method and system for gesture identification based on object tracing|
A method and system project light into an operation space so that an image received from the operation space will include, if an object is present in the operation space, a bright region due to the reflection of light by the object, and identify a gesture according to the variation of a barycenter position, an average brightness, or an area of the bright region in successive images, for generating a corresponding command. Only simple operations and calculations are required to detect the motion of an object moving along the x, y, or z axis of an image, for identifying a gesture represented by the motion of the object.
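The per-frame statistics the method relies on (barycenter, average brightness, area of the bright region) can be sketched as follows. Reading a growing area as motion toward the camera is an illustrative assumption:

```python
import numpy as np

def bright_region_motion(img_a, img_b, threshold=128):
    """Estimate x/y motion from the barycenter shift of the bright
    (reflective) region between two images, and z motion from the
    change in the region's area."""
    def stats(img):
        mask = img > threshold                # bright region from reflection
        ys, xs = np.nonzero(mask)
        return xs.mean(), ys.mean(), mask.sum()
    xa, ya, area_a = stats(img_a)
    xb, yb, area_b = stats(img_b)
    dz = "toward" if area_b > area_a else ("away" if area_b < area_a else "steady")
    return xb - xa, yb - ya, dz
```

A rightward hand sweep shows up as a positive x barycenter shift; pushing the hand closer enlarges the bright region, which this sketch reports as "toward".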
|Denoising touch gesture input|
In one embodiment, a computing device determines a touch gesture on a touch screen of the computing device. The touch gesture includes two or more data points that each correspond to a particular location on the touch screen and a particular point in time.
|Incremental feature-based gesture-keyboard decoding|
In one example, a method includes outputting, at a presence-sensitive display operatively coupled to a computing device, a graphical keyboard comprising a plurality of keys and receiving an indication of a gesture to select a group of keys of the plurality of keys. The method may further include determining, in response to the indication, a candidate word based at least in part on the group of keys.
|Multi display device and control method thereof|
A multi display apparatus includes a first body comprising a first display, a second body comprising a second display, a hinge to connect the first and second bodies to each other, a first imaging unit provided on the first body, a second imaging unit provided on the second body, and a controller to recognize a user's gesture using a plurality of images photographed at the first and second imaging units, and perform a corresponding control operation in response to the recognized user gesture. The controller recognizes the user's gesture using a movement of a user object within recognition ranges of the respective imaging units.
|Multi-display apparatus and method of controlling the same|
A multi-display apparatus includes a first body on which a first display is provided, a second body on which a second display is provided, a hinge configured to connect the first body and the second body, a storage configured to store control operation information which is matched with a rotated state of the first body and the second body, a sensor configured to sense a folding gesture to rotate at least one of the first body and the second body on a basis of the hinge, and a controller configured to perform, when the folding gesture is sensed, an operation corresponding to the folding gesture using the control operation information corresponding to the rotated state of a rotated body from among the first body and the second body.
|Media insertion interface|
A computing device may output a graphical user interface for display at a presence-sensitive screen including an edit region and a graphical keyboard. The computing device may receive an indication of a gesture detected at a location of the presence-sensitive screen within the graphical keyboard.
|Provision of haptic feedback for localization and data input|
Various technologies pertaining to provision of haptic feedback to users of computing devices with touch-sensitive displays are described. First haptic feedback is provided to assist a user in localizing a finger or thumb relative to a graphical object displayed on a touch-sensitive display, where no input data is provided to an application corresponding to the graphical object.
|Data and user interaction based on device proximity|
Architecture that enables the detection of a user by a user device and interaction with content of the user device by the user before the user physically contacts the device. The detection capability can utilize one or more sensors of the device to identify the user and the proximity (distance) of the user to the device.
|Game type unlocking method for touch screen information device|
The present invention relates to a method of unlocking an information device, such as a smart phone, that uses a touch screen as an input-output means, in which a game played through gestures or the like performed on the touch screen is implemented on an unlock screen, and the locking state of the information device is released as the game progresses. By departing from the stereotyped conventional method of unlocking a touch screen information device, which focuses only on functional factors, the present invention can arouse a user's interest and satisfy the user's aesthetic sense during the unlocking process.
|Ear position and gesture detection with mobile device|
A mobile device may include a sensor array. The sensor array may be a touch sensor array, such as a projected capacitive touch (PCT) sensor array.
|Gesture recognition in vehicles|
A method and system for performing gesture recognition of a vehicle occupant employing a time-of-flight (ToF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the ToF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command.
|Motion-controlled electronic device and method therefor|
An electronic device obtains a motion of a displaced object in two captured video frames utilizing phase correlation of the two frames. The electronic device identifies a magnitude of the motion and an area in a phase correlation surface corresponding to an area of the object, and accordingly determines if the motion is a qualified motion operable to trigger a gesture command of the electronic device.
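Phase correlation itself is a standard technique; a minimal sketch of recovering a displacement between two frames from the peak of the phase-correlation surface:

```python
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Estimate the (dy, dx) displacement of frame_b relative to frame_a
    from the peak of the phase-correlation surface."""
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12           # normalize to pure phase
    surface = np.abs(np.fft.ifft2(cross))    # phase-correlation surface
    dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
    h, w = surface.shape                     # wrap shifts to signed values
    if dy > h // 2: dy -= h
    if dx > w // 2: dx -= w
    return dy, dx
```

The magnitude of the peak (not returned here) indicates how well the frames match, which is what lets the device reject motions that are not "qualified" to trigger a gesture command.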
|Always-available input through finger instrumentation|
A finger device initiates actions on a computer system when placed in contact with a surface. The finger device includes instrumentation that captures images and gestures.
|Incremental multi-touch gesture recognition|
In one example, a method comprises outputting, by a computing device and for display at an output device, a graphical keyboard comprising a plurality of keys, and receiving, by the computing device, an indication of a multi-touch gesture detected at a presence-sensitive display, the multi-touch gesture comprising a first sub-gesture that traverses a first group of keys of the plurality of keys and a second sub-gesture that traverses a second group of keys of the plurality of keys. This example method further comprises determining, in response to detecting the first sub-gesture and the second sub-gesture, a candidate word based at least in part on the first and second groups of keys, and outputting, by the computing device and for display at the output device, the candidate word.
|Mid-gesture chart scaling|
A system may include presentation of a visualization indicating a first plurality of dimension values and a respective function value for each of the first plurality of dimension values, the function values positioned in accordance with an initial scale of a function value axis, and the first plurality of dimension values positioned in accordance with an initial scale of a dimension value axis, detection of an input gesture to change the indicated first plurality of dimension values, and, before completion of the input gesture, determination of a second plurality of dimension values to indicate in the visualization based on the input gesture, determination of an updated scale of the function value axis based on the respective function values for each dimension value of the second plurality of dimension values, and display of the respective function values for each dimension value of the second plurality of dimension values positioned in accordance with the updated scale.
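The mid-gesture rescaling step, recomputing the function-value axis from the dimension values that will be visible after the in-progress gesture, might look like this; the padding margin is an assumption:

```python
def updated_axis_scale(function_values, padding=0.1):
    """Recompute the function-value axis scale from the function values
    that will be visible after the in-progress gesture, with a margin."""
    lo, hi = min(function_values), max(function_values)
    span = (hi - lo) or 1.0          # avoid a zero-height axis
    return lo - padding * span, hi + padding * span
```

Calling this on each gesture update, rather than on gesture completion, is what makes the chart rescale "mid-gesture".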
|Wearable sensor for tracking articulated body-parts|
A wearable sensor for tracking articulated body parts is described, such as a wrist-worn device which enables 3D tracking of fingers and optionally also the arm and hand without the need to wear a glove or markers on the hand. In an embodiment a camera captures images of an articulated part of a body of a wearer of the device and an articulated model of the body part is tracked in real time to enable gesture-based control of a separate computing device such as a smart phone, laptop computer or other computing device.
|System and method for indirect manipulation of user interface object(s)|
Provided is a system and method for indirectly manipulating user interface object(s) of a user interface. In a pressure sensitive display embodiment, a user maintains a convenient touch position to a display, performs a search gesture (or selection gesture), and user interface object(s) are identified as satisfying the search criteria (or as selected).
|Method and electronic device for running application|
A method and an electronic device for running an application are provided. The method for running an application in an electronic device includes displaying, in a folder icon, one application icon of one or more applications contained in the folder; detecting a gesture on the folder icon; and running or changing the application displayed in the folder icon according to the gesture.
|Device and method for secure user interface gesture processing using processor graphics|
A device and method for securely rendering content on a gesture-enabled computing device includes initializing a secure execution environment on a processor graphics of the computing device. The computing device transfers view rendering code and associated state data to the secure execution environment.
|Portable device and control method thereof|
Disclosed are a portable device to be unlocked using a tactile user interface and a control method thereof. The portable device includes a touch sensitive display unit to sense a touch input, a tactile feedback unit to generate tactile feedback, and a control unit to control the other units.
|Semantic zoom for related content|
Among other things, one or more techniques and/or systems are provided for displaying a related content view within a search interface. That is, a search interface, such as a search application, may provide search results that are relevant to a query submitted through the search interface.
|Validating a transaction with a secure input without requiring pin code entry|
A method for secure transactions on a mobile handset or tablet equipped with a touch screen controlled by a secure processor such as a master secure element or trusted execution environment having gesture recognition capabilities. Since the touch screen is fully controlled by the secure processor, the user can securely enter the transaction amount using gestures to validate the transaction.
|Portable multifunction device, method, and graphical user interface for providing maps and directions|
A device has a touch screen display configured to display a map application, which is configured to separately display a list of bookmarked locations, a list of recent queries, and a list of contacts. In response to detecting a finger gesture on an input icon associated with a search term input area, the map application displays at least one of the list of bookmarked locations, the list of recent queries, and the list of contacts.
|Multi-camera depth imaging|
Embodiments related to acquiring depth images via a plurality of depth cameras are disclosed. For example, in one disclosed embodiment, a first portion of depth data is received from a first depth camera, and a second portion of depth data is received from a second depth camera.
|Method of checking earphone wearing state|
An information processing apparatus detects an output from a 3-axis acceleration sensor included in an earphone unit worn by a user while the user is in a still state; monitors the output of the 3-axis acceleration sensor while a nodding gesture is performed by the user; detects a time when an angle of the nodding gesture reaches a maximum; and determines an earphone wearing state based on the output from the 3-axis acceleration sensor in the still state and the output from the 3-axis acceleration sensor at the time of detecting the maximum nodding angle.
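The nodding-angle measurement can be sketched as the angle between the still-state accelerometer vector and each monitored sample; the 3-tuple vector format is an assumption:

```python
import math

def tilt_angle_deg(still, current):
    """Angle in degrees between the still-state 3-axis accelerometer
    vector and the current one."""
    dot = sum(s * c for s, c in zip(still, current))
    mag = (math.sqrt(sum(s * s for s in still))
           * math.sqrt(sum(c * c for c in current)))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

def max_nod_angle(still, samples):
    """Maximum nodding angle observed over the monitored samples."""
    return max(tilt_angle_deg(still, s) for s in samples)
```

Comparing the still-state output with the output at this maximum-angle instant is then the basis for the wearing-state decision.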
|Method, apparatus, and terminal device for generating and processing information|
Embodiments of the present invention relate to a method, an apparatus, and a terminal device for generating and processing information. The information generation method includes: generating gesture information according to an identified gesture path; detecting location information; and generating summary information according to the gesture information and the location information.
|Filtering documents based on device orientation|
In some implementations, document templates can be presented on a mobile device for selection by a user when the user is creating a document. In some implementations, document templates can be filtered based on the orientation of the mobile device.
A method for providing a user interface includes displaying a first screen with a dial menu. The dial menu is shown as an arch divided into sections that hold first menu options.
|Multi-dimensional scroll wheel|
A multi-dimensional scroll wheel is disclosed. Scroll wheel circuitry is provided to detect input gestures that traverse the center of the scroll wheel and to detect multi-touch input.
|System and method for controlling device switching on/off, and mobile communication device therefor|
A system for controlling device switching on/off including an electronic device and a mobile communication device is illustrated. The mobile communication device includes a haptic display unit and a processing unit.
|System and method for low power input object detection and interaction|
In a method of operating a touch screen, an object interaction is detected with the touch screen while in a first doze mode. It is determined if a detected object interaction with the touch screen is a valid input object interaction with the touch screen.
|Method and apparatus for manipulating a graphical user interface using camera|
A method and a computing device for manipulating a graphical user interface (gui) using camera are disclosed. The computing device captures images of a user by using a camera, detects the face image of the user from captured images, and detects at least one facial feature in the face image.
|Multi-modal touch screen emulator|
Systems and methods may provide for capturing a user input by emulating a touch screen mechanism. In one example, the method may include identifying a point of interest on a front-facing display of the device based on gaze information associated with a user of the device, identifying a hand action based on gesture information associated with the user of the device, and initiating a device action with respect to the front-facing display based on the point of interest and the hand action.
|Remote control with 3d pointing and gesture recognition capabilities|
A remote control, such as a 3D air mouse, includes motion sensors used to measure position/orientation in space to alternatively control a cursor on a display or recognize gestures by the user so that data/commands can be entered into an electronic system being controlled or the like. Timed sequences of pressing a single trigger button and the quantity of motion during said sequences are both compared against thresholds to switch between modes.
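The threshold-based mode switching might be sketched as below. The press-duration and motion thresholds, and the mode names, are hypothetical, not taken from the patent:

```python
def select_mode(press_duration, motion_quantity,
                long_press=0.8, motion_threshold=5.0):
    """Switch between modes from the timing of a single trigger button
    press and the quantity of motion measured during that press."""
    if press_duration >= long_press and motion_quantity >= motion_threshold:
        return "gesture"      # long press with large motion: gesture entry
    if press_duration < long_press and motion_quantity < motion_threshold:
        return "pointer"      # short, steady press: cursor control
    return "idle"             # ambiguous combination: no mode change
```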