
Etienne Guerard Phones & Addresses

  • Cupertino, CA
  • Sunnyvale, CA

Publications

US Patents

Device, Method, And Graphical User Interface For Manipulating 3D Objects On A 2D Screen

US Patent: 20220350461, Nov 3, 2022
Filed: Jul 19, 2022
Appl. No.: 17/868478
Inventors:
Etienne H. Guerard - Cupertino CA, US
Adam Michael O'Hern - Campbell CA, US
Michelle Chua - Cupertino CA, US
Robin-Yann Joram Storm - Amsterdam, NL
Adam James Bolton - Bend OR, US
Zachary Becker - Santa Clara CA, US
Bradley Warren Peebler - Emerald Hills CA, US
International Classification:
G06F 3/04815
G06F 3/04842
G06F 3/04845
G06F 3/0488
G06F 3/0484
G06F 3/0481
Abstract:
Various implementations disclosed herein include a method performed by a device. While executing a CGR application, the method includes displaying a three-dimensional object in a three-dimensional space, wherein the three-dimensional space is defined by a three-dimensional coordinate system. The method also includes: detecting a first user input directed to the three-dimensional object; and in response to detecting the first user input, displaying a spatial manipulation user interface element including a set of spatial manipulation affordances respectively associated with a set of spatial manipulations of the three-dimensional object, wherein each of the set of spatial manipulations corresponds to a translational movement of the three-dimensional object along a corresponding axis of the three-dimensional space.
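The core of this abstract is a per-axis mapping from affordance to translation. Below is a minimal Swift sketch of that mapping, assuming a plain value-type scene model; the Axis, SceneObject, and Affordance names are illustrative, not taken from the patent.

```swift
// A minimal sketch, assuming a plain value-type scene model. `Axis`,
// `SceneObject`, and `Affordance` are illustrative names, not from the patent.
enum Axis: CaseIterable {
    case x, y, z

    /// Unit vector of this axis in the three-dimensional coordinate system.
    var unitVector: SIMD3<Float> {
        switch self {
        case .x: return SIMD3(1, 0, 0)
        case .y: return SIMD3(0, 1, 0)
        case .z: return SIMD3(0, 0, 1)
        }
    }
}

struct SceneObject {
    var position: SIMD3<Float>
}

/// One affordance of the spatial manipulation user interface element:
/// dragging it translates the selected object along its associated axis.
struct Affordance {
    let axis: Axis

    func apply(to object: inout SceneObject, dragDistance: Float) {
        object.position += axis.unitVector * dragDistance
    }
}

/// Built in response to the first user input directed at the object:
/// one translation affordance per axis of the three-dimensional space.
func spatialManipulationAffordances() -> [Affordance] {
    Axis.allCases.map { Affordance(axis: $0) }
}

// Usage: a drag of 0.25 units on the affordance associated with the y axis.
var cube = SceneObject(position: SIMD3(0, 0, 0))
let affordances = spatialManipulationAffordances()
affordances[1].apply(to: &cube, dragDistance: 0.25)
```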

Device, Method, And Graphical User Interface For Composing CGR Files

US Patent: 20220291806, Sep 15, 2022
Filed: May 26, 2022
Appl. No.: 17/825763
Inventors:
Eric Steven Peyton - Naperville IL, US
Olivier Marie Jacques Pinon - Sunnyvale CA, US
Etienne H. Guerard - Cupertino CA, US
David John Addey - Santa Cruz CA, US
Pau Sastre Miguel - San Francisco CA, US
Michelle Chua - Cupertino CA, US
Eric Thivierge - Sunnyvale CA, US
International Classification:
G06F 3/04815
G06T 15/10
Abstract:
A method includes determining to present a computer-generated reality (CGR) object that is associated with a first anchor and a second anchor. The method includes determining, based on an image of a physical environment, whether the physical environment includes a portion corresponding to the first anchor. The method includes, in response to determining that the physical environment lacks a portion that corresponds to the first anchor, determining, based on the image, whether the physical environment includes a portion corresponding to the second anchor. The method includes, in response to determining that the physical environment includes a portion that corresponds to the second anchor, displaying, on the display, the CGR object at a location of the display corresponding to the second anchor.
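The anchor behavior described here is a two-step lookup with a fallback. A hedged Swift sketch follows, with AnchorKind, CapturedImage, and resolvePlacement as placeholder names standing in for whatever scene-understanding output the real system uses.

```swift
// A hedged sketch of the anchor-fallback logic; `AnchorKind`, `CapturedImage`,
// and `resolvePlacement` are placeholder names, not real framework API.
enum AnchorKind: Hashable {
    case horizontalPlane, image, face
}

struct CapturedImage {
    /// Anchors a detector found in this frame, with the display location
    /// that corresponds to each (a stand-in for real scene understanding).
    let detectedAnchors: [AnchorKind: SIMD2<Float>]
}

struct CGRObjectPlacement {
    let anchor: AnchorKind
    let displayLocation: SIMD2<Float>
}

/// Try the first anchor; only if the physical environment lacks a portion
/// corresponding to it, fall back to the second anchor. Returns nil if the
/// image contains neither.
func resolvePlacement(for image: CapturedImage,
                      firstAnchor: AnchorKind,
                      secondAnchor: AnchorKind) -> CGRObjectPlacement? {
    if let location = image.detectedAnchors[firstAnchor] {
        return CGRObjectPlacement(anchor: firstAnchor, displayLocation: location)
    }
    if let location = image.detectedAnchors[secondAnchor] {
        return CGRObjectPlacement(anchor: secondAnchor, displayLocation: location)
    }
    return nil
}

// Usage: the frame contains only a horizontal plane, so the CGR object
// falls back from the image anchor to the plane anchor.
let frame = CapturedImage(detectedAnchors: [.horizontalPlane: SIMD2(0.4, 0.7)])
let placement = resolvePlacement(for: frame,
                                 firstAnchor: .image,
                                 secondAnchor: .horizontalPlane)
```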

Device, Method, And Graphical User Interface For Composing CGR Files

US Patent: 20200387289, Dec 10, 2020
Filed: Jun 3, 2020
Appl. No.: 16/892045
Inventors:
Eric Steven Peyton - Naperville IL, US
Olivier Marie Jacques Pinon - Sunnyvale CA, US
Etienne H. Guerard - Cupertino CA, US
David John Addey - Santa Cruz CA, US
Pau Sastre Miguel - San Francisco CA, US
Michelle Chua - Cupertino CA, US
Eric Thivierge - Sunnyvale CA, US
International Classification:
G06F 3/0481
G06T 15/10
Abstract:
In one embodiment, a method of generating a computer-generated reality (CGR) file includes receiving, via one or more input devices, user input generating a computer-generated reality (CGR) scene, user input associating an anchor with the CGR scene, user input associating one or more CGR objects with the CGR scene, wherein the CGR objects are to be displayed in association with the anchor, and user input associating a behavior with the CGR scene, wherein the behavior includes one or more triggers and actions and wherein the actions are performed in response to detecting any of the triggers. The method includes generating a CGR file including data regarding the CGR scene, the CGR file including data regarding the anchor, the CGR objects, and the behavior.
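The composition step amounts to bundling an anchor, objects, and trigger/action behaviors into one serialized scene description. A rough Swift sketch follows; the actual CGR file format is not given in the abstract, so JSON via Codable is used purely as a stand-in and every type name is illustrative.

```swift
import Foundation

// A rough sketch only: the real CGR file format is not specified in the
// abstract, so JSON via Codable stands in for it, and every type name here
// (Anchor, CGRObject, Behavior, CGRScene) is illustrative.
struct Anchor: Codable {
    var kind: String                 // e.g. "horizontalPlane"
}

struct CGRObject: Codable {
    var name: String                 // displayed in association with the anchor
}

struct Behavior: Codable {
    var triggers: [String]           // e.g. "tap", "proximity"
    var actions: [String]            // performed when any trigger is detected
}

struct CGRScene: Codable {
    var anchor: Anchor
    var objects: [CGRObject]
    var behaviors: [Behavior]
}

/// Encode the composed scene into a file payload that carries data regarding
/// the anchor, the CGR objects, and the behavior, as in the abstract.
func makeCGRFileData(for scene: CGRScene) throws -> Data {
    let encoder = JSONEncoder()
    encoder.outputFormatting = .prettyPrinted
    return try encoder.encode(scene)
}

// Usage: one object on a plane anchor with a tap-triggered behavior.
let scene = CGRScene(
    anchor: Anchor(kind: "horizontalPlane"),
    objects: [CGRObject(name: "cup")],
    behaviors: [Behavior(triggers: ["tap"], actions: ["playSound"])]
)
let cgrFileData = try? makeCGRFileData(for: scene)
```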

Device, Method, And Graphical User Interface For Manipulating 3D Objects On A 2D Screen

US Patent: 20200379626, Dec 3, 2020
Filed: May 29, 2020
Appl. No.: 16/887426
Inventors:
Etienne H. Guerard - Cupertino CA, US
Adam Michael O'Hern - Campbell CA, US
Michelle Chua - Cupertino CA, US
Robin-Yann Joram Storm - Amsterdam, NL
Adam James Bolton - Bend OR, US
Zachary Becker - Santa Clara CA, US
Bradley Warren Peebler - Emerald Hills CA, US
International Classification:
G06F 3/0481
G06F 3/0484
G06F 3/0488
Abstract:
In one implementation, a method of spatially manipulating a three-dimensional object includes displaying a three-dimensional object in a three-dimensional space from a first virtual camera perspective, wherein the three-dimensional space is defined by a three-dimensional coordinate system including three perpendicular axes. The method includes displaying a spatial manipulation user interface element including a first set of spatial manipulation affordances respectively associated with a first set of spatial manipulations of the three-dimensional object, wherein the first set of spatial manipulations is based on the first virtual camera perspective. The method includes detecting a user input changing the first virtual camera perspective to a second virtual camera perspective. The method includes, in response to detecting the user input changing the first virtual camera perspective to a second virtual camera perspective, displaying the three-dimensional object in the three-dimensional space from the second virtual camera perspective and displaying the spatial manipulation user interface element including a second set of spatial manipulation affordances respectively associated with a second set of spatial manipulations of the three-dimensional object, wherein the second set of spatial manipulations is based on the second virtual camera perspective, wherein the first set of spatial manipulations includes at least one spatial manipulation excluded from the second set of spatial manipulations and the second set of spatial manipulations includes at least one spatial manipulation excluded from the first set of spatial manipulations.
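One way to read the perspective-dependent affordance sets: on a 2D screen, dragging along the axis that points roughly into the screen is ambiguous, so that axis can be dropped from the set. The Swift sketch below uses that heuristic as an assumption rather than the patented selection rule.

```swift
// A sketch of one possible selection heuristic (an assumption, not the
// patented rule): drop the translation along the axis most nearly parallel
// to the virtual camera's view direction, since that drag is ambiguous on a
// two-dimensional screen. Changing the perspective therefore changes which
// affordances appear.
enum Axis: CaseIterable, Equatable {
    case x, y, z

    var unitVector: SIMD3<Float> {
        switch self {
        case .x: return SIMD3(1, 0, 0)
        case .y: return SIMD3(0, 1, 0)
        case .z: return SIMD3(0, 0, 1)
        }
    }
}

/// Axes whose translations get a spatial manipulation affordance for the
/// given virtual camera view direction.
func availableTranslationAxes(viewDirection: SIMD3<Float>) -> [Axis] {
    guard let mostAligned = Axis.allCases.max(by: { a, b in
        abs((a.unitVector * viewDirection).sum()) <
        abs((b.unitVector * viewDirection).sum())
    }) else { return [] }
    return Axis.allCases.filter { $0 != mostAligned }
}

// Front view (looking along -z): x and y translations are offered, z is not.
let firstSet = availableTranslationAxes(viewDirection: SIMD3(0, 0, -1))
// Top view (looking along -y): x and z translations are offered, y is not.
let secondSet = availableTranslationAxes(viewDirection: SIMD3(0, -1, 0))
// Each set includes a manipulation the other excludes, as in the abstract.
```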

Seamless Output Video Variations For An Input Video

US Patent: 20180336927, Nov 22, 2018
Filed: Aug 16, 2017
Appl. No.: 15/678469
Inventors:
Jason Klivington - Portland OR, US
Charles A. Mezak - San Francisco CA, US
Etienne Guerard - Cupertino CA, US
Piotr Stanczyk - San Francisco CA, US
International Classification:
G11B 27/00
H04N 9/79
H04N 5/232
G11B 27/031
Abstract:
Techniques and devices for generating multiple output video variations for an input video based on a shared resource architecture. The shared resource architecture reuses and shares computational and gating results from one or more operations to create the multiple output video variations. The shared resource architecture applies a frame-time normalization of the trimmed and stabilized video to produce a trimmed stabilized normalized video and, thereafter, uses the trimmed stabilized normalized video to precompute one or more video parameters that can be shared with multiple output video variations. The shared resource architecture can then generate multiple output video variations using the shared video parameters.
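A hedged Swift sketch of the shared-resource idea: trimming/stabilization and frame-time normalization run once, per-frame parameters are precomputed once, and every output variation is generated from those shared parameters. All names below are hypothetical.

```swift
// A hedged sketch of the shared-resource pipeline; every type and function
// name here (Frame, Video, SharedParameters, trimAndStabilize, ...) is
// hypothetical, and the per-stage bodies are deliberately simplified.
struct Frame {
    var timestamp: Double
    var pixels: [UInt8]
}

struct Video {
    var frames: [Frame]
}

struct SharedParameters {
    var frames: [Frame]
    var meanLuma: [Double]           // one precomputed value per frame
}

/// Placeholder for trimming and stabilization; a real implementation would
/// crop unusable frames and apply stabilization transforms.
func trimAndStabilize(_ input: Video) -> Video {
    input
}

/// Frame-time normalization: resample timestamps onto a uniform timeline so
/// later stages can index frames without handling variable frame rate.
func normalizeFrameTimes(_ video: Video, frameRate: Double = 30) -> Video {
    let frames = video.frames.enumerated().map { index, frame -> Frame in
        var normalized = frame
        normalized.timestamp = Double(index) / frameRate
        return normalized
    }
    return Video(frames: frames)
}

/// Precompute per-frame parameters once; every output variation reads them
/// instead of re-running the earlier stages.
func precomputeSharedParameters(_ video: Video) -> SharedParameters {
    let meanLuma = video.frames.map { frame -> Double in
        guard !frame.pixels.isEmpty else { return 0 }
        let total = frame.pixels.reduce(0) { $0 + Int($1) }
        return Double(total) / Double(frame.pixels.count)
    }
    return SharedParameters(frames: video.frames, meanLuma: meanLuma)
}

/// Each variation (e.g. a loop or a long-exposure composite) is generated
/// from the same shared parameters.
func generateVariations(from shared: SharedParameters) -> [String: Video] {
    ["loop": Video(frames: shared.frames),
     "longExposure": Video(frames: shared.frames)]
}

// Usage: the expensive stages run once, the variations reuse their output.
let shared = precomputeSharedParameters(
    normalizeFrameTimes(trimAndStabilize(Video(frames: []))))
let outputs = generateVariations(from: shared)
```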

Seamless Forward-Reverse Video Loops

US Patent: 20180090175, Mar 29, 2018
Filed: Aug 16, 2017
Appl. No.: 15/678497
Inventors:
Jason Klivington - Portland OR, US
Rudolph van der Merwe - Portland OR, US
Douglas P. Mitchell - Lake Forest Park WA, US
Amir Hoffnung - Tel Aviv, IL
Charles A. Mezak - San Francisco CA, US
Matan Stauber - Tel Aviv, IL
Ran Margolin - Hod HaSharon, IL
Etienne Guerard - Cupertino CA, US
Piotr Stanczyk - San Francisco CA, US
International Classification:
G11B 27/034
G09G 5/377
Abstract:
Techniques and devices for creating a Forward-Reverse Loop output video and other output video variations. A pipeline may include obtaining input video and determining a start frame within the input video and a frame length parameter based on a temporal discontinuity minimization. The selected start frame and the frame length parameter may provide a reversal point within the Forward-Reverse Loop output video. The Forward-Reverse Loop output video may include a forward segment that begins at the start frame and ends at the reversal point and a reverse segment that starts after the reversal point and plays back one or more frames in the forward segment in reverse order. The pipeline for generating the Forward-Reverse Loop output video may be part of a shared resource architecture that generates other types of output video variations, such as AutoLoop output videos and Long Exposure output videos.
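A hedged Swift sketch of the reversal-point selection and frame ordering: a start frame and frame length are chosen to minimize a discontinuity cost at the reversal point, and the output plays the forward segment followed by its interior frames in reverse. The cost function below is an assumption, not the patent's measure.

```swift
// A hedged sketch: the discontinuity cost below (a per-frame feature
// difference around the reversal point) is an assumption standing in for the
// patent's temporal discontinuity minimization; the frame ordering mirrors
// the forward segment followed by its interior frames in reverse.
func selectReversalWindow(features: [Double],
                          minLength: Int = 8) -> (start: Int, length: Int)? {
    guard features.count >= minLength else { return nil }
    var best: (start: Int, length: Int, cost: Double)?
    for start in 0...(features.count - minLength) {
        for length in minLength...(features.count - start) {
            let end = start + length - 1           // reversal point
            // Crude proxy for how abrupt playback feels when it turns
            // around at `end`.
            let cost = abs(features[end] - features[end - 1])
            if best == nil || cost < best!.cost {
                best = (start, length, cost)
            }
        }
    }
    return best.map { ($0.start, $0.length) }
}

/// Frame indices of the Forward-Reverse Loop: forward from the start frame to
/// the reversal point, then the interior frames played back in reverse order.
func forwardReverseOrder(start: Int, length: Int) -> [Int] {
    let forward = Array(start..<(start + length))
    let reverse = forward.dropFirst().dropLast().reversed()
    return forward + reverse
}

// Usage on a toy per-frame motion signal.
let motion: [Double] = [0.9, 0.4, 0.2, 0.1, 0.1, 0.2, 0.4, 0.8, 1.0, 0.3]
if let window = selectReversalWindow(features: motion) {
    print(forwardReverseOrder(start: window.start, length: window.length))
}
```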
Etienne Marie A Guerard from Cupertino, CA, age ~45