Abhijit Sharma Phones & Addresses

  • Chicago, IL
  • Parker, CO
  • Cupertino, CA
  • Plainfield, IL

Resumes

Abhijit Sharma, Cupertino, CA

Work:
Better Future LLC

Apr 2013 to 2000
Co-founder

VertiSystem, Inc

Mar 2013 to 2000
Software Developer

Stanford University School of Medicine

Sep 2012 to Mar 2013
Pre-doctoral Scientist

Scanadu Health

Jun 2012 to Sep 2012
Product Development Intern

DePaul University

Jun 2012 to Sep 2012
Graduate Assistant, Programming Tutor

DePaul FireStarter, Start-up LaunchPad

Jun 2011 to Dec 2011
Initiator and Lead Organizer

Education:
DePaul University
Chicago, IL
Sep 2012
Master of Science in Computer Science

University of Mumbai
Mumbai, Maharashtra
May 2009
Bachelor of Engineering in Information Technology

Abhijit Sharma, Mountain View, CA

Work:
Scanadu

Jun 2012 to Sep 2012
Product Development Intern

Education:
DePaul University
Chicago, IL
Sep 2012
Master of Science in Computer Science

Stanford University, d.School
Jun 2011
Design Research

University of Mumbai
Mumbai, Maharashtra
May 2009
Bachelor of Engineering in Information Technology

Publications

US Patents

Systems And Methods For Facilitating Seamless Flow Content Splicing

US Patent:
20230044231, Feb 9, 2023
Filed:
Oct 20, 2022
Appl. No.:
17/970164
Inventors:
- Englewood CO, US
Abhijit Sharma - Denver CO, US
International Classification:
H04N 21/234
H04N 21/81
H04N 21/44
H04N 21/433
H04N 21/462
H04N 21/439
Abstract:
Systems, methods, machine-readable media, and media devices are provided to facilitate seamless flow content splicing to dynamically insert particularized content items in television programming content. A plurality of particularized content items may be received and stored in a content store. First content that corresponds to television programming may be received and processed to detect a first set of color characteristics of video content. A subset of the particularized content items may be selected based on matching a second set of color characteristics of the subset of the particularized content items to the first set of color characteristics of the video content corresponding to the television programming. The first content may be output for display. Then, the subset of the particularized content items may be output for display in succession so that display of the subset of the particularized content items directly follows display of the first content.
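The color-matching step described above can be sketched roughly as follows. This is an illustrative toy, not the patented method: "color characteristics" are modeled here as coarse RGB histograms, and the function and field names (`color_histogram`, `select_matching_items`, `items[...]["pixels"]`) are assumptions for the sketch.

```python
from typing import List, Sequence

def color_histogram(pixels: Sequence[tuple], bins: int = 4) -> List[int]:
    """Coarse RGB histogram: each channel quantized into `bins` buckets."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    return hist

def histogram_distance(h1: List[int], h2: List[int]) -> float:
    """L1 distance between normalized histograms (0.0 = identical)."""
    n1, n2 = sum(h1) or 1, sum(h2) or 1
    return sum(abs(a / n1 - b / n2) for a, b in zip(h1, h2))

def select_matching_items(program_pixels, items, top_k: int = 2):
    """Rank candidate content items by color similarity to the programming
    and return the closest `top_k` for splicing after the first content."""
    target = color_histogram(program_pixels)
    scored = sorted(
        items,
        key=lambda it: histogram_distance(target, color_histogram(it["pixels"])),
    )
    return scored[:top_k]
```

A selector like this would run over sampled frames of the programming; the real system presumably operates on decoded video rather than pixel lists.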

Content Receiver Control Based On Intra-Content Metrics And Viewing Pattern Detection

US Patent:
20210360323, Nov 18, 2021
Filed:
Aug 2, 2021
Appl. No.:
17/391393
Inventors:
- Englewood CO, US
Abhijit Sharma - Denver CO, US
International Classification:
H04N 21/466
H04N 21/442
H04N 21/835
G06N 5/02
Abstract:
Methods, systems, and machine-readable media are provided to facilitate content receiver control for particularized output of content items based on intra-content metrics. Observation data, corresponding to indications of detected content receiver operations associated with a content receiver and mapped to a first set of content items, may be processed. A first set of intra-content metrics may be detected. An audiovisual pattern of intra-content metrics may be mapped based on correlating the set of observation data with the first set of intra-content metrics. A second set of content items may be processed to detect a second set of intra-content metrics. A subset of the second set of content items may be selected based on a visual category and/or an audio category of the audiovisual pattern of intra-content metrics. The subset may be specified to cause a content receiver to modify operations to record and/or output content corresponding to the subset.

Systems And Methods For Facilitating Seamless Flow Content Splicing

US Patent:
20210029390, Jan 28, 2021
Filed:
Oct 9, 2020
Appl. No.:
17/067422
Inventors:
- Englewood CO, US
Abhijit Sharma - Denver CO, US
Assignee:
DISH Technologies L.L.C. - Englewood CO
International Classification:
H04N 21/234
H04N 21/81
H04N 21/44
H04N 21/433
H04N 21/462
H04N 21/439
Abstract:
Systems, methods, machine-readable media, and media devices are provided to facilitate seamless flow content splicing to dynamically insert particularized content items in television programming content. A plurality of particularized content items may be received and stored in a content store. First content that corresponds to television programming may be received and processed to detect a first set of color characteristics of video content. A subset of the particularized content items may be selected based on matching a second set of color characteristics of the subset of the particularized content items to the first set of color characteristics of the video content corresponding to the television programming. The first content may be output for display. Then, the subset of the particularized content items may be output for display in succession so that display of the subset of the particularized content items directly follows display of the first content.

Methods And Systems For An Augmented Film Crew Using Purpose

US Patent:
20210014579, Jan 14, 2021
Filed:
Sep 30, 2020
Appl. No.:
17/039377
Inventors:
- Englewood CO, US
Abhijit Sharma - Denver CO, US
International Classification:
H04N 21/854
H04N 21/45
H04N 21/44
G11B 27/031
H04N 21/84
H04N 21/435
H04N 21/472
G11B 27/34
Abstract:
Systems and processes associated with an augmented film crew. For example, a computer-implemented method may include receiving, at a display of a user media device, an indication that a user of the user media device intends to generate a user video in an environment; receiving, at the display, an input indicating a user preference associated with the user video; generating, by the user media device, data associated with the environment using a sensor of the user media device; determining, by the user media device, a purpose for the user video using the user preference and the data associated with the environment, wherein the purpose is chosen from a predetermined set of purposes; detecting an additional media device that is located in the environment, wherein the additional media device is associated with the user or the user media device; determining pre-production assignments for the user video using the purpose and the additional media device, wherein the pre-production assignments indicate one or more characteristics of the scene for the user video in the environment; generating, using the user media device, a first video stream of the scene in the environment using the pre-production assignments; receiving, from the additional media device, a second video stream of the scene; and generating, by the user media device, the user video using the first video stream or the second video streams. The above steps may be implemented as instructions stored in a computer-readable medium, computer program product, or device such as a television receiver, or in other types of embodiments.

Methods And Systems For An Augmented Film Crew Using Storyboards

US Patent:
20200372936, Nov 26, 2020
Filed:
Aug 14, 2020
Appl. No.:
16/993542
Inventors:
- Englewood CO, US
Abhijit Sharma - Denver CO, US
International Classification:
G11B 27/031
G11B 27/10
H04N 21/44
H04N 21/435
H04N 21/81
H04N 21/854
H04N 21/45
H04N 21/472
Abstract:
Systems and processes associated with an augmented film crew. For example, a computer-implemented method may include receiving, at a display of a user media device, an indication that a user of the media device intends to generate a user video in an environment; generating, by the user media device, data associated with the environment using a sensor of the user media device; determining, by the user media device, a purpose for the user video using the data associated with the environment; presenting, at the display, a set of screenplays for the user video, wherein the set of screenplays is determined based on the duration, the purpose, and the data associated with the environment; receiving, at the display, an input from the user indicating a selected screenplay from the set of screenplays, wherein the selected screenplay is associated with a set of storyboards; displaying, at the user media device, a first storyboard of the set of storyboards, wherein the first storyboard is overlaid onto a user video stream generated by the user media device; receiving, at the user media device, an additional video stream generated by an additional network device; and generating, by the user media device, the user video using the user video stream or the additional video stream. The above steps may be implemented as instructions stored in a computer-readable medium, computer program product, or device such as a television receiver, or in other types of embodiments.

Automated Transition Classification For Binge Watching Of Content

US Patent:
20200267443, Aug 20, 2020
Filed:
May 8, 2020
Appl. No.:
16/870073
Inventors:
- Englewood CO, US
Pratik Divanji - Englewood CO, US
Abhijit Y. Sharma - Englewood CO, US
Swapnil Tilaye - Englewood CO, US
International Classification:
H04N 21/44
G06K 9/00
G06K 9/62
Abstract:
Novel techniques are described for automated transition classification for binge watching of content. For example, a number of frame images is extracted from a candidate segment time window of content. The frame images can automatically be classified by a trained machine learning model into segment and non-segment classifications, and the classification results can be represented by a two-dimensional (2D) image. The 2D image can be run through a multi-level convolutional conversion to output a set of output images, and a serialized representation of the output images can be run through a trained computational neural network to generate a transition array, from which a candidate transition time can be derived (indicating a precise time at which the content transitions to the classified segment).
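The skeleton of the pipeline in this abstract can be sketched as follows. This is a simplified stand-in under stated assumptions: the patent's trained machine-learning model and convolutional network are replaced here by a caller-supplied classifier plus majority-vote smoothing, and the names (`classify_frames`, `smooth`, `transition_time`) are illustrative.

```python
from typing import Callable, List

def classify_frames(frames: List, model: Callable) -> List[int]:
    """Label each extracted frame 1 (segment, e.g. credits) or 0 (non-segment)."""
    return [1 if model(f) else 0 for f in frames]

def smooth(labels: List[int], k: int = 3) -> List[int]:
    """Majority-vote smoothing over a window of `k` frames to suppress
    isolated single-frame misclassifications."""
    out = []
    for i in range(len(labels)):
        window = labels[max(0, i - k // 2): i + k // 2 + 1]
        out.append(1 if sum(window) * 2 > len(window) else 0)
    return out

def transition_time(labels: List[int], window_start: float, fps: float) -> float:
    """Return the time in seconds of the first 0-to-1 transition within the
    candidate window, or -1.0 if no transition is found."""
    for i in range(1, len(labels)):
        if labels[i - 1] == 0 and labels[i] == 1:
            return window_start + i / fps
    return -1.0
```

In the patented design the per-frame results are instead arranged as a 2D image and fed through convolutional layers to produce the transition array; the sketch above only conveys the overall shape of the computation.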

Automated Transition Classification For Binge Watching Of Content

US Patent:
20200068253, Feb 27, 2020
Filed:
Aug 23, 2018
Appl. No.:
16/109755
Inventors:
- Englewood CO, US
Pratik Divanji - Englewood CO, US
Abhijit Y. Sharma - Englewood CO, US
Swapnil Tilaye - Englewood CO, US
International Classification:
H04N 21/44
G06K 9/62
G06K 9/00
Abstract:
Novel techniques are described for automated transition classification for binge watching of content. For example, a number of frame images is extracted from a candidate segment time window of content. The frame images can automatically be classified by a trained machine learning model into segment and non-segment classifications, and the classification results can be represented by a two-dimensional (2D) image. The 2D image can be run through a multi-level convolutional conversion to output a set of output images, and a serialized representation of the output images can be run through a trained computational neural network to generate a transition array, from which a candidate transition time can be derived (indicating a precise time at which the content transitions to the classified segment).

Methods And Systems For An Augmented Film Crew Using Storyboards

US Patent:
20190206439, Jul 4, 2019
Filed:
Dec 29, 2017
Appl. No.:
15/858693
Inventors:
- Englewood CO, US
Abhijit Sharma - Denver CO, US
International Classification:
G11B 27/031
G11B 27/10
H04N 21/472
H04N 21/44
H04N 21/45
H04N 21/435
H04N 21/81
H04N 21/854
H04N 21/431
Abstract:
Systems and processes associated with an augmented film crew. For example, a computer-implemented method may include receiving, at a display of a user media device, an indication that a user of the media device intends to generate a user video in an environment; generating, by the user media device, data associated with the environment using a sensor of the user media device; determining, by the user media device, a purpose for the user video using the data associated with the environment; presenting, at the display, a set of screenplays for the user video, wherein the set of screenplays is determined based on the duration, the purpose, and the data associated with the environment; receiving, at the display, an input from the user indicating a selected screenplay from the set of screenplays, wherein the selected screenplay is associated with a set of storyboards; displaying, at the user media device, a first storyboard of the set of storyboards, wherein the first storyboard is overlaid onto a user video stream generated by the user media device; receiving, at the user media device, an additional video stream generated by an additional network device; and generating, by the user media device, the user video using the user video stream or the additional video stream. The above steps may be implemented as instructions stored in a computer-readable medium, computer program product, or device such as a television receiver, or in other types of embodiments.
Abhijit Y Sharma from Chicago, IL, age ~37