
Abhinav Sethy

from Seattle, WA
Age ~46

Abhinav Sethy Phones & Addresses

  • 1630 1st Ave W, Seattle, WA 98119
  • Chappaqua, NY
  • Yonkers, NY
  • 2727 Ellendale Pl, Los Angeles, CA 90007
  • 310 N Greeley Ave, Chappaqua, NY 10514

Work

Company: Amazon (Oct 2017 to present) Position: Principal Applied Scientist

Education

Degree: Doctorate, Doctor of Philosophy School: University of Southern California, 2001 to 2007 Specialties: Electrical Engineering

Skills

Machine Learning • C++ • Information Retrieval • Matlab • Signal Processing • Data Mining • Natural Language Processing

Industries

Internet

Resumes


Principal Applied Scientist

Location:
1630 1st St, Seattle, WA 98057
Industry:
Internet
Work:
Amazon
Principal Applied Scientist

IBM 2007 - Aug 2017
Research Scientist

IBM 2007 - Aug 2017
Researcher

Adobe Jun 1999 - Jul 2000
Member of Technical Staff
Education:
University of Southern California 2001 - 2007
Doctorates, Doctor of Philosophy, Electrical Engineering
Indian Institute of Technology, Delhi 1995 - 1999
Bachelors, Bachelor of Technology, Electrical Engineering
Skills:
Machine Learning
C++
Information Retrieval
Matlab
Signal Processing
Data Mining
Natural Language Processing

Publications

Us Patents

Accuracy Improvement Of Spoken Queries Transcription Using Co-Occurrence Information

US Patent:
8650031, Feb 11, 2014
Filed:
Jul 31, 2011
Appl. No.:
13/194972
Inventors:
Jonathan Mamou - Jerusalem, IL
Abhinav Sethy - Chappaqua NY, US
Bhuvana Ramabhadran - Mount Kisco NY, US
Ron Hoory - Haifa, IL
Paul Joseph Vozila - Arlington MA, US
Nathan Bodenstab - Winchester OR, US
Assignee:
Nuance Communications, Inc. - Burlington MA
International Classification:
G10L 15/00
G10L 15/26
G06F 17/27
G10L 21/00
G10L 25/00
G10L 21/06
G06F 17/28
G10L 13/00
G10L 13/06
G10L 19/12
G06F 7/00
G06F 17/30
US Classification:
704235, 704 9, 704243, 704257, 704240, 704270, 7042701, 704246, 704 2, 704 4, 704260, 704222, 707738, 707693, 707706, 707713, 707767, 707999, 707722
Abstract:
Techniques disclosed herein include systems and methods for voice-enabled searching. Techniques include a co-occurrence based approach to improve accuracy of the 1-best hypothesis for non-phrase voice queries, as well as for phrased voice queries. A co-occurrence model is used in addition to a statistical natural language model and acoustic model to recognize spoken queries, such as spoken queries for searching a search engine. Given an utterance and an associated list of automated speech recognition n-best hypotheses, the system rescores the different hypotheses using co-occurrence information. For each hypothesis, the system estimates a frequency of co-occurrence within web documents. Scores from the speech recognizer and the co-occurrence engine can then be combined to select a best hypothesis with a lower word error rate.
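The rescoring step this abstract describes can be sketched in a few lines. This is a hypothetical toy illustration, not the patented implementation: the adjacent-pair co-occurrence counts, the averaging, and the interpolation weight `weight=0.5` are all assumptions made for the example.

```python
def cooccurrence_score(hypothesis, cooc_counts):
    """Toy stand-in for web co-occurrence: average count of adjacent word pairs."""
    words = hypothesis.split()
    if len(words) < 2:
        return 0.0
    pairs = list(zip(words, words[1:]))
    return sum(cooc_counts.get(p, 0) for p in pairs) / len(pairs)

def rescore(nbest, cooc_counts, weight=0.5):
    """Combine each hypothesis's ASR score with its co-occurrence score
    and return the highest-scoring hypothesis from the n-best list."""
    return max(nbest, key=lambda hs: hs[1] + weight * cooccurrence_score(hs[0], cooc_counts))[0]

nbest = [("wreck a nice beach", 0.90), ("recognize speech", 0.85)]
cooc = {("recognize", "speech"): 120, ("nice", "beach"): 3}
print(rescore(nbest, cooc))  # → recognize speech
```

Here the co-occurrence evidence overturns the slightly higher acoustic/language-model score of the wrong transcription, which is the effect the abstract claims.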

Topic Specific Language Models Built From Large Numbers Of Documents

US Patent:
20060212288, Sep 21, 2006
Filed:
Mar 17, 2006
Appl. No.:
11/384226
Inventors:
Abhinav Sethy - Los Angeles CA, US
Panayiotis Georgiou - La Crescenta CA, US
Shrikanth Narayanan - Santa Monica CA, US
International Classification:
G06F 17/21
US Classification:
704010000
Abstract:
Forming and/or improving a language model based on data from a large collection of documents, such as web data. The collection of documents is queried using queries that are formed from the language model. The language model is subsequently improved using the information thus obtained. The improvement is used to improve the query. As data is received from the collection of documents, it is compared to a rejection model, which models what rejected documents typically look like. Any document that matches the rejection model is then rejected. The documents that remain are characterized to determine whether they add information to the language model, whether they are relevant, and whether they should be independently rejected. Rejected documents are used to update the rejection model; accepted documents are used to update the language model. Each iteration improves the language model, and the documents may be analyzed again using the improved language model.
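The iterative loop in this abstract can be read as: query with the current model, reject junk, fold accepted documents back in. A minimal sketch, assuming a unigram count model, a caller-supplied `fetch` function standing in for the document collection, and an exact-match set standing in for the rejection model (all illustrative, not the patented method):

```python
from collections import Counter

def build_language_model(seed_counts, fetch, is_rejected, iterations=2):
    """Iteratively grow a unigram model: query with the current model, drop
    documents the rejection test flags, fold accepted documents back in."""
    model = Counter(seed_counts)
    rejected_docs = set()  # toy "rejection model": remember exact junk documents
    for _ in range(iterations):
        query = [word for word, _ in model.most_common(3)]  # query formed from the model
        for doc in fetch(query):
            if is_rejected(doc) or doc in rejected_docs:
                rejected_docs.add(doc)   # rejected documents update the rejection side
                continue
            model.update(doc.split())    # accepted documents update the language model
    return model
```

Each pass changes `model.most_common(3)`, so later queries reflect what the model has already learned, mirroring the abstract's "each iteration improves the language model."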

Implementing A Whole Sentence Recurrent Neural Network Language Model For Natural Language Processing

US Patent:
20200013393, Jan 9, 2020
Filed:
Aug 23, 2019
Appl. No.:
16/549893
Inventors:
- ARMONK NY, US
Abhinav Sethy - Chappaqua NY, US
Kartik Audhkhasi - White Plains NY, US
Bhuvana Ramabhadran - Mount Kisco NY, US
International Classification:
G10L 15/197
G10L 15/16
G06N 3/08
G10L 15/22
G06N 7/00
G10L 15/06
Abstract:
A computer selects a test set of sentences from among sentences applied to train a whole sentence recurrent neural network language model to estimate the likelihood that each whole sentence processed by natural language processing is correct. The computer generates imposter sentences from among the test set of sentences by substituting one word in each sentence of the test set of sentences. The computer generates, through the whole sentence recurrent neural network language model, a first score for each sentence of the test set of sentences and at least one additional score for each of the imposter sentences. The computer evaluates an accuracy of the natural language processing system in performing sequential classification tasks based on an accuracy value of the first score in reflecting a correct sentence and the at least one additional score in reflecting an incorrect sentence.
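The "imposter sentence" construction described above (substituting exactly one word) is simple to sketch. This is an illustrative assumption of how such a substitution might be done; the vocabulary, the uniform position choice, and the function name are invented for the example:

```python
import random

def make_imposter(sentence, vocabulary, rng=None):
    """Create an 'imposter' by substituting exactly one word of the sentence
    with a different word drawn from the given vocabulary."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible example
    words = sentence.split()
    idx = rng.randrange(len(words))              # pick one position
    choices = [w for w in vocabulary if w != words[idx]]
    words[idx] = rng.choice(choices)             # substitute a different word
    return " ".join(words)
```

By construction the imposter differs from the original in exactly one position, so scoring both with the whole-sentence model tests whether it prefers the correct sentence.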

Symbol Sequence Estimation In Speech

US Patent:
20200013408, Jan 9, 2020
Filed:
Sep 20, 2019
Appl. No.:
16/577663
Inventors:
- Armonk NY, US
Gakuto Kurata - Tokyo, JP
Bhuvana Ramabhadran - Yorktown Heights NY, US
Abhinav Sethy - Yorktown Heights NY, US
Masayuki Suzuki - Tokyo, JP
Ryuki Tachibana - Tokyo, JP
International Classification:
G10L 15/26
Abstract:
Symbol sequences are estimated using a computer-implemented method including detecting one or more candidates of a target symbol sequence from speech-to-text data, extracting a related portion of each candidate from the speech-to-text data, detecting repetition of at least a partial sequence of each candidate within the related portion of the corresponding candidate, labeling the detected repetition with a repetition indication, and estimating whether each candidate is the target symbol sequence, using the corresponding related portion including the repetition indication of each of the candidates.
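The repetition-labeling step can be sketched as a token scan: repeats of a candidate sequence inside its related portion are replaced by a repetition indication. The `<REP>` marker and the exact-match detection are assumptions made for this toy example, not the claimed method:

```python
def label_repetitions(candidate, related_text):
    """Keep the first occurrence of the candidate token sequence and replace
    later repeats with a repetition indication (<REP> is an invented marker)."""
    tokens = related_text.split()
    cand = candidate.split()
    out, i, seen = [], 0, 0
    while i < len(tokens):
        if tokens[i:i + len(cand)] == cand:   # candidate sequence found here
            seen += 1
            out.append(candidate if seen == 1 else "<REP>")
            i += len(cand)
        else:
            out.append(tokens[i])
            i += 1
    return " ".join(out)

print(label_repetitions("one two", "call one two please one two"))  # → call one two please <REP>
```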

Implementing A Whole Sentence Recurrent Neural Network Language Model For Natural Language Processing

US Patent:
20190318732, Oct 17, 2019
Filed:
Apr 16, 2018
Appl. No.:
15/954399
Inventors:
- Armonk NY, US
Abhinav Sethy - Chappaqua NY, US
Kartik Audhkhasi - White Plains NY, US
Bhuvana Ramabhadran - Mount Kisco NY, US
International Classification:
G10L 15/197
G10L 15/16
G10L 15/06
G10L 15/22
G06N 7/00
G06N 3/08
Abstract:
A whole sentence recurrent neural network (RNN) language model (LM) is provided for estimating the likelihood that each whole sentence processed by natural language processing is correct. A noise contrastive estimation sampler is applied against at least one entire sentence from a corpus of multiple sentences to generate at least one incorrect sentence. The whole sentence RNN LM is trained, using the at least one entire sentence from the corpus and the at least one incorrect sentence, to distinguish the at least one entire sentence as correct. The whole sentence recurrent neural network language model is applied to estimate the likelihood that each whole sentence processed by natural language processing is correct.
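The training objective described here, distinguishing real sentences from sampled incorrect ones, has the shape of a binary noise-contrastive loss. A minimal sketch of such a loss over precomputed model scores; the function and the plain logistic form are illustrative assumptions, not the patent's formulation:

```python
import math

def nce_loss(scores_true, scores_noise):
    """Binary-classification objective in the spirit of NCE: real sentences
    should score high (label 1), sampled incorrect sentences low (label 0)."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    loss = -sum(math.log(sigmoid(s)) for s in scores_true)       # correct sentences
    loss -= sum(math.log(1.0 - sigmoid(s)) for s in scores_noise)  # imposters
    return loss / (len(scores_true) + len(scores_noise))
```

Minimizing this pushes the model's score for entire corpus sentences up and for generated incorrect sentences down, which is the discrimination the abstract describes.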

Combining Installed Audio-Visual Sensors With Ad-Hoc Mobile Audio-Visual Sensors For Smart Meeting Rooms

US Patent:
20190149769, May 16, 2019
Filed:
Jan 9, 2019
Appl. No.:
16/243706
Inventors:
- Armonk NY, US
KENNETH W. CHURCH - CROTON-ON-HUDSON NY, US
VAIBHAVA GOEL - CHAPPAQUA NY, US
LIDIA L. MANGU - NEW YORK NY, US
ETIENNE MARCHERET - WHITE PLAINS NY, US
BHUVANA RAMABHADRAN - MOUNT KISCO NY, US
LAURENCE P. SANSONE - BEACON NY, US
ABHINAV SETHY - CHAPPAQUA NY, US
SAMUEL THOMAS - ELMSFORD NY, US
International Classification:
H04N 7/15
H04W 12/06
G10L 25/60
H04N 7/14
Abstract:
A method of combining data streams from fixed audio-visual sensors with data streams from personal mobile devices, including: forming a communication link with at least one of one or more personal mobile devices; receiving at least one of an audio data stream and/or a video data stream from the at least one of the one or more personal mobile devices; determining the quality of the at least one of the audio data stream and/or the video data stream, wherein the audio data stream and/or the video data stream having a quality above a threshold quality is retained; and combining the retained audio data stream and/or the video data stream with the data streams from the fixed audio-visual sensors.
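The retain-above-threshold-then-merge step reduces to a filter. A minimal sketch, where the quality estimator, stream representation, and threshold value are all assumptions made for the example:

```python
def combine_streams(fixed_streams, mobile_streams, quality_of, threshold=0.6):
    """Retain ad-hoc mobile streams whose estimated quality exceeds the
    threshold, then merge them with the installed-sensor streams."""
    retained = [s for s in mobile_streams if quality_of(s) > threshold]
    return fixed_streams + retained
```

In practice `quality_of` would be an audio/video quality estimate (e.g. SNR or resolution based); here any callable works, which keeps the sketch self-contained.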

Symbol Sequence Estimation In Speech

US Patent:
20180204567, Jul 19, 2018
Filed:
Jan 18, 2017
Appl. No.:
15/409126
Inventors:
- Armonk NY, US
Gakuto Kurata - Tokyo, JP
Bhuvana Ramabhadran - Yorktown Heights NY, US
Abhinav Sethy - Yorktown Heights NY, US
Masayuki Suzuki - Tokyo, JP
Ryuki Tachibana - Tokyo, JP
International Classification:
G10L 15/197
G10L 15/18
G10L 15/02
G10L 15/14
Abstract:
Symbol sequences are estimated using a computer-implemented method including detecting one or more candidates of a target symbol sequence from speech-to-text data, extracting a related portion of each candidate from the speech-to-text data, detecting repetition of at least a partial sequence of each candidate within the related portion of the corresponding candidate, labeling the detected repetition with a repetition indication, and estimating whether each candidate is the target symbol sequence, using the corresponding related portion including the repetition indication of each of the candidates.

Combining Installed Audio-Visual Sensors With Ad-Hoc Mobile Audio-Visual Sensors For Smart Meeting Rooms

US Patent:
20180027213, Jan 25, 2018
Filed:
Oct 2, 2017
Appl. No.:
15/722704
Inventors:
- Armonk NY, US
KENNETH W. CHURCH - CROTON-ON-HUDSON NY, US
VAIBHAVA GOEL - CHAPPAQUA NY, US
LIDIA L. MANGU - NEW YORK NY, US
ETIENNE MARCHERET - WHITE PLAINS NY, US
BHUVANA RAMABHADRAN - MOUNT KISCO NY, US
LAURENCE P. SANSONE - BEACON NY, US
ABHINAV SETHY - CHAPPAQUA NY, US
SAMUEL THOMAS - ELMSFORD NY, US
International Classification:
H04N 7/15
G10L 25/60
Abstract:
A method of combining data streams from fixed audio-visual sensors with data streams from personal mobile devices, including: forming a communication link with at least one of one or more personal mobile devices; receiving at least one of an audio data stream and/or a video data stream from the at least one of the one or more personal mobile devices; determining the quality of the at least one of the audio data stream and/or the video data stream, wherein the audio data stream and/or the video data stream having a quality above a threshold quality is retained; and combining the retained audio data stream and/or the video data stream with the data streams from the fixed audio-visual sensors.