
Robert Kamil Bryll

from Bothell, WA
Age ~55

Robert Bryll Phones & Addresses

  • 15700 116th Ave NE, Unit 309, Bothell, WA 98011
  • 15515 Juanita Woodinville Way NE, Bothell, WA 98011
  • 1559 Charterwoods Cir, Fairborn, OH 45324 (937) 431-4877
  • 3001 Haynes Ct, Chicago, IL 60608 (773) 247-3139
  • Kiona, WA
  • Dayton, OH
  • 1559 Charterwoods Cir APT 3, Fairborn, OH 45324 (937) 478-1745

Work

Position: Building and Grounds Cleaning and Maintenance Occupations

Emails

Publications

US Patents

Feature Based Hierarchical Video Segmentation

US Patent: 6493042, Dec 10, 2002
Filed: Mar 18, 1999
Appl. No.: 09/271869
Inventors: Gozde Bozdagi - Yenisehir, TR; Hong Heather Yu - Plainsboro, NJ; Steven J. Harrington - Webster, NY; Robert Bryll - Chicago, IL
Assignee: Xerox Corporation - Stamford, CT
International Classification: H04N 5/14
US Classification: 348/700, 382/173
Abstract:
Systems and methods for robust detection of fades and dissolves in video sequences. The systems and methods use a two-step approach to detect both discontinuous cuts and gradual changes in a video sequence. Specifically, an input video signal is first divided into video segments based on the locations of discontinuous cuts. A gradual change detector is then applied to the discontinuous cut segments to determine further cuts based on editing characteristics. By using this two-part approach, the systems and methods of this invention can robustly detect scene breaks within a video.
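
The two-step structure the abstract describes (first split at hard cuts, then look for gradual fades or dissolves inside each segment) can be illustrated with a short, self-contained sketch. The frame-difference measure, thresholds, and function names below are illustrative assumptions, not the patent's actual algorithm:

```python
import numpy as np

def mean_abs_diff(frame_a, frame_b):
    """Average absolute pixel difference between two grayscale frames."""
    return float(np.mean(np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))))

def detect_cuts(frames, hard_cut_thresh=40.0):
    """Step 1: split the sequence at discontinuous (hard) cuts."""
    cuts = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > hard_cut_thresh:
            cuts.append(i)
    cuts.append(len(frames))
    return [(cuts[k], cuts[k + 1]) for k in range(len(cuts) - 1)]

def detect_gradual_changes(frames, segment, window=10, grad_thresh=15.0):
    """Step 2: within a segment, flag slow fades/dissolves by comparing
    frames that are `window` apart instead of adjacent frames."""
    start, end = segment
    changes = []
    for i in range(start, end - window):
        if mean_abs_diff(frames[i], frames[i + window]) > grad_thresh:
            changes.append(i)
    return changes

# Usage: frames is a list of 2-D numpy arrays (grayscale video frames).
# segments = detect_cuts(frames)
# gradual = [detect_gradual_changes(frames, seg) for seg in segments]
```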

Methods And Systems For Real-Time Storyboarding With A Web Page And Graphical User Interface For Automatic Video Parsing And Browsing

US Patent: 6647535, Nov 11, 2003
Filed: Mar 18, 1999
Appl. No.: 09/271867
Inventors: Gozde Bozdagi - Yenisehir, TR; Hong Heather Yu - Plainsboro, NJ; Michael R. Campanelli - Webster, NY; Robert Bryll - Chicago, IL; Steven J. Harrington - Webster, NY
Assignee: Xerox Corporation - Stamford, CT
International Classification: G06F 15/00
US Classification: 715/530, 715/501.1, 715/500.1, 348/460, 348/468, 348/700
Abstract:
Systems and methods to enable real-time and near real-time storyboarding on the World Wide Web, in addition to a graphical user interface for video parsing and browsing of the storyboard. Specifically, storyboarding can be accomplished on the World Wide Web by parsing an input video into representative or key frames. These frames can then be posted to a web document, or the like, for subsequent viewing by a user. This allows a video to be distilled down to its essential frames, thus eliminating storage and bandwidth problems as well as the need for a user to view the entirety of the video. Furthermore, the graphical user interface allows a user to visually interact with an input video signal to determine the key or representative frames, or to retrieve video segments associated with already determined key frames. The interface also allows manipulation of these frames, including, but not limited to, playing the entire segment represented by a key or significant frame as well as actually determining the cuts between significant segments.
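
A minimal sketch of the storyboarding idea described above: parse a video into key frames (here, simply the first frame after each large frame-to-frame change) and post them to a simple web document. The helper names, the difference threshold, and the HTML layout are illustrative assumptions rather than the patented implementation:

```python
import numpy as np

def extract_key_frames(frames, diff_thresh=35.0):
    """Pick the first frame of each detected shot as its representative frame.
    (A stand-in for the parsing step; any cut detector could be used.)"""
    key_indices = [0]
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(np.int16) - frames[i - 1].astype(np.int16)))
        if diff > diff_thresh:
            key_indices.append(i)
    return key_indices

def write_storyboard_html(key_indices, image_paths, out_path="storyboard.html"):
    """Post the key frames to a simple web document for browsing.
    `image_paths` is assumed to be a list of saved frame images, parallel to the frames."""
    thumbs = "\n".join(
        f'<img src="{image_paths[i]}" alt="key frame {i}" width="160">'
        for i in key_indices
    )
    with open(out_path, "w") as f:
        f.write(f"<html><body><h1>Storyboard</h1>\n{thumbs}\n</body></html>")
```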

Methods And Systems For Real-Time Storyboarding With A Web Page And Graphical User Interface For Automatic Video Parsing And Browsing

US Patent: 7313762, Dec 25, 2007
Filed: Jul 22, 2003
Appl. No.: 10/624762
Inventors: Gozde Bozdagi - Yenisehir, TR; Hong Heather Yu - Plainsboro, NJ, US; Michael R. Campanelli - Webster, NY, US; Robert Bryll - Chicago, IL, US; Steven J. Harrington - Webster, NY, US
Assignee: Xerox Corporation - Norwalk, CT
International Classification: G06F 3/00
US Classification: 715/719
Abstract:
Systems and methods to enable real-time and near real-time storyboarding on the World Wide Web, in addition to a graphical user interface for video parsing and browsing of the storyboard. Specifically, storyboarding can be accomplished on the World Wide Web by parsing an input video into representative or key frames. These frames can then be posted to a web document, or the like, for subsequent viewing by a user. This allows a video to be distilled down to its essential frames, thus eliminating storage and bandwidth problems as well as the need for a user to view the entirety of the video. Furthermore, the graphical user interface allows a user to visually interact with an input video signal to determine the key or representative frames, or to retrieve video segments associated with already determined key frames. The interface also allows manipulation of these frames, including, but not limited to, playing the entire segment represented by a key or significant frame as well as actually determining the cuts between significant segments.
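
This continuation covers the same interface side: selecting a key frame retrieves or plays the video segment it represents. A rough sketch of that bookkeeping, assuming key-frame indices have already been computed (for example by the extract_key_frames sketch above):

```python
def build_segment_index(key_indices, total_frames):
    """Associate each key frame with the video segment it represents:
    from that key frame up to (but not including) the next one."""
    bounds = key_indices + [total_frames]
    return {k: (bounds[j], bounds[j + 1]) for j, k in enumerate(key_indices)}

def retrieve_segment(frames, segment_index, clicked_key_frame):
    """Return the frames of the segment represented by a clicked key frame,
    e.g. so the interface can play just that portion of the video."""
    start, end = segment_index[clicked_key_frame]
    return frames[start:end]

# Usage:
# index = build_segment_index(key_indices, len(frames))
# clip = retrieve_segment(frames, index, key_indices[2])
```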

Magnified Machine Vision User Interface

US Patent: 7394926, Jul 1, 2008
Filed: Sep 30, 2005
Appl. No.: 11/241780
Inventors: Robert Kamil Bryll - Bothell, WA, US; Vidya Venkatachalam - Bellevue, WA, US
Assignee: Mitutoyo Corporation - Kawasaki-shi
International Classification: G06K 9/00
US Classification: 382/141, 382/145, 382/152, 382/298, 348/86, 348/92
Abstract:
Improved user interface methods facilitate navigation and programming operations for a magnified machine vision inspection system. Large composite images are determined and stored. The composite images include workpiece features that are distributed beyond the limits of a single magnified field of view of the machine vision system. Despite their size, the composite images may be recalled and displayed in a user-friendly manner that approximates a smooth, real-time zoom effect. The user interface may include controls that allow a user to easily define a set of workpiece features to be inspected using a composite image, and to easily position the machine vision system to view those workpiece features for the purpose of programming inspection operations for them.
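
As a rough illustration of the composite-image idea, the sketch below stitches individually captured fields of view into one large image and crops a region around a point of interest that a caller could rescale to approximate a zoom effect. The tile layout, sizes, and function names are assumptions for illustration only:

```python
import numpy as np

def build_composite(tiles, grid_rows, grid_cols):
    """Stitch individually captured fields of view (all the same size)
    into one large composite image, row by row."""
    rows = [np.hstack(tiles[r * grid_cols:(r + 1) * grid_cols]) for r in range(grid_rows)]
    return np.vstack(rows)

def zoom_view(composite, center_xy, zoom, view_size=(480, 640)):
    """Return a crop of the composite around `center_xy`, sized so that the
    caller can rescale it to `view_size` for an approximate zoom effect."""
    h, w = view_size
    crop_h, crop_w = int(h / zoom), int(w / zoom)
    cx, cy = center_xy
    y0 = max(0, min(composite.shape[0] - crop_h, cy - crop_h // 2))
    x0 = max(0, min(composite.shape[1] - crop_w, cx - crop_w // 2))
    return composite[y0:y0 + crop_h, x0:x0 + crop_w]
```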

System And Method For Automatically Recovering Video Tools In A Vision System

US Patent: 7454053, Nov 18, 2008
Filed: Oct 29, 2004
Appl. No.: 10/978227
Inventors: Robert K. Bryll - Bothell, WA, US; Kozo Ariga - Kawasaki, JP
Assignee: Mitutoyo Corporation - Kawasaki-shi
International Classification: G06K 9/00
US Classification: 382/152
Abstract:
Methods and systems for automatically recovering a failed video inspection tool in a precision machine vision inspection system are described. A set of recovery instructions may be associated or merged with a video tool to allow the tool to automatically recover and proceed to provide an inspection result after an initial failure. The recovery instructions include operations that evaluate and modify feature inspection parameters that govern acquiring an image of a workpiece feature and inspecting the feature. The set of instructions may include an initial phase of recovery that adjusts image acquisition parameters. If adjusting image acquisition parameters does not result in proper tool operation, additional feature inspection parameters, such as the tool position, may be adjusted. The order in which the multiple feature inspection parameters and their related characteristics are considered may be predefined so as to most efficiently complete the automatic tool recovery process.
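
The recovery flow described above (adjust image acquisition parameters first, then additional parameters such as tool position, in a predefined order) might look roughly like the following sketch. The parameter names (exposure, x, y), the specific adjustment steps, and the run_tool/acquire_image callables are hypothetical, not the patent's actual interfaces:

```python
def recover_video_tool(run_tool, acquire_image, params,
                       exposure_scales=(0.5, 1.5, 2.0),
                       position_offsets=((-5, 0), (5, 0), (0, -5), (0, 5))):
    """Attempt automatic recovery of a failed inspection tool.
    Phase 1 re-tries with adjusted image acquisition (exposure here);
    phase 2 additionally nudges the tool position, in a predefined order.
    `params` is assumed to hold 'exposure', 'x', and 'y' keys."""
    def attempt(p):
        image = acquire_image(p)
        return run_tool(image, p)          # returns a result, or None on failure

    for s in exposure_scales:              # phase 1: acquisition parameters only
        trial = dict(params, exposure=params["exposure"] * s)
        result = attempt(trial)
        if result is not None:
            return result, trial
    for s in exposure_scales:              # phase 2: acquisition + tool position
        for dx, dy in position_offsets:
            trial = dict(params, exposure=params["exposure"] * s,
                         x=params["x"] + dx, y=params["y"] + dy)
            result = attempt(trial)
            if result is not None:
                return result, trial
    raise RuntimeError("automatic tool recovery failed")
```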

System And Method For Fast Template Matching By Adaptive Template Decomposition

US Patent: 7580560, Aug 25, 2009
Filed: Jul 18, 2005
Appl. No.: 11/184185
Inventors: Robert K. Bryll - Bothell, WA, US
Assignee: Mitutoyo Corporation - Kawasaki-shi
International Classification: G06K 9/00
US Classification: 382/152, 382/209
Abstract:
A fast template matching method based on adaptive template decomposition is provided. The template decomposition technique subdivides a template into a number of horizontal and/or vertical subdivisions, and one-dimensional characterizations are determined for each subdivision. The template decomposition may be adapted during learn mode operations of a general-purpose precision machine vision inspection system, with the support of corresponding user interfaces. The matching results for one or more template decomposition configurations may be evaluated by a user, or by an automatic decomposition configuration evaluation routine, to determine a configuration that provides a desirable trade-off between speed and matching position accuracy. Automatic workpiece inspection instructions may implement the configuration to repeatedly provide optimum speed versus accuracy trade-offs during run mode operations of the inspection system.
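
One way to picture the decomposition is sketched below: the template is split into horizontal strips, each strip is reduced to a one-dimensional column-mean profile, and matching compares those 1-D profiles instead of full 2-D patches. The profile choice, the sum-of-absolute-differences score, and the strip count are illustrative assumptions; the number of strips is the knob a learn-mode evaluation could tune for the speed/accuracy trade-off:

```python
import numpy as np

def strip_profiles(patch, n_strips):
    """Split a 2-D patch into `n_strips` horizontal strips and characterize
    each strip by its column-wise mean (a 1-D profile)."""
    strips = np.array_split(patch, n_strips, axis=0)
    return [s.mean(axis=0) for s in strips]

def match_score(image, template, top_left, n_strips):
    """Sum of absolute differences between the template's strip profiles and
    the corresponding strip profiles of the image region at `top_left`."""
    y, x = top_left
    h, w = template.shape
    region = image[y:y + h, x:x + w]
    t_prof = strip_profiles(template.astype(np.float64), n_strips)
    r_prof = strip_profiles(region.astype(np.float64), n_strips)
    return sum(np.abs(t - r).sum() for t, r in zip(t_prof, r_prof))

# Fewer strips -> coarser 1-D characterization and faster matching;
# more strips -> closer to full 2-D matching accuracy.
```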

Fast Multiple Template Matching Using A Shared Correlation Map

US Patent: 7636478, Dec 22, 2009
Filed: Jul 31, 2006
Appl. No.: 11/461418
Inventors: Robert Kamil Bryll - Bothell, WA, US
Assignee: Mitutoyo Corporation - Kawasaki-shi
International Classification: G06K 9/62
US Classification: 382/209, 382/217, 382/218, 382/219, 382/221
Abstract:
A method is provided that increases throughput and decreases the memory requirements for matching multiple templates in an image. The method includes determining a set of inter-template early elimination values that characterize the degree of matching between various templates and the image, at various locations in the image. A later-analyzed template may be rejected as a potential match at a location in the image based on comparing a value characterizing its degree of match at that location to an inter-template early elimination value corresponding to the degree of match of an earlier-analyzed template at that location. The compared values may be determined by different sets of operations, and may be normalized such that they are properly comparable. The inter-template early elimination values may be stored in a shared correlation map. The shared correlation map may be analyzed to determine the matching locations for multiple templates in the image.
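
A simplified sketch of the shared-correlation-map idea: one map holds the best area-normalized score found so far at each image location, and a later template is skipped wherever an earlier template has already matched well enough. The brute-force scan, the mean-absolute-difference score, and the acceptance threshold are illustrative assumptions, not the patented method:

```python
import numpy as np

def multi_template_match(image, templates, accept_thresh=12.0):
    """Match several templates against an image while sharing one correlation
    map of the best normalized score seen so far at each location."""
    H, W = image.shape
    shared_map = np.full((H, W), np.inf)            # best (lowest) score so far per location
    best_template = np.full((H, W), -1, dtype=int)  # which template achieved it
    img = image.astype(np.float64)
    for t_idx, tpl in enumerate(templates):
        th, tw = tpl.shape
        tplf = tpl.astype(np.float64)
        for y in range(H - th + 1):
            for x in range(W - tw + 1):
                if shared_map[y, x] <= accept_thresh:
                    continue  # early elimination: an earlier template already matches well here
                # mean absolute difference, normalized by area so scores are
                # comparable across templates of different sizes
                score = np.abs(img[y:y + th, x:x + tw] - tplf).mean()
                if score < shared_map[y, x]:
                    shared_map[y, x] = score
                    best_template[y, x] = t_idx
    return shared_map, best_template
```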

System And Method For Single Image Focus Assessment

US Patent: 7668388, Feb 23, 2010
Filed: Mar 3, 2005
Appl. No.: 11/072360
Inventors: Robert K. Bryll - Bothell, WA, US
Assignee: Mitutoyo Corporation - Kawasaki-shi
International Classification: G06K 9/40
US Classification: 382/255
Abstract:
An image focus assessment method is provided that works reliably for images of a variety of relatively dissimilar workpieces or workpiece features. The focus assessment method is based on analysis of a single image (without the benefit of comparison to other images). The robustness of the focus assessment method is enhanced by the use of at least one classifier based on a plurality of focus classification features. In one application, a primary advantage of assessing focus from a single image is that an overall workpiece inspection time may be reduced by avoiding running an autofocus routine if an image is already in focus. In various embodiments, the focus assessment method may include an ensemble of classifiers. The ensemble of classifiers can be trained on different training data (sub)sets or different parameter (sub)sets, and their classification outcomes combined by a voting operation or the like, in order to enhance the overall accuracy and robustness of the focus assessment method.
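
A toy sketch of single-image focus assessment in the spirit of the abstract: compute a few sharpness features from one image and combine simple per-feature classifiers by majority vote. The particular features and the fixed thresholds are illustrative assumptions; in practice the classifiers would be trained on labeled in-focus and out-of-focus images:

```python
import numpy as np

def focus_features(image):
    """A few single-image sharpness features: gradient energy, Laplacian
    variance, and high-frequency energy fraction of the spectrum."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)
    grad_energy = np.mean(gx ** 2 + gy ** 2)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    lap_var = lap.var()
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    low = spectrum[h // 2 - h // 8: h // 2 + h // 8,
                   w // 2 - w // 8: w // 2 + w // 8].sum()
    hf_fraction = 1.0 - low / spectrum.sum()
    return np.array([grad_energy, lap_var, hf_fraction])

def assess_focus(image, thresholds=(50.0, 100.0, 0.35)):
    """Ensemble of per-feature threshold classifiers combined by majority vote:
    the image is declared 'in focus' if most features exceed their threshold."""
    votes = focus_features(image) > np.array(thresholds)
    return votes.sum() > len(votes) / 2

# If assess_focus(img) is True, an autofocus pass can be skipped, saving
# inspection time; otherwise the system runs its normal autofocus routine.
```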