
Rimma V Nehme

from Bellevue, WA
Age ~43

Rimma Nehme Phones & Addresses

  • 7378 171st Ave SE, Bellevue, WA 98006
  • 4217 Waban Hl, Madison, WI 53711 (608) 230-5530
  • Middleton, WI
  • Indianapolis, IN
  • West Lafayette, IN
  • Framingham, MA
  • Hillsdale, MI
  • Somerville, MA

Resumes


Databases

Position: ... at Microsoft/Purdue University
Location: Greater Seattle Area
Industry: Computer Software
Work: Microsoft/Purdue University since May 2009 ...
Education: Purdue University 2005 - 2009; Worcester Polytechnic Institute 2003 - 2005; MS, Computer Science

Publications

US Patents

Configuration-Parametric Query Optimization

US Patent: 20090327254, Dec 31, 2009
Filed: Jun 26, 2008
Appl. No.: 12/146470
Inventors: Nicolas Bruno - Redmond WA, US; Rimma Nehme - Indianapolis IN, US
Assignee: Microsoft Corporation - Redmond WA
International Classification: G06F 17/30
US Classification: 707 4, 707 2, 707E17017
Abstract: Described herein are techniques for Configuration-Parametric Query Optimization (C-PQO) that can improve performance of database tuning tools. When first optimizing a query, a compact representation of the optimization space is generated. The representation can then be used to efficiently produce other execution plans for the query under arbitrary hypothetical configurations.
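
The abstract describes optimizing a query once, caching a compact representation of the optimizer's search space, and then re-costing the cached plan alternatives under hypothetical "what-if" index configurations instead of re-optimizing from scratch. The sketch below illustrates only that general idea; the names (CompactPlanSpace, PlanAlternative, best_plan_under) are hypothetical and not taken from the patent.

    # Illustrative sketch of the C-PQO idea: optimize once, then re-cost the cached
    # plan space under hypothetical index configurations. All names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class PlanAlternative:
        description: str
        required_indexes: frozenset       # indexes this alternative depends on
        cost_with_indexes: float          # cost when all required indexes exist
        cost_without_indexes: float       # fallback cost (e.g., full scan)

    @dataclass
    class CompactPlanSpace:
        """Compact stand-in for the cached optimization space of one query."""
        alternatives: list = field(default_factory=list)

        def best_plan_under(self, configuration: frozenset) -> PlanAlternative:
            # Re-cost every cached alternative under a hypothetical configuration
            # instead of invoking the full optimizer again.
            def cost(alt: PlanAlternative) -> float:
                return (alt.cost_with_indexes
                        if alt.required_indexes <= configuration
                        else alt.cost_without_indexes)
            return min(self.alternatives, key=cost)

    # A tuning tool can now evaluate many "what-if" configurations cheaply.
    space = CompactPlanSpace([
        PlanAlternative("seek on idx_orders_date", frozenset({"idx_orders_date"}), 10.0, 120.0),
        PlanAlternative("full table scan", frozenset(), 120.0, 120.0),
    ])
    print(space.best_plan_under(frozenset({"idx_orders_date"})).description)  # seek on idx_orders_date
    print(space.best_plan_under(frozenset()).description)                     # full table scan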

Min-Repro Framework For Database Systems

US Patent: 20100241766, Sep 23, 2010
Filed: Mar 20, 2009
Appl. No.: 12/408330
Inventors: Nicolas Bruno - Redmond WA, US; Rimma Vladimirovna Nehme - Indianapolis IN, US
Assignee: Microsoft Corporation - Redmond WA
International Classification: G06F 3/00; G06F 17/30
US Classification: 710 8, 707E1701
Abstract: The min-repro finding technique described herein is designed to ease and speed up the task of finding a min-repro, a minimum configuration that reproduces a problem in database-related products. Specifically, in one embodiment the technique applies simplifying transformations in order to find one or more min-repros. One embodiment provides a high-level script language to automate some sub-tasks and to guide the search for a simpler configuration that reproduces the problem. Yet another embodiment provides record-and-replay functionality and an intuitive representation of results and the search space. These tools can save customers and testers hours of time isolating the problem and can result in faster fixes and large cost savings to organizations.
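
As a rough illustration of the minimization loop the abstract implies, the sketch below repeatedly applies simplifying transformations to a repro configuration and keeps a change only while the problem still reproduces. The function names and the toy "bug" predicate are hypothetical; this is not the patent's script language.

    # Minimal sketch of a min-repro search: apply simplifying transformations
    # (e.g., drop a statement) and keep a candidate only if it still reproduces
    # the problem. All names here are hypothetical.
    def find_min_repro(config, reproduces, simplify_steps):
        """config: current repro configuration (e.g., list of SQL statements).
        reproduces(config) -> bool: runs the scenario and checks for the problem.
        simplify_steps: functions that each return a list of simpler candidates."""
        assert reproduces(config), "initial configuration must reproduce the problem"
        changed = True
        while changed:
            changed = False
            for step in simplify_steps:
                for candidate in step(config):
                    if reproduces(candidate):     # simpler, and the problem persists
                        config = candidate
                        changed = True
                        break
        return config

    # Example: shrink a failing "script" (here just a list of statements).
    def drop_one_statement(script):
        return [script[:i] + script[i + 1:] for i in range(len(script))]

    script = ["CREATE TABLE t(a INT)", "INSERT INTO t VALUES (1)", "SELECT 1/0", "DROP TABLE t"]
    bug = lambda s: any("1/0" in stmt for stmt in s)   # stand-in for "problem reproduces"
    print(find_min_repro(script, bug, [drop_one_statement]))   # -> ['SELECT 1/0']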

Scheduler For Planet-Scale Computing System

US Patent: 20220318052, Oct 6, 2022
Filed: Jun 28, 2021
Appl. No.: 17/361224
Inventors: - Redmond WA, US; Atul KATIYAR - Sammamish WA, US; Dharma Kiritkumar SHUKLA - Bellevue WA, US; Rimma Vladimirovna NEHME - Bellevue WA, US; Shreshth SINGHAL - Seattle WA, US; Pankaj SHARMA - Redmond WA, US; Nipun KWATRA - Bangalore, IN; Ramachandran RAMJEE - Bengaluru, IN
International Classification: G06F 9/48; G06F 9/50
Abstract: The disclosure herein describes scheduling execution of artificial intelligence (AI) workloads in a cloud infrastructure platform. A global scheduler receives AI workloads associated with resource ticket values and distributes them to nodes based on balancing those resource ticket values. Local schedulers of the nodes schedule the AI workloads on resources based on the workloads' resource ticket values, and coordinator services of the local schedulers then execute the distributed AI workloads on the infrastructure resources of the nodes. The disclosure further describes scheduling AI workloads based on priority tiers: a scheduler receives AI workloads, each associated with a priority tier indicating its preemption priority while executing, and the AI workloads are scheduled for execution on a distributed set of nodes based on those priority tiers and then executed according to that schedule.
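
A minimal sketch of the two-level design the abstract outlines: a global scheduler spreads workloads across nodes by balancing resource ticket values, and each node's local scheduler orders its queue by priority tier. Class and field names (GlobalScheduler, Node, resource_tickets) are hypothetical stand-ins, not the patented implementation.

    # Two-level scheduling sketch: global placement by resource tickets,
    # local ordering by priority tier (lower tier = scheduled first).
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class AIWorkload:
        priority_tier: int                      # preemption priority while executing
        name: str = field(compare=False)
        resource_tickets: int = field(compare=False)

    class Node:
        def __init__(self, name):
            self.name = name
            self.ticket_load = 0
            self.queue = []                     # local priority queue ordered by tier

        def submit(self, wl: AIWorkload):
            self.ticket_load += wl.resource_tickets
            heapq.heappush(self.queue, wl)      # local scheduler: tier order

    class GlobalScheduler:
        def __init__(self, nodes):
            self.nodes = nodes

        def distribute(self, workloads):
            # Balance resource ticket values: always place on the least-loaded node.
            for wl in sorted(workloads, key=lambda w: -w.resource_tickets):
                target = min(self.nodes, key=lambda n: n.ticket_load)
                target.submit(wl)

    nodes = [Node("node-a"), Node("node-b")]
    GlobalScheduler(nodes).distribute([
        AIWorkload(0, "training-llm", 80),
        AIWorkload(1, "inference-api", 20),
        AIWorkload(2, "batch-eval", 30),
    ])
    for n in nodes:
        print(n.name, n.ticket_load, [w.name for w in sorted(n.queue)])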

Planet-Scale, Fully Managed Artificial Intelligence Infrastructure Service

US Patent: 20220318674, Oct 6, 2022
Filed: Jun 28, 2021
Appl. No.: 17/361208
Inventors: - Redmond WA, US; Rimma Vladimirovna NEHME - Bellevue WA, US; Pankaj SHARMA - Redmond WA, US; Shreshth SINGHAL - Seattle WA, US; Vipul Arunkant MODI - Sammamish WA, US; Muthian SIVATHANU - Chennai, IN; Atul KATIYAR - Sammamish WA, US
International Classification: G06N 20/00; G06N 5/04
Abstract: The disclosure herein describes managing artificial intelligence (AI) workloads in a cloud infrastructure platform. A set of distributed infrastructure resources is integrated into the cloud infrastructure platform via native support interfaces. AI workloads, including training workloads and inferencing workloads, are received from a plurality of tenants, and resource subsets of the distributed infrastructure resources are assigned to the received AI workloads. The received AI workloads are scheduled for execution on the assigned resource subsets and are then executed on those subsets according to the schedule. The described cloud infrastructure platform provides efficient, secure execution of AI workloads for many different tenants and enables the flexible use of a wide variety of both third-party and first-party infrastructure resources.
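
The sketch below illustrates, under assumed names (InfrastructureResource, FirstPartyGPUPool, ThirdPartyCluster), how heterogeneous resource pools might sit behind one native support interface while tenant workloads are assigned resource subsets and executed. It is an illustration of the platform shape only, not the patented implementation.

    # Heterogeneous pools behind a common interface; workloads get resource subsets.
    from abc import ABC, abstractmethod

    class InfrastructureResource(ABC):
        """Native support interface that first-party and third-party pools implement
        so the platform can treat them uniformly."""
        @abstractmethod
        def capacity(self) -> int: ...
        @abstractmethod
        def run(self, workload: str) -> str: ...

    class FirstPartyGPUPool(InfrastructureResource):
        def __init__(self, gpus): self.gpus = gpus
        def capacity(self): return self.gpus
        def run(self, workload): return f"{workload} on first-party pool ({self.gpus} GPUs)"

    class ThirdPartyCluster(InfrastructureResource):
        def __init__(self, gpus): self.gpus = gpus
        def capacity(self): return self.gpus
        def run(self, workload): return f"{workload} on third-party cluster ({self.gpus} GPUs)"

    def assign_and_execute(workloads, resources):
        """Assign each (tenant, kind, demand) workload to the first pool with room."""
        results = []
        free = {id(r): r.capacity() for r in resources}
        for tenant, kind, demand in workloads:
            pool = next((r for r in resources if free[id(r)] >= demand), None)
            if pool is None:
                results.append(f"{tenant}/{kind}: queued (no capacity)")
                continue
            free[id(pool)] -= demand
            results.append(pool.run(f"{tenant}/{kind}"))
        return results

    resources = [FirstPartyGPUPool(16), ThirdPartyCluster(8)]
    workloads = [("tenant-a", "training", 12), ("tenant-b", "inferencing", 6), ("tenant-c", "training", 10)]
    print("\n".join(assign_and_execute(workloads, resources)))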

Transparent Pre-Emption And Migration For Planet-Scale Computer

US Patent: 20220308917, Sep 29, 2022
Filed: Jun 26, 2021
Appl. No.: 17/359553
Inventors: - Redmond WA, US; Srinidhi VISWANATHA - Bangalore, IN; Dharma Kiritkumar SHUKLA - Bellevue WA, US; Nipun KWATRA - Bangalore, IN; Ramachandran RAMJEE - Bengaluru, IN; Rimma Vladimirovna NEHME - Bellevue WA, US; Pankaj SHARMA - Redmond WA, US; Vaibhav SHARMA - Seattle WA, US
International Classification: G06F 9/48; G06N 3/08; G06F 9/46; G06F 9/54; G06T 1/20; G06T 1/60; H04L 29/08
Abstract: The disclosure herein describes platform-level checkpointing for deep learning (DL) jobs. The checkpointing is performed by capturing two kinds of state data: (i) GPU state (device state) and (ii) CPU state (host state). The GPU state includes GPU data (e.g., model parameters, optimizer state, etc.) located in the GPU, and GPU context (e.g., the default stream in the GPU and various handles created by libraries such as DNN, Blas, etc.). Only a fraction of the GPU memory is copied because the checkpointing is done in a domain-aware manner: the "active" memory contains useful data like model parameters, and memory management is controlled so that the active parts of memory can be identified and captured. Also, to restore the destination GPU to the same context/state, a mechanism captures such state-changing events on the original GPU and replays them on the destination GPU.
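
A toy sketch of the checkpointing shape described above: copy only the buffers marked as active (domain-aware), record context-changing calls so they can be replayed on another device, and snapshot host state alongside. TrackedGPU, checkpoint, and restore are hypothetical illustrations, not the platform's API.

    # Domain-aware checkpoint: active device buffers + recorded context events + host state.
    from dataclasses import dataclass

    @dataclass
    class Checkpoint:
        gpu_data: dict                  # active device memory: params, optimizer state
        gpu_context_events: list        # recorded state-changing calls (streams, handles)
        cpu_state: dict                 # host state: step counter, RNG seeds, etc.

    class TrackedGPU:
        """Stand-in for a GPU whose allocations and context calls are intercepted."""
        def __init__(self):
            self.memory = {}            # name -> tensor-like payload
            self.active = set()         # domain-aware: buffers holding useful data
            self.event_log = []         # record of context-changing events

        def allocate(self, name, payload, active=True):
            self.memory[name] = payload
            if active:
                self.active.add(name)

        def context_call(self, description):
            self.event_log.append(description)   # e.g., "create library handle"

    def checkpoint(gpu: TrackedGPU, cpu_state: dict) -> Checkpoint:
        # Copy only the fraction of GPU memory marked active, not the whole device.
        return Checkpoint(
            gpu_data={k: gpu.memory[k] for k in gpu.active},
            gpu_context_events=list(gpu.event_log),
            cpu_state=dict(cpu_state),
        )

    def restore(ckpt: Checkpoint, dest: TrackedGPU) -> dict:
        # Replay recorded context events on the destination GPU, then copy data back.
        for event in ckpt.gpu_context_events:
            dest.context_call(event)
        for name, payload in ckpt.gpu_data.items():
            dest.allocate(name, payload)
        return dict(ckpt.cpu_state)

    src = TrackedGPU()
    src.context_call("create default stream")
    src.allocate("model_params", [0.1, 0.2], active=True)
    src.allocate("scratch_buffer", [0] * 4, active=False)    # not copied: no useful data
    ckpt = checkpoint(src, {"step": 1200})
    dst = TrackedGPU()
    print(restore(ckpt, dst), sorted(dst.memory))             # {'step': 1200} ['model_params']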

Artificial Intelligence Workload Migration For Planet-Scale Artificial Intelligence Infrastructure Service

US Patent: 20220311832, Sep 29, 2022
Filed: Jun 25, 2021
Appl. No.: 17/359471
Inventors: - Redmond WA, US; Muthian SIVATHANU - Chennai, IN; Lu XUN - Redmond WA, US; Rimma Vladimirovna NEHME - Bellevue WA, US
International Classification: H04L 29/08; G06N 3/08; G06T 1/20; G06F 9/48
Abstract: The disclosure herein describes platform-level migration of deep learning training (DLT) jobs from a checkpointed state between a source node and a destination node. The checkpointing is performed by capturing GPU state (e.g., device state) and CPU state (e.g., host state). The GPU state includes GPU data (e.g., model parameters, optimizer state, etc.) located in the GPU, and GPU context (e.g., the default stream in the GPU, various handles created by libraries). Restoring the DLT job on the destination node involves resuming processing on the destination GPU at the same checkpointed state.
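
The migration flow reduces to: checkpoint on the source node, transfer the checkpoint, restore on the destination node, and resume at the same step. The sketch below mimics that flow with hypothetical DLTJob and ComputeNode classes and deliberately elides the actual GPU/CPU state capture.

    # Checkpoint-transfer-restore migration sketch; names are hypothetical.
    import copy

    class DLTJob:
        def __init__(self, name, step=0):
            self.name, self.step = name, step
        def train_one_step(self):
            self.step += 1

    class ComputeNode:
        def __init__(self, name):
            self.name = name
            self.jobs = {}
        def run(self, job: DLTJob):
            self.jobs[job.name] = job
        def checkpoint(self, job_name) -> dict:
            job = self.jobs.pop(job_name)        # job stops on the source node
            return {"name": job.name, "step": job.step}
        def restore(self, state: dict) -> DLTJob:
            job = DLTJob(state["name"], state["step"])
            self.jobs[job.name] = job            # resumes at the checkpointed step
            return job

    def migrate(job_name, source: ComputeNode, dest: ComputeNode) -> DLTJob:
        state = source.checkpoint(job_name)
        transferred = copy.deepcopy(state)       # stand-in for moving checkpoint data
        return dest.restore(transferred)

    source, dest = ComputeNode("gpu-node-1"), ComputeNode("gpu-node-2")
    job = DLTJob("resnet-training")
    source.run(job)
    for _ in range(5):
        job.train_one_step()
    moved = migrate("resnet-training", source, dest)
    print(dest.name, moved.step)                 # gpu-node-2 5: training continues at step 5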

Optimizing Parallel Queries Using Interesting Distributions

US Patent: 20160078090, Mar 17, 2016
Filed: Nov 27, 2015
Appl. No.: 14/953297
Inventors: - Redmond WA, US; Rimma V. Nehme - Madison WI, US
International Classification: G06F 17/30
Abstract: The present invention extends to methods, systems, and computer program products for optimizing parallel queries using interesting distributions. For each logical operator in a SQL Server MEMO, in a top-down manner from the root operator to the leaf operators, interesting distributions for the operators can be identified based on the properties of the operators. Identified interesting distributions can be propagated down to lower operators by annotating the lower operators with the interesting distributions. Thus, a SQL Server MEMO can be annotated with interesting distributions propagated top down from root to leaf logical operators to generate an annotated SQL Server MEMO. Parallel query plans can then be generated from the annotated SQL Server MEMO in a bottom-up manner from the leaf operators to the root operator. Annotated interesting properties can be used to prune operators, thereby facilitating a more tractable search space for a parallel query plan.
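
A compact sketch of the idea, assuming a toy operator tree rather than a real SQL Server MEMO: interesting distributions are pushed from the root toward the leaves, and during bottom-up plan generation any distribution alternative that is not interesting to some ancestor is pruned. All operator names, keys, and the pruning rule are illustrative.

    # Top-down annotation with interesting distributions, bottom-up pruning.
    class LogicalOp:
        def __init__(self, name, own_interesting=None, children=()):
            self.name = name
            self.own_interesting = set(own_interesting or [])   # from operator properties
            self.children = list(children)
            self.interesting = set()                             # filled by annotation

    def annotate_top_down(op, inherited=frozenset()):
        # Interesting distributions flow from the root toward the leaves.
        op.interesting = op.own_interesting | set(inherited)
        for child in op.children:
            annotate_top_down(child, op.interesting)

    def enumerate_bottom_up(op):
        # Keep only alternatives whose output distribution is interesting to some
        # operator above; this keeps the parallel-plan search space tractable.
        for child in op.children:
            enumerate_bottom_up(child)
        candidates = ["hash(" + col + ")" for col in ("o_custkey", "l_orderkey", "c_custkey")]
        op.kept = [c for c in candidates
                   if any(col in c for col in op.interesting)] or ["round_robin"]

    scan = LogicalOp("Scan(orders)")
    join = LogicalOp("Join(o_custkey = c_custkey)", {"o_custkey", "c_custkey"}, [scan])
    root = LogicalOp("Aggregate(c_custkey)", {"c_custkey"}, [join])
    annotate_top_down(root)
    enumerate_bottom_up(root)
    print(scan.interesting)   # join/aggregate keys inherited from the operators above
    print(scan.kept)          # only distributions useful to the ancestors survive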

Partial Result Classification

US Patent: 20150347508, Dec 3, 2015
Filed: Jun 2, 2014
Appl. No.: 14/294028
Inventors: - Redmond WA, US; Rimma V. Nehme - Madison WI, US; Eric R. Robinson - Madison WI, US; Jeffrey F. Naughton - Madison WI, US
International Classification: G06F 17/30
Abstract: A query can be executed over incomplete data and produce a partial result. Moreover, the partial result, or a portion thereof, can be classified in accordance with a partial result taxonomy. In accordance with one aspect, the taxonomy can be defined in terms of data correctness and cardinality properties. Further, partial result analysis can be performed at various degrees of granularity. The classified partial result can be presented on a display device to allow a user to view and optionally interact with it.
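
Read literally, the taxonomy classifies a partial result along a correctness axis and a cardinality axis. The labels and the classify() rule in the sketch below are hypothetical illustrations of such a two-axis classification, not the patent's exact taxonomy.

    # Two-axis partial-result classification sketch; labels are hypothetical.
    from enum import Enum

    class Correctness(Enum):
        CORRECT = "all returned rows belong to the true answer"
        INDETERMINATE = "some returned rows may not belong to the true answer"

    class Cardinality(Enum):
        COMPLETE = "every row of the true answer is present"
        INCOMPLETE = "some rows of the true answer are missing"

    def classify(rows_returned, rows_expected, unverified_rows):
        """Classify a partial result produced by a query over incomplete data."""
        correctness = Correctness.CORRECT if unverified_rows == 0 else Correctness.INDETERMINATE
        cardinality = Cardinality.COMPLETE if rows_returned >= rows_expected else Cardinality.INCOMPLETE
        return correctness, cardinality

    # Example: a node failed mid-query, so 80 of the expected 100 rows came back,
    # all of them from healthy partitions (so they are known to be correct).
    label = classify(rows_returned=80, rows_expected=100, unverified_rows=0)
    print(label[0].name, label[1].name)   # CORRECT INCOMPLETE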