
Deepak Mital Phones & Addresses

  • Livermore, CA
  • 1051 Craig Dr, San Jose, CA 95129 (610) 737-2976
  • 1123 Linden Hollow Ln, Orefield, PA 18069 (610) 391-9239, (610) 395-7830
  • 1945 Knight St, Allentown, PA 18104 (610) 770-1720
  • Kingston, PA
  • Osseo, MN

Professional Records

License Records

Deepak Mital

License #:
MT027230T - Expired
Category:
Medicine
Type:
Graduate Medical Trainee

Medicine Doctors

Deepak Mital

Specialties:
Transplant Surgery
Work:
Advocate Medical Group, Advocate Christ Kidney Transplant
4400 W 95 St STE 112, Oak Lawn, IL 60453
(708) 684-7100 (phone), (708) 684-7130 (fax)
Education:
Medical School
All India Inst of Med Sci, Ansari Nagar, New Delhi, India
Graduated: 1983
Procedures:
Kidney Transplant
Conditions:
Chronic Renal Disease
Languages:
English
Spanish
Description:
Dr. Mital graduated from the All India Inst of Med Sci, Ansari Nagar, New Delhi, India in 1983. He works in Oak Lawn, IL and specializes in Transplant Surgery. Dr. Mital is affiliated with Advocate Christ Medical Center.

Deepak Mital

Specialties:
Surgery
Vascular Surgery
Transplant Surgery
Education:
All-India Institute Of Medical Sciences
Albert Einstein Medical Center

Resumes

Chief Executive Officer

Location:
19750 NW Phillips Rd, Hillsboro, OR 97124
Industry:
Semiconductors
Work:
Roviero May 2017 - Oct 2018
Chief Executive Officer

Stealth May 2017 - Oct 2018
Chief Executive Officer

Intel Corporation Dec 1, 2014 - Apr 2017
Distinguished Engineer

LSI Corporation Jun 1998 - Dec 2014
Distinguished Engineer

Texas Instruments 1995 - 1998
Engineer
Education:
Birla Institute of Technology and Science, Pilani 1993 - 1994
Chatrapati Sahuji Maharaj Kanpur University, Kanpur 1989 - 1993
Bachelor of Engineering, Electronics Engineering
Skills:
Bluetooth Low Energy
Home Automation
Debugging
RTL Design
FW/BIOS
Networking
ARM
ASIC
FPGA Prototyping
ePaper
Verilog
UART
NAND Flash
SoC
SPI
Semiconductors
Design
FTL
Emulation
Architecture
Flash Memory
Verification
System Architecture
IC
Languages:
English
Hindi

Publications

US Patents

Hash Processing In A Network Communications Processor Architecture

US Patent:
8321385, Nov 27, 2012
Filed:
Mar 12, 2011
Appl. No.:
13/046719
Inventors:
William Burroughs - Macungie PA, US
Deepak Mital - Orefield PA, US
Mohammed Reza Hakami - Bethlehem PA, US
Assignee:
LSI Corporation - Milpitas CA
International Classification:
G06F 7/00
G06F 17/00
G06F 17/30
US Classification:
707692, 707698, 707747
Abstract:
Described embodiments provide coherent processing of hash operations of a network processor having a plurality of processing modules. A hash processor of the network processor receives hash operation requests from the plurality of processing modules. A hash table identifier and bucket index corresponding to the received hash operation request are determined. An active index list is maintained for active hash operations for each hash table identifier and bucket index. If the hash table identifier and bucket index of the received hash operation request are in the active index list, the received hash operation request is deferred until the hash table identifier and bucket index corresponding to the received hash operation request clear from the active index list. Otherwise, the active index list is updated with the hash table identifier and bucket index of the received hash operation request and the received hash operation request is processed.
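
To make the deferral scheme in the abstract concrete, here is a minimal C sketch, assuming a small fixed-size active index list and hypothetical names (active_entry, try_start, finish); it is an illustration of the idea, not the patented implementation:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define MAX_ACTIVE 64

/* One entry of the active index list: a (table id, bucket index) pair
 * that currently has a hash operation in flight. */
struct active_entry {
    unsigned table_id;
    unsigned bucket_idx;
    bool     in_use;
};

static struct active_entry active_list[MAX_ACTIVE];

/* Returns true if an operation on (table_id, bucket_idx) is already active. */
static bool is_active(unsigned table_id, unsigned bucket_idx)
{
    for (size_t i = 0; i < MAX_ACTIVE; i++)
        if (active_list[i].in_use &&
            active_list[i].table_id == table_id &&
            active_list[i].bucket_idx == bucket_idx)
            return true;
    return false;
}

/* Try to start a hash operation.  If the same table/bucket pair is already
 * active the request is deferred (caller retries later); otherwise the pair
 * is recorded and the request may be processed immediately. */
static bool try_start(unsigned table_id, unsigned bucket_idx)
{
    if (is_active(table_id, bucket_idx))
        return false;                    /* defer */
    for (size_t i = 0; i < MAX_ACTIVE; i++) {
        if (!active_list[i].in_use) {
            active_list[i] = (struct active_entry){table_id, bucket_idx, true};
            return true;                 /* process now */
        }
    }
    return false;                        /* list full: also defer */
}

/* Called when an operation completes, clearing its active-list entry. */
static void finish(unsigned table_id, unsigned bucket_idx)
{
    for (size_t i = 0; i < MAX_ACTIVE; i++)
        if (active_list[i].in_use &&
            active_list[i].table_id == table_id &&
            active_list[i].bucket_idx == bucket_idx)
            active_list[i].in_use = false;
}

int main(void)
{
    printf("start (3,7): %d\n", try_start(3, 7));  /* 1: processed */
    printf("start (3,7): %d\n", try_start(3, 7));  /* 0: deferred  */
    finish(3, 7);
    printf("start (3,7): %d\n", try_start(3, 7));  /* 1: processed */
    return 0;
}
```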

Task Queuing In A Network Communications Processor Architecture

US Patent:
8407707, Mar 26, 2013
Filed:
May 18, 2010
Appl. No.:
12/782411
Inventors:
David P. Sonnier - Austin TX, US
Balakrishnan Sundararaman - Cedar Park TX, US
Shailendra Aulakh - Austin TX, US
Deepak Mital - Orefield PA, US
Assignee:
LSI Corporation - Milpitas CA
International Classification:
G06F 9/46
US Classification:
718101, 718100, 718102
Abstract:
Described embodiments provide a method of assigning tasks to queues of a processing core. Tasks are assigned to a queue by sending, by a source processing core, a new task having a task identifier. A destination processing core receives the new task and determines whether another task having the same identifier exists in any of the queues corresponding to the destination processing core. If another task with the same identifier as the new task exists, the destination processing core assigns the new task to the queue containing a task with the same identifier as the new task. If no task with the same identifier as the new task exists in the queues, the destination processing core assigns the new task to the queue having the fewest tasks. The source processing core writes the new task to the assigned queue. The destination processing core executes the tasks in its queues.
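
A minimal sketch of the queue-selection rule described above, using hypothetical names (pick_queue, enqueue) and fixed-size arrays in place of real per-core queues; it illustrates the "same identifier, else fewest tasks" policy, not the actual patented logic:

```c
#include <stddef.h>
#include <stdio.h>

#define NUM_QUEUES 4
#define QUEUE_DEPTH 8

/* Hypothetical per-core task queues; each slot holds a task identifier. */
static unsigned queues[NUM_QUEUES][QUEUE_DEPTH];
static size_t   depth[NUM_QUEUES];

/* Pick the queue for a new task: a queue already holding a task with the
 * same identifier if one exists, otherwise the queue with the fewest tasks. */
static size_t pick_queue(unsigned task_id)
{
    size_t shortest = 0;
    for (size_t q = 0; q < NUM_QUEUES; q++) {
        for (size_t i = 0; i < depth[q]; i++)
            if (queues[q][i] == task_id)
                return q;                     /* keep same-id tasks together */
        if (depth[q] < depth[shortest])
            shortest = q;
    }
    return shortest;                          /* otherwise load-balance */
}

static void enqueue(unsigned task_id)
{
    size_t q = pick_queue(task_id);
    if (depth[q] < QUEUE_DEPTH)
        queues[q][depth[q]++] = task_id;
}

int main(void)
{
    enqueue(10); enqueue(20); enqueue(10); enqueue(30);
    for (size_t q = 0; q < NUM_QUEUES; q++)
        printf("queue %zu holds %zu task(s)\n", q, depth[q]);
    return 0;
}
```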

Memory Manager For A Network Communications Processor Architecture

US Patent:
8499137, Jul 30, 2013
Filed:
Dec 9, 2010
Appl. No.:
12/963895
Inventors:
Joseph Hasting - Heidelberg CT, US
Deepak Mital - Orefield PA, US
Assignee:
LSI Corporation - Milpitas CA
International Classification:
G06F 12/00
US Classification:
711170, 711171, 711172, 707813, 707814
Abstract:
Described embodiments provide a memory manager for a network processor having a plurality of processing modules and a shared memory. The memory manager allocates blocks of the shared memory to requesting ones of the plurality of processing modules. A free block list tracks availability of memory blocks of the shared memory. A reference counter maintains, for each allocated memory block, a reference count indicating a number of access requests to the memory block by ones of the plurality of processing modules. The reference count is located with data at the allocated memory block. For subsequent access requests to a given memory block concurrent with processing of a prior access request to the memory block, a memory access accumulator (i) accumulates an incremental value corresponding to the subsequent access requests, (ii) updates the reference count associated with the memory block, and (iii) updates the memory block with the accumulated result.
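
As a rough illustration of the allocation and reference-counting idea, the sketch below keeps a free block list and a reference count stored alongside each block's data, and folds several adjustments into a single update; all names (block_alloc, block_adjust_refs) are hypothetical and the code is not the patented design:

```c
#include <stdio.h>

#define NUM_BLOCKS 8

/* Hypothetical memory block: the reference count lives alongside the data,
 * mirroring the abstract's "reference count is located with the data". */
struct block {
    int  refcount;
    int  next_free;         /* index of next free block, -1 if none */
    char data[64];
};

static struct block pool[NUM_BLOCKS];
static int free_head;

static void pool_init(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++) {
        pool[i].refcount  = 0;
        pool[i].next_free = (i + 1 < NUM_BLOCKS) ? i + 1 : -1;
    }
    free_head = 0;
}

/* Allocate a block from the free list; returns block index or -1. */
static int block_alloc(void)
{
    if (free_head < 0)
        return -1;
    int idx = free_head;
    free_head = pool[idx].next_free;
    pool[idx].refcount = 1;
    return idx;
}

/* Batch several concurrent reference-count adjustments into one update,
 * a simplified stand-in for the patent's memory access accumulator. */
static void block_adjust_refs(int idx, int delta)
{
    pool[idx].refcount += delta;
    if (pool[idx].refcount <= 0) {          /* no users left: free the block */
        pool[idx].next_free = free_head;
        free_head = idx;
    }
}

int main(void)
{
    pool_init();
    int b = block_alloc();
    block_adjust_refs(b, +2);               /* two more modules take a reference */
    block_adjust_refs(b, -3);               /* all three references released     */
    printf("block %d refcount is now %d\n", b, pool[b].refcount);
    return 0;
}
```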

Reducing Data Read Latency In A Network Communications Processor Architecture

US Patent:
8505013, Aug 6, 2013
Filed:
Dec 22, 2010
Appl. No.:
12/975823
Inventors:
Steven Pollock - Allentown PA, US
William Burroughs - Macungie PA, US
Deepak Mital - Orefield PA, US
Te Khac Ma - Allentown PA, US
Narender Vangati - Austin TX, US
Larry King - Austin TX, US
Assignee:
LSI Corporation - Milpitas CA
International Classification:
G06F 9/46
G06F 12/06
US Classification:
718102, 711 5
Abstract:
Described embodiments provide address translation for data stored in at least one shared memory of a network processor. A processing module of the network processor generates tasks corresponding to each of a plurality of received packets. A packet classifier generates contexts for each task, each context associated with a thread of instructions to apply to the corresponding packet. A first subset of instructions is stored in a tree memory within the at least one shared memory. A second subset of instructions is stored in a cache within a multi-thread engine of the packet classifier. The multi-thread engine maintains status indicators corresponding to the first and second subsets of instructions within the cache and the tree memory and, based on the status indicators, accesses a lookup table while processing a thread to translate between an instruction number and a physical address of the instruction in the first and second subset of instructions.
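
A toy illustration of translating an instruction number to a physical address through a lookup table that also records whether the instruction sits in the engine cache or in tree memory; the table contents and names (map_entry, translate) are invented for the example:

```c
#include <stdio.h>

#define NUM_INSTR 8

/* Where a given instruction number currently resides. */
enum location { IN_CACHE, IN_TREE_MEMORY };

/* Hypothetical lookup-table entry mapping an instruction number to a
 * physical address in either the engine cache or the shared tree memory. */
struct map_entry {
    enum location where;
    unsigned      phys_addr;
};

static struct map_entry lookup_table[NUM_INSTR] = {
    [0] = { IN_CACHE,       0x0100 },
    [1] = { IN_CACHE,       0x0104 },
    [2] = { IN_TREE_MEMORY, 0x8000 },
    [3] = { IN_TREE_MEMORY, 0x8040 },
};

/* Translate an instruction number into a physical address, consulting the
 * status indicator to know which memory to address. */
static unsigned translate(unsigned instr_no, enum location *where_out)
{
    *where_out = lookup_table[instr_no].where;
    return lookup_table[instr_no].phys_addr;
}

int main(void)
{
    enum location where;
    unsigned addr = translate(2, &where);
    printf("instruction 2 -> 0x%04x (%s)\n", addr,
           where == IN_CACHE ? "cache" : "tree memory");
    return 0;
}
```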

Thread Synchronization In A Multi-Thread Network Communications Processor Architecture

US Patent:
8514874, Aug 20, 2013
Filed:
Dec 22, 2010
Appl. No.:
12/975880
Inventors:
Deepak Mital - Orefield PA, US
James Clee - Orefield PA, US
Assignee:
LSI Corporation - Milpitas CA
International Classification:
H04L 12/56
US Classification:
370412, 370389
Abstract:
Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate a thread of contexts for each task received by the packet classifier from a plurality of processing modules of the network processor. The scheduler includes one or more output queues to temporarily store contexts. Each thread corresponds to an order of instructions applied to the corresponding packet, and includes an identifier of a corresponding one of the output queues. The scheduler sends the contexts to a multi-thread instruction engine that processes the threads. An arbiter selects one of the output queues in order to provide output packets to the multi-thread instruction engine, the output packets associated with a corresponding thread of contexts. Each output queue transmits output packets corresponding to a given thread contiguously in the order in which the threads started.

Concurrent Linked-List Traversal For Real-Time Hash Processing In Multi-Core, Multi-Thread Network Processors

US Patent:
8515965, Aug 20, 2013
Filed:
Feb 23, 2012
Appl. No.:
13/403468
Inventors:
Deepak Mital - Orefield PA, US
Mohammed Reza Hakami - Bethlehem PA, US
William Burroughs - Macungie PA, US
Assignee:
LSI Corporation - Milpitas CA
International Classification:
G06F 17/30
US Classification:
707747, 707770
Abstract:
Described embodiments process hash operation requests of a network processor. A hash processor determines a job identifier, a corresponding hash table, and a setting of a traversal indicator for a received hash operation request that includes a desired key. The hash processor concurrently generates a read request for a first bucket of the hash table, and provides the job identifier, the key and the traversal indicator to a read return processor. The read return processor stores the key and traversal indicator in a job memory and stores, in a return memory, entries of the first bucket of the hash table. If a stored entry matches the desired key, the read return processor determines, based on the traversal indicator, whether to read a next bucket of the hash table and provides the job identifier, the matching key, and the address of the bucket containing the matching key to the hash processor.
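
The lookup below sketches the traversal-indicator behavior: the first bucket is always searched, and linked buckets are followed only when traversal is enabled. Bucket layout and names (struct bucket, lookup) are hypothetical simplifications of the abstract, not the patented structure:

```c
#include <stdbool.h>
#include <stdio.h>

#define BUCKET_ENTRIES 4

/* Hypothetical hash bucket: a few key/value slots plus a link to the next
 * bucket in the chain (-1 terminates the chain). */
struct bucket {
    unsigned keys[BUCKET_ENTRIES];
    unsigned values[BUCKET_ENTRIES];
    int      used;
    int      next;          /* index of the linked bucket, -1 if none */
};

/* Look up `key` starting at bucket `first`.  If `traverse` is false only the
 * first bucket is examined, mirroring the traversal indicator the abstract
 * describes; otherwise the chain of linked buckets is followed. */
static bool lookup(const struct bucket *buckets, int first, unsigned key,
                   bool traverse, unsigned *value_out)
{
    for (int b = first; b >= 0; b = traverse ? buckets[b].next : -1) {
        for (int i = 0; i < buckets[b].used; i++) {
            if (buckets[b].keys[i] == key) {
                *value_out = buckets[b].values[i];
                return true;
            }
        }
    }
    return false;
}

int main(void)
{
    struct bucket buckets[2] = {
        { .keys = {1, 2}, .values = {10, 20}, .used = 2, .next = 1 },
        { .keys = {3},    .values = {30},     .used = 1, .next = -1 },
    };
    unsigned v;
    printf("key 3, no traversal: %s\n", lookup(buckets, 0, 3, false, &v) ? "hit" : "miss");
    printf("key 3, traversal:    %s\n", lookup(buckets, 0, 3, true,  &v) ? "hit" : "miss");
    return 0;
}
```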

Exception Detection And Thread Rescheduling In A Multi-Core, Multi-Thread Network Processor

US Patent:
8537832, Sep 17, 2013
Filed:
Mar 12, 2011
Appl. No.:
13/046726
Inventors:
Jerry Pirog - Easton PA, US
Deepak Mital - Orefield PA, US
William Burroughs - Macungie PA, US
Assignee:
LSI Corporation - Milpitas CA
International Classification:
H04L 12/28
US Classification:
3703954, 710316, 379242
Abstract:
Described embodiments provide a packet classifier of a network processor having a plurality of processing modules. A scheduler generates a thread of contexts for each task generated by the network processor corresponding to each received packet. The thread corresponds to an order of instructions applied to the corresponding packet. A multi-thread instruction engine processes the threads of instructions. A function bus interface inspects instructions received from the multi-thread instruction engine for one or more exception conditions. If the function bus interface detects an exception, the function bus interface reports the exception to the scheduler and the multi-thread instruction engine. The scheduler reschedules the thread corresponding to the instruction having the exception for processing in the multi-thread instruction engine. Otherwise, the function bus interface provides the instruction to a corresponding destination processing module of the network processor.
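
A schematic sketch of the dispatch decision described above: an instruction that trips an exception condition is reported back for rescheduling, otherwise it is forwarded to its destination module. The exception condition, module stubs, and names are placeholders invented for the example:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical instruction record carried between the multi-thread engine
 * and the function bus interface. */
struct instruction {
    unsigned thread_id;
    unsigned dest_module;
    bool     bad_operand;       /* example exception condition */
};

/* Stand-ins for the scheduler, engine, and destination modules. */
static void scheduler_reschedule(unsigned thread_id)
{
    printf("scheduler: reschedule thread %u\n", thread_id);
}

static void engine_flush_thread(unsigned thread_id)
{
    printf("engine: flush thread %u\n", thread_id);
}

static void module_dispatch(unsigned module, unsigned thread_id)
{
    printf("module %u: executing instruction from thread %u\n", module, thread_id);
}

/* Inspect an instruction for exception conditions.  On an exception the
 * scheduler and engine are notified so the thread can be rescheduled;
 * otherwise the instruction is forwarded to its destination module. */
static void function_bus_inspect(const struct instruction *insn)
{
    if (insn->bad_operand) {
        scheduler_reschedule(insn->thread_id);
        engine_flush_thread(insn->thread_id);
    } else {
        module_dispatch(insn->dest_module, insn->thread_id);
    }
}

int main(void)
{
    struct instruction ok  = { .thread_id = 1, .dest_module = 3, .bad_operand = false };
    struct instruction bad = { .thread_id = 2, .dest_module = 3, .bad_operand = true  };
    function_bus_inspect(&ok);
    function_bus_inspect(&bad);
    return 0;
}
```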

Hash Processing In A Network Communications Processor Architecture

US Patent:
8539199, Sep 17, 2013
Filed:
Mar 12, 2011
Appl. No.:
13/046717
Inventors:
William Burroughs - Macungie PA, US
Deepak Mital - Orefield PA, US
Mohammed Reza Hakami - Bethlehem PA, US
Michael R. Betker - Orefield PA, US
Assignee:
LSI Corporation - Milpitas CA
International Classification:
G06F 12/08
US Classification:
711216, 707747
Abstract:
Described embodiments provide a hash processor for a system having multiple processing modules and a shared memory. The hash processor includes a descriptor table with N entries, each entry corresponding to a hash table of the hash processor. A direct mapped table in the shared memory includes at least one memory block including N hash buckets. The direct mapped table includes a predetermined number of hash buckets for each hash table. Each hash bucket includes one or more hash key and value pairs, and a link value. Memory blocks in the shared memory include dynamic hash buckets available for allocation to a hash table. A dynamic hash bucket is allocated to a hash table when the hash buckets in the direct mapped table are filled beyond a threshold. The link value in the hash bucket is set to the address of the dynamic hash bucket allocated to the hash table.
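
The insertion routine below sketches the overflow behavior: when a direct-mapped bucket has no free slot (a simplified stand-in for the abstract's fill threshold), a dynamic bucket is allocated from a pool and its index is recorded in the bucket's link value. All names and sizes are hypothetical:

```c
#include <stdio.h>

#define BUCKET_ENTRIES 4
#define NUM_DYNAMIC    16

/* Hypothetical bucket layout: fixed slots plus a link to a dynamically
 * allocated overflow bucket (-1 when no overflow bucket is linked). */
struct bucket {
    unsigned keys[BUCKET_ENTRIES];
    unsigned values[BUCKET_ENTRIES];
    int      used;
    int      link;
};

static struct bucket dynamic_pool[NUM_DYNAMIC];
static int dynamic_next;        /* next unallocated dynamic bucket */

/* Insert into `b`; when the direct-mapped bucket is full, allocate a dynamic
 * bucket, record its index in the link value, and insert there instead. */
static int bucket_insert(struct bucket *b, unsigned key, unsigned value)
{
    while (b->used == BUCKET_ENTRIES) {
        if (b->link < 0) {
            if (dynamic_next >= NUM_DYNAMIC)
                return -1;                       /* overflow pool exhausted */
            b->link = dynamic_next;
            dynamic_pool[dynamic_next].link = -1;
            dynamic_next++;
        }
        b = &dynamic_pool[b->link];              /* follow the link */
    }
    b->keys[b->used]   = key;
    b->values[b->used] = value;
    b->used++;
    return 0;
}

int main(void)
{
    struct bucket direct = { .used = 0, .link = -1 };
    for (unsigned k = 0; k < 6; k++)             /* 6 inserts overflow 4 slots */
        bucket_insert(&direct, k, k * 10);
    printf("direct bucket holds %d entries, link = %d\n", direct.used, direct.link);
    printf("dynamic bucket holds %d entries\n", dynamic_pool[direct.link].used);
    return 0;
}
```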
Deepak C Mital from Livermore, CA, age ~53