Hari Kannan Phones & Addresses

  • Los Altos, CA
  • 1580 Warbler Ave, Sunnyvale, CA 94087
  • San Bruno, CA
  • Stanford, CA
  • Champaign, IL

Work

Company: Pure Storage Jan 2014 Position: Technical Director, Engineering

Education

Degree: Doctorate, Doctor of Philosophy School / High School: Stanford University Specialties: Electrical Engineering

Languages

English

Industries

Computer Hardware

Professional Records

License Records

Hari Dasan Kannan, MBBS

License #:
18355 - Active
Category:
Medicine
Issued Date:
Dec 11, 1990
Effective Date:
Dec 11, 1990
Expiration Date:
Oct 1, 2018
Type:
Physician

Medicine Doctors

Hari D. Kannan

Specialties:
Psychiatry; Geriatric Psychiatry
Work:
Kannan Clinic PC
6709 S Minnesota Ave STE 202, Sioux Falls, SD 57108
(605) 271-3900 (phone), (605) 271-3902 (fax)
Education:
Medical School
Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka
Graduated: 1980
Procedures:
Psychiatric Diagnosis or Evaluation
Psychiatric Therapeutic Procedures
Conditions:
Dementia
Depressive Disorders
Anxiety, Dissociative and Somatoform Disorders
Anxiety Phobic Disorders
Attention Deficit Disorder (ADD)
Languages:
English
Description:
Dr. Kannan graduated from Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka, in 1980. He works in Sioux Falls, SD, and specializes in Psychiatry and Geriatric Psychiatry.

Resumes

Technical Director, Engineering

Location:
1580 Warbler Ave, Sunnyvale, CA 94087
Industry:
Computer Hardware
Work:
Pure Storage
Technical Director, Engineering

Apple Oct 2009 - Jan 2014
Mobile Silicon Architect
Education:
Stanford University
Doctorate, Doctor of Philosophy, Electrical Engineering
University of Illinois at Urbana-Champaign
Bachelors, Bachelor of Science
Languages:
English

Publications

Us Patents

Optimizing Systems-On-A-Chip Using The Dynamic Critical Path

US Patent:
8037437, Oct 11, 2011
Filed:
Jan 13, 2009
Appl. No.:
12/353168
Inventors:
John D. Davis - San Francisco CA, US
Mihai Budiu - Sunnyvale CA, US
Hari Kannan - Stanford CA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06F 9/455
US Classification:
716113, 716108, 716134
Abstract:
The Global Dynamic Critical Path is used to optimize the design of a system-on-a-chip (SoC), where hardware modules are in different clock domains. Control signal transitions of the hardware modules are analyzed to identify the Global Dynamic Critical Path. Rules are provided for handling specific situations such as when concurrent input control signals are received by a hardware module. A configuration of the hardware modules is modified in successive iterations to converge at an optimum design, based on a cost function. The cost function can account for processing time as well as other metrics, such as power consumed. For example, during the iterations, hardware modules which are in the Global Dynamic Critical Path can have their clock speed increased and/or additional resources can be added, while hardware modules which are not in the Global Dynamic Critical Path can have their clock speed decreased and/or unnecessary resources can be removed.
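The iterative tuning loop the abstract describes can be illustrated with a minimal sketch. All names here (`cost`, `optimize`, the `work`/`clock`/`on_path` fields) are hypothetical, and the cost function is an assumed stand-in that trades latency against a power term, not the patent's actual formulation.

```python
# Hypothetical sketch of the iterative loop in the abstract: modules on
# the Global Dynamic Critical Path are sped up, modules off it are
# slowed down, iterating until the cost function stops improving.

def cost(modules):
    # Assumed cost: latency of on-path modules plus a small power term
    # (power modeled as proportional to clock speed).
    latency = sum(m["work"] / m["clock"] for m in modules if m["on_path"])
    power = sum(m["clock"] for m in modules)
    return latency + 0.01 * power

def optimize(modules, step=0.1, max_iters=100):
    best = cost(modules)
    for _ in range(max_iters):
        for m in modules:
            if m["on_path"]:
                m["clock"] *= 1 + step   # speed up critical-path modules
            else:
                m["clock"] *= 1 - step   # slow down non-critical modules
        new = cost(modules)
        if new >= best:                  # converged: no further improvement
            break
        best = new
    return best

modules = [
    {"work": 100.0, "clock": 1.0, "on_path": True},
    {"work": 40.0, "clock": 1.0, "on_path": False},
]
print(optimize(modules))
```

In practice the critical path would be re-identified from control-signal transitions after each configuration change; the sketch keeps the path fixed for brevity.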

Method And Appliance For Distributing Data Packets Sent By A Computer To A Cluster System

US Patent:
20060013227, Jan 19, 2006
Filed:
May 27, 2005
Appl. No.:
11/140423
Inventors:
Hari Kannan - Sunnyvale CA, US
Assignee:
Fujitsu Siemens Computers Inc. - Milpitas CA
International Classification:
H04L 12/28
US Classification:
370392000
Abstract:
A method and an apparatus for distributing a data packet sent by a computer via a connection line to a cluster system. The data packet comprises a UDP packet and an identification of the computer the data packet was sent from. After the data packet is received by at least one second node, the identification within the data packet is extracted. It is then checked whether a data packet comprising the same identification has previously been received and forwarded to one of at least two first nodes. If that check is positive, the data packet is forwarded to that same first node. Otherwise, a new node is selected and the data packet is forwarded to the selected node for data processing. This provides high availability in the event of failover as well as load balancing for UDP connections.
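The dispatch rule in this abstract amounts to sticky, identification-keyed routing with load balancing for unknown senders. The following is a minimal sketch under assumed names (`Dispatcher`, `forward`, least-loaded selection); the patent does not specify how the new node is chosen.

```python
# Hypothetical sketch of the rule in the abstract: packets carrying the
# same sender identification go to the same first node; an unseen
# sender is assigned a newly selected node (here, the least-loaded one).

class Dispatcher:
    def __init__(self, nodes):
        self.nodes = nodes
        self.assigned = {}                # sender id -> node
        self.load = {n: 0 for n in nodes}

    def forward(self, sender_id):
        node = self.assigned.get(sender_id)
        if node is None:
            # New sender: select the least-loaded node (load balancing).
            node = min(self.nodes, key=self.load.get)
            self.assigned[sender_id] = node
        self.load[node] += 1
        return node

d = Dispatcher(["node-a", "node-b"])
first = d.forward("host-1")
assert d.forward("host-1") == first   # same sender sticks to one node
```

Because UDP is connectionless, the `assigned` table is what gives the cluster connection-like affinity per sender.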

Applying Quality Of Service (Qos) To A Translation Lookaside Buffer (Tlb)

US Patent:
20080235487, Sep 25, 2008
Filed:
Mar 21, 2007
Appl. No.:
11/726316
Inventors:
Ramesh Illikkal - Portland OR, US
Hari Kannan - Stanford CA, US
Ravishankar Iyer - Portland OR, US
Donald Newell - Portland OR, US
Jaideep Moses - Portland OR, US
Li Zhao - Beaverton OR, US
International Classification:
G06F 9/34
US Classification:
711207
Abstract:
In one embodiment, the present invention includes a translation lookaside buffer (TLB) having storage locations each including a priority indicator field to store a priority level associated with an agent that requested storage of the data in the TLB, and an identifier field to store an identifier of the agent, where the TLB is apportioned according to a plurality of priority levels. Other embodiments are described and claimed.
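The apportionment the abstract describes can be modeled as per-priority partitions with a capped number of entries each. This is a software sketch with assumed names (`QosTLB`, `quotas`) and an assumed FIFO eviction policy within a partition; the hardware embodiment is not specified at this level of detail.

```python
# Hypothetical model of a TLB apportioned by priority level: each entry
# records the requesting agent's id alongside the translation, and each
# priority level is capped at its own share of the total entries.

from collections import OrderedDict

class QosTLB:
    def __init__(self, quotas):
        # quotas: priority level -> max number of entries for that level
        self.quotas = quotas
        self.entries = {p: OrderedDict() for p in quotas}  # vpn -> (ppn, agent)

    def insert(self, priority, agent_id, vpn, ppn):
        part = self.entries[priority]
        if len(part) >= self.quotas[priority]:
            part.popitem(last=False)   # evict oldest within this partition
        part[vpn] = (ppn, agent_id)

    def lookup(self, priority, vpn):
        hit = self.entries[priority].get(vpn)
        return hit[0] if hit else None
```

The point of the partitioning is isolation: a low-priority agent filling its quota cannot evict a high-priority agent's translations.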

Efficient Handling Of Misaligned Loads And Stores

US Patent:
20130013862, Jan 10, 2013
Filed:
Jul 6, 2011
Appl. No.:
13/177192
Inventors:
Hari S. Kannan - Sunnyvale CA, US
Pradeep Kanapathipillai - Santa Clara CA, US
Greg M. Hess - Mountain View CA, US
International Classification:
G06F 12/08
US Classification:
711119, 711E12017
Abstract:
A system and method for efficiently handling misaligned memory accesses within a processor. A processor comprises a load-store unit (LSU) with a banked data cache (d-cache) and a banked store queue. The processor generates a first address corresponding to a memory access instruction identifying a first cache line. The processor determines the memory access is misaligned which crosses over a cache line boundary. The processor generates a second address identifying a second cache line logically adjacent to the first cache line. If the instruction is a load instruction, the LSU simultaneously accesses the d-cache and store queue with the first and the second addresses. If there are two hits, the data from the two cache lines are simultaneously read out. If the access is a store instruction, the LSU separates associated write data into two subsets and simultaneously stores these subsets in separate cache lines in separate banks of the store queue.
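The address-splitting step in this abstract is straightforward to sketch in software. The 64-byte line size and the function names (`split_access`, `misaligned_load`) are assumptions; the hardware performs the two line accesses simultaneously, whereas this sketch models only the address math and byte-stitching.

```python
# Hypothetical sketch of the misaligned-load handling in the abstract:
# an access crossing a cache-line boundary yields two line addresses,
# both lines are read, and the requested bytes are stitched together.

LINE = 64  # assumed cache-line size in bytes

def split_access(addr, size):
    """Return the one or two line-aligned addresses the access touches."""
    first = addr - (addr % LINE)
    last = (addr + size - 1) - ((addr + size - 1) % LINE)
    return [first] if first == last else [first, last]

def misaligned_load(memory, addr, size):
    # memory: dict mapping line-aligned address -> bytes of that line
    data = b""
    for line_addr in split_access(addr, size):
        line = memory[line_addr]
        lo = max(addr, line_addr) - line_addr
        hi = min(addr + size, line_addr + LINE) - line_addr
        data += line[lo:hi]
    return data
```

An 8-byte load at address 60 touches lines 0 and 64; the banking described in the abstract is what lets both lines be read in the same cycle.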

Lookahead Scheme For Prioritized Reads

US Patent:
20130107655, May 2, 2013
Filed:
Oct 27, 2011
Appl. No.:
13/282873
Inventors:
Rajat Goel - Saratoga CA, US
Hari S. Kannan - Sunnyvale CA, US
Khurram Z. Malik - Santa Clara CA, US
International Classification:
G11C 8/06
G11C 8/00
US Classification:
36523002, 36523008, 36523001
Abstract:
A circular queue implementing a scheme for prioritized reads is disclosed. In one embodiment, a circular queue (or buffer) includes a number of storage locations each configured to store a data value. A multiplexer tree is coupled between the storage locations and a read port. A priority circuit is configured to generate and provide selection signals to each multiplexer of the multiplexer tree, based on a priority scheme. Based on the states of the selection signals, one of the storage locations is coupled to the read port via the multiplexers of the multiplexer tree.
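The mux-tree selection in this abstract can be modeled as repeated pairwise selection driven by priorities. This sketch assumes a power-of-two number of slots and ignores the circular queue's wrap-around bookkeeping; `mux2` and `prioritized_read` are illustrative names, not the patent's.

```python
# Hypothetical software model of the scheme in the abstract: a priority
# circuit drives a multiplexer tree that steers one of the queue's
# storage locations to the read port.

def mux2(a, b, sel):
    # Two-input multiplexer: sel == 0 selects a, sel == 1 selects b.
    return b if sel else a

def prioritized_read(slots, priorities):
    """Select the highest-priority slot's value via a mux tree.
    Assumes a power-of-two number of slots."""
    values, prios = list(slots), list(priorities)
    while len(values) > 1:
        next_v, next_p = [], []
        for i in range(0, len(values), 2):
            # Priority circuit: the select signal favors higher priority.
            sel = 1 if prios[i + 1] > prios[i] else 0
            next_v.append(mux2(values[i], values[i + 1], sel))
            next_p.append(mux2(prios[i], prios[i + 1], sel))
        values, prios = next_v, next_p
    return values[0]
```

Each round halves the candidate set, mirroring one level of the hardware mux tree between the storage locations and the read port.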

Coordinated Prefetching In Hierarchically Cached Processors

US Patent:
20130254485, Sep 26, 2013
Filed:
Mar 20, 2012
Appl. No.:
13/425123
Inventors:
Hari S. Kannan - Sunnyvale CA, US
Brian P. Lilly - San Francisco CA, US
Perumal R. Subramoniam - San Jose CA, US
Pradeep Kanapathipillai - Santa Clara CA, US
International Classification:
G06F 12/08
US Classification:
711122, 711E12057, 711E12024
Abstract:
Processors and methods for coordinating prefetch units at multiple cache levels. A single, unified training mechanism is utilized for training on streams generated by a processor core. Prefetch requests are sent from the core to lower level caches, and a packet is sent with each prefetch request. The packet identifies the stream ID of the prefetch request and includes relevant training information for the particular stream ID. The lower level caches generate prefetch requests based on the received training information.
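The packet flow in this abstract can be sketched minimally. The reduction of "training information" to a single stride, and the names `train` and `lower_level_prefetches`, are assumptions for illustration; real training state would carry more than a stride.

```python
# Hypothetical sketch of the flow in the abstract: the core's unified
# trainer tags each prefetch request with a stream id and its training
# info (here just a stride), and the lower-level cache uses that info
# to generate its own further-ahead prefetch requests.

def train(addresses):
    """Derive a stride from a stream of addresses (unified training)."""
    strides = [b - a for a, b in zip(addresses, addresses[1:])]
    return strides[-1] if strides else 0

def lower_level_prefetches(packet, depth=4):
    # packet: {"stream_id", "addr", "stride"} sent with each request
    return [packet["addr"] + packet["stride"] * i for i in range(1, depth + 1)]

packet = {"stream_id": 7, "addr": 0x1000,
          "stride": train([0x0F80, 0x0FC0, 0x1000])}
```

The key idea is that the lower-level cache does not train independently: it reuses the core's training via the packet, keeping all cache levels coordinated on the same stream.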

Updates For Flash Translation Layer

US Patent:
20220409119, Dec 29, 2022
Filed:
Aug 26, 2022
Appl. No.:
17/896998
Inventors:
- Mountain View CA, US
Hari Kannan - Sunnyvale CA, US
Yuhong Mao - Fremont CA, US
International Classification:
A61B 5/364
A61B 5/366
A61B 5/282
A61B 5/353
A61B 5/00
A61B 5/352
Abstract:
A method of operating a storage system is provided. The method includes executing an operating system on one or more processors of a compute device that is coupled to one or more solid-state drives and executing a file system on the one or more processors of the compute device. The method includes configuring the compute device with one or more replaceable plug-ins that are specific to the one or more solid-state drives, and executing a flash translation layer on the one or more processors of the compute device, with assistance through the one or more replaceable plug-ins for reading and writing the one or more solid-state drives.
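The plug-in structure the abstract describes can be sketched as a host-side translation layer delegating device access to a replaceable object. All class and method names here (`DrivePlugin`, `FlashTranslationLayer`) are hypothetical, and the out-of-place-update mapping is a simplified stand-in for a real FTL.

```python
# Hypothetical sketch of the structure in the abstract: a flash
# translation layer running on the compute device asks a drive-specific,
# replaceable plug-in to perform the actual reads and writes.

class DrivePlugin:
    """Replaceable plug-in encapsulating drive-specific access."""
    def __init__(self):
        self.blocks = {}
    def write(self, physical, data):
        self.blocks[physical] = data
    def read(self, physical):
        return self.blocks[physical]

class FlashTranslationLayer:
    def __init__(self, plugin):
        self.plugin = plugin       # swap this object for a different drive
        self.mapping = {}          # logical -> physical address
        self.next_physical = 0
    def write(self, logical, data):
        # Out-of-place update: allocate a fresh physical block and remap.
        self.mapping[logical] = self.next_physical
        self.plugin.write(self.next_physical, data)
        self.next_physical += 1
    def read(self, logical):
        return self.plugin.read(self.mapping[logical])
```

Keeping the drive-specific code behind the plug-in interface is what lets the same host-side FTL serve different solid-state drives.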

Intelligent Operation Scheduling Based On Latency Of Operations

US Patent:
20220404970, Dec 22, 2022
Filed:
Aug 26, 2022
Appl. No.:
17/897014
Inventors:
- Mountain View CA, US
John Hayes - Mountain View CA, US
Hari Kannan - Sunnyvale CA, US
Nenad Miladinovic - Los Gatos CA, US
Zhangxi Tan - Mountain View CA, US
International Classification:
G06F 3/06
G06F 11/10
H03M 13/37
G11C 29/52
H03M 13/15
Abstract:
A storage system is provided. The storage system includes a plurality of non-volatile memory units and a processor operatively coupled to a plurality of non-volatile memory units. The processor is to perform a method including receiving a request to read data from the storage system. The method also includes determining whether a storage operation should be delayed, based on the request to read the data from the storage system. The method further includes in response to determining that the storage operation should be delayed, delaying the storage operation. The method further includes performing a read operation for the request to read the data.
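The scheduling decision in this abstract can be sketched as a simple predicate. The latency figures, the notion of a memory "unit", and the `schedule` function are assumptions for illustration only; the patent does not pin down the delay criterion this concretely.

```python
# Hypothetical sketch of the decision in the abstract: when a read
# request arrives, a pending long-latency operation (such as an erase)
# on the same memory unit is delayed so the read is serviced first.

# Assumed operation latencies in microseconds, for illustration.
LATENCY = {"read": 100, "write": 1500, "erase": 5000}

def schedule(pending_op, read_request):
    """Decide whether pending_op should be delayed for the read."""
    if read_request is None:
        return "run"
    same_unit = pending_op["unit"] == read_request["unit"]
    long_latency = LATENCY[pending_op["kind"]] > LATENCY["read"]
    return "delay" if same_unit and long_latency else "run"

print(schedule({"kind": "erase", "unit": 3}, {"unit": 3}))
```

The payoff is tail-latency control: a read never queues behind an erase that is an order of magnitude slower on the same non-volatile memory unit.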
Hari S Kannan from Los Altos, CA, age ~40