
Supun Samarasekera

age ~57

from Skillman, NJ

Also known as:
  • Su Samarasekera
  • Sajeeva Samarasekera
  • Samarasekera Supun
  • Supun A
  • Supun E
Phone and address:
3 Pebble Beach Ct, Montgomery, NJ 08558

Supun Samarasekera Phones & Addresses

  • 3 Pebble Beach Ct, Skillman, NJ 08558
  • 24 Blackstone Dr, Princeton, NJ 08540 • (609) 252-2707 • (609) 951-9896
  • 104 Olympic Ct #6, Princeton, NJ 08540 • (609) 951-9896
  • 3514 Lancaster Ave #305, Philadelphia, PA 19104 • (215) 951-9896
  • 3514 Lancaster Ave, Philadelphia, PA 19104

Work

  • Company:
    SRI International
    1997
  • Position:
    Technical director

Education

  • Degree:
    Masters
  • School / High School:
    University of Pennsylvania
    1987 to 1991

Skills

Software Engineering • Image Processing • C++ • Algorithms • Program Management • Machine Learning • C • Simulations • Systems Engineering • R&D

Industries

Research
Name / Title:
Supun Samarasekera, Director of Information Technology
Company / Classification:
SRI International (Noncommercial Research Organization; Business Consulting Services)
Phones & Addresses:
201 Washington Rd, Princeton, NJ 08540

Resumes


Technical Director

Location:
3 Pebble Beach Ct, Skillman, NJ 08558
Industry:
Research
Work:
Sri International
Technical Director
Education:
University of Pennsylvania 1987 - 1991
Masters
Skills:
Software Engineering
Image Processing
C++
Algorithms
Program Management
Machine Learning
C
Simulations
Systems Engineering
R&D

US Patents

  • Method And Apparatus For Performing Geo-Spatial Registration Using A Euclidean Representation

  • US Patent:
    6587601, Jul 1, 2003
  • Filed:
    Jun 28, 2000
  • Appl. No.:
    09/605915
  • Inventors:
    Stephen Charles Hsu - East Windsor NJ
    Supun Samarasekera - Princeton NJ
  • Assignee:
    Sarnoff Corporation - Princeton NJ
  • International Classification:
    G06K 9/32
  • US Classification:
    382/294, 707/100, 382/154
  • Abstract:
    A system and method for accurately mapping between camera coordinates and geo-coordinates, called geo-spatial registration, using a Euclidean model. The system utilizes the imagery and terrain information contained in the geo-spatial database to precisely align geographically calibrated reference imagery with an input image, e.g., dynamically generated video images, and thus achieve high-accuracy identification of locations within the scene. When a sensor, such as a video camera, images a scene contained in the geo-spatial database, the system recalls a reference image pertaining to the imaged scene. This reference image is aligned very accurately with the sensor's images using a parametric transformation produced by a Euclidean model. Thereafter, other information associated with the reference image can easily be overlaid upon or otherwise associated with the sensor imagery.
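The "parametric transformation" this abstract describes can be illustrated with a minimal sketch. A planar homography is one common parametric transform for aligning reference imagery with sensor imagery; the matrix `H` and the helper `apply_homography` below are illustrative assumptions, not the patent's Euclidean model.

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 parametric transform H.

    Illustrative only: a planar homography standing in for the more
    general alignment transform described in the patent abstract.
    """
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])          # lift to homogeneous coordinates
    mapped = homog @ H.T                    # apply the transform
    return mapped[:, :2] / mapped[:, 2:3]   # project back to the image plane

# Identity plus a pure translation of (5, -3) pixels:
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
print(apply_homography(H, [[10.0, 20.0]]))  # [[15. 17.]]
```

In a full registration pipeline the entries of `H` would be estimated by minimizing the alignment error between the reference image and the sensor image, rather than given in advance.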
  • Method And Apparatus For Performing Geo-Spatial Registration Of Imagery

  • US Patent:
    6597818, Jul 22, 2003
  • Filed:
    Mar 9, 2001
  • Appl. No.:
    09/803700
  • Inventors:
    Rakesh Kumar - Monmouth Junction NJ
    Stephen Charles Hsu - Cranbury NJ
    Keith Hanna - Princeton NJ
    Supun Samarasekera - Princeton NJ
    Richard Patrick Wildes - Princeton NJ
    David James Hirvonen - Princeton NJ
    Thomas Edward Klinedinst - Doylestown PA
    William Brian Lehman - Mount Holly NJ
    Bogdan Matei - Piscataway NJ
    Wenyi Zhao - Plainsboro NJ
  • Assignee:
    Sarnoff Corporation - Princeton NJ
  • International Classification:
    G06K 9/32
  • US Classification:
    382/294, 382/284, 382/305, 707/102
  • Abstract:
    A system and method for accurately mapping between image coordinates and geo-coordinates, called geo-spatial registration. The system utilizes the imagery and terrain information contained in the geo-spatial database to precisely align geodetically calibrated reference imagery with an input image, e.g., dynamically generated video images, and thus achieve high-accuracy identification of locations within the scene. When a sensor, such as a video camera, images a scene contained in the geo-spatial database, the system recalls a reference image pertaining to the imaged scene. This reference image is aligned very accurately with the sensor's images using a parametric transformation. Thereafter, other information associated with the reference image can easily be overlaid upon or otherwise associated with the sensor imagery.
  • Method Of Pose Estimation And Model Refinement For Video Representation Of A Three Dimensional Scene

  • US Patent:
    6985620, Jan 10, 2006
  • Filed:
    Mar 7, 2001
  • Appl. No.:
    09/800550
  • Inventors:
    Harpreet Singh Sawhney - West Windsor NJ, US
    Rakesh Kumar - Monmouth Junction NJ, US
    Steve Hsu - Cranbury NJ, US
    Supun Samarasekera - Princeton NJ, US
  • Assignee:
    Sarnoff Corporation - Princeton NJ
  • International Classification:
    G06K 9/00
  • US Classification:
    382/154, 382/103, 382/190, 382/294, 348/42, 348/169, 345/419
  • Abstract:
    The present invention is embodied in a video flashlight method. This method creates virtual images of a scene using a dynamically updated three-dimensional model of the scene and at least one video sequence of images. An estimate of the camera pose is generated by comparing a present image to the three-dimensional model. Next, relevant features of the model are selected based on the estimated pose. The relevant features are then virtually projected onto the estimated pose and matched to features of the image. Matching errors are measured between the relevant features of the virtual projection and the features of the image. The estimated pose is then updated to reduce these matching errors. The model is also refined with updated information from the image. Meanwhile, a viewpoint for a virtual image is selected. The virtual image is then created by projecting the dynamically updated three-dimensional model onto the selected virtual viewpoint.
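The step of "virtually projecting" model features onto an estimated camera pose reduces to standard pinhole projection. The sketch below assumes a simple calibration matrix `K` and an identity pose; none of these values come from the patent, which additionally iterates the pose to reduce matching errors.

```python
import numpy as np

def project(points3d, R, t, K):
    """Project 3D model points through an estimated camera pose.

    This is the pinhole step behind projecting model features onto an
    estimated pose; R/t/K here are illustrative, not from the patent.
    """
    cam = (np.asarray(points3d) @ R.T) + t   # world -> camera frame
    pix = cam @ K.T                          # camera -> image plane
    return pix[:, :2] / pix[:, 2:3]          # perspective divide

# Identity pose, focal length 100, principal point (320, 240),
# one model point half a unit right and two units ahead of the camera:
K = np.array([[100., 0., 320.], [0., 100., 240.], [0., 0., 1.]])
R, t = np.eye(3), np.zeros(3)
print(project([[0.5, 0.0, 2.0]], R, t, K))  # [[345. 240.]]
```

Pose refinement as the abstract describes it would compare such projected features against features detected in the image and adjust `R` and `t` to shrink the residuals.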
  • Method And Apparatus For Automatic Registration And Visualization Of Occluded Targets Using Ladar Data

  • US Patent:
    7242460, Jul 10, 2007
  • Filed:
    Apr 16, 2004
  • Appl. No.:
    10/825946
  • Inventors:
    Stephen Charles Hsu - East Windsor NJ, US
    Supun Samarasekera - Princeton NJ, US
    Rakesh Kumar - Monmouth Junction NJ, US
    Wen-Yi Zhao - Somerset NJ, US
    Keith J. Hanna - Princeton NJ, US
  • Assignee:
    Sarnoff Corporation - Princeton NJ
  • International Classification:
    G01C 3/08
  • US Classification:
    356/4.01, 356/622, 382/294
  • Abstract:
    A method and apparatus for a high-resolution 3D imaging ladar system that can penetrate foliage and camouflage to sample fragments of concealed surfaces of interest are disclosed. Samples collected while the ladar moves can be integrated into a coherent object shape. In one embodiment, a system and method for automatic data-driven registration of ladar frames comprise a coarse search stage, a pairwise fine registration stage using an iterated closest points algorithm, and a multi-view registration strategy. After alignment and aggregation, it is often difficult for human observers to find, assess, and recognize objects from a point-cloud display. Basic display manipulations, surface-fitting techniques, and clutter suppression may be utilized to enhance visual exploitation of 3D imaging ladar data.
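The "pairwise fine registration stage using an iterated closest points algorithm" can be sketched in a few lines for 2D point sets. This is a minimal illustration with brute-force matching and an SVD rigid fit, not the patent's coarse-search or multi-view pipeline.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal iterated-closest-points sketch for 2D point sets.

    Each iteration matches every source point to its nearest destination
    point, then solves for the best rigid transform via SVD (Kabsch).
    Returns src aligned onto dst.
    """
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    for _ in range(iters):
        # 1. Match each source point to its nearest destination point.
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # 2. Best rigid transform between the matched, centered point sets.
        mu_s, mu_d = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_d))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_d
    return src

# A rotated copy of a square converges back onto the original:
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
aligned = icp_2d(square @ rot.T, square)
print(np.allclose(aligned, square, atol=1e-6))  # True
```

Real ladar registration works on large 3D clouds and uses spatial indexing for the matching step; the quadratic brute-force search here is only for readability.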
  • Method And Apparatus For Placing Sensors Using 3D Models

  • US Patent:
    7259778, Aug 21, 2007
  • Filed:
    Feb 13, 2004
  • Appl. No.:
    10/779444
  • Inventors:
    Aydin Arpa - Plainsboro NJ, US
    Keith J. Hanna - Princeton NJ, US
    Supun Samarasekera - Princeton NJ, US
    Rakesh Kumar - Monmouth Junction NJ, US
    Harpreet Sawhney - West Windsor NJ, US
  • Assignee:
    L-3 Communications Corporation - New York NY
  • International Classification:
    H04N 7/18
  • US Classification:
    348/139, 348/25, 348/159
  • Abstract:
    Method and apparatus for dynamically placing sensors in a 3D model is provided. Specifically, in one embodiment, the method selects a 3D model and a sensor for placement into the 3D model. The method renders the sensor and the 3D model in accordance with sensor parameters associated with the sensor and parameters desired by a user. In addition, the method determines whether an occlusion to the sensor is present.
  • Method And Apparatus For Detecting Left Objects

  • US Patent:
    7382898, Jun 3, 2008
  • Filed:
    Jun 15, 2005
  • Appl. No.:
    11/152889
  • Inventors:
    Manoj Aggarwal - Lawrenceville NJ, US
    Supun Samarasekera - Princeton NJ, US
    Keith Hanna - Princeton Jct. NJ, US
  • Assignee:
    Sarnoff Corporation - Princeton NJ
  • International Classification:
    G06K 9/00
  • US Classification:
    382/103, 382/168, 382/218, 382/219
  • Abstract:
    A method and apparatus for detecting objects (e.g., bags, vehicles) left in a field of view are disclosed. A long-term representation and a short-term representation of the field of view are constructed, and a difference between the long-term representation and the short-term representation is calculated. One or more criteria may optionally be applied to this difference to determine whether the difference represents an object that was left in the field of view.
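The long-term/short-term comparison this abstract describes can be sketched with two exponential running averages of the scene: a pixel that settles into the fast-adapting model but not the slow-adapting one is a candidate "left object". The adaptation rates and threshold below are illustrative guesses, not values from the patent.

```python
import numpy as np

def detect_left_object(frames, alpha_long=0.01, alpha_short=0.5, thresh=50):
    """Sketch of long-term vs. short-term background differencing.

    Maintains two exponential running averages over a frame sequence and
    flags pixels where the fast and slow models disagree strongly.
    Parameter values are illustrative only.
    """
    long_term = frames[0].astype(float)
    short_term = frames[0].astype(float)
    for f in frames[1:]:
        f = f.astype(float)
        long_term += alpha_long * (f - long_term)     # adapts slowly
        short_term += alpha_short * (f - short_term)  # adapts quickly
    # Pixels where the two representations differ beyond the threshold:
    return np.abs(short_term - long_term) > thresh

# A bright "bag" appears at pixel (2, 2) partway through the sequence:
empty = np.zeros((5, 5), dtype=np.uint8)
bag = empty.copy()
bag[2, 2] = 255
frames = [empty] * 10 + [bag] * 10
mask = detect_left_object(frames)
print(mask[2, 2], mask[0, 0])  # True False
```

The optional extra criteria the abstract mentions (e.g., how long the difference persists, or its size) would be applied to `mask` before raising an alarm.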
  • Method And Apparatus For Providing Immersive Surveillance

  • US Patent:
    7522186, Apr 21, 2009
  • Filed:
    Jul 24, 2002
  • Appl. No.:
    10/202546
  • Inventors:
    Aydin Arpa - Jacksonville FL, US
    Keith J. Hanna - Princeton Junction NJ, US
    Rakesh Kumar - Monmouth Junction NJ, US
    Supun Samarasekera - Princeton NJ, US
    Harpreet Singh Sawhney - West Windsor NJ, US
    Manoj Aggarwal - Lawrenceville NJ, US
    David Nister - Bellevue WA, US
    Stephen Hsu - Sunnyvale CA, US
  • Assignee:
    L-3 Communications Corporation - New York NY
  • International Classification:
    H04N 7/18
  • US Classification:
    348/153, 348/143, 348/152
  • Abstract:
    A method and apparatus for providing immersive surveillance wherein a remote security guard may monitor a scene using a variety of imagery sources that are rendered upon a model to provide a three-dimensional conceptual view of the scene. Using a view selector, the security guard may dynamically select a camera view to be displayed on the conceptual model, perform a walk-through of the scene, identify moving objects, select the best view of those moving objects, and so on.
  • Method And Apparatus For Providing A Scalable Multi-Camera Distributed Video Processing And Visualization Surveillance System

  • US Patent:
    7633520, Dec 15, 2009
  • Filed:
    Jun 21, 2004
  • Appl. No.:
    10/872964
  • Inventors:
    Supun Samarasekera - Princeton NJ, US
    Rakesh Kumar - Monmouth Junction NJ, US
    Keith Hanna - Princeton Junction NJ, US
    Harpreet Sawhney - West Windsor NJ, US
    Aydin Arpa - Plainsboro NJ, US
    Manoj Aggarwal - Plainsboro NJ, US
    Vincent Paragano - Yardley PA, US
  • Assignee:
    L-3 Communications Corporation - New York NY
  • International Classification:
    H04N 7/18
  • US Classification:
    348/152, 348/143
  • Abstract:
    A scalable architecture for providing real-time multi-camera distributed video processing and visualization. An exemplary system comprises at least one video capture and storage system for capturing and storing a plurality of input videos, at least one vision-based alarm system for detecting and reporting alarm situations or events, and at least one video rendering system (e.g., a video flashlight system) for displaying an alarm situation in a context that speeds up comprehension and response. One advantage of the present architecture is that these systems are all scalable, such that additional sensors (e.g., cameras, motion sensors, infrared sensors, chemical sensors, biological sensors, temperature sensors, and the like) can be added in large numbers without overwhelming the ability of security forces to comprehend the alarm situation.

Facebook


Supun Samarasekera
