
Tejas Dastane

Computer Science graduate student at the USC Viterbi School of Engineering

Location  Los Angeles, CA, USA.



LinkedIn    GitHub    Facebook    Instagram

About me

I am a Computer Science student who is passionate about technology. I live in Los Angeles, CA, where I am pursuing my Master's degree in Computer Science. I enjoy taking on difficult challenges: they offer an opportunity to learn something interesting, and they keep me motivated.

My hobbies include gaming, listening to music, YouTube, and photography; I have posted a few of my favourite shots on my Instagram page. I speak the following languages:

  1. Marathi (Native)
  2. Hindi (Native)
  3. English (Native)
  4. Spanish (Intermediate)
Skills

  • C/C++, Java, Python, and JavaScript are the programming languages I am familiar with.
  • I have some experience in Machine Learning and Computer Vision; I am passionate about their applications and want to gain deeper insight into them.
  • I have used both styles of database querying: SQL (Oracle, MySQL) and NoSQL (MongoDB).
  • I like to apply creativity to enhance an application's UI/UX.
  • I have plenty of experience in web development, since most of my projects have web-based interfaces.
  • I can work on either the front end or the back end; both are equally interesting to me.
A few projects
  • Indian Sign Language Translation (SiLaTra), my undergraduate project.
    Categories: Computer Vision, Machine Learning, Image Processing

    Translates Indian Sign Language gestures and hand poses, and speaks the result aloud. Developed an Android application which captures a live camera feed and streams it to our back-end server over a socket connection. The server applies image-processing and machine-learning operations to recognise the sign and sends the result back as text; the application then uses the Text-to-Speech API to voice it.

    (Video: one of our group members making the gesture "Good Morning".)

    GitHub repository   |   SiLaTra API GitHub repository   |   SiLaTra APK
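The continuous client-to-server streaming described above can be sketched with a simple length-prefixed framing scheme. This is only an illustration, not SiLaTra's actual wire protocol; the function names are hypothetical:

```python
import struct

# Each message on the socket is a 4-byte big-endian length header followed
# by the payload: a JPEG-encoded camera frame going up to the server, or a
# UTF-8 recognition result coming back down. (Illustrative framing only.)

def pack_message(payload: bytes) -> bytes:
    """Prefix the payload with its length so the receiver knows where it ends."""
    return struct.pack(">I", len(payload)) + payload

def unpack_messages(stream: bytes) -> list[bytes]:
    """Split a received byte stream back into the payloads it was packed from."""
    payloads, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        payloads.append(stream[offset:offset + length])
        offset += length
    return payloads
```

Length-prefixing matters here because TCP is a byte stream: without a header, consecutive frames would blur together on the server side.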
  • Developed a web-based platform for the Computer Department's library, an in-house project
    Categories: Web Application Development

    Along with two other members, I developed a web-based platform for the library of the Department of Computer Engineering. It enables students, faculty, and staff to easily manage their library accounts and perform activities such as renewing books, checking a book's status, and searching the catalogue. The platform is hosted on the college server for internal use by the department.

  • Tic Tac Toe AI
    Category: Basic Artificial Intelligence

    Developed a simple AI for the game of Tic Tac Toe using the Minimax algorithm. The AI cannot be defeated: a perfect opponent can only draw against it, and it will win if you make even a single mistake.

    See the AI in action:

    Link to the game   |   Link to GitHub repository
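The Minimax idea above can be sketched in a few lines: the AI recursively scores every reachable position, assuming it maximises its own outcome while the opponent minimises it. The names and board encoding here are illustrative, not the project's actual code:

```python
# Board: a list of 9 cells, each "X", "O", or " ". "X" is the AI.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return "X" or "O" if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) from the AI's point of view:
    +1 = AI wins, 0 = draw, -1 = AI loses."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    results = []
    for m in moves:
        board[m] = player                      # try the move
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "                         # undo it
        results.append((score, m))
    # The AI picks the highest score; the opponent picks the lowest.
    return max(results) if player == "X" else min(results)
```

Because Tic Tac Toe's game tree is tiny, the full tree can be searched with no pruning, which is why the resulting AI is unbeatable.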
  • Pushups Tracker - an application for motivating workouts.
    Category: Android Application Development

    Developed an Android application for monitoring daily workout counts, aimed at motivating users to work out at home. Since push-ups are easy to quantify, the application tracks the push-ups performed by the user. A calendar view lets the user tap any date to see how many push-ups were done that day, and the application also displays the weekly average.

    Link to GitHub repository
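The weekly-average feature mentioned above amounts to grouping per-day counts by calendar week. A minimal sketch of that aggregation step, assuming counts keyed by date (the function name is hypothetical, not the app's actual code):

```python
from collections import defaultdict
from datetime import date

def weekly_averages(counts: dict) -> dict:
    """Group per-day push-up counts by ISO (year, week) and average each week.

    counts: dict mapping datetime.date -> push-ups done that day.
    Returns: dict mapping (iso_year, iso_week) -> average over recorded days.
    """
    weeks = defaultdict(list)
    for day, n in counts.items():
        year, week, _ = day.isocalendar()  # ISO year/week handles year boundaries
        weeks[(year, week)].append(n)
    return {wk: sum(ns) / len(ns) for wk, ns in weeks.items()}
```

Averaging only over recorded days (rather than dividing by seven) is one possible design choice; dividing by seven would instead penalise skipped days.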
Research

  • An effective pixel-wise technique for skin colour segmentation - using Pixel Neighbourhood Technique

    Published in: International Journal on Recent and Innovation Trends in Computing and Communication (IJRITCC), March 2018.

    This paper presents a novel technique for skin colour segmentation that overcomes the limitations of existing techniques such as colour-range thresholding. Skin colour segmentation is affected by varied skin colours and surrounding lighting conditions, which lead to poor segmentation for many techniques. We propose a new two-stage Pixel Neighbourhood technique that classifies any pixel as skin or non-skin based on its neighbouring pixels. The first stage computes the probability of each pixel being skin by passing its HSV values to a Deep Neural Network model. The second stage computes the likelihood of each pixel being skin from the probabilities of its neighbouring pixels. This technique performs skin colour segmentation better than the existing techniques.

    Link to the research paper
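The second stage described above can be sketched as a neighbourhood vote over per-pixel probabilities. In the paper the first stage is a trained DNN on HSV values; here the probability map is simply taken as input, and the window size and threshold are illustrative choices, not the paper's parameters:

```python
def neighbourhood_skin_mask(probs, size=3, thresh=0.5):
    """Stage 2 sketch: mark a pixel as skin if the mean skin probability
    over its size x size neighbourhood exceeds `thresh`.

    probs: 2D list of per-pixel skin probabilities in [0, 1]
           (stand-in for the DNN output of stage 1).
    Returns: 2D list of booleans (the skin mask).
    """
    h, w = len(probs), len(probs[0])
    pad = size // 2

    def p(i, j):
        # Clamp coordinates to the image edges for border pixels.
        return probs[min(max(i, 0), h - 1)][min(max(j, 0), w - 1)]

    mask = []
    for i in range(h):
        row = []
        for j in range(w):
            window = [p(i + di, j + dj)
                      for di in range(-pad, pad + 1)
                      for dj in range(-pad, pad + 1)]
            row.append(sum(window) / len(window) > thresh)
        mask.append(row)
    return mask
```

The effect is that an isolated high-probability pixel surrounded by low-probability neighbours is suppressed, which is exactly the kind of speckle noise that per-pixel thresholding lets through.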
  • Real-time Indian Sign Language (ISL) Recognition

    Presented at: The 9th International Conference on Computing, Communications and Networking Technologies (ICCCNT), held at the Indian Institute of Science (IISc), Bengaluru, India, July 2018.

    This paper presents a system that recognises hand poses & gestures from the Indian Sign Language (ISL) in real-time using grid-based features. The system attempts to bridge the communication gap between the hearing- and speech-impaired and the rest of society. Existing solutions either provide relatively low accuracy or do not work in real-time; this system performs well on both counts. It can identify 33 hand poses and some gestures from the ISL. Sign language is captured from a smartphone camera and its frames are transmitted to a remote server for processing. The use of any external hardware (such as gloves or the Microsoft Kinect sensor) is avoided, making it user-friendly. Techniques such as face detection, object stabilisation and skin colour segmentation are used for hand detection and tracking. The image is then subjected to a grid-based feature extraction technique which represents the hand's pose as a feature vector. Hand poses are classified using the k-Nearest Neighbours algorithm. For gesture classification, the observed motion and intermediate hand-pose sequences are fed to Hidden Markov Model chains corresponding to the 12 pre-selected gestures defined in ISL. Using this methodology, the system achieves an accuracy of 99.7% for static hand poses and 97.23% for gesture recognition.

    The paper will be published in IEEE Xplore soon.
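The tail of the pipeline above (grid-based features fed to k-NN) can be sketched as follows. The grid size, the use of cell skin-pixel density as the feature, and the distance metric are illustrative assumptions here, not the paper's exact choices:

```python
import math

def grid_features(mask, rows=4, cols=4):
    """Divide a binary hand mask into a rows x cols grid and return, for
    each cell, the fraction of hand pixels it contains (the feature vector).

    mask: 2D list of 0/1 values marking segmented hand pixels.
    """
    h, w = len(mask), len(mask[0])
    feats = []
    for r in range(rows):
        for c in range(cols):
            cell = [mask[i][j]
                    for i in range(r * h // rows, (r + 1) * h // rows)
                    for j in range(c * w // cols, (c + 1) * w // cols)]
            feats.append(sum(cell) / len(cell))
    return feats

def knn_classify(train, query, k=3):
    """Classify `query` as the majority label among its k nearest
    training vectors (Euclidean distance).

    train: list of (feature_vector, label) pairs.
    """
    nearest = sorted(train, key=lambda t: math.dist(t[0], query))
    top = [label for _, label in nearest[:k]]
    return max(set(top), key=top.count)
```

Representing the pose as per-cell densities makes the feature vector a fixed length regardless of hand size, which is what lets a simple distance-based classifier like k-NN work on it.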

Education

  • Master's in Computer Science from the University of Southern California.
  • B.Tech in Computer Engineering from K.J. Somaiya College of Engineering, a college affiliated with the University of Mumbai, India, with a CGPA of 8.95/10.
  • Passed 12th grade (HSC), Maharashtra State board, Mumbai, India with 90.76%.
  • Passed 10th grade (SSC), Maharashtra State board, Mumbai, India with 89%.