Overview
About Me
I'm a graduate of Sacramento State, having earned my degree in Computer Science in 2021. Enthusiastic and inquisitive, I'm a team-oriented individual looking to continue a full-time career in computing and software engineering. I'm currently an iOS Engineer at PG&E. On this page, you can see some of the projects I've worked on at school and on my own.
I specialize in iOS application development using Swift, UIKit, and SwiftUI, but I'm open to exploring new technologies. I've personally experimented with scikit-learn, TensorFlow, and Keras for machine learning, as well as a couple of different stacks (MEVN, Vue, MongoDB, GraphQL) for full-stack development. This page will be continuously updated with more projects as I complete or work on them in my own time.
My software interests include:
- Artificial Intelligence
- Machine Learning
- Computer Vision
- Graphics Programming
Other Things About Me
I have several other interests that extend beyond the realm of computer science as well!
- I've been a hobbyist photographer since high school and have been practicing for nine years.
- I enjoy spending my free time doing community service and connecting with others.
- I've been a member of the Kiwanis Family since high school: Key Club back then, and Circle K International during college, where I've held several leadership positions.
- I enjoy playing golf and, pre-COVID, would regularly play at my local golf course.
My affiliate links are at the bottom of the page navigator, or you can head over to my website hub to see them!
Virtual Reality Traffic Simulator
Project demo
Project overview
TrafficSim is a virtual reality application built on Unreal Engine 4.23 (C++) for the Oculus Rift, controlled with a Logitech steering wheel, and developed by my team and me. It is designed to enable researchers to observe and model driver behavior and interactions in a realistic setting. The application presents the user with several prebuilt maps to choose from and also includes a built-in map editor, both intended to be usable by our client with little technical ability required. While similar driving simulation tools already exist, this VR application avoids the cost overhead of dedicated driving simulation hardware, as the only costs incurred for the project are the Oculus headset and a VR-ready computer.
When the player is seated in the game world, they are loaded into any of the pre-built or user-made maps, and every action, reaction, or input is logged to an external spreadsheet for further statistical analysis. The player can drive around under varying conditions, times of day, and road hazards, as well as interact with and avoid neighboring NPC cars.
Upcoming Features
The planned feature roadmap includes:
- Adding more road types such as intersections and highway interchanges.
- Adding stop signs, traffic lights, and construction/school zone signs.
- Allowing the user to save a custom created map.
- Traffic density control, so the user/researcher can input a hard value for NPC generation.
- Dynamic, real-time weather conditions; currently only fog density is implemented.
- Programmable and timeable pedestrian behavior.
- A scenario editor, so the user/researcher can load a specific combination of maps and map settings for more consistent observation environments.
Miscellaneous
I am also responsible for maintaining the project website.
VRcade
Project demo
Project overview
VRcade is a prize-winning submission to the virtual reality hackathon "HackReality" in March 2021. Over one week, our team was tasked with developing a virtual reality application with the hardware and tools of our choice. VRcade was developed in C# on the Unity engine and deployed to the Oculus Quest. At the time of submission it supported four activities: billiards, ping pong, tennis, and air hockey. A fifth activity, foosball, was also planned.
Development
Over the course of the hackathon, I worked with my team to quickly prototype the different ideas we could pursue, given our limited time frame. We used Unity's Oculus Integration package and Microsoft's Mixed Reality Toolkit to get motion tracking and hand tracking working. For billiards, we also gave the player the ability to scan in a real-world table to use as a playing surface in-game. We also used Unity's built-in version control service, Unity Collab, to keep track of each other's changes in real time.
Miscellaneous
Awarded 3rd Place Prize - Best use of AR/VR for Entertainment/Games
[Devpost] [Figma] [Build Archive]
CIFAR-10-Based Image Classifier
Project overview
The CIFAR-10-Based Image Classifier is a convolutional neural network model for computer vision, designed to classify images from the CIFAR-10 dataset. The dataset contains ten classes of images: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. We also used transfer learning to build a secondary neural network based on the VGG16 model and compared the two models' performance.
Development
Feature extraction is performed by a convolutional neural network (CNN) with ten possible outputs. The CNN is composed of multiple hidden layers using 3x3 convolution kernels with strides of 1 and 2 and ReLU activation, several 2x2 max pooling layers, and dropout layers, and it is compiled with categorical cross-entropy loss and the Adam optimizer. The CIFAR-10-based model achieves 76% accuracy, outperforming the transfer-learning VGG16-based model, which achieved 71%.
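As a rough Keras sketch of the kind of architecture described above (the exact layer counts and filter widths here are my own illustrative assumptions, not the project's actual configuration):

```python
# Illustrative CIFAR-10 CNN along the lines described above; layer counts and
# filter widths are assumptions, not the project's exact configuration.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(32, 32, 3)),              # CIFAR-10 images are 32x32 RGB
    layers.Conv2D(32, (3, 3), strides=1, padding="same", activation="relu"),
    layers.Conv2D(32, (3, 3), strides=2, padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3, 3), strides=1, padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),       # ten CIFAR-10 classes
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```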
Miscellaneous
[GitHub] [Project Report]
Network Intrusion Detector
Project overview
The Network Intrusion Detector is a binary classification machine learning model, built from a series of convolution kernels and max pooling layers and trained on the KDD Cup 1999 dataset. The model distinguishes between benign and malicious network connections. It achieves an area under the ROC curve (AUC) of 1.0 and generalized extremely well to the test data, even with a 25%/75% training/test split, making it highly accurate.
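For a sense of what a convolution-and-pooling binary classifier over the KDD feature vectors might look like, here is a hedged Keras sketch; the input width and layer sizes are assumptions on my part, not the project's actual values:

```python
# Illustrative 1D-convolutional binary classifier for tabular KDD Cup 1999 features.
# The input width (41 encoded features) and layer sizes are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

n_features = 41  # assumed number of encoded KDD Cup 1999 features

model = keras.Sequential([
    layers.Input(shape=(n_features, 1)),      # treat the feature vector as a 1D sequence
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # benign (0) vs. malicious (1)
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[keras.metrics.AUC(name="auc")])
```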
Miscellaneous
[GitHub] [Project Report]
Live Translator
Project demo
Project overview
Live Translator is a submission to the hackathon "SF Hacks" in March 2021, under the Health and Wellness track. The project aimed to let two individuals communicate in different languages using the Google Cloud Translation API. My team managed to develop a working frontend using React Native, which we were proud of because, at the time, only one member of our team was proficient with it.
Development
We developed the frontend as an Android app primarily using React Native. We planned to use Google's Cloud Translation API as the backend for our application, and we prototyped our user interface in Figma. By submission time, we had managed to implement voice recognition and the user interface for the chat application.
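The translation backend wasn't wired up by submission time, but a minimal call to the Cloud Translation API (shown here in Python with the google-cloud-translate client purely for illustration; the project's planned React Native integration may have looked different) would be roughly:

```python
# Minimal sketch of a call to Google's Cloud Translation API (basic v2 client).
# Requires the google-cloud-translate package and application credentials;
# the language codes below are illustrative.
from google.cloud import translate_v2 as translate

client = translate.Client()

# Translate a chat message from English to Spanish.
result = client.translate("Hello, how are you?",
                          source_language="en",
                          target_language="es")
print(result["translatedText"])
```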
Miscellaneous
Yelp Rating Predictor
Project overview
The Yelp Rating Predictor is a logistic regression model trained on Yelp's academic dataset. It is trained on the entire corpus of Yelp's provided review text and attempts to predict a business's star rating based on its reviews.
Model Architecture
The model consists of three fully connected layers using tanh, sigmoid, and ReLU activation functions, and is compiled with the Adam optimizer. Feature extraction was performed over the entire body of Yelp's review text using TF-IDF vectorization; due to hardware limitations, only 1,000 features could be vectorized and one-hot encoded. The model achieves an RMSE of 0.2978, demonstrating that it generalizes to the dataset fairly well.
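Here is a hedged sketch of the pipeline described above, combining scikit-learn's TF-IDF vectorizer with a small dense Keras network; the hidden-layer sizes and the mean-squared-error loss are my assumptions, while the activations, optimizer, and 1,000-feature limit come from the description:

```python
# Illustrative TF-IDF + dense-network pipeline; hidden-layer sizes and the MSE loss
# are assumptions, while the activations, optimizer, and 1000-feature cap are from
# the description above.
from sklearn.feature_extraction.text import TfidfVectorizer
from tensorflow import keras
from tensorflow.keras import layers

reviews = ["Great food and friendly staff.",     # placeholder review text
           "Slow service, would not return."]

vectorizer = TfidfVectorizer(max_features=1000)  # hardware-limited to 1000 features
X = vectorizer.fit_transform(reviews).toarray()

model = keras.Sequential([
    layers.Input(shape=(X.shape[1],)),
    layers.Dense(256, activation="tanh"),
    layers.Dense(64, activation="sigmoid"),
    layers.Dense(1, activation="relu"),          # predicted star rating
])

model.compile(optimizer="adam",
              loss="mse",
              metrics=[keras.metrics.RootMeanSquaredError()])
```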
Miscellaneous
[GitHub] [Project Report]
AWS Learning Assistant
Project demo
The EC2 instance hosting the .NET application that provides the web service is currently suspended, to reduce the hosting costs I was paying out of pocket.
Project overview
AWS Learning Assistant is a learning application, built with Android Studio and Visual Studio, that supports registration, communication, and task scheduling. It communicates directly with an AWS EC2 instance running a .NET application that provides a REST API endpoint, backed by an RDS layer. The client itself is written in Java and connects to the RDS instance, which stores user information, class schedules, and tasks.
About the API
The .NET application implements several API controllers in C# that the client uses to fetch data relevant to each screen, as well as to update user, subject, and task information in the database. Each POST method is protected with an implicit transaction to prevent mishandling of data should the service be interrupted mid-request. The web service itself is load balanced using AWS's built-in feature set for EC2 instances, and an SSL certificate is used to serve the endpoint over HTTPS.
Miscellaneous
[GitHub]
Augmented Reality Basketball Shootout
Project demo
Project Overview
AR Hoops is an augmented reality application my team and I built in 24-36 hours as a submission to the hackathon SacHacks 2021. The project is written entirely in Swift, using Apple's ARKit and RealityKit frameworks. The user can shoot the ball toward the hoop by swiping upward on the screen, and can reset its position in world space by pressing the button at the bottom. We are responsible for the design and model of the basketball hoop; the ball and court are generated in-application.
Interesting Challenges
My team and I took on this project despite none of us having experience with augmented reality applications or an extensive Swift background. We're proud that we were able to build a working prototype of the app, but we did face some rather interesting challenges along the way.
Since none of us had experience with either Swift or augmented reality development, a lot of time was spent studying Apple's documentation on ARKit and RealityKit. Luckily, Apple's documentation is rather extensive, and we were able to both set up the physics for the ball and load our models into the game.
Lastly, the direction the ball moves depends on the player's orientation relative to it, and we had assumed that the player would always be facing the hoop, making our swipes dependent on that direction: if you swipe left, the ball bounces toward the -X axis. However, we hadn't considered, "What if the player moves behind the ball?" Sure enough, standing behind the ball and swiping left didn't move the ball to the left relative to our position, but toward the -X direction. To solve this, we implemented a distance check between the ball's position, the hoop's position, and the phone's position in world space. Whichever object was closer to the hoop determined which direction the ball would fly when swiped left or right: if the player is closer to the hoop than the ball, then the player is behind the ball; conversely, if the player is farther from the hoop than the ball, the player must be in front of it.
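The app itself is written in Swift, but the fix boils down to a simple comparison of world-space distances; here is a language-agnostic sketch of the idea in Python (the positions and sign convention are made up for illustration):

```python
# Sketch of the swipe-direction fix described above (the actual app is Swift).
# Positions are made-up 3D world-space coordinates purely for illustration.
import math

def swipe_x_direction(swipe_left, player_pos, ball_pos, hoop_pos):
    """Return the sign of the ball's X velocity for a left/right swipe.

    If the player is closer to the hoop than the ball is, the player is standing
    on the far side of the ball, so the left/right direction is flipped to stay
    consistent with the player's point of view.
    """
    base = -1.0 if swipe_left else 1.0
    player_closer_to_hoop = math.dist(player_pos, hoop_pos) < math.dist(ball_pos, hoop_pos)
    return -base if player_closer_to_hoop else base

# Player standing between the ball and the hoop: a "left" swipe now pushes the ball toward +X.
print(swipe_x_direction(True, player_pos=(0, 0, 1), ball_pos=(0, 0, 3), hoop_pos=(0, 0, 0)))
```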
Miscellaneous
[GitHub]
Postcards
Project demo
Project Overview
Postcards is a full-stack social media web application that supports secure user signup, login, authentication, photo posting, tagging, liking, and commenting, built with the Vue.js framework on the frontend and MongoDB Atlas on the backend. Data is fetched and mutated through a GraphQL API endpoint that interfaces with the database.
Demoed Features
The video demo currently showcases the following features:
- Home page with post carousel.
- User sign in with lazy form validation.
- Posting of an image.
- Favoriting a post.
- Commenting on a post.
- Responsive sidebar navigation.
The following features are not shown, but are implemented:
- User registration.
- Infinite scrolling.
- User profile page.
- Post searching and indexing.
- Logging out.
Miscellaneous
[GitHub]
Simple iOS Calculator
Project demo
Project Overview
Simple iOS Calculator is a fairly self-descriptive project. It's an iOS application, built in Swift, that supports several programmer-oriented calculation features. As a calculator, it can perform your run-of-the-mill numerical calculations, as well as base conversions, modulo, and bitwise operations.
Miscellaneous
[GitHub]