Functional Skills

Business Intelligence
Data Analysis
Data Engineering
Data Governance
Data Strategy
Data Visualization / Reports
Machine Learning
Project Management
DevOps
Regulatory Compliance
AI Agent Design
AI Agent Workflow Orchestration
AI Performance Optimization
AI Strategy
Artificial Intelligence

Software Skills

Power BI
MySQL
YouTube API
Apache Spark
Docker
MongoDB
Big Data
LangChain
AWS Lambda
Linux
SQL
Raspberry Pi
AWS
OpenAI API
Python

Sector Experience

Financial Services
Technology
Fortune 500

Experience

Valorence Engineering
Head of Big Data
5/2024 - Present
• Led the Big Data department, with responsibilities including strategy design, recruitment, teaching, private sessions, application development, building POCs, and communicating directly with the CEO
• Designed a curriculum to teach Data Engineering skills
• Trained students using daily classes and long-term projects building data pipelines in multiple technologies
• Created lesson plans to constantly improve the quality of education given to consultants
• Developed a Python application that uses the OpenAI API to generate practice scripts from lists of technologies, helping consultants prepare for interviews
• Developed a basic adventure game using Pygame, encouraging students to make improvements to it as part of Python lessons
• Built a streaming data pipeline with Kafka, Spark Structured Streaming, and Postgres with the game as a data source, to demonstrate practically how to build a streaming pipeline
• Migrated an app's database from Postgres to DynamoDB, and then to Databricks DBFS, demonstrating how to reuse the same pipeline across multiple databases
• Built a Data Warehouse using Snowflake and DBT to teach about modern data warehousing and ELT
• Installed Llama 3.2 locally using Ollama and used it to power a LangChain chatbot application
• Wrote a Terraform job to streamline the process of students setting up an EMR cluster with the proper configuration to begin running Spark code immediately
• Taught introductory DevOps

Information Tech Consultants Engineering
Big Data Engineer
1/2021 - 5/2024
• Assisted in preparing Big Data consultants for interviews and oversaw their progress
• Designed lessons to teach cloud technology to incoming consultants, including AWS, Azure, GCP, Snowflake, and Databricks
• Developed an AI-powered Python application using the OpenAI API to automatically generate resumes for consultants based on the skills and experience they provided
• Contributed to the transition of a CI/CD pipeline from TeamCity to GitLab CI
• Dockerized multiple applications and taught students how to do so
• Walked students through the process of setting up hardware virtualization in BIOS and UEFI in order to be able to use Docker on their machines
• Built a streaming data pipeline in AWS Kinesis and Firehose that took data from the Twitter API and put it into S3, then built a Python app using NLTK to perform sentiment analysis on the data
• Wrote AWS Lambda functions that used Pandas to process small amounts of data arriving in S3, using EventBridge for events
• Gave hands-on classes on how to deploy AWS EMR clusters, Azure HDInsight clusters, and GCP Dataproc clusters
• Demonstrated how to install and configure Hadoop and connect multiple machines together using a LAN in order to create a cluster

Anthem Engineering
Big Data Engineer
10/2020 - 1/2021
• Converted BTEQ SQL into Redshift SQL in preparation for migration from Teradata to Redshift
• Contributed code on GitHub, using Jenkins for CI/CD

JPMorganChase Engineering
Big Data Engineer
10/2019 - 10/2020
• Took ownership of a broken data archival system and led a 3-person team that fixed the issue, then archived the entire backlog of data
• Wrote Python and PySpark code for a large application that used machine learning techniques to perform CCAR, a regulatory compliance requirement for US financial institutions
• Took ownership of the application's test suite and improved code coverage from 20% to 75% using Pytest and Coverage.py
• Wrote Python and Bash scripts for ad hoc tasks
• Contributed code to the module of the program that uses Scala
• Migrated and organized files on HDFS using a Hadoop cluster running on Linux
• Scheduled tasks using Apache Airflow

Adobe Engineering
Big Data Engineer
2/2019 - 10/2019
• Built ETL pipelines in PySpark to prepare data for ingestion into Adobe Experience Cloud
• Built PySpark jobs to prepare other data for ingestion into Adobe Analytics
• Met with data architects to help plan pipelines from start to finish

Multiple Clients
Freelance Web Developer
1/2016 - 2/2019
• Built a gardening app with Ruby on Rails that interfaced with IoT devices, including Arduino, Raspberry Pi, and digital cameras, to power an automated watering system and collect data from temperature and soil-moisture sensors, allowing users to control and monitor their garden from any location
• Wrote Arduino code using Arduino's version of C++ to power sensors, solenoid valves, and motors
• Contributed React code to a bus-scheduling app for the city of Savannah, Georgia
• Contributed JavaScript code to a clone of Airbnb used by a presidential campaign in 2016
• Built many apps on the Node.js stack for ad hoc purposes, including MongoDB, Express.js, React, and Redux
• Built web apps in Python using both Flask and Django
• Integrated Postgres and MySQL databases into web apps, and processed JSON files natively in JavaScript