With a 5-star rating on Course Report, we're one of the top data engineering bootcamps in the world.
"I give this program a 20/10 because it truly changed my life professionally when I wasn't sure if a job in tech could actually happen for me."
"When I tell my software/data engineering friends what we're learning, they're impressed by how much the curriculum covers."
We're committed to helping you secure a job, and we work with you before and after graduation until you land one. Our 2021-2022 graduates' job search results speak for themselves.
Outcomes are for 2021-2022 data engineering program graduates who were seeking new positions. The graduation rate was 90%. The job placement rate was 100% within the money back guarantee period. Placement and salary numbers include engineering and non-engineering roles, as well as full-time and contract-to-hire positions.
Our money back guarantee provides a full refund if you don't receive an offer of at least $70K within 9 months of graduating.
Our curriculum has been built in partnership with industry experts from companies like Facebook, Meltano, and Squarespace to make sure you learn the top skills needed to launch your data engineering career.
You’ll work with real datasets right from the start, and learn how to query, organize and transform data.
You'll start with software engineering fundamentals in Python, relational databases with SQL, and unit testing.
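To give a feel for that first stretch of the course, here's a minimal sketch of the kind of exercise students work through: querying a small relational database from Python and checking the result with a unit test (the cities table and data below are just illustrative).

```python
import sqlite3
import unittest


def top_cities(connection, limit=3):
    """Return the most populous cities from a (hypothetical) cities table."""
    cursor = connection.execute(
        "SELECT name, population FROM cities ORDER BY population DESC LIMIT ?",
        (limit,),
    )
    return cursor.fetchall()


class TestTopCities(unittest.TestCase):
    def setUp(self):
        # Build a small in-memory database so the test is self-contained
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE cities (name TEXT, population INTEGER)")
        self.conn.executemany(
            "INSERT INTO cities VALUES (?, ?)",
            [("NYC", 8_400_000), ("Chicago", 2_700_000), ("Albany", 99_000)],
        )

    def test_orders_by_population(self):
        self.assertEqual(
            top_cities(self.conn, limit=2),
            [("NYC", 8_400_000), ("Chicago", 2_700_000)],
        )


if __name__ == "__main__":
    unittest.main()
```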
Then you'll learn to build a modern data infrastructure from scratch, using Snowflake for big data and DBT for data transformations, and build dashboards with data visualization tools.
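Later in the module, queries move from a local database to a cloud warehouse. A rough sketch of what that looks like with Snowflake's Python connector is below (the account, credentials, and events table are placeholders, not a working setup):

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder credentials; in practice these come from environment variables
conn = snowflake.connector.connect(
    account="your_account_id",
    user="your_username",
    password="your_password",
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

# Run an aggregation against a (hypothetical) events table
cursor = conn.cursor()
cursor.execute("SELECT event_type, COUNT(*) FROM events GROUP BY event_type")
for event_type, num_events in cursor.fetchall():
    print(event_type, num_events)
```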
We'll go deeper into the key components of backend web development, building APIs in Flask and learning design patterns like Model View Controller (MVC) and ETL in Python.
At this point, you'll be able to build an API from scratch in Python, and be well practiced in a professional workflow using Git and GitHub.
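To make that concrete, here is a minimal sketch of a Flask API in the MVC spirit (the movie data is hypothetical, and a full project would split the model, routes, and ETL code into separate files):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# "Model" layer: in a real project this data would come from a SQL database
MOVIES = [
    {"id": 1, "title": "Arrival", "year": 2016},
    {"id": 2, "title": "Moneyball", "year": 2011},
]


# "Controller" layer: routes receive the request and return a JSON response
@app.route("/movies")
def list_movies():
    return jsonify(MOVIES)


@app.route("/movies/<int:movie_id>")
def show_movie(movie_id):
    movie = next((m for m in MOVIES if m["id"] == movie_id), None)
    if movie is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(movie)


if __name__ == "__main__":
    app.run(debug=True)
```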
You'll use cloud computing tools like Docker and AWS, and learn how to build a data warehouse in Redshift. We'll then learn how to orchestrate ETL data pipelines with Airflow.
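As a rough illustration of the orchestration piece, a toy daily ETL pipeline in Airflow might look like the sketch below (the task names and data are made up, and DAG arguments vary a bit across Airflow versions):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # In a real pipeline this might pull from an API or an S3 bucket
    return [{"name": "NYC", "population": 8_400_000}]


def transform(ti):
    rows = ti.xcom_pull(task_ids="extract")
    return [{**row, "population_millions": row["population"] / 1_000_000} for row in rows]


def load(ti):
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Would load {len(rows)} rows into the warehouse")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```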
We begin preparing for technical interviews halfway through the program. Students learn the fundamentals of computer science and algorithms in Python, practice data modeling, and prepare for SQL interview questions.
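A typical warm-up from that interview prep looks something like the exercise below (an illustration, not a question from our actual prep materials):

```python
def two_sum(numbers, target):
    """Classic interview warm-up: return indices of two numbers that add to target."""
    seen = {}  # value -> index
    for i, value in enumerate(numbers):
        complement = target - value
        if complement in seen:
            return [seen[complement], i]
        seen[value] = i
    return None


assert two_sum([2, 7, 11, 15], 9) == [0, 1]
```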
After the program ends, we continue to meet twice weekly during post-work. This ensures that students continue to improve and review their skills, and get help navigating the interview process. We also regularly connect students with employers to assist with career placement.
The best engineering candidates show on-the-job experience. So we built an internship directly into our coding bootcamp.
Halfway through our program, students partner with top companies to deliver part-time work. These placements have spanned a variety of fields, from machine learning and data engineering to blockchain and cryptocurrency.
Our students and instructors come with a diverse range of talents and backgrounds, and support each other during the program and beyond. From pairing during class to teaming up in the internship, our program is designed to grow your technical and teamwork skills and accelerate your transition to a new job.
"Jigsaw Labs have helped changed the trajectory of my life and career over the course of a few short months. [...] The best part is you'll find yourself with a supportive group, all of whom are looking out for your eventual success."
"Taking this course was one of the best decisions I have made in my life. I have felt so challenged and also so rewarded. The job offers I've received post-course have been awesome. There are not a lot of women of color in this field [but] I really felt super supported and connected to the other folks in my cohort. "
"For someone like me seeking to transition to data engineering from a "non-traditional" background, Jigsaw Labs' curriculum covering in-demand skills, coupled with [the instructor's] attentiveness, has been a huge boost to my domain knowledge and morale."
We designed a high-quality program, backed by a money back guarantee, that fits around your existing schedule.
Because our course is part-time, our students can land a new career without quitting their current job.
Our course is designed to fit your schedule. Unlike a full-time course, if you have difficulties with a topic, you'll have time to catch up between classes.
Our Zoom-based classes consist of live lectures followed by interactive readings and pairing on labs with instructor assistance.
Book one-on-one office hours each week for individualized support.
To maintain our 100% placement rate, we provide career coaching and job placement services to all of our graduates.
After graduation, we continue with twice-weekly classes for technical and career coaching.
Both with a money back guarantee.
Then after you're hired, $833 over 15 monthly payments for a total cost of $13,500
Money back guarantee if you do not receive a salary of $70,000+ within 9 months of graduating.
Monthly payments only after you're hired
$4,750 paid before each of the two semesters, for a total cost of $9,500
Money back guarantee if you do not receive a salary of $70,000+ within 9 months of graduating.
Next cohort starts January 24th and runs for 24 weeks.
This is the best way to get a feel for our teaching style, and figure out if you'd enjoy data engineering. Check out our 80 free interactive lessons on topics ranging from Python to Pandas to Machine Learning and Neural Networks.
Learn coding fundamentals by using Python to pull data from the web.
Then create interactive graphs and analyses of your data.
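Those lessons build toward something like this sketch, which pulls JSON from the web into a pandas DataFrame and charts it with Plotly (the endpoint and column names here are placeholders):

```python
import pandas as pd
import plotly.express as px
import requests

# Hypothetical JSON endpoint; any API that returns a list of records works here
response = requests.get("https://example.com/api/restaurants.json")
df = pd.DataFrame(response.json())

# Group the records and plot the result as an interactive bar chart
counts = df.groupby("cuisine").size().reset_index(name="num_restaurants")
fig = px.bar(counts, x="cuisine", y="num_restaurants")
fig.show()
```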
We'll see how Spark allows us to work with and query large amounts of data quickly. We'll use PySpark to retrieve and query data from AWS.
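A stripped-down version of that workflow might look like the following (the S3 bucket and columns are placeholders, and reading from S3 also requires the hadoop-aws package and AWS credentials):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-query-demo").getOrCreate()

# Hypothetical bucket and file path
df = spark.read.csv("s3a://my-demo-bucket/trips.csv", header=True, inferSchema=True)

# Register the DataFrame as a temporary view so we can query it with SQL
df.createOrReplaceTempView("trips")
spark.sql("""
    SELECT borough, COUNT(*) AS num_trips
    FROM trips
    GROUP BY borough
    ORDER BY num_trips DESC
""").show()
```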
Learn why Docker is an essential part of cloud computing. Work with images and containers, and then build your own custom image.
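The lesson itself works through the Docker CLI and Dockerfiles; as a small taste, Docker's Python SDK lets you do the same kind of thing programmatically (the image and command below are just examples):

```python
import docker  # pip install docker

client = docker.from_env()

# Run a short-lived container from a public image and capture its output
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,
)
print(output.decode())

# List the images available locally
for image in client.images.list():
    print(image.tags)
```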
We love teaching data engineering. It allows you to focus on modern fundamentals like Python, SQL, big data, and cloud computing. Because of this, our students graduate not only with in-demand data engineering skills, but also with skills that allow for a flexible career going forward, whether they wish to pursue machine learning or backend engineering.
Multiple graduates have been hired as backend engineers right out of the bootcamp, and many data engineering skills are a prerequisite to becoming a machine learning engineer.
In fact, our original motivation for building a data engineering bootcamp was that we saw going deep into data engineering skills as a better entryway into data science and machine learning than learning the skills taught in a data science bootcamp.
First, rather than have you check off skills on a list, we go in depth into a few core skills and then branch out into curriculum that reinforces them. For example, our second module, on backend development, goes deeper into Python and SQL by teaching students how to build an API. Tools like Redshift, Snowflake, and DBT reinforce SQL skills. Finally, tools like AWS, Docker, and bash teach the skills needed for cloud computing.
Second, we built the course with mechanisms at each stage to ensure students end up landing a job. This starts with admissions, where we only admit students we believe will be successful. It continues with our curriculum, where we only teach material that employers are looking for, and remove material if the job market changes.
With our internship program, we vet companies to make sure students will be able to contribute but will still be challenged. And with our career services, we keep meeting with students to continue their training and connect them with companies to assist in their job search. It's this combination that has allowed all of our graduates to land jobs.
Our data engineering course has a 100% job placement rate, and there are multiple ways that we achieve this.
1. Ongoing Technical and Career Coaching - Even after our students complete our program, we continue to meet with them twice weekly to ensure that they continue to improve their skills and progress through the job search. We work with each student individually to help them clean up their LinkedIn profile and revise their resume so that it is most attractive to employers.
2. Job placement - We help students in their job search by reaching out to employers on their behalf, gathering information about what employers are looking for and then making the connection.
3. Program structure - Our curriculum focuses on the skills most in demand by employers, which makes it easier for students to land interviews. In addition, our internship program gives students the on-the-job experience that employers love to see in a candidate.
To enroll in the course, the first step is to schedule a Zoom call to learn more about our online bootcamp. From there, we'll schedule a technical interview that assesses your Python programming skills. The interview covers the Python material in the first ten lessons of our free Python for data course.
When we decide whether to admit students into our program, the primary question we ask is: do we believe this student will get a job upon graduating? We only admit students we are confident will graduate from the course with a positive outcome.
We developed our curriculum by (1) scraping data engineering job postings to identify skill sets, (2) talking to industry experts (data engineers, hiring partners, internship partners) and monitoring data engineering communities, and (3) seeing what is actually asked in technical interviews.
Our goal is for you not just to land your initial job, but to have a flexible career going forward. While we have curriculum on Apache Spark and Kubernetes, we moved it out of our core curriculum because we found it asked more of mid-level engineers than of new graduates. The same goes for real-time streaming tools like Kafka and big data databases like Hadoop; these skills rarely came up in job interviews. And we do not teach NoSQL, as it has not come up in job interviews and is listed in only 15% of data engineering job listings.
We have chosen AWS as our cloud provider; however, students have also learned to use GCP (Google Cloud Platform) and Microsoft's Azure platform as part of their internship experience.
One thing to note is that because we constantly update our syllabus, we have a lot of battle-tested curriculum outside of our core curriculum (one thousand pages), and all of it is available to our students. So if a student needs to learn a topic for an internship or a job interview, we often have material to ramp them up.
None. The primary task of data engineers is to build the data infrastructure needed for machine learning and data analytics. While learners are asked to perform data analysis work as part of their internship, students lean on their engineering skills and their understanding of data collection and data processing, not statistics or calculus, to do that work.
Data analysts may be asked to use a combination of spreadsheets, SQL, and data visualization tools to draw insights. Data scientists, by contrast, generally know either R or Python, as well as related machine learning libraries like scikit-learn, and have a strong understanding of statistics to draw insights and perform predictive modeling.
Data engineers, by contrast, should be well versed in backend engineering and Python, and have a deep understanding of SQL, cloud computing, big data tools like Snowflake and Redshift, and setting up data pipelines with orchestration tools like Airflow.
The primary language is SQL (although it is not technically a programming language), followed closely by Python. Data engineers should also know HTML and CSS so that they can scrape websites. Some literacy in JavaScript also helps, as it comes in handy for scraping websites and for setting up marketing analytics pipelines. (We have curriculum on all of these subjects for our students, even though JavaScript does not often come up in interviews.)
Some job descriptions may accept (or prefer) literacy in other object-oriented programming languages like Java as an alternative to Python. Still, Python and SQL are the skill sets requested most.
Students complete their capstone projects through their internship. Student projects have involved automating the deployment of machine learning models, setting up a data pipeline that pulls data from Slack using an EL tool, performing analysis using DBT, and orchestrating with GitLab actions. Still other students have worked with cryptocurrency databases to develop dashboards that helped their internship partner understand market fluctuations.
During the post-work class sessions, multiple students are working with Code for Boston, helping to build a data pipeline that tracks police misconduct.