👉 Building data pipelines in Orchest is really that easy! Check out our introductory video.
This quickstart will follow an example explaining how to build data science pipelines in Orchest and touches upon some core principles that will be helpful when you get to building your own pipelines. The example pipeline will download the sklearn California housing dataset, explore the data, train some classifiers, and in the final step collect the results from those classifiers.
For the impatient
As Miguel Grinberg would say: “If you are the instant gratification type, and the screenshot at the top of this article intrigued you, then head over to the GitHub repository for the code used in this article. Then come back to learn how everything works!”
To get started in Orchest, you can import the GitHub repository URL https://github.com/orchest/quickstart through the UI:
Your first project
To start, make sure you have installed Orchest and started it:
# Make sure to be in the root-level orchest directory.
./orchest start
All code in this quickstart is written in Python; nevertheless, other languages such as R are supported as well.
Get California housing data
The logical next step is to create the first pipeline called California housing and open the pipeline editor. This will automatically boot an interactive session so you can interactively edit the Python script we create (the other steps will be Jupyter Notebooks!):
Create a new step by clicking: + new step.
Enter a Title and File path, respectively: Get housing data and get-data.py.
The changes you make to the pipeline (through the pipeline editor) are saved automatically.
Now we can start writing our code through the familiar JupyterLab interface, simply press edit in JupyterLab (making sure you have the step selected) and paste in the following code:
 1 import orchest
 2 import pandas as pd
 3 from sklearn import datasets
 4
 5 # Explicitly cache the data in the "/data" directory since the
 6 # kernel is running in a Docker container, which is stateless.
 7 # The "/data" directory is a special directory managed by Orchest
 8 # to allow data to be persisted and shared across pipelines and
 9 # even projects.
10 print("Downloading California housing data...")
11 data = datasets.fetch_california_housing(data_home="/data")
12
13 # Convert the data into a DataFrame.
14 df_data = pd.DataFrame(data["data"], columns=data["feature_names"])
15 df_target = pd.DataFrame(data["target"], columns=["MedHouseVal"])
16
17 # Output the housing data so the next steps can retrieve it.
18 print("Outputting converted housing data...")
19 orchest.output((df_data, df_target), name="data")
20 print("Success!")
As you can see, we have highlighted a few lines in the code to emphasize important nuts and bolts for a better understanding of building pipelines in Orchest. They are explained below:
First, we start with line 11, in which we cache the data in the /data directory. This is actually the userdir/data directory (from the Orchest GitHub repository) that gets bind mounted into the Docker container running your code. This allows you to access the data from any pipeline, even from pipelines in different projects. Data should be stored in /data not only for sharing purposes, but also to make sure that jobs do not unnecessarily copy the data when creating the snapshot for reproducibility reasons.
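For example, a step in a completely different pipeline (or project) could reuse the cached files. A minimal sketch, assuming the same scikit-learn dataset helper as above:

import pandas as pd
from sklearn import datasets

# Because the dataset was cached under "/data", fetching it again from any
# pipeline (even one in another project) reuses the cached files instead of
# downloading them again.
data = datasets.fetch_california_housing(data_home="/data")
df = pd.DataFrame(data["data"], columns=data["feature_names"])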
Line 19 showcases the usage of the Orchest SDK to pass data between pipeline steps. Keep in mind that calling orchest.transfer.output() multiple times will result in the data getting overwritten; in other words, only output data once per step!
To run the code, switch back to the pipeline editor, select the step and press run selected steps. After just a few seconds you should see that the step completed successfully. Let’s check the logs to confirm - the logs contain the latest STDOUT of the script.
Remember that running the code will output the converted housing data, so in the next step we can now retrieve and explore that data!
Now that we have downloaded the data, the next pipeline step can explore it. Create another pipeline step with Title Data exploration and File path explore-data.ipynb, and connect the two steps.
You can get the code for this pipeline step from the explore-data.ipynb file in the GitHub repository.
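Inside that notebook, retrieving the data output by the previous step would look roughly like the sketch below, using the Orchest SDK's orchest.get_inputs() (the variable names are just illustrative):

import orchest

# Retrieve the data that the "Get housing data" step output under the
# name "data": a tuple of the features and target DataFrames.
inputs = orchest.get_inputs()
df_data, df_target = inputs["data"]

# Take a first look at the data.
print(df_data.head())
print(df_target.describe())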
Maybe you already noticed the imports in the previous step:
import orchest
import pandas as pd
from sklearn import datasets
Adding additional dependencies (even system-level dependencies) can be done by using environments.
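As a sketch, an environment set-up script could look like the following (the package names here are only examples, not part of the quickstart):

# Example environment set-up script: install extra Python packages
# needed by your pipeline steps.
pip install seaborn plotly

# Depending on the base image, system-level packages can be installed too.
sudo apt-get install -y graphviz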
Finalizing the pipeline
To get to the final pipeline, please refer to the For the impatient section to import the full pipeline. You can also build it from scratch yourself!
The interactive session does not shut down automatically, so its resources will keep running when you edit another pipeline. You can shut down the session manually by clicking the shut down button. Of course, all resources are shut down once you shut down Orchest itself.
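Assuming the same orchest script used to start Orchest above, that looks like:

./orchest stop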