Srihari Thyagarajan committed
Commit 6aff290 · unverified · 2 Parent(s): bda95cd b16eaca

Merge pull request #26 from koushikkhan/feat/issue#18/polars-data-wrangling

Files changed (2)
  1. polars/01_why_polars.py +313 -0
  2. polars/README.md +9 -0
polars/01_why_polars.py ADDED
@@ -0,0 +1,313 @@
+ # /// script
+ # requires-python = ">=3.12"
+ # dependencies = [
+ #     "marimo",
+ #     "pandas==2.2.3",
+ #     "polars==1.22.0",
+ # ]
+ # ///
+
+ import marimo
+
+ __generated_with = "0.11.8"
+ app = marimo.App(width="medium")
+
+
+ @app.cell
+ def _():
+     import marimo as mo
+     return (mo,)
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(
+         """
+         # An introduction to Polars
+
+         This notebook provides a bird's-eye overview of [Polars](https://pola.rs/), a fast and user-friendly data manipulation library for Python, and compares it to alternatives like Pandas and PySpark.
+
+         Like Pandas and PySpark, the central data structure in Polars is **the DataFrame**, a tabular data structure consisting of named columns. For example, the next cell constructs a DataFrame that records the gender, age, and height in centimeters for a number of individuals.
+         """
+     )
+     return
+
+
+ @app.cell
+ def _():
+     import polars as pl
+
+     df_pl = pl.DataFrame(
+         {
+             "gender": ["Male", "Female", "Male", "Female", "Male", "Female",
+                        "Male", "Female", "Male", "Female"],
+             "age": [13, 15, 17, 19, 21, 23, 25, 27, 29, 31],
+             "height_cm": [150.0, 170.0, 146.5, 142.0, 155.0, 165.0, 170.8, 130.0, 132.5, 162.0],
+         }
+     )
+     df_pl
+     return df_pl, pl
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(
+         """
+         Unlike Pandas, Python's earliest DataFrame library, Polars was designed with performance and usability in mind — Polars can scale to large datasets with ease while maintaining a simple and intuitive API.
+
+         Polars' performance is due to a number of factors, including its implementation in Rust and its ability to perform operations in a parallelized and vectorized manner. It supports a wide range of data types, advanced query optimizations, and seamless integration with other Python libraries, making it a versatile tool for data scientists, engineers, and analysts. Additionally, Polars provides a lazy API for deferred execution, allowing users to optimize their workflows by chaining operations and executing them in a single pass.
+
+         With its focus on speed, scalability, and ease of use, Polars is quickly becoming a go-to choice for data professionals looking to streamline their data processing pipelines and tackle large-scale data challenges.
+         """
+     )
+     return
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(
+         """
+         ## Choosing Polars over Pandas
+
+         In this section we'll give a few reasons why Polars is a better choice than Pandas, along with examples.
+         """
+     )
+     return
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(
+         """
+         ### Intuitive syntax
+
+         Polars' syntax is similar to PySpark's and intuitive like SQL, making heavy use of **method chaining**. This makes it easy for data professionals to transition to Polars, and leads to an API that is more concise and readable than Pandas'.
+
+         **Example.** In the next few cells, we contrast the code required to perform a basic filter and aggregation of data with Pandas against the code that accomplishes the same task with Polars.
+         """
+     )
+     return
+
+
+ @app.cell
+ def _():
+     import pandas as pd
+
+     df_pd = pd.DataFrame(
+         {
+             "Gender": ["Male", "Female", "Male", "Female", "Male", "Female",
+                        "Male", "Female", "Male", "Female"],
+             "Age": [13, 15, 17, 19, 21, 23, 25, 27, 29, 31],
+             "Height_CM": [150.0, 170.0, 146.5, 142.0, 155.0, 165.0, 170.8, 130.0, 132.5, 162.0],
+         }
+     )
+
+     # query: average height of males and females over the age of 15 years
+
+     # step 1: filter
+     filtered_df_pd = df_pd[df_pd["Age"] > 15]
+
+     # step 2: group by and aggregate
+     result_pd = filtered_df_pd.groupby("Gender")["Height_CM"].mean()
+     result_pd
+     return df_pd, filtered_df_pd, pd, result_pd
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(r"""The same example can be worked out in Polars more concisely, using method chaining. Notice how the Polars code is essentially as readable as English.""")
+     return
+
+
+ @app.cell
+ def _(pl):
+     data_pl = pl.DataFrame(
+         {
+             "Gender": ["Male", "Female", "Male", "Female", "Male", "Female",
+                        "Male", "Female", "Male", "Female"],
+             "Age": [13, 15, 17, 19, 21, 23, 25, 27, 29, 31],
+             "Height_CM": [150.0, 170.0, 146.5, 142.0, 155.0, 165.0, 170.8, 130.0, 132.5, 162.0],
+         }
+     )
+
+     # query: average height of males and females over the age of 15 years
+
+     # filter, group by, and aggregate in a single method chain
+     result_pl = data_pl.filter(pl.col("Age") > 15).group_by("Gender").agg(pl.mean("Height_CM"))
+     result_pl
+     return data_pl, result_pl
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(
+         """
+         Notice how Polars uses a *method-chaining* approach, similar to PySpark, which makes the code more readable and expressive while expressing the query in a *single statement*.
+
+         Additionally, Polars supports SQL-like operations *natively*, which allows you to write SQL queries directly against a Polars DataFrame:
+         """
+     )
+     return
+
+
+ @app.cell
+ def _(data_pl):
+     result = data_pl.sql("SELECT Gender, AVG(Height_CM) FROM self WHERE Age > 15 GROUP BY Gender")
+     result
+     return (result,)
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(
+         """
+         ### A large collection of built-in APIs
+
+         Polars has a comprehensive API that enables you to perform virtually any operation using built-in methods. In contrast, Pandas often requires more complex operations to be handled using the `apply` method with a lambda function. The issue with `apply` is that it processes rows sequentially, looping through the DataFrame one row at a time, which can be inefficient. By leveraging Polars' built-in methods, you can operate on entire columns at once, unlocking the power of **SIMD (Single Instruction, Multiple Data)** parallelism. This approach not only simplifies your code but also significantly improves performance.
+         """
+     )
+     return
+
+
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(
+         """
+         ### Query optimization 📈
+
+         A key factor behind Polars' performance lies in its **evaluation strategy**. While Pandas defaults to **eager execution**, executing operations in the exact order they are written, Polars offers both **eager and lazy execution**. With lazy execution, Polars employs a **query optimizer** that analyzes all required operations and determines the most efficient way to execute them. This optimization can involve reordering operations, eliminating redundant calculations, and more.
+
+         For example, consider the following expression to calculate the mean of the `Number1` column for categories "A" and "B" in the `Category` column:
+
+         ```python
+         (
+             df
+             .group_by("Category").agg(pl.col("Number1").mean())
+             .filter(pl.col("Category").is_in(["A", "B"]))
+         )
+         ```
+
+         If executed eagerly, the `group_by` operation would first be applied to the entire DataFrame, followed by filtering the results by `Category`. However, with **lazy execution**, Polars can optimize this process by first filtering the DataFrame to include only the relevant categories ("A" and "B") and then performing the `group_by` operation on the reduced dataset. This approach minimizes unnecessary computations and significantly improves efficiency.
+         """
+     )
+     return
+
+
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(
+         """
+         ### Scalability — handling large datasets in memory ⬆️
+
+         Pandas is limited by its single-threaded design and reliance on Python, which makes it inefficient for processing large datasets. Polars, on the other hand, is built in Rust and optimized for parallel processing, enabling it to handle much larger datasets.
+
+         **Example: Processing a Large Dataset**
+
+         In Pandas, loading a very large dataset (e.g., 10GB) can exhaust available memory:
+
+         ```python
+         # This may fail with large datasets
+         df = pd.read_csv("large_dataset.csv")
+         ```
+
+         In Polars, the same read is multithreaded and typically faster and more memory-efficient:
+
+         ```python
+         df = pl.read_csv("large_dataset.csv")
+         ```
+
+         Polars also supports lazy evaluation, which allows you to optimize your workflows by deferring computations until necessary. This is particularly useful for large datasets, since the query only materializes what it actually needs:
+
+         ```python
+         df = pl.scan_csv("large_dataset.csv")  # lazy DataFrame; nothing is read yet
+         result = df.filter(pl.col("A") > 1).group_by("A").agg(pl.sum("B")).collect()  # execute
+         ```
+         """
+     )
+     return
+
+
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(
+         """
+         ### Compatibility with other machine learning libraries 🤝
+
+         Polars integrates seamlessly with popular machine learning libraries like Scikit-learn, PyTorch, and TensorFlow. Its ability to handle large datasets efficiently makes it an excellent choice for preprocessing data before feeding it into ML models.
+
+         **Example: Preprocessing Data for Scikit-learn**
+
+         ```python
+         import polars as pl
+         from sklearn.linear_model import LinearRegression
+
+         # Load and preprocess data
+         df = pl.read_csv("data.csv")
+         X = df.select(["feature1", "feature2"]).to_numpy()
+         y = df.select("target").to_numpy().ravel()  # flatten to the 1-D target scikit-learn expects
+
+         # Train a model
+         model = LinearRegression()
+         model.fit(X, y)
+         ```
+
+         Polars also supports conversion to other formats like NumPy arrays and Pandas DataFrames, ensuring compatibility with virtually any ML library:
+
+         ```python
+         # Convert to a Pandas DataFrame
+         pandas_df = df.to_pandas()
+
+         # Convert to a NumPy array
+         numpy_array = df.to_numpy()
+         ```
+         """
+     )
+     return
+
+
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(
+         """
+         ### Easy to use, with room for power users
+
+         Polars supports advanced operations like
+
+         - **date handling**
+         - **window functions**
+         - **joins**
+         - **nested data types**
+
+         making it a versatile tool for data manipulation.
+         """
+     )
+     return
+
+
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(
+         """
+         ## Why not PySpark?
+
+         While **PySpark** is a versatile tool that has transformed the way big data is handled and processed in Python, its **complex setup process** can be intimidating, especially for beginners. In contrast, **Polars** requires minimal setup and is ready to use right out of the box, making it more accessible for users of all skill levels.
+
+         When deciding between the two, **PySpark** is the preferred choice for processing large datasets distributed across a **multi-node cluster**. However, for computations on a **single-node machine**, **Polars** is an excellent alternative. Remarkably, Polars is capable of handling datasets that exceed the size of the available RAM, making it a powerful tool for efficient data processing even on limited hardware.
+         """
+     )
+     return
+
+
+
+
+ @app.cell(hide_code=True)
+ def _(mo):
+     mo.md(
+         """
+         ## 🔖 References
+
+         - [Polars official website](https://pola.rs/)
+         - [Polars vs. Pandas](https://blog.jetbrains.com/pycharm/2024/07/polars-vs-pandas/)
+         """
+     )
+     return
+
+
+ if __name__ == "__main__":
+     app.run()
polars/README.md ADDED
@@ -0,0 +1,9 @@
+ # Learn Polars
+
+ This collection of marimo notebooks is designed to teach you the basics of data wrangling using a Python library called Polars.
+
+ **Running notebooks.** To run a notebook locally, use
+
+ ```bash
+ uvx marimo edit <file_url>
+ ```