# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "marimo",
#     "pandas==2.2.3",
#     "polars==1.22.0",
# ]
# ///

import marimo

__generated_with = "0.11.0"
app = marimo.App(width="medium")


@app.cell
def _():
    import marimo as mo
    return (mo,)


@app.cell
def _(mo):
    mo.md(
        """
        # An introduction to Polars

        [Polars](https://pola.rs/) is a blazingly fast, efficient, and user-friendly DataFrame library designed for data manipulation and analysis in Python. Built with performance in mind, Polars leverages the power of Rust under the hood, enabling it to handle large datasets with ease while maintaining a simple and intuitive API. Whether you're working with structured data, performing complex transformations, or analyzing massive datasets, Polars is designed to deliver exceptional speed and memory efficiency, often outperforming other popular DataFrame libraries like Pandas.

        One of the standout features of Polars is its ability to perform operations in a parallelized and vectorized manner, making it ideal for modern data processing tasks. It supports a wide range of data types, advanced query optimizations, and seamless integration with other Python libraries, making it a versatile tool for data scientists, engineers, and analysts. Additionally, Polars provides a lazy API for deferred execution, allowing users to optimize their workflows by chaining operations and executing them in a single pass.

        With its focus on speed, scalability, and ease of use, Polars is quickly becoming a go-to choice for data professionals looking to streamline their data processing pipelines and tackle large-scale data challenges. Whether you're analyzing gigabytes of data or performing real-time computations, Polars empowers you to work faster and smarter.
        """
    )
    return


@app.cell
def _(mo):
    mo.md(
        """
        # Why Polars?

        Pandas has long been the go-to library for data manipulation and analysis in Python. However, as datasets grow larger and more complex, Pandas often struggles with performance and memory limitations. This is where Polars shines. Polars is a modern, high-performance DataFrame library designed to address the shortcomings of Pandas while providing a user-friendly experience. 

        Below, we’ll explore key reasons why Polars is a better choice in many scenarios, along with examples.
        """
    )
    return


@app.cell
def _(mo):
    mo.md(
        """
        ## (a) Intuitive Syntax 🔤

        Polars' syntax is similar to PySpark's and as intuitive as SQL, making heavy use of **method chaining**. This makes it easy for data professionals to transition to Polars, and it leads to an API that is more concise and readable than that of Pandas.

        **Example: Filtering and Aggregating Data**

        In the next few cells, we contrast the Pandas code for a basic filter-and-aggregation query with the code required to accomplish the same task in Polars.

        ```python
        import pandas as pd

        df_pd = pd.DataFrame(
            { 
                "Gender": ["Male", "Female", "Male", "Female", "Male", "Female", 
                           "Male", "Female", "Male", "Female"],
                "Age": [13, 15, 17, 19, 21, 23, 25, 27, 29, 31],
                "Height_CM": [150.0, 170.0, 146.5, 142.0, 155.0, 165.0, 170.8, 130.0, 132.5, 162.0]
            }
        )

        # query: average height of male and female after the age of 15 years

        # step-1: filter
        filtered_df_pd = df_pd[df_pd["Age"] > 15]

        # step-2: groupby and aggregation
        result_pd = filtered_df_pd.groupby("Gender")["Height_CM"].mean()
        ```
        """
    )
    return


@app.cell
def _():
    import pandas as pd

    df_pd = pd.DataFrame(
        { 
            "Gender": ["Male", "Female", "Male", "Female", "Male", "Female", 
                       "Male", "Female", "Male", "Female"],
            "Age": [13, 15, 17, 19, 21, 23, 25, 27, 29, 31],
            "Height_CM": [150.0, 170.0, 146.5, 142.0, 155.0, 165.0, 170.8, 130.0, 132.5, 162.0]
        }
    )

    # query: average height of male and female after the age of 15 years

    # step-1: filter
    filtered_df_pd = df_pd[df_pd["Age"] > 15]

    # step-2: groupby and aggregation
    result_pd = filtered_df_pd.groupby("Gender")["Height_CM"].mean()
    result_pd
    return df_pd, filtered_df_pd, pd, result_pd


@app.cell
def _(mo):
    mo.md(
        r"""
        The same example can be worked out in Polars as shown below:

        ```python
        import polars as pl

        df_pl = pl.DataFrame(
            { 
                "Gender": ["Male", "Female", "Male", "Female", "Male", "Female", 
                           "Male", "Female", "Male", "Female"],
                "Age": [13, 15, 17, 19, 21, 23, 25, 27, 29, 31],
                "Height_CM": [150.0, 170.0, 146.5, 142.0, 155.0, 165.0, 170.8, 130.0, 132.5, 162.0]
            }
        )

        # query: average height of male and female after the age of 15 years

        # filter, groupby and aggregation using method chaining
        result_pl = df_pl.filter(pl.col("Age") > 15).group_by("Gender").agg(pl.mean("Height_CM"))
        result_pl
        ```
        """
    )
    return


@app.cell
def _():
    import polars as pl

    df_pl = pl.DataFrame(
        { 
            "Gender": ["Male", "Female", "Male", "Female", "Male", "Female", 
                       "Male", "Female", "Male", "Female"],
            "Age": [13, 15, 17, 19, 21, 23, 25, 27, 29, 31],
            "Height_CM": [150.0, 170.0, 146.5, 142.0, 155.0, 165.0, 170.8, 130.0, 132.5, 162.0]
        }
    )

    # query: average height of male and female after the age of 15 years

    # filter, groupby and aggregation using method chaining
    result_pl = df_pl.filter(pl.col("Age") > 15).group_by("Gender").agg(pl.mean("Height_CM"))
    result_pl
    return df_pl, pl, result_pl


@app.cell
def _(mo):
    mo.md(
        """
        Notice how Polars uses a *method-chaining* approach, similar to PySpark, which makes the code more readable and expressive while expressing the entire query as a *single expression*.

        Additionally, Polars *natively* supports SQL-like operations, which lets you write SQL queries directly against a Polars DataFrame:

        ```python
        import polars as pl

        df_pl = pl.DataFrame(
            { 
                "Gender": ["Male", "Female", "Male", "Female", "Male", "Female", 
                           "Male", "Female", "Male", "Female"],
                "Age": [13, 15, 17, 19, 21, 23, 25, 27, 29, 31],
                "Height_CM": [150.0, 170.0, 146.5, 142.0, 155.0, 165.0, 170.8, 130.0, 132.5, 162.0]
            }
        )

        # query: average height of male and female after the age of 15 years
        result = df_pl.sql("SELECT Gender, AVG(Height_CM) FROM self WHERE Age > 15 GROUP BY Gender")
        result
        ```
        """
    )
    return


@app.cell
def _(df_pl):
    result = df_pl.sql("SELECT Gender, AVG(Height_CM) FROM self WHERE Age > 15 GROUP BY Gender")
    result
    return (result,)


@app.cell
def _(mo):
    mo.md(
        """
        ## (b) Large Collection of Built-in APIs ⚙️

        Polars boasts an **extremely expressive API**, enabling you to perform virtually any operation using built-in methods. In contrast, Pandas often requires more complex operations to be handled using the `apply` method with a lambda function. The issue with `apply` is that it processes rows sequentially, looping through the DataFrame one row at a time, which can be inefficient. By leveraging Polars' built-in methods, you can operate on entire columns at once, unlocking the power of **SIMD (Single Instruction, Multiple Data)** parallelism. This approach not only simplifies your code but also significantly enhances performance.
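
        For instance, a derived column that would typically need `apply` with a lambda in Pandas can be written as a single vectorized expression in Polars (a minimal sketch with made-up data):

        ```python
        import polars as pl

        df = pl.DataFrame({
            "Height_CM": [150.0, 170.0, 165.0],
            "Weight_KG": [50.0, 65.0, 70.0],
        })

        # One vectorized expression over whole columns -- no Python-level row loop.
        result = df.with_columns(
            (pl.col("Weight_KG") / (pl.col("Height_CM") / 100) ** 2).alias("BMI")
        )
        ```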
        """
    )
    return


@app.cell
def _(mo):
    mo.md(
        """
        ## (c) Query Optimization 📈

        A key factor behind Polars' performance lies in its **evaluation strategy**. While Pandas defaults to **eager execution**, executing operations in the exact order they are written, Polars offers both **eager and lazy execution**. With lazy execution, Polars employs a **query optimizer** that analyzes all required operations and determines the most efficient way to execute them. This optimization can involve reordering operations, eliminating redundant calculations, and more. 

        For example, consider the following expression to calculate the mean of the `Number1` column for categories "A" and "B" in the `Category` column:

        ```python
        (
            df
            .group_by("Category").agg(pl.col("Number1").mean())
            .filter(pl.col("Category").is_in(["A", "B"]))
        )
        ```

        If executed eagerly, the `group_by` operation would first be applied to the entire DataFrame, followed by filtering the results by `Category`. However, with **lazy execution**, Polars can optimize this process by first filtering the DataFrame to include only the relevant categories ("A" and "B") and then performing the `group_by` operation on the reduced dataset. This approach minimizes unnecessary computations and significantly improves efficiency.
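
        A runnable sketch of the lazy version (with made-up data): `explain()` lets you inspect the optimized plan before `collect()` executes it.

        ```python
        import polars as pl

        lf = pl.LazyFrame({
            "Category": ["A", "B", "C", "A"],
            "Number1": [1.0, 2.0, 3.0, 4.0],
        })

        query = (
            lf
            .group_by("Category").agg(pl.col("Number1").mean())
            .filter(pl.col("Category").is_in(["A", "B"]))
        )

        print(query.explain())    # inspect the optimized query plan
        result = query.collect()  # nothing runs until collect()
        ```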
        """
    )
    return


@app.cell
def _(mo):
    mo.md(
        """
        ## (d) Scalability - Handling Large Datasets in Memory ⬆️

        Pandas is limited by its single-threaded design and reliance on Python, which makes it inefficient for processing large datasets. Polars, on the other hand, is built in Rust and optimized for parallel processing, enabling it to handle datasets that are orders of magnitude larger.

        **Example: Processing a Large Dataset**
        In Pandas, loading a large dataset (e.g., 10GB) often results in memory errors:

        ```python
        # This may fail with large datasets
        df = pd.read_csv("large_dataset.csv")
        ```

        In Polars, the same eager read is typically faster and more memory-efficient:

        ```python
        df = pl.read_csv("large_dataset.csv")
        ```

        Polars also supports lazy evaluation, which allows you to optimize your workflows by deferring computations until necessary. This is particularly useful for large datasets:

        ```python
        df = pl.scan_csv("large_dataset.csv")  # Lazy DataFrame
        result = df.filter(pl.col("A") > 1).group_by("A").agg(pl.sum("B")).collect()  # Execute
        ```
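
        A self-contained sketch of the lazy workflow, writing a tiny CSV to a temporary directory to stand in for the hypothetical `large_dataset.csv`:

        ```python
        import os
        import tempfile

        import polars as pl

        # Write a small CSV to stand in for the hypothetical "large_dataset.csv".
        path = os.path.join(tempfile.mkdtemp(), "large_dataset.csv")
        pl.DataFrame({"A": [1, 2, 3], "B": [10, 20, 30]}).write_csv(path)

        lazy = pl.scan_csv(path)  # lazy: the file is not read yet
        result = lazy.filter(pl.col("A") > 1).group_by("A").agg(pl.sum("B")).collect()
        ```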
        """
    )
    return


@app.cell
def _(mo):
    mo.md(
        """
        ## (e) Compatibility with Other ML Libraries 🤝

        Polars integrates seamlessly with popular machine learning libraries like Scikit-learn, PyTorch, and TensorFlow. Its ability to handle large datasets efficiently makes it an excellent choice for preprocessing data before feeding it into ML models.

        **Example: Preprocessing Data for Scikit-learn**

        ```python
        import polars as pl
        from sklearn.linear_model import LinearRegression

        # Load and preprocess data
        df = pl.read_csv("data.csv")
        X = df.select(["feature1", "feature2"]).to_numpy()
        y = df.select("target").to_numpy()

        # Train a model
        model = LinearRegression()
        model.fit(X, y)
        ```

        Polars also supports conversion to other formats like NumPy arrays and Pandas DataFrames, ensuring compatibility with virtually any ML library:

        ```python
        # Convert to Pandas DataFrame
        pandas_df = df.to_pandas()

        # Convert to NumPy array
        numpy_array = df.to_numpy()
        ```
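
        A minimal runnable sketch of the NumPy conversion, with a small in-memory frame standing in for the hypothetical `data.csv`:

        ```python
        import polars as pl

        # A small frame standing in for the data loaded from "data.csv" above.
        df = pl.DataFrame({"feature1": [1.0, 2.0, 3.0], "feature2": [4.0, 5.0, 6.0]})

        # Convert to a NumPy ndarray, ready for Scikit-learn, PyTorch, etc.
        numpy_array = df.to_numpy()
        print(numpy_array.shape)  # one row per record, one column per feature
        ```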
        """
    )
    return


@app.cell
def _(mo):
    mo.md(
        """
        ## (f) Rich Functionality ⚡

        Polars supports advanced operations like

        - **date handling**
        - **window functions**
        - **joins**
        - **nested data types**

        which makes it a versatile tool for data manipulation.
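
        For example, window functions let you broadcast a per-group aggregate back onto every row with `over()` (a minimal sketch with made-up data):

        ```python
        import polars as pl

        df = pl.DataFrame({
            "Gender": ["Male", "Female", "Male", "Female"],
            "Height_CM": [150.0, 170.0, 155.0, 165.0],
        })

        # Window function: the per-gender mean is computed once,
        # then broadcast back onto every row of that group.
        result = df.with_columns(
            pl.col("Height_CM").mean().over("Gender").alias("Mean_By_Gender")
        )
        ```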
        """
    )
    return


@app.cell
def _(mo):
    mo.md(
        """
        # Why Not PySpark? ⁉️

        While **PySpark** is undoubtedly a versatile tool that has transformed the way big data is handled and processed in Python, its **complex setup process** can be intimidating, especially for beginners. In contrast, **Polars** requires minimal setup and is ready to use right out of the box, making it more accessible for users of all skill levels.

        When deciding between the two, **PySpark** is the preferred choice for processing large datasets distributed across a **multi-node cluster**. However, for computations on a **single-node machine**, **Polars** is an excellent alternative. Remarkably, Polars is capable of handling datasets that exceed the size of the available RAM, making it a powerful tool for efficient data processing even on limited hardware.
        """
    )
    return


@app.cell
def _(mo):
    mo.md(
        """
        # 🔖 References

        - [Polars official website](https://pola.rs/)
        - [Polars Vs. Pandas](https://blog.jetbrains.com/pycharm/2024/07/polars-vs-pandas/)
        """
    )
    return


if __name__ == "__main__":
    app.run()