text (string, lengths 301–426) | source (string, 3 classes) | __index_level_0__ (int64, 0–404k)
---|---|---|
Data Platforms, Sql, Speakers, Learning, Data.
Journey Worth the Flight? Exploring the Microsoft Semantic Link for improved data science. Field Parameters or Personalize Visuals — Choosing the Right Aircraft for Your Self-Service Journey Decide on the fly: Real-Time Analytics in Microsoft Fabric Fabric Direct Lake Deep Dive What we’ve learned | medium | 3,326 |
Data Platforms, Sql, Speakers, Learning, Data.
running a YouTube channel (so far) My Journey as a Speaker I’m thrilled to have had the opportunity to speak at SQLBits again this year. My session, “Elevating SQL Server Performance with Query Store Hints,” walks attendees through the Query Store Hints feature in | medium | 3,327 |
Data Platforms, Sql, Speakers, Learning, Data.
SQL Server 2022. https://sqlbits.com/attend/the-agenda/saturday/#Elevating_SQL_Server_Performance_with_Query_Store_Hints Query Store Hints play a role in database performance optimization by giving the query optimizer explicit guidance on which execution plan to choose. In this session, I’ll | medium | 3,328 |
Data Platforms, Sql, Speakers, Learning, Data.
go over the foundations of SQL Server hints, explore the architecture of Query Store and how it interacts with Query Store Hints, and emphasize the advantages and things to keep in mind when utilizing this feature. My presentation will include a live demonstration that highlights SQL Server 2022’s | medium | 3,329 |
Data Platforms, Sql, Speakers, Learning, Data.
implementation of Query Store Hints. See you there! SQLBits 2024 promises to be an event full of learning, networking, and growth. I’m thrilled to add to and gain from the collective knowledge of the data platform community as a speaker and participant. The sessions and networking | medium | 3,330 |
Data Platforms, Sql, Speakers, Learning, Data.
opportunities are set to provide an enriching experience. Let’s set out on this adventure, prepared to discover, grow, and create. What’s more? For just $5 a month, become a Medium Member and enjoy the liberty of limitless access to every masterpiece on Medium. By subscribing via my page, you not | medium | 3,331 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
Image size is directly related to image resolution, i.e. the higher the resolution, the bigger the file size. An image captured on a digital CMOS sensor camera is saved to storage media (an SD/SDXC card) in a file format. For best results, typically shoot in RAW, a lossless uncompressed file format | medium | 3,333 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
which preserves all the details captured. It is also the best format to use for making adjustments to an image before final post processing, yet not all images captured are the exact same file size. The file size is not the same as resolution, though resolution does help determine quality and | medium | 3,334 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
overall file size. File size is measured in bytes; with today’s cameras, captured images are typically stored in files measured in MB (megabytes). File sizes differ because the amount of detail stored in an image depends on lighting, exposure and shutter speed. Some images will have more | medium | 3,335 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
detail than others depending on your camera settings. Even if the images shot in RAW are all the same resolution, they will still not have exactly the same file size, but they will be close. Another factor that determines file size is the bit depth. | medium | 3,336 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
Calculating Image Size Images are made of a grid of pixels, aka “picture elements”. A pixel takes 8 bits (1 byte) if the image is BW (black and white). Color images use the RGB (Red, Green, Blue) color scheme, with 1 byte per channel, i.e. 24 bits (3 bytes) per pixel. This is | medium | 3,337 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
also referred to as the bit depth of an image. To determine bit depth, you need the number of bits used to define each pixel. The greater the bit depth, the greater the number of tones (grayscale or color) that can be represented. Digital images may be produced in black and white (bitonal), grayscale or color. | medium | 3,338 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
The number of possible tones grows exponentially with bit depth: 256 colors for 8-bit images and 16,777,216 colors for 24-bit images. So a bit depth of 24 bits represents roughly 16.7 million tones of color. | medium | 3,339 |
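A quick check of that exponential relationship, as a minimal plain-Python sketch (not from the original article):

```python
# Tones representable at a given bit depth: 2 ** bits.
for bits in (1, 8, 24):
    print(f"{bits}-bit -> {2 ** bits:,} tones")   # 2, 256, and 16,777,216 tones
```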
Image Processing, Photography, Editing, Post Production, Computer Vision.
Image resolution is simply the image’s width (W) and height (H) measured in number of pixels. To get the size of the image, the resolution is needed: size in bytes = (W × H × BitDepth) / 8 bits/byte = (W × H × BitDepth) × 1 byte / 8 bits. As an example, let’s say the image has the following dimensions: W = 4928 pixels, H = 3264 pixels, BitDepth = 24 bits/pixel. Then: (4928 × 3264) × 24 bits/pixel | medium | 3,340 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
/ 8 bits/byte = 16,084,992 pixels × 24 bits/pixel / 8 bits/byte = 386,039,808 bits / 8 bits/byte = 48,254,976 bytes ≈ 48 MB. As an estimate, a 4 GB SD card can store about 89 images at 48 MB each; at 32 GB, up to 712 images can be stored. A short sketch of this calculation follows below. | medium | 3,341 |
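A minimal Python sketch of the calculation above (not from the original article); it estimates the uncompressed size only, so real RAW files, which also carry metadata and may use compression, will differ:

```python
def image_size_bytes(width_px: int, height_px: int, bit_depth: int) -> int:
    # (W x H x BitDepth) / 8 bits per byte
    return width_px * height_px * bit_depth // 8

size = image_size_bytes(4928, 3264, 24)
print(size)                                            # 48,254,976 bytes
print(round(size / 1e6), "MB")                         # ~48 MB
print((4 * 2**30) // size, "images on a 4 GB card")    # ~89, matching the estimate above
```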
Image Processing, Photography, Editing, Post Production, Computer Vision.
This will once again vary depending on what a photographer is shooting (weddings, sports, fashion, events) and the detail that is captured in color or black and white. Note: the image size is an approximation based on the dimensions; it can vary from image to image depending on the detail contained in color, depth and | medium | 3,342 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
lightness. For example, even if the bit depth is 24 bits, not all those bits will show a uniform tone or color, but rather show gradients of the RGB color spectrum’s gamut. When planning for shoots with a DSLR or other digital camera, make sure you have enough capacity on your SD card. For | medium | 3,343 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
commercial and wedding photographers who need to shoot non-stop, a storage device with fast write speeds and high capacity is ideal. (Photo source: Panasonic) Reasons To Shoot In High Resolution One reason to shoot in high resolution is that the image can be blown up to a large print format. This is ideal | medium | 3,344 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
for publishing and advertising, where the maximum resolution determines the final quality of the output image. For example, if you want to print an image the size of a billboard, the best result is from an uncompressed high resolution image because the magnification scale is at a more acceptable | medium | 3,345 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
level. High resolution images, when magnified or upscaled from their original size, still appear sharp and detailed, whereas a lower resolution image when blown up becomes blurry and pixelated (noticeable pixelation) and does not retain sharpness of detail. (L) 4928 pixels (horizontal) at 300% | medium | 3,346 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
magnification. (R) 1200 pixels (horizontal) at 300% magnification. At higher resolutions, you can get better details when zooming in. At lower resolutions, the image gets more blurry and loses plenty of details when zooming in. This is even worse when the low resolution image uses a lossy | medium | 3,347 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
compression algorithm. (L) 4928 pixels (horizontal) at 1300% magnification. (R) 1200 pixels (horizontal) at 1300% magnification. When further magnifying or up scaling a lower resolution image, the “staircase effect” or jagged edges start to appear and the image becomes less clear, more blurry. This | medium | 3,348 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
is the result of aliasing, and photo editing software uses a technique called anti-aliasing to minimize it. The higher resolution image, on the other hand, appears more detailed even at higher zoom. Although the pixels become noticeable at close distance to the eye, it does not matter from a large | medium | 3,349 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
distance. For example, when viewing a billboard display, viewers will not notice the pixels that much but will instead see the larger scaled image without the aliasing and blur. Commercial and advertising shoots use high resolution for the purpose of blowing the image up or scaling it | medium | 3,350 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
for large prints. The Uses For Low Resolution Uploading images to the web for content creation is much different. Web resolution does not have to be so high, and priority is sometimes on smaller image size for faster download time. In this case compressed, lower resolution formats are actually | medium | 3,351 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
acceptable since they don’t have to be viewed in a large print format. Instead the output is a typical screen display, which are mostly at least 720 or 1080 pixel resolution on the vertical. In some image editing programs, lossy compression is used to reduce the file size for uploading to the web. | medium | 3,352 |
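As an illustration of resizing and lossy compression for the web, here is a hedged sketch using the Pillow library (the article does not name a specific tool; file names and the quality value are illustrative):

```python
from PIL import Image

img = Image.open("full_res_photo.tif")                 # a lossless, high-resolution source
img.thumbnail((1920, 1080))                            # downsize in place, keeping aspect ratio
img.convert("RGB").save("web_photo.jpg", "JPEG", quality=85)   # lossy, DCT-based JPEG output
```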
Image Processing, Photography, Editing, Post Production, Computer Vision.
This technique, while great at reducing the file’s size, suffers from generation loss: through repeated compression, the image loses detail relative to the original’s quality. This is typical of JPEG image formats, which use an algorithm called the Discrete | medium | 3,353 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
Cosine Transform (DCT), which saves the image using a form of lossy compression. For web content (e.g. lookbooks, e-commerce catalogues, portfolios, etc.) the image resolution does not have to be so high. For one thing, it won’t fit on screen if your display is only 1080 pixels wide, while the | medium | 3,354 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
image is over 4000 pixels wide. Instead, the images are resized to fit the website layout. Users can still download the original image in its original resolution. Summary The bigger the image, the more pixels it contains; the higher the resolution, the more space you need for storage. This is | medium | 3,355 |
Image Processing, Photography, Editing, Post Production, Computer Vision.
in its pre-processed or pre-edited format. After editing and saving the file to a compressed format, the file size decreases due to detail lost in compression. The best way to keep that uncompressed detail in the image is to save the file in a lossless format like TIFF rather than JPEG, | medium | 3,356 |
Cryptography, Cybersecurity.
The Paranoid Cryptography website is cool [2] and has lots of examples of vulnerabilities that could affect your code. One check relates to the Pollard p-1 method and whether p-1 is powersmooth. RSA is used in many areas of cybersecurity and is often key in proving the identity of a person or a | medium | 3,358 |
Cryptography, Cybersecurity.
remote Web site. But the strength of RSA depends on the prime numbers (p and q) selected to provide the modulus (n = p·q). If there are weaknesses in the selection of p and q, it may be possible to factorize the modulus and thus discover the private key. One method that can be used to discover | medium | 3,359 |
Cryptography, Cybersecurity.
whether we have weak prime numbers is the Pollard p-1 method. John Pollard [1], in 1974, defined a factorization method which reduces values to their prime factors. It finds these factors when the value of p-1 is powersmooth. Smooth numbers are used in cryptography to provide fast | medium | 3,360 |
Cryptography, Cybersecurity.
factorization methods. A smooth number is defined as a number whose prime factors are no larger than a given value. For a 5-smooth number, the prime factors must be equal to five or less. Every composite value can be reduced to a multiplication of prime numbers. For example, 102 is equal to 2 | medium | 3,361 |
Cryptography, Cybersecurity.
x 3 x 17 [here]. A value of 56 is 2 x 2 x 2 x 7, and is thus 7-smooth, but not 3-smooth nor 5-smooth. With powersmooth, we say a value is B-powersmooth if every maximal prime power p^v that divides it is no greater than B (where v is the power of that prime). For example, 720 (2 × 2 × 2 × 2 × 3 × 3 × 5 = 2⁴ × 3² × 5) will | medium | 3,362 |
Cryptography, Cybersecurity.
be 5-smooth, but not 5-powersmooth. It will be 16-powersmooth, as the highest prime-power factor is 16 (2⁴). For 56, we have 2³ × 7, and it is thus 8-powersmooth (2³). Let’s say we want to factorize the value n = 553. First, we select a smoothness bound, such as B = 7. We then compute M as the product of the primes up to B, i.e. M = 2 × 3 × 5 × 7 = 210. Next, | medium | 3,363 |
Cryptography, Cybersecurity.
we select a = 2, and compute g = gcd(a^M − 1, n), where gcd() is the greatest common divisor of two values. Two values that do not share any factors, such as 16 (2x2x2x2) and 21 (3x7), have a gcd of 1. The computation of g will give us either a trivial result or one of the factors. In this case, we get g = 7, | medium | 3,364 |
Cryptography, Cybersecurity.
and can then easily find the other factor as 79: import math a=2 M=2*3*5*7 n=553 g=math.gcd(a**M-1,n) print(g) print(n//g) Coding The basic algorithm is then [taken from [here] [2]]: # Code derived from https://github.com/google/paranoid_crypto from typing import Optional import gmpy2 from gmpy2 import | medium | 3,365 |
Cryptography, Cybersecurity.
mpz import random import sys PRIMES = primes = (2,3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, | medium | 3,366 |
Cryptography, Cybersecurity.
269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, | medium | 3,367 |
Cryptography, Cybersecurity.
643, 647, 653, 659, 661, 673, 677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 811, 821, 823, 827, 829, 839, 853, 857, 859, 863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991, 997, 1009, 1013, 1019, 1021, 1031, 1033, | medium | 3,368 |
Cryptography, Cybersecurity.
1039, 1049, 1051, 1061, 1063, 1069, 1087, 1091, 1093, 1097, 1103, 1109, 1117, 1123, 1129, 1151, 1153, 1163, 1171, 1181, 1187, 1193, 1201, 1213, 1217, 1223, 1229, 1231, 1237, 1249, 1259, 1277, 1279, 1283, 1289, 1291, 1297, 1301, 1303, 1307, 1319, 1321, 1327, 1361, 1367, 1373, 1381, 1399, 1409, 1423, | medium | 3,369 |
Cryptography, Cybersecurity.
1427, 1429, 1433, 1439, 1447, 1451, 1453, 1459, 1471, 1481, 1483, 1487, 1489, 1493, 1499, 1511, 1523, 1531, 1543, 1549, 1553, 1559, 1567, 1571, 1579, 1583, 1597, 1601, 1607, 1609, 1613, 1619, 1621, 1627, 1637, 1657, 1663, 1667, 1669, 1693, 1697, 1699, 1709, 1721, 1723, 1733, 1741, 1747, 1753, 1759, | medium | 3,370 |
Cryptography, Cybersecurity.
1777, 1783, 1787, 1789, 1801, 1811, 1823, 1831, 1847, 1861, 1867, 1871, 1873, 1877, 1879, 1889, 1901, 1907, 1913, 1931, 1933, 1949, 1951, 1973, 1979, 1987, 1993, 1997, 1999, 2003, 2011, 2017, 2027, 2029, 2039, 2053, 2063, 2069, 2081, 2083, 2087, 2089, 2099, 2111, 2113, 2129, 2131, 2137, 2141, 2143, | medium | 3,371 |
Cryptography, Cybersecurity.
2153, 2161, 2179, 2203, 2207, 2213, 2221, 2237, 2239, 2243, 2251, 2267, 2269, 2273, 2281, 2287, 2293, 2297, 2309, 2311, 2333, 2339, 2341, 2347, 2351, 2357, 2371, 2377, 2381, 2383, 2389, 2393, 2399, 2411, 2417, 2423, 2437, 2441, 2447, 2459, 2467, 2473, 2477, 2503, 2521, 2531, 2539, 2543, 2549, 2551, | medium | 3,372 |
Cryptography, Cybersecurity.
2557, 2579, 2591, 2593, 2609, 2617, 2621, 2633, 2647, 2657, 2659, 2663, 2671, 2677, 2683, 2687, 2689, 2693, 2699, 2707, 2711, 2713, 2719, 2729, 2731, 2741, 2749, 2753, 2767, 2777, 2789, 2791, 2797, 2801, 2803, 2819, 2833, 2837, 2843, 2851, 2857, 2861, 2879, 2887, 2897, 2903, 2909, 2917, 2927, 2939, | medium | 3,373 |
Cryptography, Cybersecurity.
2953, 2957, 2963, 2969, 2971, 2999) def product_of_primes(nprimes): product_of_primes=1 if (nprimes>len(PRIMES)): nprimes=len(PRIMES) for i in range(0,nprimes): product_of_primes *= primes[i] return product_of_primes def Pollardpm1(n: int, m: Optional[int] = None,gcd_bound: int = 2**60): if | medium | 3,374 |
Cryptography, Cybersecurity.
gmpy2.gcd(n - 1, m) >= gcd_bound: a = pow(2, n - 1, n) p = gmpy2.gcd(pow(a, m, n) - 1, n) if 1 < p < n: return True, [p, n // p] if p == n: return True, [] return False, [] nprimebits=16 nprimes=64 if (len(sys.argv)>1): nprimebits=int(sys.argv[1]) if (len(sys.argv)>2): nprimes=int(sys.argv[2]) p = | medium | 3,375 |
Cryptography, Cybersecurity.
gmpy2.next_prime(random.getrandbits(nprimebits)) q = gmpy2.next_prime(random.getrandbits(nprimebits)) n=p*q print(f"Number of bits in prime={nprimebits} and {primes[nprimes]}-powersmooth\n") m=product_of_primes(nprimes) res, factors = Pollardpm1(n, m, gcd_bound=1) print(f"Result: ", res) if | medium | 3,376 |
Cryptography, Cybersecurity.
(len(factors)>1): print(f"Factors found: {int(factors[0])}, {int(factors[1])}") else: print("No factors discovered") print(f"\nThe originally generated factors were: p={p}, q={q}") And a sample run is [here]: Number of bits in prime=16 and 233-powersmooth Result: True Factors found: 48337, 23663 The originally generated factors were: p=23663, q=48337 | medium | 3,377 |
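Since the listing above is flattened by the page layout, here is a simplified, self-contained sketch of the same idea (standard library only; it is not the original gmpy2-based code, and it reuses the small worked example n = 553 rather than the randomly generated primes above):

```python
import math

def small_primes(bound: int) -> list[int]:
    """All primes <= bound, via a simple sieve of Eratosthenes."""
    sieve = [True] * (bound + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(bound ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def is_b_powersmooth(value: int, bound: int) -> bool:
    """True if every maximal prime power dividing `value` is <= bound."""
    for p in small_primes(value):
        power = 1
        while value % p == 0:
            power *= p
            value //= p
        if power > bound:
            return False
    return value == 1

def pollard_p_minus_1(n: int, bound: int, a: int = 2):
    """Return a non-trivial factor of n (or None), using M = product of primes <= bound."""
    m = math.prod(small_primes(bound))
    g = math.gcd(pow(a, m, n) - 1, n)
    return g if 1 < g < n else None

print(is_b_powersmooth(720, 16), is_b_powersmooth(720, 5))   # True False
factor = pollard_p_minus_1(553, bound=7)
print(factor, 553 // factor)                                 # 7 79, the worked example above
```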
Cryptography, Cybersecurity.
Conclusions Don’t just take your libraries for granted and assume they have been fully tested. References [1] Pollard, J. M. (1974, November). Theorems on factorization and primality testing. In Mathematical Proceedings of the Cambridge | medium | 3,378 |
.
overwhelmed 23-year-old hiding her pregnancy from her parents, while also racing against a clock imposed by the new law in Texas that bans abortion after six weeks. The piece, in my opinion, was well-crafted and deeply researched, the kind of journalism that fosters dialogue. With dialogue can come | medium | 3,382 |
.
choices can be. My adoption agency required I attend an in-person seminar. Two days. Real stories. Birth mothers and adoptees laying their lives bare. For decades, adoption narratives have been spun into a harmful web of fairy tales that fail to acknowledge the loss, the identity struggles, and the | medium | 3,387 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Distributed Algorithms, Map-Reduce Paradigm, Scalable ML using Spark MLlib on Standalone, AWS EMR Cluster with Docker & Nvidia RAPIDS. Since the early 2000s, the amount of data collected has increased enormously due to the advent of internet giants such as Google, Netflix, Youtube, Amazon, | medium | 3,393 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Facebook, etc. Around 2010, another “data wave” came about when mobile phones became hugely popular. In the 2020s, we anticipate another exponential rise in data as IoT devices become all-pervasive. Given this backdrop, building scalable systems becomes a sine qua non for machine learning | medium | 3,394 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
solutions. Machine Learning in Spark: Zero to Hero Edition Any solution depends mainly on these 2 types of tasks: a) Compute-heavy: Prior to the 2000s, parallel processing boxes known as ‘Supercomputers’ were popular for compute-heavy tasks. Pre-2005, parallel processing libraries like MPI and PVM | medium | 3,395 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
were popular for compute-heavy tasks, based on which TensorFlow was later designed. b) Data-heavy: Relational-algebra-based databases were designed in the 1970s, when hard disk storage was very expensive. Hence, the design aimed to reduce data redundancy by dividing larger tables into smaller | medium | 3,396 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
tables, and link them using relationships (Normalization). Thus, traditional databases such as mySQL, PostgreSQL, Oracle etc. were not designed to scale, especially in the data-explosion context mentioned above. Consequently, NoSQL databases were designed to cater to different situations: MongoDB: | medium | 3,397 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
to store text documents; Redis, Memcache: distributed hash tables for quick key-value lookup; Elastic Search: to search through text documents; HBase and Cassandra: columnar stores; Neo4j and Grakn: graph databases. However, Machine Learning & Deep Learning solutions on large datasets are both compute | medium | 3,398 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
heavy and data-heavy at the same time. Hence, in order to make scalable AI/ML solutions, it is necessary that the solution caters to both. Fig 1. Author’s Parallel Implementation of Photon Mapping using MPI In 2004, Jeff Dean et al. published the seminal MapReduce paper to handle data-heavy tasks [2]. | medium | 3,399 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
In 2006, Hadoop implemented MapReduce and designed a distributed file system called HDFS, wherein a single big file is split and stored on the disks of multiple computers. The idea was to split huge databases across the hard disks of multiple machines, each with its own CPU, RAM, hard disk etc., | medium | 3,400 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
interconnected by a fast LAN network. However, Hadoop stores all the intermediate data to disk, as it was designed in 2000s, when hard disk prices plummeted, while RAM prices remained high. In 2010s, when RAM prices came down, Spark was born with a big design change to store all intermediate data | medium | 3,401 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
to RAM instead of disk. Spark was good for both: i) data-heavy tasks, as it uses HDFS, and ii) compute-heavy tasks, as it uses RAM instead of disk to store intermediate outputs, e.g. iterative solutions. As Spark could utilize RAM, it became an efficient solution for iterative tasks in Machine | medium | 3,402 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Learning like Stochastic Gradient Descent (SGD). That is the reason Spark MLlib became so popular for Machine Learning, in contrast to Hadoop’s Mahout. Furthermore, to do distributed deep learning with TF you can use multiple GPUs on the same box, or multiple GPUs on different boxes (a GPU cluster). | medium | 3,403 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
While today’s supercomputers use GPU clusters for compute-intensive tasks, you can install Spark on such a cluster to make it suitable for tasks such as distributed deep learning, which are both compute- and data-intensive. Introduction to Hadoop & Spark There are two major components in Hadoop: | medium | 3,404 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Hadoop Distributed File System (HDFS): a fault-tolerant distributed file system, used by both Hadoop and Spark. HDFS enables splitting a big file into ’n’ chunks and keeping them on ’n’ nodes. When the file is accessed, the different chunks of data have to be read across the nodes via the LAN. Map-Reduce: | medium | 3,405 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Given a task over a huge amount of data distributed across numerous nodes, a lot of data transfer has to happen and processing needs to be distributed. Let’s look into this in detail. Map-Reduce Paradigm Consider the task of finding word frequencies in a large distributed file of 900 GB. HDFS will | medium | 3,406 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
enable splitting the big file into 3 chunks P1, P2, P3 of 300 GB each and keep one in each of 3 nodes. Any Hadoop code has 3 stages: 1. Map: the mapper function passes through the data stored on the disk of each node and increments the word count in an output dictionary. It gets executed | medium | 3,407 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
independently on each distributed box. Fig 2. Word Count Map-Reduce workflow (Image by Author) 2. Shuffle: Hadoop automatically moves the data across the LAN network, so that the same keys are grouped together in one box. 3. Reduce: A function which will consume the dictionary and add up the values | medium | 3,408 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
with same keys (to compute the total count). To implement a function in Hadoop, you just need to write the Map & Reduce function. Please note, there is disk I/O between each Map-Reduce operation in Hadoop. However, almost all ML algorithms work iteratively. Each iteration step in SGD [Equation | medium | 3,409 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
below] corresponds to a Map-Reduce operation. After each iteration step, intermediate weights are written to disk, taking up 90% of the total time to converge. Equation (weight update formula in ML & DL iterations): w_new = w_old − η · ∇L(w_old). As a solution, Spark was born in 2013, replacing disk I/O operations with | medium | 3,410 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
in-memory operations. With the help of Mesos — a distributed system kernel — Spark caches the intermediate data set after each iteration. Since output of each iteration is stored in RDD, only 1 disk read and write operation is required to complete all iterations of SGD. Spark is built on Resilient | medium | 3,411 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Distributed Dataset (RDD), a fault tolerant immutable collection of distributed datasets stored in main memory. On top of RDD, DataFrame API is designed to abstract away its complexity and ease doing Machine Learning on Spark. RDDs support two types of operations: Transformations: to create a new | medium | 3,412 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
data set from an existing one ✓ Map: pass each data set element through a function ✓ ReduceByKey: values for each key are aggregated using a function ✓ Filter: selects only those elements on which function returns true. Actions: return a value after running computation on data set ✓ Reduce: | medium | 3,413 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
aggregates all elements of the RDD using some function ✓ Collect: return all the elements of the output data set ✓ SaveAsTextFile: write elements of the data set as a text file. All transformations in Spark are lazy, i.e. they are computed only when an ‘action’ requires a result. In the code below, | medium | 3,414 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
lineLengths is not immediately computed, due to laziness. Only when ‘reduce’ is run does Spark break the computation into tasks to run on separate machines to compute the total length: lines = sc.textFile("data.txt") lineLengths = lines.map(lambda s: len(s)) totalLength = lineLengths.reduce(lambda a, b: a + b) | medium | 3,415 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
A simple data transformation example, counting the occurrences of keys stored in a distributed RDD of 3 partitions (here, counting the keys ‘a’ and ‘b’), is sketched below. | medium | 3,416 |
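A minimal sketch of that key-count example (assuming an existing SparkContext `sc`, e.g. from the pyspark-notebook image; the exact values are illustrative):

```python
pairs = sc.parallelize([("a", 1), ("b", 1), ("a", 1), ("b", 1), ("a", 1)], 3)  # RDD with 3 partitions
counts = pairs.reduceByKey(lambda x, y: x + y)   # transformation: aggregate values per key
print(counts.collect())                          # action: e.g. [('a', 3), ('b', 2)]
```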
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Logistic Regression as Map-Reduce The most expensive operation in an SGD iteration is the gradient computation across all data points [Eqn. above]. If the data set is huge, say ’n’ billion data points, then we can distribute the gradient computation across ‘k’ different boxes. Map stage: each box computes the gradient of n/k billion points. Reduce stage: the partial sums from each box are summed up using the same key. | medium | 3,417 |
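A hedged sketch of that Map/Reduce gradient step (assuming an existing SparkContext with an RDD of (features, label) NumPy pairs; this is an illustration, not the article's listing):

```python
import numpy as np

def logistic_gradient(w, x, y):
    # gradient of the logistic loss for a single (x, y) pair
    p = 1.0 / (1.0 + np.exp(-x.dot(w)))
    return (p - y) * x

def gradient_step(points_rdd, w, lr=0.1):
    # Map stage: every point (shard) contributes its partial gradient.
    # Reduce stage: partial sums are added to give the gradient over all points.
    grad = points_rdd.map(lambda xy: logistic_gradient(w, xy[0], xy[1])) \
                     .reduce(lambda a, b: a + b)
    return w - lr * grad   # w_new = w_old - learning_rate * gradient
```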
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Gradient of loss over all points = ∑ partial sums. Thus, we can easily compute w_new and store it in the memory of each node. This is how you distribute any optimization-based ML algorithm. However, see the Hadoop vs Spark performance for a distributed LR implementation. Fig 3. Running Time Comparison: Hadoop vs Spark [3] | medium | 3,418 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
As Spark RDDs allow performing several map operations in memory, there is no need to write interim data sets to disk, thus being 100x faster. Note that the time taken for the first iteration is almost the same, as both Hadoop and Spark have to read from disk. But in subsequent iterations, Spark’s | medium | 3,419 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
memory-read takes only 6 secs vs 127 secs for Hadoop’s disk-read. Besides, an ML scientist doesn’t need to code the Map and Reduce functions: most ML algorithms are contained in Spark MLlib, and all data preprocessing is done using Spark SQL. Spark Installation and Setup You can set up Spark on either your | medium | 3,420 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
local box or boxes, or a managed cluster using AWS EMR or Azure Databricks. Below we will see both ways. First, we will run trivially parallelizable tasks on your personal box, after doing the Spark local system setup. Then we will take a more complex ML project and run it in Spark Docker, AWS EMR & Spark | medium | 3,421 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Rapids. Spark: Local System Setup docker pull jupyter/pyspark-notebook docker run -it -p 8888:8888 jupyter/pyspark-notebook Either click the link with the auth token or go to http://localhost:8888/ and copy-paste the token. Now you can execute Spark code in Jupyter or the terminal. To execute in Docker, | medium | 3,422 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
just run spark-submit pi-code.py Task 1: Estimate the value of Pi (π) Take the unit square [0,1]×[0,1] and the quarter of the unit circle that lies inside it. The area of the unit square = 1, the area of the full unit circle = π, so the area of the quarter circle = π/4. Thus, π = 4 × area of the quarter circle. Fig 4. Area of Circle | medium | 3,423 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
= # of Red Points/ Total Points. (Image by Author) The area of quarter arc can be computed using, Numerical Methods: using integration Monte Carlo Approach: to find answers using random sampling In Monte Carlo Approach, Take uniform distribution of (x, y) points from 0 to 1 (i.e. inside square) | medium | 3,424 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Area of quarter region = % of points within the circle, i.e. 𝑥²+𝑦² < 1 Eg: out of 1000 random points, if ‘k’ points are within the circle, then area of shaded region = k/1000 These operations are trivially parallelizable as there is no dependency across nodes in order to check whether a point falls | medium | 3,425 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
within the circle. The PySpark code below, once run on the Spark local setup, will output a value nearer to π = 3.14 as we increase the number of random points (NUM_SAMPLES). The random function generates a number between 0 and 1. The ‘inside’ function runs a million times and returns True only when the random | medium | 3,426 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
point is within the circle. sc.parallelize() will create an RDD broken up into k=10 partitions, and filter will apply the passed function. A hedged sketch follows below. | medium | 3,427 |
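A hedged reconstruction of the Task 1 code described above (the original embedded listing is not reproduced here; assumes an existing SparkContext `sc`):

```python
import random

NUM_SAMPLES = 1_000_000

def inside(_):
    x, y = random.random(), random.random()   # uniform point in the unit square
    return x * x + y * y < 1                  # True if it falls inside the quarter circle

count = sc.parallelize(range(NUM_SAMPLES), 10).filter(inside).count()
print("Pi is roughly", 4.0 * count / NUM_SAMPLES)
```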
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Task 2: Find Word Count To find word frequency in a large distributed file, just replace the local file path in the code below with an HDFS file path. The map function will create a list of lists, and flatMap merges them into one list. | medium | 3,428 |
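A minimal sketch of Task 2's word count (assumes `sc`; swap the local path for an HDFS path, e.g. hdfs://..., for a large distributed file):

```python
lines = sc.textFile("data.txt")
counts = (lines.flatMap(lambda line: line.split())   # map each line to words, then flatten
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
print(counts.take(10))
```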
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Task 3: Data Preprocessing Most data preprocessing can be done with the DataFrame API using Spark SQL; a Spark SQL query executed on a Spark DataFrame is converted into Map and Reduce operations before execution. Intro to Spark MLlib & the ML Pipeline Spark MLlib is a sklearn-inspired library which contains distributed implementations of popular ML algorithms. The main difference from sklearn is the use of the sc.parallelize() function to split data across multiple boxes. All the steps required to convert the raw data on disk to a final | medium | 3,429 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
model is known as the ML pipeline. Pipeline() contains the input and output stages in sequence. For instance, the Tokenizer → CountVectorizer → Logistic Regression pipeline sequence can be coded as in the sketch below. | medium | 3,430 |
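A hypothetical sketch of that pipeline (column names and the train_df/test_df DataFrames are illustrative, not taken from the article):

```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, CountVectorizer
from pyspark.ml.classification import LogisticRegression

tokenizer = Tokenizer(inputCol="text", outputCol="words")
vectorizer = CountVectorizer(inputCol="words", outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

pipeline = Pipeline(stages=[tokenizer, vectorizer, lr])
model = pipeline.fit(train_df)            # training data flows through all three stages
predictions = model.transform(test_df)    # test the fitted model
```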
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Thus, the training data is fed into the tokenizer first, then to the CountVectorizer and then to LR. To test the model, call model.transform(). You can also do distributed hyperparameter tuning, i.e. run the same architecture on multiple boxes with different hyper-parameters. However, distributively storing and training one big model in the VRAM of GPUs on different boxes is slightly intricate. ML Algorithms in Spark: | medium | 3,431 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Custom Implementation An algorithm can be made parallel only when it can be divided into independent sub-tasks. To explicate, bitonic sorting can be made parallel as its sequence of operations is data-independent, while merge sort’s is not. Similarly, some ML algorithms are trivially parallelizable | medium | 3,432 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
while others are not. a) Trivially Parallel: Take KNN (k-nearest neighbours) for example. Split the data set D into ’n’ boxes. Eg: 40K points into 4 boxes Find top ‘k’ nearest points from each 10K points in each box Transfer all 4k points to 1 box and find top ‘k’ nearest points. b) Non-Trivially | medium | 3,433 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Parallel: Take GBDT for instance. Each decision tree in GBDT is built based on the residuals of the previous decision trees. Hence, training GBDT is inherently a sequential operation, not a parallel one. However, we can parallelize the building of each decision tree, as the datasets used for the left and right | medium | 3,434 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
sub-trees are independent. Thus, XGBoost parallelizes at the tree level, i.e. left & right sub-trees are trained on 2 nodes independently. Time Series Prediction using Random Forest Let’s solve an ML problem in Standalone, Spark Local & Cluster mode. Problem Statement: The daily temperature, wind, rainfall | medium | 3,435 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
and humidity of a location are recorded from 1990 to the 2020s. Given these features, build a time series model to predict the humidity in 2021. To verify the model, compare against the 2020 Q4 humidity values using a metric. The complete source code of the experiments below can be found here. Fig 5. Input | medium | 3,436 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
Dataset Features Fig 6. Time series nature of the humidity values is clearly visible Firstly, transform data to derive new features useful to predict humidity. Fig 7. New Features: Day, Week & Scale = Temp*Rainfall A. Standalone Implementation Now we can train sklearn’s Random Forest Regressor with | medium | 3,437 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
above features. The predicted and actual humidity values are very close (visualization below). Fig 8. 2020Q4 Humidity: Blue-Red Line as Actual and Predicted Humidity (Standalone) B. Spark Local Implementation First, you need to follow the Spark local system setup steps mentioned above. Then, you can use | medium | 3,438 |
Spark, Distributed Systems, Machine Learning, Hadoop, Scalability.
PySpark’s RandomForestRegressor to do the same as above. To feed features into machine learning models in Spark MLlib, you need to merge multiple columns into a single vector column, using the VectorAssembler module in the Spark ML library. Fig 9. Features are combined using VectorAssembler. Then, you can run a step like the hedged sketch below. | medium | 3,439 |
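A hypothetical sketch of that step (column names and DataFrames are illustrative, not the article's exact schema):

```python
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import RandomForestRegressor

assembler = VectorAssembler(
    inputCols=["temperature", "wind", "rainfall", "day", "week", "scale"],
    outputCol="features")

train_vec = assembler.transform(train_df)   # train_df: Spark DataFrame of the daily records
rf = RandomForestRegressor(featuresCol="features", labelCol="humidity", numTrees=100)
model = rf.fit(train_vec)
predictions = model.transform(assembler.transform(test_df))
```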