Machine Learning, Regression, Data Science. the same fashion that it measures the local gradient (partial derivative) of the error function with regard to the parameter vector θ, and it goes in the direction of the descending gradient. You have reached a minimum, or the bottom of the valley, when the gradient is zero. The size of the step is controlled by
medium
6,421
Machine Learning, Regression, Data Science. the learning rate hyperparameter. If the learning rate is small, it will take a long time to reach the minimum/converge, but if it is large, it can miss the minimum. You can use grid search or random search to come up with a good learning rate and number of iterations. When using gradient descent, make sure
medium
6,422
Machine Learning, Regression, Data Science. you perform feature scaling so that all the predictors are on the same scale (using StandardScaler, i.e. Z-score standardization) for faster convergence. So you need to know the direction in which to take the step, i.e. how much the cost function will change if you change θj (the partial derivative of the cost
medium
6,423
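As a rough sketch of that scaling step (the toy array and variable names here are made up, not from the original notes):

# Sketch: Z-score standardization before gradient descent (X_train is a placeholder).
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])  # toy feature matrix

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_train)            # each column now has mean 0 and std 1
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))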
Machine Learning, Regression, Data Science. function at the current point), and how big a step to take, i.e. the learning rate. The gradient is a vector with direction and magnitude, while the learning rate is a scalar that we multiply the gradient by. So a small learning rate will take a lot of time to train, while with a large learning rate we might
medium
6,424
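To make the update rule concrete, here is a minimal sketch (not the original article's code) of batch gradient descent on a small made-up linear-regression problem; eta plays the role of the learning rate:

# Sketch: batch gradient descent for linear regression (illustrative data and values).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 1))
y = 4 + 3 * X[:, 0] + rng.normal(0, 0.1, 100)     # toy data generated around theta = [4, 3]

X_b = np.c_[np.ones(len(X)), X]                   # add a bias column
theta = np.zeros(2)
eta = 0.1                                         # learning rate: too small is slow, too large overshoots

for _ in range(1000):
    gradient = 2 / len(X_b) * X_b.T @ (X_b @ theta - y)   # partial derivatives of MSE w.r.t. theta
    theta -= eta * gradient                               # step in the direction of descending gradient

print(theta)                                      # should end up near [4, 3]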
Machine Learning, Regression, Data Science. overshoot the minimum. To avoid overfitting in gradient descent and to stop when a given level of convergence is reached, you can use early stopping, where we stop training when the validation error reaches its minimum. For a small learning rate, training will take a long time, and for a large learning rate,
medium
6,425
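One possible sketch of that early-stopping idea, assuming scikit-learn's SGDRegressor trained one pass at a time with warm_start; the toy data and the ten-epoch patience are illustrative choices, not prescriptions:

# Sketch: early stopping when the validation error stops improving (illustrative only).
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 500)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

model = SGDRegressor(max_iter=1, tol=None, warm_start=True, learning_rate="constant", eta0=0.01)
best_val_error, best_epoch = float("inf"), 0
for epoch in range(200):
    model.fit(X_train, y_train)                    # one more pass over the data (warm_start=True)
    val_error = mean_squared_error(y_val, model.predict(X_val))
    if val_error < best_val_error:
        best_val_error, best_epoch = val_error, epoch
    elif epoch - best_epoch > 10:                  # no improvement for 10 epochs: stop
        break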
Machine Learning, Regression, Data Science. training will bounce around. A good batch size is 40 to 100. Larger batches require smaller learning rates. For a small batch size, training will bounce around, and for a large batch size, training will take a long time. A default learning rate is 0.2 or 1/sqrt(num_of_features). There are different ways to implement
medium
6,426
Machine Learning, Regression, Data Science. gradient descent, such as: batch — Uses the whole batch of training data at every step, so it's slow. stochastic — Picks a random training instance and calculates the gradient for that single instance, so it's faster. This never settles at the optimal/global minimum but can bounce out of local minima.
medium
6,427
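To make the contrast concrete before the mini-batch variant is described below, here is an illustrative sketch (again, not the article's code) of one pass of stochastic updates and one pass of mini-batch updates on the same toy objective:

# Sketch: stochastic vs. mini-batch gradient descent updates (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X_b = np.c_[np.ones(200), rng.random((200, 1))]
y = X_b @ np.array([4.0, 3.0]) + rng.normal(0, 0.1, 200)
eta, batch_size = 0.05, 32

def gradient(Xb, yb, theta):
    return 2 / len(Xb) * Xb.T @ (Xb @ theta - yb)

theta_sgd = np.zeros(2)
for i in rng.permutation(len(X_b)):               # stochastic: one random instance per update
    theta_sgd -= eta * gradient(X_b[i:i + 1], y[i:i + 1], theta_sgd)

theta_mb = np.zeros(2)
for _ in range(len(X_b) // batch_size):           # mini-batch: a small random set per update
    idx = rng.integers(0, len(X_b), batch_size)
    theta_mb -= eta * gradient(X_b[idx], y[idx], theta_mb)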
Machine Learning, Regression, Data Science. mini batch — Computes the gradient for a random set of instances (mini-batches of 10 to 1000 examples). It can get stuck in local minima. from sklearn.linear_model import SGDClassifier from sklearn.preprocessing import StandardScaler from sklearn.pipeline import make_pipeline sgdc =
medium
6,428
Machine Learning, Regression, Data Science. make_pipeline(StandardScaler(), SGDClassifier(loss="hinge", penalty="l2", max_iter=5)) sgdc.fit(x_train, y_train) y_pred = sgdc.predict(x_test) Logistic regression: Logistic regression is used for classification ML problems (the output/dependent variable is categorical) by estimating the probability that
medium
6,429
Machine Learning, Regression, Data Science. an instance will belong to a particular class (fraudulent or non-fraudulent transaction). It uses a logistic/sigmoid function (S-shaped) which outputs a number between 0 and 1. Logistic regression estimates the probability p = hθ(x); if it's above 0.5 the prediction will be 1/positive, and if it's below 0.5
medium
6,430
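A tiny sketch of that estimate-then-threshold step, with made-up parameter values:

# Sketch: logistic-regression style probability estimate and 0.5 threshold (illustrative values).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))    # S-shaped curve, output strictly between 0 and 1

theta = np.array([-1.0, 2.0, 0.5])     # made-up parameters
x = np.array([1.0, 0.8, -0.3])         # one instance (first entry is the bias term)

p = sigmoid(theta @ x)                 # estimated probability p = h_theta(x)
prediction = 1 if p >= 0.5 else 0      # above 0.5 -> 1/positive, below -> 0/negative
print(p, prediction)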
Machine Learning, Regression, Data Science. then the prediction will be 0/negative. Logistic regression hypothesis Scikit-learn’s logistic regression implementation supports binary, One-vs-Rest, or multinomial logistic regression with optional ℓ1, ℓ2 (default) or Elastic-Net regularization and different solvers such as “liblinear”, “newton-cg”,
medium
6,431
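And a hedged usage sketch of that scikit-learn implementation; the dataset and the C/solver values are placeholders to show the knobs, not recommendations:

# Sketch: LogisticRegression with l2 regularization and an explicit solver (toy data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l2", C=1.0, solver="liblinear"),  # l2 is the default penalty
)
clf.fit(X, y)
print(clf.predict_proba(X[:3]))        # estimated class probabilities for three instances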
100 Days Of Solidity, Assembly Math, Solidity Academy. #100DaysOfSolidity Series: “Assembly Math” Welcome to another exciting episode of #100DaysOfSolidity! In this installment, we’re diving deep into the world of Assembly Math. 💡 Solidity is a versatile and powerful language for building smart contracts on the Ethereum blockchain, and understanding
medium
6,433
100 Days Of Solidity, Assembly Math, Solidity Academy. assembly math can take your skills to the next level. #100DaysOfSolidity Series: “Assembly Math” Why Assembly Math Matters 🤔 Solidity, like many programming languages, relies heavily on mathematical operations. These operations are essential for various tasks, from simple arithmetic calculations to
medium
6,434
100 Days Of Solidity, Assembly Math, Solidity Academy. complex cryptographic operations. When you’re working with smart contracts, efficient and precise math can make a significant difference in terms of gas costs and security. Let’s explore some key aspects of Assembly Math in Solidity that every developer should be aware of. Getting Started with
medium
6,435
100 Days Of Solidity, Assembly Math, Solidity Academy. Assembly Math 🔢 Assembly Overview 🛠️ Solidity allows you to embed inline assembly code within your contracts. This can be extremely powerful, as it enables you to optimize your code for gas efficiency or perform low-level operations that aren’t possible with high-level Solidity alone. We can use
medium
6,436
100 Days Of Solidity, Assembly Math, Solidity Academy. the `assembly` keyword to start an assembly block. Within this block, you have access to a wide range of low-level operations, including bit manipulation, direct memory access, and more. This is where we can perform assembly math. // Example of inline assembly to add two numbers function
medium
6,437
100 Days Of Solidity, Assembly Math, Solidity Academy. add(uint256 a, uint256 b) public pure returns (uint256 result) { assembly { result := add(a, b) } } Arithmetic Operations ✖️➕➖➗ Assembly math provides fine-grained control over arithmetic operations. You can choose between signed and unsigned integers, specify the bit size, and even use assembly
medium
6,438
100 Days Of Solidity, Assembly Math, Solidity Academy. functions for efficient calculations. Let’s look at some assembly math operations: // Add two unsigned 256-bit integers function add(uint256 a, uint256 b) public pure returns (uint256 result) { assembly { result := add(a, b) } } Bit Manipulation 🧩 Bit-level manipulation is a fundamental concept in
medium
6,439
100 Days Of Solidity, Assembly Math, Solidity Academy. Assembly Math. You can shift bits, perform logical AND/OR/XOR operations, and set/clear specific bits. // Set the nth bit of a uint256 to 1 function setBit(uint256 num, uint8 bit) public pure returns (uint256) { assembly { num := or(num, shl(bit, 1)) } return num; } Gas Optimization ⛽ One of the main reasons
medium
6,440
100 Days Of Solidity, Assembly Math, Solidity Academy. to use assembly math is to optimize gas consumption. By writing precise assembly code, you can reduce gas costs and make your smart contracts more efficient. We’ll explore gas optimization techniques and best practices in this article. Unique Use Cases 🌟 Assembly math opens the door to a world of
medium
6,441
100 Days Of Solidity, Assembly Math, Solidity Academy. unique use cases in Solidity. Here are some examples of how you can leverage this powerful tool: Cryptographic Operations 🔐 Writing cryptographic functions in assembly can make your smart contracts more secure and efficient. We’ll delve into the assembly code for hashing and digital signatures. // Assembly
medium
6,442
100 Days Of Solidity, Assembly Math, Solidity Academy. code for Keccak256 hash function keccak256Hash(bytes memory data) public pure returns (bytes32 hash) { assembly { hash := keccak256(add(data, 0x20), mload(data)) } } Gas-Efficient Loops 🔄 Looping in Solidity can be costly in terms of gas. We’ll explore how to use assembly math to create
medium
6,443
100 Days Of Solidity, Assembly Math, Solidity Academy. gas-efficient loops. // Assembly loop to calculate the sum of an array function sumArray(uint256[] memory data) public pure returns (uint256 sum) { assembly { let len := mload(data) for { let i := 0 } lt(i, len) { i := add(i, 1) } { sum := add(sum, mload(add(data, add(0x20, mul(i, 0x20)))))
medium
6,444
100 Days Of Solidity, Assembly Math, Solidity Academy. } } } Real-World Examples 🌐 Understanding assembly math is vital, but seeing it in action can be even more enlightening. We’ll explore real-world smart contracts and projects that utilize assembly math to achieve exceptional results. 🚀 Conclusion Solidity assembly math is a powerful tool
medium
6,445
100 Days Of Solidity, Assembly Math, Solidity Academy. in the arsenal of any Ethereum developer. It enables you to optimize your contracts for gas efficiency, implement low-level operations, and handle complex math with precision. This article has provided a glimpse into the world of assembly math, but there’s so much more to explore. So, what’s next?
medium
6,446
100 Days Of Solidity, Assembly Math, Solidity Academy. Dive deeper into assembly math, experiment with your own assembly code, and explore its applications in various smart contract projects. Your journey to mastering Solidity continues, and assembly math is a crucial step in that direction. Keep coding, keep experimenting, and keep building the
medium
6,447
Science, Climate Change. Thermodynamics 101 — Do we need more CO2 or less? Mark Harris, Mechanical engineer, LSSBB. First let me say that I am not a climate change denier, nor do I deny that there is human caused climate change. My purpose in writing this is to say that perhaps reducing CO2 emissions may not provide the
medium
6,449
Science, Climate Change. best return on investment as compared to other countermeasures. During my time as a mechanical engineering student, I received instruction in the subject of thermodynamics (Thermo). This is the branch of physical science that deals with the relationship between heat and other forms of energy (such
medium
6,450
Science, Climate Change. as mechanical, electrical, or chemical energy), and, by extension, of the relationships between all forms of energy. In undergraduate mechanical engineering, students take a fundamentals class (Thermo I), an intermediate class (Thermo II) and other advanced applied thermodynamics classes. In Thermo
medium
6,451
Science, Climate Change. I, I worked problems that required the use of 25 or so pages of “steam tables” in the back of our textbook. The table contained numerical values of the physical properties of water in various states, for all 3 phases from compressed liquid (frozen) to vapor, that we used to calculate energy values
medium
6,452
Science, Climate Change. such as enthalpy (the total heat content), pressure, temperature, or entropy (the randomness of a substance and its ability to do work) for systems such as steam turbines or pressure vessels. Using these steam tables required time and effort to interpolate between different temperature, pressure,
medium
6,453
Science, Climate Change. entropy, and enthalpy values because they were listed in 10 to 20-unit increments. So, I wrote a computer program that incorporated the same equations (4th order and greater polynomial curve fits) that were used to generate the steam tables’ values throughout the 3 phase regions. This allowed me to
medium
6,454
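For readers who have never used a steam table, here is a small hypothetical sketch of the straight-line interpolation chore being described, using approximate saturated-steam enthalpy values at a 10-degree increment (this is not the author's program):

# Sketch: linear interpolation between two steam-table rows (approximate, hypothetical values).
# Temperature in degrees C, enthalpy of saturated vapor in kJ/kg.
table = {150.0: 2746.4, 160.0: 2757.4}

def interpolate_enthalpy(t, t_low, t_high, h_low, h_high):
    # Straight-line interpolation between the two bracketing table entries.
    return h_low + (t - t_low) / (t_high - t_low) * (h_high - h_low)

h = interpolate_enthalpy(154.0, 150.0, 160.0, table[150.0], table[160.0])
print(h)   # a value between the two table entries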
Science, Climate Change. just enter the values from the problems I was working to obtain the precise output value, saving me half the time it took to work the problems. By the way, I gave the program away to other students. For example, here is the code for just one of the 20 or so functions used in the program — this one
medium
6,455
Science, Climate Change. is used to solve for the enthalpy given the pressure and entropy in a system (provided purely as an illustration of the complexity of thermodynamics): FUNCTION HPS(P1,S1:REAL):REAL — {Pascal code} . VAR TEMP,P,S,X,SG,X4,S0,A0,A1,A2,A3,A4,A5:REAL; BEGIN . P:=P1/6.8947572; . S:=S1/4.1868; . IF
medium
6,456
Science, Climate Change. S0:=2.150098+(-0.25438439+(2.17448E-04-9.3986E-04*X)*X)*X; . A0:=1223.2933+(-0.57781294+(0.2303143-1.0434265*X)*X)*X; . A1:=820.09617+(-1.9634176+(2.6069465-0.76847051*X)*X)*X; . A2:=895.12074+(-10.468214+(7.0858389-10.321004*X)*X)*X; . A3:=547.70336+(195.11068+(-313.48831+166.94769*X)*X)*X; .
medium
6,458
Science, Climate Change. A1:=1144.6178+(33.297322+(-26.451758+8.9579684*X)*X)*X-1.0968016*X4; . A2:=993.78383+(521.1334+(-506.58014+220.41684*X)*X)*X-37.982498*X4; . A3:=1424.0878+(-1663.6047+(1345.659-489.18341*X)*X)*X+73.075686*X4; . A4:=3431.7851+(-7341.2575+(5997.1054-2208.4202*X)*X)*X+297.74553*X4; . END { IF P <=450
medium
6,460
Science, Climate Change. } . ELSE BEGIN {10} S0:=1.7066779+(0.54400879+(-0.37780533+0.077093291*X)*X)*X-0.0054871968*X4; . A0:=1400.0; . A1:=742.2428+(661.0354+(-321.27928+53.456926*X)*X)*X; . A2:=-3491.438+(4615.4327+(-1470.6537+145.94655*X)*X)*X; . A3:=34807.748+(-35596.564+(12288.438-1388.0814*X)*X)*X; . A4:=0.0; . END;
medium
6,461
Science, Climate Change. { ELSE BEGIN } {11} A5:=S-S0; . TEMP:=A0+(A1+(A2+(A3+A4*A5)*A5)*A5)*A5; . END { IF SG<S. } . ELSE BEGIN {12} X:=0.43429448*LN(P); . X4:=X*X*X*X; . A0:=-4.7169141+(-10.049146+(-7.0532835-1.9473822*X)*X)*X . +(0.1175487-0.25473452*X)*X4; . A1:=561.46162+(76.93328+(12.117678+2.1291364*X)*X)*X .
medium
6,462
Science, Climate Change. +(0.12850077+0.14437713*X)*X4; . TEMP:=A0+A1*S; . END; . HPS:=2.326*TEMP; . END; { PROCEDURE } I say all of this to impart to you just some of the intricacies related to the physical science of Thermodynamics. What I have provided is just a tiny part of that science, and as such I ask you to
medium
6,463
Science, Climate Change. consider that when climate scientists suggest they can “model” global atmospheric temperature trends over multiple decades to within tenths of a degree Celsius, then there might be room for a measure of skepticism. This is especially at issue given that the atmosphere is about 4.2 trillion cubic
medium
6,464
Science, Climate Change. meters (1), it surrounds the entire planet, which is spinning on its tilted axis and is revolving around its primary heat source. This makes the thermodynamic effects on the atmosphere from all internal and external factors nearly impossible to observe and collect meaningful data for, let alone
medium
6,465
Science, Climate Change. model. But the climate is changing, so what are the true causes? Water vapor is ever present in our atmosphere, in varying quantities, dependent upon the physical properties of the atmosphere (temperature, pressure, etc.). In the atmosphere, water vapor averages 25,000 parts per million (PPM) (2)
medium
6,466
Science, Climate Change. which is over 60 times the concentration of CO2, and thus, the effects of water molecules in the atmosphere are much greater than the effects of CO2. Water molecules, in vapor form, also exhibit a much more impactful greenhouse effect than does CO2. This along with other factors such as solar
medium
6,467
Science, Climate Change. radiation, which is dependent upon orbital distance from the sun, our changing axis tilt (about 23.44°), and solar activities like solar flares and normal solar fluctuations, also plays a part. Additionally, how much Earth’s axis is tilted towards or away from the Sun changes through time, over approximately 41,000-year
medium
6,468
Science, Climate Change. cycles (Milankovitch cycles). Small changes in Earth’s spin, tilt, and orbit over these long periods of time can change the amount of sunlight received (and therefore absorbed and re-radiated) by different parts of the Earth. Over 10s to 100s of thousands of years, these small changes in the
medium
6,469
Science, Climate Change. position of the Earth in relationship to the Sun changes the amount of solar radiation, also known as insolation, received by different parts of the Earth. In turn, changes in insolation over these long periods of time can change regional climates and the length and intensity of the seasons.(2) The
medium
6,470
Science, Climate Change. Earth’s spin, tilt, and orbit continue to change today, but may not explain the current rate of climate change. Additionally, changes in land and sea vegetation make it virtually impossible to accurately model global temperatures as a function of increasing CO2 alone. For these
medium
6,471
Science, Climate Change. reasons, model predictions published 20 years ago based on increased CO2 have not come close to accurately predicting global temperature increases… in fact over the past 6 years, global temperature increases have slowed, partially due to more energy being stored in the ocean’s depths (3). One
medium
6,472
Science, Climate Change. additional fact to consider: During both the 2008 economic downturn and the 2020–2021 pandemic shutdown, human CO2 production decreased significantly, nearly twice the goals of the Paris climate accord, with not even a blip on changes in global climate temperatures (4). This may not seem
medium
6,473
Science, Climate Change. significant but consider the strife and reduced freedom each caused and imagine what life would be like under those conditions permanently. My final point is this — current levels of CO2 in the atmosphere are approximately 417 PPM. Plants rely on CO2 for growth which requires water as well. When
medium
6,474
Science, Climate Change. CO2 levels are relatively low, plants must process more water to grow and thus their leaves must transpire more H2O into the atmosphere through their stomata. The density of stomata on leaves from fossilized plants is how scientists have determined that, millions of years ago, CO2 levels were as much
medium
6,475
Science, Climate Change. as 2000 PPM. Fossilized leaves had a much lower density of stomata and thus transpired much less water vapor into the atmosphere. So, conversely, plants deprived of CO2 must process more water from ground sources and thus will transpire more into the atmosphere. I leave you with this question: if CO2
medium
6,476
Science, Climate Change. levels were higher, would less water vapor be present in the atmosphere? And with less water vapor, which has a significantly greater greenhouse effect, would global temperatures decrease? 1. https://www.quora.com/How-much-m3-is-the-volume-of-air-on-Earth-from-the-ground-to-the-atmosphere 2.
medium
6,477
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. Zoomed in text from a Markov Chain I wanted to throw down some notes about the work/play I’ve been doing around remixing Jeff Noon’s books. Also I promised to explain what the above image was all about. It all starts way back when I first fell in love with Mark V Shaney, who I discovered via an
medium
6,479
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. article written by Penn Jillette: “I Spent an Interesting Evening Recently with a Grain of Salt”. Mark is, or was I guess, a bot that posted rambling messages to the net.singles UseNet group. He did this by consuming all of the messages posted to net.singles, and using a Markov chain to spit weird
medium
6,480
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. messages back out again. Markov chain, Mark V Shaney, see, it kind of makes sense. I’ve toyed with Markov chains before but always had trouble explaining my use of them to other people, so I drew a diagram, part of which is at the start of this post. One way I explain it is by saying I’ve thrown loads
medium
6,481
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. of fairy tales into a system, then say “Once upon a” and ask you to predict what the next word will be. You’ll probably say “time”. You’re guessing the next word based on the previous ones and your experience with fairy tales. We then move on to “upon a time”; the next word is probably “there”, at which
medium
6,482
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. point the sentence may continue with either “was” or “were”, as in “Once upon a time there was a…” or “Once upon a time there were…”, at this point we’ve got a split and there’s nothing more ahead except forks and probability. Google auto-completing search queries as you type is similar. Anyway,
medium
6,483
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. I’ve done a similar thing using the work of Jeff Noon (for reasons I’ll get into in a moment) as the source material, you can see it here: revdancatt.github.com/CAT780-remixing-noon. I’ll take the following output as an example… “The whole forest had been anesthetised, her temples wired, her senses
medium
6,484
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. stimulated, her eyes were not on Eva, not on Eva, not on Eva, not on anybody in that same realm, the land of dreams and nightmares.” This started with the words “The whole” from the book Channel SK1N. If we search for those two words we can see that they appear in four different places… …meaning
medium
6,485
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. the system has an equal 25% chance of picking “team.”, “room”, “place” or “forest”. In our case this time it picked “forest”, and off the system goes again, moving on to “whole forest” as the next word pair to put back in. That pair only appears once, followed by “had”, and “whole forest had” in turn
medium
6,486
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. leads to either “the” or “been”. The original sentence from Channel SK1N starting with “The whole forest…” continues “…had the feel of a stage set, a location, of something she had already seen on film.” quite different to our “The whole forest had been anesthetised, her temples wired, her senses
medium
6,487
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. stimulated…”. The end result is Jeff Noonian but not Jeff Noon; it’s using all the same words that Noon uses and no more, just not in the same order. The chances of it recreating the whole original sentence are slim: 25% to pick “had”, then 50–50 to get “the”, only 20% for “feel” and
medium
6,488
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. onwards. Just a 2.5% chance to have matched the first 6 words. As soon as you hit a super connector such as “by the”, “with a”, “on a” and so on then all bets are off. Here’s the resulting image for our phrase… Why Jeff Noon? A couple of reasons, and not very well articulated ones at that. The
medium
6,489
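For anyone who wants to poke at the idea, here is a minimal sketch of the word-pair chain described above; the fairy-tale snippet is only a stand-in for the Noon source text:

# Sketch: order-2 (word-pair) Markov chain text generator (illustrative only).
import random
from collections import defaultdict

corpus = "once upon a time there was a king once upon a time there were three bears"
words = corpus.split()

# Map each word pair to the list of words that follow it in the corpus.
chain = defaultdict(list)
for w1, w2, w3 in zip(words, words[1:], words[2:]):
    chain[(w1, w2)].append(w3)

# Walk the chain: pick the next word at random from the observed followers.
pair = ("once", "upon")
output = list(pair)
for _ in range(10):
    followers = chain.get(pair)
    if not followers:
        break
    nxt = random.choice(followers)
    output.append(nxt)
    pair = (pair[1], nxt)

print(" ".join(output))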
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. first is easy, Noon’s books are being re-released as ebooks: metamorphiction.com which makes it particularly easy to get at the source text (ssshhh, don’t tell). The second is a bit more complex and it’s not what I’m claiming Jeff’s work to be, but rather how I think of it. I’m not going to say
medium
6,490
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. “Jeff’s work is all about” but rather; it seems/feels to me as though a lot of Jeff Noon’s work has parallels to music and remix/dub attitudes. Needle in the Groove is a story that remixes itself as it progresses, a number of books remix the location of the city of Manchester… or maybe they’re a
medium
6,491
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. cover version of Manchester. This comes out again in Cobralingus a fictional writing engine that “…uses the Metamorphiction process to apply the techniques of electronic dance music to the production of words, dissolving languages. In this mutated, liquid state, words are manipulated into new
medium
6,492
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. forms; borrowed text is sampled and transformed.” A simple Markov chain almost certainly isn’t that but it is an interesting starting point. Music is steeped in remixes, cover versions, mashups and sampling. But this doesn’t happen in writing (probably for very specific copyright reasons), you
medium
6,493
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. don’t often get authors covering or remixing someone else’s story. It happens but very rarely. Where are the 7", 12", radio edit, club mix version of writing, held within the same medium of writing? As Automated Alice is the story of Alice from Lewis Carroll’s book set in a future version of
medium
6,494
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. Manchester, a Markov chain allows us to throw the two books together and mix between them… “‘Oh dear!’ Alice murmurs to herself. ‘Not only is this snake poisonous,’ replied the Badgerman, ‘and after all…there aren’t that many…’ that is what she did, she picked her way between Birdcaging girls and
medium
6,495
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. Briefcasing boys, ‘and the real Whippoorwill?’ she cried, ‘wherever have you flown to?’ And then cubism, because I couldn’t be doing with making calculations?” “You MUST have meant some mischief, or else you’d have signed your name on a slight breeze. ‘Whippoorwill!’ cried Alice to the north of the
medium
6,496
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. ground. So she began very cautiously: ‘But I am getting forgetful in my time, but never had fits, my dear’, and she said to Alice, ‘Give yourself up! Give yourself up!’ Alice forced herself to her full size by this time, that Alice managed to find (frustratingly) that fully twelve pieces were
medium
6,497
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. missing from my jigsaw!” …one Alice blending into the other. Or even allow Nola from Channel SK1N to meet Alice from Automated Alice… “Whirrrrr. Moments passing. Clikck whirrrrrrrrrrrrrrrrrr. Nola moved over the Dome’s image vanished as she whipped her scaly tail at the bar, flipping open the
medium
6,498
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. trapdoor and Alice did tap on the camera’s gaze, on the shelf, according to Chrowdingler, is known as Djinnetic Engineering, on account of her skin, her belly, her hands started to move! Suddenly Alice was curious at hearing this news, and the screen like a white sheet, nothing more. Had she been
medium
6,499
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. called, back then? And she slipped and almost fell. Too close. Sweat. Heat. The smell of my adventures!’ Alice observed to herself, ‘it’s a book beginning in C… e… y. Now what could they possibly be called? Wait a minute!’ Just then, a thousand pieces and the sounds of their pursuers dwindled
medium
6,500
Markov Chains, Jeff Noon, Generative Text, Machine Learning, Artificial Intelligence. away.” A Markov chain is a blunt tool, but an interesting starting point. Further reading: Origins of a Dub Fiction by Jeff Noon over at Language is a Virus Note: this was 1st published on my own site on 13th March 2013.
medium
6,501
Math, Mathematics, Study, Learn, Linear Algebra. Linear algebra is built on a fundamental object known as the vector. There are many different perspectives on vectors, so let's review them all: vectors are arrows pointing in space, and a vector is defined by its length and the direction it is pointing. Even if you move the vector around, as long as those two factors are the
medium
6,502
Math, Mathematics, Study, Learn, Linear Algebra. same, the vector is still the same. Vectors are ordered lists of numbers, where the order does matter! Vectors can be anything for which there is a notion of adding two vectors and multiplying a vector by a number. Now let's go back. Imagine an arrow in a coordinate plane with its tail at the origin.
medium
6,503
Math, Mathematics, Study, Learn, Linear Algebra. This is different from the first view, but in linear algebra vectors are almost always rooted at the origin. Before we move on, let's review the coordinate system. The coordinates of a vector describe how to get from its tail at the origin to its tip, the arrowhead. Source — Vectors | Chapter 1, Essence of
medium
6,504
Math, Mathematics, Study, Learn, Linear Algebra. linear algebra — YouTube This essentially combines the first two perspectives of looking at a vector. The first number tells you how far to walk along the x-axis, while the second number tells you how far to walk up the y-axis. Positive numbers — rightwards/upwards. Negative numbers — leftwards/downwards.
medium
6,505
Math, Mathematics, Study, Learn, Linear Algebra. The square brackets and vertical layout are used to distinguish vectors from points. Every pair of numbers corresponds to one vector and vice versa. Vector Addition & Multiplication by Numbers To add two vectors, move the second vector so that its tail sits at the tip of the first one. Now if you
medium
6,506
Math, Mathematics, Study, Learn, Linear Algebra. draw a new vector from the tail of the first vector to the tip of the second, that new vector is their sum. Source — Vectors | Chapter 1, Essence of linear algebra — YouTube This is one of the only times where vectors stray from the origin. This works because each vector represents a step in a certain direction of
medium
6,507
Math, Mathematics, Study, Learn, Linear Algebra. space. If you take a step along the first vector and then along the second, the overall effect is as if you had moved along the sum of those two vectors. Now let's see how this looks numerically: Source — Vectors | Chapter 1, Essence of linear algebra — YouTube This is the same as moving 1 + 3 to the
medium
6,508
Math, Mathematics, Study, Learn, Linear Algebra. right, and 2 up and 1 down. This makes the new coordinates of the vector: [4, 1] To add vectors, we match the terms and add! Now let's look at vector multiplication… To multiply a vector by a number, we are basically stretching or shrinking that vector. For example, if we multiply a
medium
6,509
Math, Mathematics, Study, Learn, Linear Algebra. vector by two, we stretch it out to be twice as long. Source — Vectors | Chapter 1, Essence of linear algebra — YouTube If we multiply by 1/3, we squish the vector down so it is a third of its original length: Source — Vectors | Chapter 1, Essence of linear algebra — YouTube When multiplying by
medium
6,510
Math, Mathematics, Study, Learn, Linear Algebra. a negative number, the vector gets flipped around and then squished/stretched: Source — Vectors | Chapter 1, Essence of linear algebra — YouTube This process is known as scaling, and the numbers doing the multiplying are scalars. When multiplying a vector by a number, we multiply each of its components by
medium
6,511
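A quick numerical check of the two operations just described, using NumPy purely for illustration:

# Sketch: component-wise vector addition and scalar multiplication (illustrative only).
import numpy as np

v = np.array([1.0, 2.0])    # walk 1 right, 2 up
w = np.array([3.0, -1.0])   # walk 3 right, 1 down

print(v + w)                # [4. 1.]  -> match the components and add
print(2 * v)                # stretched to twice the length
print((1 / 3) * v)          # squished to a third of the length
print(-1.5 * w)             # flipped around, then stretched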
Nanosubmarine, Nanotechnology, Optical Microsystem. Until recent years, technological developments were inspired by nature and used biomimicry at the microscopic scale. Scientists like Richard Feynman noticed the potential of microsystems approximately 60 years ago. With the development of semiconductor device fabrication, microelectromechanical
medium
6,513
Nanosubmarine, Nanotechnology, Optical Microsystem. systems were put into practice. In this decade, it has become possible to make nanomachines by advancing nanotechnology methods. Lecture on “There’s Plenty of Room at the Bottom”, Richard Feynman (Dec 29, 1959), CALTECH At Rice University, scientists produced the first nano-submarine that can
medium
6,514
Nanosubmarine, Nanotechnology, Optical Microsystem. circulate in solution. The nano-submarine is a molecular structure which consists of 22 atoms [1]. The structure includes two significant parts: a motor for motion and fluorophores, fluorescent chemical groups used for observation. As Lopez (2015) stated, in this study the motion of
medium
6,515
Nanosubmarine, Nanotechnology, Optical Microsystem. single-molecule nanomachines was observed with fluorescence correlation spectroscopy (FCS) in solution. In free solution, the nanomachines invariably move under the influence of Brownian motion, a kind of stochastic diffusion. On the other hand, when the nanomachine is activated by UV light, it is
medium
6,516
Nanosubmarine, Nanotechnology, Optical Microsystem. clearly visible that the nanomachines exhibit directed motion. When the UV light activates the nano-submarine, the molecular motor part acts like a pusher tail, driving the nano-submarine forward by 18 nanometers. According to Tour (2015), this molecular nanosystem is so far the fastest-moving
medium
6,517
Nanosubmarine, Nanotechnology, Optical Microsystem. molecule in solution. The nanosubmarine, a unimolecular submersible nanomachine (USN), can move 26% faster than diffusion alone owing to the influence of UV light. Unimolecular submersible nanomachine (USN) Above all, the use of UV light and lasers gives the ability of movement to the unimolecular submersible
medium
6,518
Nanosubmarine, Nanotechnology, Optical Microsystem. nanomachine (USN). Owing to a beam applied from outside, the bonds of this molecular system change in four steps. These changes produce a flagellum-like movement of the nanomachine. Motion mechanism of USNs. Furthermore, photonic technologies are also used to monitor the motion of nano-submarines. The
medium
6,519
Nanosubmarine, Nanotechnology, Optical Microsystem. motion of single-molecule nanomachines is observed with fluorescence correlation spectroscopy (FCS) in solution, from which the study results are obtained. Consequently, light-driven nano-submarines could make it possible to realize drug transport in the blood circulation. Moreover, molecular
medium
6,520
Nanosubmarine, Nanotechnology, Optical Microsystem. nanomachines will be precursors for nano-scale delivery, and molecular systems will be developed that can be remotely controlled through light sources. These technological improvements provide medical opportunities in treatments that require cellular therapy by drug delivery without damaging
medium
6,521
Python. A modeller, analyst or computational scientist can produce gigabytes or terabytes of output data while simulating large molecular systems. Molecular simulations produce data files, called trajectories, which contain multiple snapshots of the molecular system over time. Post-processing this data can
medium
6,523
Python. be a long and iterative process, and the trajectory files need to be kept available for use. Further hesitancy to delete trajectories is a result of: the cost, time and compute resources utilised to carry out the simulation and produce the data; the iterative analysis and post-processing stage — you may
medium
6,524
Python. need to analyse the output in a new manner or with a new program; and the requirement to keep data for a few years, which may be due to a research data management plan or a stipulation by funders or the journal publishing house. The chart below compares typical output sizes of trajectory files for an ~80,000-frame
medium
6,525
Python. simulation of a small system (https://bitbucket.org/ramonbsc/pypcazip/wiki/Home). Small-scale projects, such as nanoseconds of enzyme calculations, can produce terabytes of data, composed of several hundred files of between 100 MB and 10 GB each. Methods to reduce this usage while still having easy
medium
6,526
Python. access to the underlying data are important. The challenge with built-in Linux compression tools: straightforward compression with gzip or bzip2 does not work effectively and is really time-consuming for large systems. It is not effective because compressed trajectories may only be 10% smaller than before. The
medium
6,527
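As a hedged sketch of how one might measure that, here are a few lines of Python that stream a trajectory file through gzip and report the resulting ratio; the file name is a placeholder:

# Sketch: measure how much gzip actually shrinks a trajectory file (path is a placeholder).
import gzip
import os
import shutil

trajectory = "system.dcd"               # hypothetical trajectory file
compressed = trajectory + ".gz"

with open(trajectory, "rb") as src, gzip.open(compressed, "wb") as dst:
    shutil.copyfileobj(src, dst)        # stream the file through gzip

original = os.path.getsize(trajectory)
packed = os.path.getsize(compressed)
print(f"compressed file is {packed / original:.0%} of the original size")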