Variables A variable is a name with which a value is associated. Valid variable names The name of a variable may contain uppercase and lowercase letters, digits, and the character _. The name of a variable must start with a letter or _. Python keywords cannot be used as variable names. Recommendations for naming variables Names should be descriptive and explain what the variable is used for. For example, a good name for a person's name is person_name, while x is a poor choice. Only Latin letters should be used. In Python the convention is that variables always start with a lowercase letter and contain only lowercase letters, with each subsequent word separated from the previous one by the character _. A variable name should be neither too long nor too short; it simply needs to make clear what the variable is used for in the context where it is used. Be careful with uppercase and lowercase letters, because Python distinguishes between them. For example, age and Age are different variables. Working with variables
c = 10                 # number of coins - too short
number_of_coins = 10   # too detailed
coinsCount = 10        # OK, but this is Java style
coins_count = 10       # OK

# Giving a variable a value is called `assignment`
count = 1
# When Python encounters a variable in an expression, it replaces it with its value
print(count + 1)
# Variables are called variables because their value can change
count = 2
print(count + 1)
week2/Expressions, variables and errors.ipynb
YAtOff/python0-reloaded
mit
What should we write in order to increase the value of count by 1 (assume we do not know the current value of count)?
count = 1
count = count + 1
print(count)
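Python also offers an augmented assignment operator that does the same thing in shorter form:

count = 1
count += 1   # shorthand for count = count + 1
print(count) # 2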
week2/Expressions, variables and errors.ipynb
YAtOff/python0-reloaded
mit
Errors
my var = 1    # SyntaxError: variable names cannot contain spaces
price = 1
print(pirce)  # NameError: 'pirce' is misspelled, the variable is named 'price'
week2/Expressions, variables and errors.ipynb
YAtOff/python0-reloaded
mit
Names scores Problem 22 Using names.txt (right click and 'Save Link/Target As...'), a 46K text file containing over five-thousand first names, begin by sorting it into alphabetical order. Then working out the alphabetical value for each name, multiply this value by its alphabetical position in the list to obtain a name score. For example, when the list is sorted into alphabetical order, COLIN, which is worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would obtain a score of 938 × 53 = 49714. What is the total of all the name scores in the file?
from euler import timer, Seq

def score(s):
    return s >> Seq.map(lambda x: ord(x) - 64) >> Seq.sum

def p022():
    return (open('data/p022.txt').read().split(',')
            >> Seq.map(lambda x: x.strip('"'))
            >> Seq.sort
            >> Seq.mapi(lambda (i, x): score(x) * (i + 1))
            >> Seq.sum)

timer(p022)
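For readers without the custom euler helper module, here is a minimal plain-Python sketch of the same computation; it assumes the same data file path data/p022.txt as above, and the name p022_plain is illustrative only.

def p022_plain():
    with open('data/p022.txt') as f:
        names = sorted(name.strip('"') for name in f.read().split(','))
    # alphabetical value of each name times its 1-based position, summed over all names
    return sum((i + 1) * sum(ord(c) - 64 for c in name)
               for i, name in enumerate(names))

print(p022_plain())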
euler_021_030.ipynb
mndrake/PythonEuler
mit
Non-abundant sums Problem 23 A perfect number is a number for which the sum of its proper divisors is exactly equal to the number. For example, the sum of the proper divisors of 28 would be $1 + 2 + 4 + 7 + 14 = 28$, which means that 28 is a perfect number. A number n is called deficient if the sum of its proper divisors is less than n and it is called abundant if this sum exceeds n. As 12 is the smallest abundant number, $1 + 2 + 3 + 4 + 6 = 16$, the smallest number that can be written as the sum of two abundant numbers is 24. By mathematical analysis, it can be shown that all integers greater than 28123 can be written as the sum of two abundant numbers. However, this upper limit cannot be reduced any further by analysis even though it is known that the greatest number that cannot be expressed as the sum of two abundant numbers is less than this limit. Find the sum of all the positive integers which cannot be written as the sum of two abundant numbers.
from euler import FactorInteger, Seq, timer from operator import mul def divisor_sum(n): return ( FactorInteger(n) >> Seq.map(lambda (p,a): (p**(a+1) - 1)/(p-1)) >> Seq.reduce(mul) ) - n def p023(): max_n = 28123 abundants = range(12, max_n+1) >> Seq.filter(lambda n: n < divisor_sum(n)) >> Seq.toList abundant_sums = (abundants >> Seq.collect(lambda a: abundants >> Seq.map(lambda b: a+b) >> Seq.takeWhile(lambda x: x < (max_n+1))) >> Seq.toSet) return max_n * (max_n + 1) / 2 - sum(abundant_sums) timer(p023)
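As a cross-check, the following is a hedged standard-library sketch of the same approach, using trial-division divisor sums instead of the FactorInteger helper; the names divisor_sum_plain and p023_plain are illustrative, not from the original repo.

def divisor_sum_plain(n):
    # sum of proper divisors by trial division
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def p023_plain():
    limit = 28123
    abundant = [n for n in range(12, limit + 1) if divisor_sum_plain(n) > n]
    sums = set()
    for i, a in enumerate(abundant):
        for b in abundant[i:]:
            if a + b > limit:
                break
            sums.add(a + b)
    # total of 1..limit minus all numbers expressible as a sum of two abundant numbers
    return limit * (limit + 1) // 2 - sum(sums)

print(p023_plain())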
euler_021_030.ipynb
mndrake/PythonEuler
mit
Lexicographic permutations Problem 24 A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are: 012 021 102 120 201 210 What is the millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9?
from math import factorial from euler import timer def p024(): numbers = range(10) def loop(remainder, acc): k = len(numbers) - 1 if k==0: return acc + str(numbers[0]) else: next = numbers[remainder / factorial(k)] numbers.remove(next) return loop((remainder%(factorial(k))),(acc + str(next))) return loop(999999,"") timer(p024)
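Because itertools.permutations yields permutations of a sorted input in lexicographic order, the same answer can be obtained with the standard library; this is a hedged alternative sketch, not the notebook's own method.

from itertools import islice, permutations

def p024_plain():
    # the millionth permutation is the element at 0-based index 999999
    millionth = next(islice(permutations('0123456789'), 999999, None))
    return ''.join(millionth)

print(p024_plain())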
euler_021_030.ipynb
mndrake/PythonEuler
mit
1000-digit Fibonacci number Problem 25 The Fibonacci sequence is defined by the recurrence relation: $F_n = F_{n−1} + F_{n−2}$, where $F_1 = 1$ and $F_2 = 1$. Hence the first 12 terms will be: $F_1 = 1$ $F_2 = 1$ $F_3 = 2$ $F_4 = 3$ $F_5 = 5$ $F_6 = 8$ $F_7 = 13$ $F_8 = 21$ $F_9 = 34$ $F_{10} = 55$ $F_{11} = 89$ $F_{12} = 144$ The 12th term, $F_{12}$, is the first term to contain three digits. What is the first term in the Fibonacci sequence to contain 1000 digits?
from math import log10 from euler import timer, Seq def p025(): return ( Seq.unfold(lambda (a,b):(b, (b,a+b)), (0,1)) >> Seq.findIndex(lambda x: log10(x) > 999) ) + 1 timer(p025)
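An equivalent plain-Python iteration, counting digits directly instead of via log10, might look like this sketch:

def p025_plain():
    a, b, term = 1, 1, 2  # F(1) = 1, F(2) = 1
    while len(str(b)) < 1000:
        a, b = b, a + b
        term += 1
    return term  # index of the first term with 1000 digits

print(p025_plain())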
euler_021_030.ipynb
mndrake/PythonEuler
mit
Reciprocal cycles Problem 26 A unit fraction contains 1 in the numerator. The decimal representation of the unit fractions with denominators 2 to 10 are given: 1/2 = 0.5 1/3 = 0.(3) 1/4 = 0.25 1/5 = 0.2 1/6 = 0.1(6) 1/7 = 0.(142857) 1/8 = 0.125 1/9 = 0.(1) 1/10 = 0.1 Where 0.1(6) means 0.166666..., and has a 1-digit recurring cycle. It can be seen that 1/7 has a 6-digit recurring cycle. Find the value of d < 1000 for which 1/d contains the longest recurring cycle in its decimal fraction part.
from euler import timer, Seq def cycle(denom): if denom==2 or denom==5: return 0 elif denom%2==0: return cycle(denom/2) elif denom%5==0: return cycle(denom/5) else: return ( Seq.initInfinite(lambda x: x+1) >> Seq.map (lambda x: 10 ** x - 1) >> Seq.findIndex(lambda x: x%denom==0) ) + 1 def p026(): return range(1, 1001) >> Seq.maxBy(cycle) timer(p026)
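The cycle length can also be found by simulating the long division and detecting when a remainder repeats; a hedged standard-library sketch:

def cycle_plain(d):
    # length of the recurring cycle of 1/d, found by tracking long-division remainders
    seen = {}
    r, pos = 1 % d, 0
    while r and r not in seen:
        seen[r] = pos
        r = (r * 10) % d
        pos += 1
    return pos - seen[r] if r else 0  # 0 means the decimal terminates

def p026_plain():
    return max(range(2, 1000), key=cycle_plain)  # d < 1000

print(p026_plain())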
euler_021_030.ipynb
mndrake/PythonEuler
mit
Quadratic primes Problem 27 Euler discovered the remarkable quadratic formula: $n^2 + n + 41$ It turns out that the formula will produce $40$ primes for the consecutive values $n = 0$ to $39$. However, when $n = 40$, $40^2 + 40 + 41 = 40(40 + 1) + 41$ is divisible by $41$, and certainly when $n = 41$, $41^2 + 41 + 41$ is clearly divisible by $41$. The incredible formula $n^2 − 79n + 1601$ was discovered, which produces $80$ primes for the consecutive values $n = 0$ to $79$. The product of the coefficients, $−79$ and $1601$, is $−126479$. Considering quadratics of the form: $n^2 + an + b$, where $|a| < 1000$ and $|b| < 1000$ where $|n|$ is the modulus/absolute value of $n$ e.g. $|11| = 11$ and $|−4| = 4$ Find the product of the coefficients, $a$ and $b$, for the quadratic expression that produces the maximum number of primes for consecutive values of $n$, starting with $n = 0$.
from euler import is_prime, Seq, timer, primes def primes_generated(x): a,b = x return ( Seq.initInfinite(lambda n: n*n + a*n + b) >> Seq.takeWhile(is_prime) >> Seq.length) def p027(): primes_1000 = (primes() >> Seq.takeWhile(lambda x: x<1000) >> Seq.toList) a,b = ([(a,b) for a in range(-999,1000) for b in primes_1000] >> Seq.maxBy(primes_generated)) return a*b timer(p027)
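A brute-force standard-library sketch of the same search is shown below; it uses an illustrative is_prime_plain in place of the repo's is_prime and is slower, but needs no helper module.

def is_prime_plain(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def p027_plain():
    best = (0, 0, 0)  # (count of consecutive primes, a, b)
    for a in range(-999, 1000):
        for b in range(2, 1000):
            if not is_prime_plain(b):  # n = 0 gives b, so b itself must be prime
                continue
            n = 0
            while is_prime_plain(n * n + a * n + b):
                n += 1
            if n > best[0]:
                best = (n, a, b)
    return best[1] * best[2]

print(p027_plain())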
euler_021_030.ipynb
mndrake/PythonEuler
mit
Number spiral diagonals Problem 28 Starting with the number 1 and moving to the right in a clockwise direction a 5 by 5 spiral is formed as follows: <font color='red'>21</font> 22 23 24 <font color='red'>25</font> 20 <font color='red'>07</font> 08 <font color='red'>09</font> 10 19 06 <font color='red'>01</font> 02 11 18 <font color='red'>05</font> 04 <font color='red'>03</font> 12 <font color='red'>17</font> 16 15 14 <font color='red'>13</font> It can be verified that the sum of the numbers on the diagonals is 101. What is the sum of the numbers on the diagonals in a 1001 by 1001 spiral formed in the same way?
from euler import timer def p028(): n = 1001 def collect(depth, start, acc): if (depth > n/2): return acc else: return collect(depth+1, start+8*depth, acc+4*start+20*depth) return collect(1,1,1) timer(p028)
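The same sum can also be accumulated ring by ring: each ring of side length s contributes four corners spaced s - 1 apart. A short plain-Python sketch:

def p028_plain():
    total, corner = 1, 1
    for side in range(3, 1002, 2):  # rings of side 3, 5, ..., 1001
        step = side - 1
        for _ in range(4):          # four corners per ring
            corner += step
            total += corner
    return total

print(p028_plain())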
euler_021_030.ipynb
mndrake/PythonEuler
mit
Distinct powers Problem 29 Consider all integer combinations of $a^b$ for $2 ≤ a ≤ 5$ and $2 ≤ b ≤ 5$: $2^2=4$, $2^3=8$, $2^4=16$, $2^5=32$ $3^2=9$, $3^3=27$, $3^4=81$, $3^5=243$ $4^2=16$, $4^3=64$, $4^4=256$, $4^5=1024$ $5^2=25$, $5^3=125$, $5^4=625$, $5^5=3125$ If they are then placed in numerical order, with any repeats removed, we get the following sequence of 15 distinct terms: $4, 8, 9, 16, 25, 27, 32, 64, 81, 125, 243, 256, 625, 1024, 3125$ How many distinct terms are in the sequence generated by $a^b$ for $2 ≤ a ≤ 100$ and $2 ≤ b ≤ 100$?
from euler import timer, Seq

def p029():
    return (set(a ** b for a in range(2, 101) for b in range(2, 101))
            >> Seq.length)

timer(p029)
euler_021_030.ipynb
mndrake/PythonEuler
mit
Digit fifth powers Problem 30 Surprisingly there are only three numbers that can be written as the sum of fourth powers of their digits: $1634 = 1^4 + 6^4 + 3^4 + 4^4$ $8208 = 8^4 + 2^4 + 0^4 + 8^4$ $9474 = 9^4 + 4^4 + 7^4 + 4^4$ As $1 = 1^4$ is not a sum it is not included. The sum of these numbers is $1634 + 8208 + 9474 = 19316$. Find the sum of all the numbers that can be written as the sum of fifth powers of their digits.
from euler import timer def p030(): def is_sum(n): return ( str(n) >> Seq.map(lambda x: int(x) ** 5) >> Seq.sum ) == n max_n = ( ((Seq.unfold(lambda x: (x, x+1), 1) >> Seq.find(lambda x: 10 ** x - 1 > x * 9 ** 5) ) - 1) * 9 ** 5) return ( range(2, max_n + 1) >> Seq.filter(is_sum) >> Seq.sum) timer(p030)
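A plain-Python equivalent with an explicit bound: the largest possible digit fifth-power sum of a 7-digit number is 7 * 9^5 = 413343, which has only 6 digits, so every qualifying number has at most 6 digits and 6 * 9^5 = 354294 is a safe upper limit.

def p030_plain():
    limit = 6 * 9 ** 5  # 354294; no larger number can equal its digit fifth-power sum
    return sum(n for n in range(10, limit + 1)
               if n == sum(int(c) ** 5 for c in str(n)))

print(p030_plain())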
euler_021_030.ipynb
mndrake/PythonEuler
mit
Set up inline matplotlib
%matplotlib inline
from pylab import rcParams
import seaborn as sb  # assumed: sb refers to seaborn

rcParams['figure.figsize'] = 5, 4
sb.set_style('whitegrid')
Community_identity.ipynb
HNoorazar/PyOpinionGame
gpl-3.0
Setting Up Game Parameters
config = og_cfg.staticParameters() path = '/Users/hn/Documents/GitHub/PyOpinionGame/' # path to the 'staticParameters.cfg' staticParameters = path + 'staticParameters.cfg' config.readFromFile(staticParameters) # Read static parameters config.threshold = 0.0001 config.Kthreshold = 0.00001 config.startingseed = 10 config.learning_rate = 0.1 tau = 0.62 #tip of the tent potential function config.printOut()
Community_identity.ipynb
HNoorazar/PyOpinionGame
gpl-3.0
Set up the state of the system The state of the system includes: the weight matrix (matrix of the coupling weights between topics), the initial opinions of the agents, and the adjacency matrix of the network. This is just the initialization of the state; later we update some of its elements.
# These are the default matrices for the state of the system: # If you want to change them, you can generate a new one in the following cell default_weights = og_coupling.weights_no_coupling(config.popSize, config.ntopics) default_initialOpinions = og_opinions.initialize_opinions(config.popSize, config.ntopics) default_adj = og_adj.make_adj(config.popSize, 'full') state = og_state.WorldState(adj=default_adj, couplingWeights=default_weights, initialOpinions=default_initialOpinions, initialHistorySize=100, historyGrowthScale=2) state.validate()
Community_identity.ipynb
HNoorazar/PyOpinionGame
gpl-3.0
User Defined States and Parameters These can go in the following cell:
numberOfCommunities = 3 communityPopSize = 25 config.popSize = numberOfCommunities * communityPopSize # List of upper bound probability of interaction between communities uppBound_list = [0.0] # List of uniqueness Strength parameter individStrength = [0.0] config.learning_rate = 0.1 config.iterationMax = 10000 tau = 0.62 config.printOut() # # functions for use by the simulation engine # ufuncs = og_cfg.UserFunctions(og_select.PickTwoWeighted, og_stop.iterationStop, og_pot.createTent(tau)) # Number of different initial opinions, # i.e. number of different games with different initials. noInitials = np.arange(1) noGames = np.arange(1) # Number of different game orders. # Run experiments with different adjacencies, different initials, and different order of games. for uniqForce in individStrength: config.uniqstrength = uniqForce for upperBound in uppBound_list: # Generate different adjacency matrix with different prob. of interaction # between different communities state.adj = og_adj.CommunitiesMatrix(communityPopSize, numberOfCommunities, upperBound) for countInitials in noInitials: # Pick three communities with similar opinions to begin with! state.initialOpinions = np.zeros((config.popSize, 1)) state.initialOpinions[0:25] = np.random.uniform(low=0.0, high=.25, size=(25,1)) state.initialOpinions[25:50] = np.random.uniform(low=0.41, high=.58, size=(25,1)) state.initialOpinions[50:75] = np.random.uniform(low=0.74, high= 1, size=(25,1)) state.couplingWeights = og_coupling.weights_no_coupling(config.popSize, config.ntopics) all_experiments_history = {} print "(uniqForce, upperBound) = ({}, {})".format(uniqForce, upperBound) print "countInitials = {}".format(countInitials + 1) for gameOrders in noGames: #cProfile.run('og_core.run_until_convergence(config, state, ufuncs)') state = og_core.run_until_convergence(config, state, ufuncs) print("One Experiment Done" , "gameOrders = " , gameOrders+1) all_experiments_history[ 'experiment' + str(gameOrders+1)] = state.history[0:state.nextHistoryIndex,:,:] og_io.saveMatrix('uB' + str(upperBound) + '*uS' + str(config.uniqstrength) + '*initCount' + str(countInitials+21) + '.mat', all_experiments_history) print all_experiments_history.keys() print all_experiments_history['experiment1'].shape
Community_identity.ipynb
HNoorazar/PyOpinionGame
gpl-3.0
Plot the experiment done above:
time, population_size, no_of_topics = all_experiments_history['experiment1'].shape
evolution = all_experiments_history['experiment1'].reshape(time, population_size)

fig = plt.figure()
plt.plot(evolution)
plt.xlabel('Time')
plt.ylabel('Opinions')
plt.title('Evolution of Opinions')
fig.set_size_inches(10, 5)
plt.show()
Community_identity.ipynb
HNoorazar/PyOpinionGame
gpl-3.0
Skew Uniqueness Tendency Driver: I observed that when the tendency for uniqueness is drawn from a normal distribution, we do not get an interesting result. For example, the initial intuition was that a tendency for uniqueness would delay stabilization of the network; however, it did not. So, here we draw uniqueness tendencies from a skew normal distribution. When most neighbors tend to move in one direction, the probability that an individual moves in the opposite direction is greater than the noise in the same direction:
state = og_state.WorldState(adj=default_adj, couplingWeights=default_weights, initialOpinions=default_initialOpinions, initialHistorySize=100, historyGrowthScale=2) state.validate() # # load configuration # config = og_cfg.staticParameters() config.readFromFile('staticParameters.cfg') config.threshold = 0.01 config.printOut() # # seed PRNG: must do this before any random numbers are # ever sampled during default generation # print(("SEEDING PRNG: "+str(config.startingseed))) np.random.seed(config.startingseed)
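The skewed draws described above are handled inside the og_* library. Purely as an illustration (not part of the original notebook), noise of this kind could be sampled with scipy.stats.skewnorm, where the shape parameter a controls the direction and strength of the skew; the loc and scale values below are arbitrary.

# Illustrative only: sample skew-normal "uniqueness" noise for 75 agents.
import numpy as np
from scipy.stats import skewnorm

skew_noise = skewnorm.rvs(a=2.0, loc=0.0, scale=0.01, size=75)
print(skew_noise.mean(), skew_noise.min(), skew_noise.max())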
Community_identity.ipynb
HNoorazar/PyOpinionGame
gpl-3.0
Initiate State
# These are the default matrices for the state of the system: # If you want to change them, you can generate a new one in the following cell default_weights = og_coupling.weights_no_coupling(config.popSize, config.ntopics) default_initialOpinions = og_opinions.initialize_opinions(config.popSize, config.ntopics) default_adj = og_adj.make_adj(config.popSize, 'full') state = og_state.WorldState(adj=default_adj, couplingWeights=default_weights, initialOpinions=default_initialOpinions, initialHistorySize=100, historyGrowthScale=2) state.validate() # # run # numberOfCommunities = 3 communityPopSize = 25 config.popSize = numberOfCommunities * communityPopSize # List of upper bound probability of interaction between communities uppBound_list = np.array([.001, 0.004, 0.007, 0.01, 0.013, 0.016, 0.019]) # # List of uniqueness Strength parameter # individStrength = np.arange(0.00001, 0.000251, 0.00006) individStrength = np.append(0, individStrength) individStrength = np.array([0.0]) skewstrength = 2.0 tau = 0.62 config.iterationMax = 30000 config.printOut() # # functions for use by the simulation engine # ufuncs = og_cfg.UserFunctions(og_select.PickTwoWeighted, og_stop.iterationStop, og_pot.createTent(tau)) noInitials = np.arange(1) # Number of different initial opinions. noGames = np.arange(1) # Number of different game orders. # Run experiments with different adjacencies, different initials, and different order of games. for uniqForce in individStrength: config.uniqstrength = uniqForce for upperBound in uppBound_list: """ Generate different adjacency matrix with different prob. of interaction between different communities """ state.adj = og_adj.CommunitiesMatrix(communityPopSize, numberOfCommunities, upperBound) print"(upperBound, uniqForce) = (", upperBound, "," , uniqForce , ")" for countInitials in noInitials: # Pick three communities with similar opinions (stable state) to begin with! state.initialOpinions = np.zeros((config.popSize, 1)) state.initialOpinions[0:25] = np.random.uniform(low=0.08, high=.1, size=(25,1)) state.initialOpinions[25:50] = np.random.uniform(low=0.49, high=.51, size=(25,1)) state.initialOpinions[50:75] = np.random.uniform(low=0.9, high= .92, size=(25,1)) state.couplingWeights = og_coupling.weights_no_coupling(config.popSize, config.ntopics) all_experiments_history = {} print "countInitials=", countInitials + 1 for gameOrders in noGames: #cProfile.run('og_core.run_until_convergence(config, state, ufuncs)') state = og_core.run_until_convergence(config, state, ufuncs) state.history = state.history[0:state.nextHistoryIndex,:,:] idx_IN_columns = [i for i in xrange(np.shape(state.history)[0]) if (i % (config.popSize)) == 0] state.history = state.history[idx_IN_columns,:,:] all_experiments_history[ 'experiment' + str(gameOrders+1)] = state.history og_io.saveMatrix('uB' + str(upperBound) + '*uS' + str(config.uniqstrength) + '*initCount' + str(countInitials+1) + '.mat', all_experiments_history) all_experiments_history.keys() time, population_size, no_of_topics = all_experiments_history['experiment1'].shape evolution = all_experiments_history['experiment1'].reshape(time, population_size) fig = plt.figure() plt.plot(evolution) plt.xlabel('Time') plt.ylabel('Opinionds') plt.title('Evolution of Opinions of 3 communities') fig.set_size_inches(10, 5) plt.show()
Community_identity.ipynb
HNoorazar/PyOpinionGame
gpl-3.0
In this example we will optimize the 2D Six-Hump Camel function (available in GPyOpt). We will assume that exact evaluations of the function are observed. The explicit form of the function is: $$f(x_1,x_2) = 4x_1^2 - 2.1x_1^4 + x_1^6/3 + x_1x_2 - 4x_2^2 + 4x_2^4$$
# imports used throughout this notebook
import numpy as np
import matplotlib.pyplot as plt
import GPyOpt

func = GPyOpt.objective_examples.experiments2d.sixhumpcamel()
manual/GPyOpt_constrained_optimization.ipynb
SheffieldML/GPyOpt
bsd-3-clause
Imagine that we were optimizing the function in the intervals $(-1,1)\times (-1.5,1.5)$. As usual, we can define these box constraints as:
space =[{'name': 'var_1', 'type': 'continuous', 'domain': (-1,1)}, {'name': 'var_2', 'type': 'continuous', 'domain': (-1.5,1.5)}]
manual/GPyOpt_constrained_optimization.ipynb
SheffieldML/GPyOpt
bsd-3-clause
This would be a standard case of optimizing the function in a hypercube. However, in this case we are going to study how to solve optimization problems with arbitrary constraints. In particular, we consider the problem of finding the minimum of the function in the region defined by $$-x_2 - .5 + |x_1| -\sqrt{1-x_1^2} \leq 0 $$ $$ x_2 + .5 - |x_1| -\sqrt{1-x_1^2} \leq 0 $$ We can define these constraints as
constraints = [{'name': 'constr_1', 'constraint': '-x[:,1] -.5 + abs(x[:,0]) - np.sqrt(1-x[:,0]**2)'}, {'name': 'constr_2', 'constraint': 'x[:,1] +.5 - abs(x[:,0]) - np.sqrt(1-x[:,0]**2)'}]
manual/GPyOpt_constrained_optimization.ipynb
SheffieldML/GPyOpt
bsd-3-clause
And create the feasible region of the problem by writing:
feasible_region = GPyOpt.Design_space(space = space, constraints = constraints)
manual/GPyOpt_constrained_optimization.ipynb
SheffieldML/GPyOpt
bsd-3-clause
Now, let's have a look at what we have. Let's make a plot of the feasible region and the function with the original box constraints. Note that the function .indicator_constraints(X) takes the value 1 if we are in the feasible region and 0 otherwise.
## Grid of points to make the plots grid = 400 bounds = feasible_region.get_continuous_bounds() X1 = np.linspace(bounds[0][0], bounds[0][1], grid) X2 = np.linspace(bounds[1][0], bounds[1][1], grid) x1, x2 = np.meshgrid(X1, X2) X = np.hstack((x1.reshape(grid*grid,1),x2.reshape(grid*grid,1))) ## Check the points in the feasible region. masked_ind = feasible_region.indicator_constraints(X).reshape(grid,grid) masked_ind = np.ma.masked_where(masked_ind > 0.5, masked_ind) masked_ind[1,1]=1 ## Make the plots plt.figure(figsize=(14,6)) # Feasible region plt.subplot(121) plt.contourf(X1, X2, masked_ind ,100, cmap= plt.cm.bone, alpha=1,origin ='lower') plt.text(-0.25,0,'FEASIBLE',size=20) plt.text(-0.3,1.1,'INFEASIBLE',size=20,color='white') plt.subplot(122) plt.plot() plt.contourf(X1, X2, func.f(X).reshape(grid,grid),100, alpha=1,origin ='lower') plt.plot(np.array(func.min)[:,0], np.array(func.min)[:,1], 'r.', markersize=20, label=u'Minimum') plt.legend() plt.title('Six-Hump Camel function',size=20)
manual/GPyOpt_constrained_optimization.ipynb
SheffieldML/GPyOpt
bsd-3-clause
The Six-Hump Camel function has two global minima. However, with the constraints that we are using, only one of the two is a valid one. We can see this by overlapping the two previous plots.
plt.figure(figsize=(6.5,6)) OB = plt.contourf(X1, X2, func.f(X).reshape(grid,grid),100,alpha=1) IN = plt.contourf(X1, X2, masked_ind ,100, cmap= plt.cm.bone, alpha=.5,origin ='lower') plt.text(-0.25,0,'FEASIBLE',size=20,color='white') plt.text(-0.3,1.1,'INFEASIBLE',size=20,color='white') plt.plot(np.array(func.min)[:,0], np.array(func.min)[:,1], 'r.', markersize=20, label=u'Minimum') plt.title('Six-Hump Camel with restrictions',size=20) plt.legend()
manual/GPyOpt_constrained_optimization.ipynb
SheffieldML/GPyOpt
bsd-3-clause
We will use the modular interface to solve this problem. We start by generating a random initial design of 10 points to start the optimization. We just need to do:
# --- CHOOSE the initial design
from numpy.random import seed
seed(123456)  # fixed seed
initial_design = GPyOpt.experiment_design.initial_design('random', feasible_region, 10)
manual/GPyOpt_constrained_optimization.ipynb
SheffieldML/GPyOpt
bsd-3-clause
Importantly, the points are always generated within the feasible region as we can check here:
plt.figure(figsize=(6.5,6)) OB = plt.contourf(X1, X2, func.f(X).reshape(grid,grid),100,alpha=1) IN = plt.contourf(X1, X2, masked_ind ,100, cmap= plt.cm.bone, alpha=.5,origin ='lower') plt.text(-0.25,0,'FEASIBLE',size=20,color='white') plt.text(-0.3,1.1,'INFEASIBLE',size=20,color='white') plt.plot(np.array(func.min)[:,0], np.array(func.min)[:,1], 'r.', markersize=20, label=u'Minimum') plt.title('Six-Hump Camel with restrictions',size=20) plt.plot(initial_design[:,0],initial_design[:,1],'yx',label = 'Design') plt.legend()
manual/GPyOpt_constrained_optimization.ipynb
SheffieldML/GPyOpt
bsd-3-clause
Now, we choose the rest of the objects that we need to run the optimization. We will use a Gaussian process with parameters fitted using MLE and the Expected Improvement acquisition. We use the default BFGS optimizer for the acquisition. Evaluations of the function are done sequentially.
# --- CHOOSE the objective
objective = GPyOpt.core.task.SingleObjective(func.f)

# --- CHOOSE the model type
model = GPyOpt.models.GPModel(exact_feval=True, optimize_restarts=10, verbose=False)

# --- CHOOSE the acquisition optimizer
acquisition_optimizer = GPyOpt.optimization.AcquisitionOptimizer(feasible_region)

# --- CHOOSE the type of acquisition
acquisition = GPyOpt.acquisitions.AcquisitionEI(model, feasible_region, optimizer=acquisition_optimizer)

# --- CHOOSE an evaluator (how evaluations are collected; here, sequentially)
evaluator = GPyOpt.core.evaluators.Sequential(acquisition)
manual/GPyOpt_constrained_optimization.ipynb
SheffieldML/GPyOpt
bsd-3-clause
Next, we create the BO object to run the optimization.
# BO object bo = GPyOpt.methods.ModularBayesianOptimization(model, feasible_region, objective, acquisition, evaluator, initial_design)
manual/GPyOpt_constrained_optimization.ipynb
SheffieldML/GPyOpt
bsd-3-clause
We first run the optimization for 5 steps and check how the result looks.
# --- Stop conditions max_time = None max_iter = 5 tolerance = 1e-8 # distance between two consecutive observations # Run the optimization bo.run_optimization(max_iter = max_iter, max_time = max_time, eps = tolerance, verbosity=False) bo.plot_acquisition()
manual/GPyOpt_constrained_optimization.ipynb
SheffieldML/GPyOpt
bsd-3-clause
See how the optimization is only done within the feasible region; outside of it the value of the acquisition is zero, so no evaluation is selected there. We run 25 more iterations to see the acquisition and the convergence.
# Run the optimization max_iter = 25 bo.run_optimization(max_iter = max_iter, max_time = max_time, eps = tolerance, verbosity=False) bo.plot_acquisition() bo.plot_convergence() # Best found value np.round(bo.x_opt,2) # True min np.round(func.min[0],2)
manual/GPyOpt_constrained_optimization.ipynb
SheffieldML/GPyOpt
bsd-3-clause
Now we build something more complicated to show where indexing can get tricky ...
import numpy as np import pandas as pd m3d=np.random.rand(3,4,5) m3d # how does Pandas arrange the data? n3d=m3d.reshape(4,3,5) n3d
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Notice which numbers moved where. This would seem to indicate that in shape(a,b,c): - a is like the object's depth (how many groupings of rows/columns are there?) - b is like the object's rows per grouping (how many rows in each subgroup) - c is like the object's columns What if the object had 4 dimensions?
o3d=np.random.rand(2,3,4,5) o3d
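To make the mapping from shape to index positions concrete, here is a small sketch using the o3d array created above; the comments show the shapes NumPy reports.

print(o3d.shape)           # (2, 3, 4, 5)
print(o3d[0].shape)        # (3, 4, 5)  -> one "hyper grouping"
print(o3d[0, 1].shape)     # (4, 5)     -> one subgroup: 4 rows x 5 columns
print(o3d[0, 1, 2].shape)  # (5,)       -> one row
print(o3d[0, 1, 2, 3])     # a single element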
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Just analyzing how the numbers are arranged, we see that in shape(a,b,c,d), it just added the new extra dimensional layer to the front of the list so that now: - a = larger hyper grouping (2 of them) - b = first subgroup within (3 of them) - c = rows within these groupings (4 of them) - d = columns within these groupings (5 of them) It appears that rows always come before columns, and then groupings of rows and columns, and groupings of groupings, etc., are added to the front of the index chain. Building something complex just to drill in more on how to access sub-elements:
# some simple arrays: simp1=np.array([[1,2,3,4,5]]) simp2=np.array([[10,9,8,7,6]]) simp3=[11,12,13] # a dictionary dfrm1 = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'], 'year': [2000, 2001, 2002, 2001, 2002], 'population': [1.5, 1.7, 3.6, 2.4, 2.9]} # convert dictionary to DataFrame dfrm1 = pd.DataFrame(dfrm1) dfrm1 # pandas indexing works a little differently: # * column headers are keys # * as shown here, can ask for columns, rows, and a filter based on values in the columns # in any order and the indexing will still work print(dfrm1["population"][dfrm1["population"] > 1.5][2:4]) # all of these return values from "population" column only print("---") # where "population" > 1.5 print(dfrm1["population"][2:4][dfrm1["population"] > 1.5]) # and row index is between 2 and 4 print("---") print(dfrm1[dfrm1["population"] > 1.5]["population"][2:4]) print("---") print(dfrm1[dfrm1["population"] > 1.5][2:4]["population"]) print("---") print(dfrm1[2:4]["population"][dfrm1["population"] > 1.5]) print("---") print(dfrm1[2:4][dfrm1["population"] > 1.5]["population"]) # this last one triggers a warning # breaking the above apart: print(dfrm1[dfrm1["population"] > 1.5]) # all rows and columns filtered by "population" values > 1.5 print("---") print(dfrm1["population"]) # return whole "population" column print("---") print(dfrm1[2:4]) # return whole rows 2 to 4 crazyList = [simp1, m3d, simp2, n3d, simp3, dfrm1, o3d] # Accessing the dataframe inside the list now that it is a sub element: crazyList[5]["population"][crazyList[5]["population"] > 1.5][2:4]
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Now let's access other stuff in the list ...
crazyList[1] # this is the second object of the list (Python like many languages starts indicies at 0) # this is the full output of m3d crazyList[0] # after the above demo, no surprises here ... simp1 was the first object we added to the list
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
In the tests that follow, anything that does not work is wrapped in exception handling (that displays the error) so this notebook can be run from start to finish. Note that it is not good practice to use a catch-all for all errors; in real code, errors should be handled individually by type. How do we access the first index (element 2) of the first array object in our complex list (which resides at index 0)?
try: # not this way ... crazyList[0][1] except Exception as ex: print("%s%s %s" %(type(ex), ":", ex)) # let's look at what we built: all the objects are here but are no longer named so we need to get indices right crazyList # note that both of these get the same data, but also note the difference in the format: "[[]]" and array([])". # look at the source and you will see we are drilling in at different levels of "[]" # there can be situations in real coding where extra layers are created by accident so this example is good to know print(crazyList[0]) crazyList[0][0]
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Sub-element 4 is a simple list nested within crazyList: crazyList [ ... [content at index position 4] ...]
print(crazyList[4]) crazyList[4][1] # get 2nd element in the list within a list at position 4 (object 4 in the list)
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
So what about the array? The array was originally built in "simp1" and then added to crazyList. Its source looks like this:
print(type(simp1)) print(simp1.shape) print(simp1) print(simp1[0]) # note that the first two give us the same thing (whole array) simp1[0][1]
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Note the [] versus the [[]] ... our "simple arrays" were copied from an example, but are actually nested objects of 1 list of 5 elements forming the first object inside the array. A true simple array would look like this:
trueSimp1 = np.array([10, 9, 8, 7, 6])
print(trueSimp1.shape)  # note: the shape (5,) shows this is a true 1-D array of 5 elements
trueSimp1
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Let's add the true simple array to our crazy object and then create working examples of accessing everything ...
crazyList.append(trueSimp1) # append mutates so this changes the original list crazyList # Warning! if you re-run this cell, you will keep adding more copies of the last object # to the end of this object. To be consistent with content in this NB # clear and re-run the whole notebook should that happen # The elements at either end of crazyList: print(crazyList[0]) print(crazyList[-1]) # ask for last item by counting backwards from the end # get a specific value by index from within the subelements at either end: print(crazyList[0][0][2]) # extra zero for the extra [] .. structurally this is really [0 [0 ], [1] ] but 1 does not exist print(crazyList[-1][2])
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Looking at just that first element again:
crazyList[0] # first array to change
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
remember that this object if it were not in a list would be accessed like so:
simp1[0][1] # second element inside it
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
... so inside crazyList ? The answer is that the list is one level deep and the elements are yet another level in:
crazyList[0] crazyList[0][0][1]
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
<a id="mutation" name="mutation"></a> Sidebar: Mutation and Related Concerns Try this test and you will see it does not work: crazyList2 = crazyList.append(trueSimp1) What it did: crazyList got an element appended to the end and crazyList2 came out the other side empty. This is because append() returns None and operates on the original. The copy then gets nothing and the original gets an element added to it. To set up crazyList2 to append to only it, we might be tempted to try something like what is shown below, but if we do, note how it mutates:
aList = [1,2,3] bList = aList print(aList) print(bList)
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Note how the second is really a reference to the first so changing one changes the other:
aList[0] = 0 bList[1] = 1 bList.append(4) print(aList) print(bList)
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
For a simple list ... we can fix that by simply using list() during our attempt to create the copy:
bList = list(aList) bList[0] = 999 aList[1] = 998 print(aList) print(bList) bList.append(19) print(aList) print(bList)
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Mutation is avoided. Now we can change our two objects independently. However, with complex objects like crazyList, this does not work. The following will illustrate the problem and later, options to get around it are presented.
crazyList2 = list(crazyList) crazyList2
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Now we make some changes:
len(crazyList2)-1 # this is the position of the object we want to change crazyList2[7][1] = 13 # this will change element 2 of last object in crazyList2
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Now we'll look at just the last object in both "crazyLists" showing what changed:
print(crazyList[7]) print(crazyList2[7])
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
The "13" replaced the value at this location in both crazyList and crazyList2. We are not dealing with true copies but rather references to the same data as further illustrated here:
crazyList[7][1] = 9  # change one of them again and both change
print(crazyList[7])
print(crazyList2[7])
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
So ... how do we make a copy that does not mutate (one we can change without changing the other)?<br/> Let's look at some things that don't work first ...
crazyList3 = crazyList[:] # according to online topics ... this was supposed to work for the reason outlined below # it probably works with some complex objects but does not work with this one # some topics online indicate this should have worked because: # * the problem is avoided by "slicing" the original so Python behaves as if the thing you are copying is different # * if you used crazyList[2:3] ==> you would get a slice of the original you could store in the copy # * [:] utilizes slicing syntax but indicates "give me the whole thing" since by default, empty values are the min and max # indexing limits crazyList3[7][1] = 13 # this will change element 2 of the last object print(crazyList[7]) print(crazyList3[7]) # what if we do this? (slice it and then add back a missing element) crazyList3 = crazyList[:-1] print(len(crazyList3)) print(len(crazyList)) # crazyList 3 is now one element shorter than crazyList crazyList3.append(crazyList[7]) # add back missing element from crazyList print(len(crazyList3)) print(len(crazyList)) crazyList3[7][1] = 9 # this will change element 2 of the last object print(crazyList[7]) # note how again, both lists change print(crazyList3[7])
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Python is hard to fool ... At first, I considered that we might now have two lists, but with just element 7 passed in by reference, and so it mutates. But this shows our whole lists are still mutating:
print("before:") print(crazyList[4]) print(crazyList3[4]) crazyList3[4][0] = 14 print("after:") print(crazyList[4]) print(crazyList3[4]) # try other tests of other elements and you will get same results
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
deepcopy() comes from the copy library and the commands are documented at Python.org. For this situation, this solution seems to work for when mutation is undesirable:
import copy crazyList4 = copy.deepcopy(crazyList) print("before:") print(crazyList[4]) print(crazyList4[4]) crazyList4[4][0] = 15 print("") print("after:") print(crazyList[4]) print(crazyList4[4])
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Should even deepcopy() not work, this topic online may prove helpful in these situations: Stack Overflow: When Deep Copy is not Enough. <a id="indexing" name="indexing"></a> Finding The Index of a Value Suppose we didn't know how to find the element but we knew the value we were looking for? How to get its index?
print(stupidList) print(stupidList[1].index(5)) # this works on lists # but for nested lists, you would need to loop through each sublist and handle the error that # gets thrown each time it does not find the answer for element in stupidList: try: test_i = element.index(5) except Exception as ex: print("%s%s %s" %(type(ex), ":", ex)) print(test_i) # this strategy will not work on numpy arrays though try: crazyList[0].index(2) except Exception as anyE: print(type(anyE), anyE) # because we have a list containing numpy arrays, we could look in each one like this: print(crazyList[0]) np.where(crazyList[0]==2) # the above indicates that 2 lives here: crazyList[0][0][1] # started with crazyList[0], then found it at [0][1] inside the data structure # For floating point numbers, the level of precision matters # details on how this works are presented in this notebook: TMWP_np_where_and_floatingPoint_numbers.ipynb # the simple test in the cells that follow should help illustrate the problem and what to do, but # see aforementioned notebook for more detail # to perform a where() test on a structure like this, it is important to note that print() # rounds the result to 8 decimal places. The real underlying numbers have more decimal places print(crazyList2[1]); print("") print(crazyList2[1][2][3][4]) # get a number to test with print("{0:.20}".format(crazyList2[1][2][3][4])) # show more decimal places of the test number # Warning! If you re-run this notebook, new random nubers are generated and the value used for the test in this # cell will probably then fail. To fix this, re-run previous cell and copy in the final number shown # above up to at least 17 decimal places. print(np.where(crazyList2[1]==0.95881217854380618)) # number copied from output of previous line up to 17 decimal places # np.where() can find this, but will also return other values # that match up to the first 16 decimal places (if they exist) # precision appears to be up to 16 decimal places on a 32 bit machine # np.isclose # for finding less precise answers: finds numbers that "are close" print(np.isclose(crazyList2[1], 0.95881)) print("") print(np.where(np.isclose(crazyList2[1], 0.95881))) # note that when numbers are "close" this returns multiple values # in this case (crazyList2) only one number was "close" # more detailed testing is provided in: # TMWP_np_where_and_floatingPoint_numbers.ipynb
PY_Basics/TMWP_PY_CrazyList_Indexing_and_Related_Experiments.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Getting the data ready for work If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
#get the data in gslib format into a pandas Dataframe mydata= pygslib.gslib.read_gslib_file('../datasets/cluster.dat') # This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code # so, we are adding constant elevation = 0 and a dummy BHID = 1 mydata['Zlocation']=0. mydata['bhid']=1. # printing to verify results print (' \n **** 5 first rows in my datafile \n\n ', mydata.head(n=5))
doc/source/Ipython_templates/gamv3D.ipynb
opengeostat/pygslib
mit
This code creates a CP model container that allows the use of constraints that are specific to constraint programming or to scheduling. Declarations of decision variables Variable declarations define the type of each variable in the model. For example, to create a variable that equals the amount of material shipped from location i to location j, a variable named ship can be created as follows: <code> ship = [[integer_var(min=0) for j in range(N)] for i in range(N)] </code> This code declares an array (list of lists in Python) of non-negative integer decision variables; <code>ship[i][j]</code> is the decision variable handling the amount of material shipped from location i to location j. For scheduling there are specific additional decision variables, namely: * interval variables * sequence variables. Activities, operations and tasks are represented as interval decision variables. An interval has a start, an end, a length, and a size. An interval variable allows these values to be variable within the model. The start is the lower endpoint of the interval and the end is the upper endpoint of the interval. By default, the size is equal to the length, which is the difference between the end and the start of the interval. In general, the size is a lower bound on the length. An interval variable may also be optional, and its presence in the solution is represented by a decision variable. If an interval is not present in the solution, any constraint on this interval acts as if the interval were not there. The exact semantics depend on the specific constraint. The following example contains a dictionary of interval decision variables where the sizes of the interval variables are fixed and the keys are 2-dimensional: <code> itvs = {(h,t) : mdl.interval_var(size = Duration[t]) for h in Houses for t in TaskNames} </code> Objective function The objective function is an expression that has to be optimized. This function consists of variables and data that have been declared earlier in the model. The objective function is introduced by either the minimize or the maximize function. For example: <code> mdl.add(mdl.minimize(mdl.endOf(tasks["moving"]))) </code> indicates that the end of the interval variable <code>tasks["moving"]</code> needs to be minimized. Constraints The constraints indicate the conditions that are necessary for a feasible solution to the model. Several types of constraints can be placed on interval variables: * precedence constraints, which ensure the relative positions of intervals in the solution (for example, a precedence constraint can model a requirement that an interval a must end before interval b starts, optionally with some minimum delay z); * no overlap constraints, which ensure that positions of intervals in the solution are disjoint in time; * span constraints, which ensure that one interval covers the intervals in a set of intervals; * alternative constraints, which ensure that exactly one interval of a set of intervals is present in the solution; * synchronize constraints, which ensure that a set of intervals start and end at the same time as a given interval variable if it is present in the solution; * cumulative expression constraints, which restrict the bounds on the domains of cumulative expressions. Example This section provides a complete example model that can be tested. The problem is a house building problem. There are ten tasks of fixed size, and each of them needs to be assigned a starting time.
The statements for creating the interval variables that represent the tasks are:
masonry = mdl0.interval_var(size=35) carpentry = mdl0.interval_var(size=15) plumbing = mdl0.interval_var(size=40) ceiling = mdl0.interval_var(size=15) roofing = mdl0.interval_var(size=5) painting = mdl0.interval_var(size=10) windows = mdl0.interval_var(size=5) facade = mdl0.interval_var(size=10) garden = mdl0.interval_var(size=5) moving = mdl0.interval_var(size=5)
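The precedence constraints referred to in the next cell are added in a part of the original notebook not shown here; a hedged sketch of what they could look like (assuming from docplex.cp.model import *, as in the later cells, so that end_before_start is in scope) is:

# Hedged sketch: precedence constraints matching the task table described in the text.
mdl0.add(end_before_start(masonry, carpentry))
mdl0.add(end_before_start(masonry, plumbing))
mdl0.add(end_before_start(masonry, ceiling))
mdl0.add(end_before_start(carpentry, roofing))
mdl0.add(end_before_start(ceiling, painting))
mdl0.add(end_before_start(roofing, windows))
mdl0.add(end_before_start(roofing, facade))
mdl0.add(end_before_start(plumbing, facade))
mdl0.add(end_before_start(roofing, garden))
mdl0.add(end_before_start(plumbing, garden))
mdl0.add(end_before_start(windows, moving))
mdl0.add(end_before_start(facade, moving))
mdl0.add(end_before_start(garden, moving))
mdl0.add(end_before_start(painting, moving))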
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Here, the special constraint end_before_start() ensures that one interval variable ends before the other starts. If one of the interval variables is not present, the constraint is automatically satisfied. Calling the solve
# Solve the model print("\nSolving model....") msol0 = mdl0.solve(TimeLimit=10) print("done")
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Displaying the solution The interval variables and precedence constraints completely describe this simple problem. Print statements display the solution, after values have been assigned to the start and end of each of the interval variables in the model.
if msol0: var_sol = msol0.get_var_solution(masonry) print("Masonry : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol0.get_var_solution(carpentry) print("Carpentry : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol0.get_var_solution(plumbing) print("Plumbing : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol0.get_var_solution(ceiling) print("Ceiling : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol0.get_var_solution(roofing) print("Roofing : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol0.get_var_solution(painting) print("Painting : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol0.get_var_solution(windows) print("Windows : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol0.get_var_solution(facade) print("Facade : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol0.get_var_solution(moving) print("Moving : {}..{}".format(var_sol.get_start(), var_sol.get_end())) else: print("No solution found")
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
To understand the solution found by CP Optimizer to this satisfiability scheduling problem, consider the line: <code>Masonry : 0..35</code> The interval variable representing the masonry task, which has size 35, has been assigned the interval [0,35). Masonry starts at time 0 and ends at the time point 35. Graphical view of these tasks can be obtained with following additional code:
import docplex.cp.utils_visu as visu import matplotlib.pyplot as plt %matplotlib inline #Change the plot size from pylab import rcParams rcParams['figure.figsize'] = 15, 3 if msol0: wt = msol0.get_var_solution(masonry) visu.interval(wt, 'lightblue', 'masonry') wt = msol0.get_var_solution(carpentry) visu.interval(wt, 'lightblue', 'carpentry') wt = msol0.get_var_solution(plumbing) visu.interval(wt, 'lightblue', 'plumbing') wt = msol0.get_var_solution(ceiling) visu.interval(wt, 'lightblue', 'ceiling') wt = msol0.get_var_solution(roofing) visu.interval(wt, 'lightblue', 'roofing') wt = msol0.get_var_solution(painting) visu.interval(wt, 'lightblue', 'painting') wt = msol0.get_var_solution(windows) visu.interval(wt, 'lightblue', 'windows') wt = msol0.get_var_solution(facade) visu.interval(wt, 'lightblue', 'facade') wt = msol0.get_var_solution(moving) visu.interval(wt, 'lightblue', 'moving') visu.show()
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Solving a problem consists of finding a value for each decision variable so that all constraints are satisfied. It is not always known beforehand whether there is a solution that satisfies all the constraints of the problem. In some cases, there may be no solution. In other cases, there may be many solutions to a problem. Step 5: Solve the model and display the solution
# Solve the model print("\nSolving model....") msol1 = mdl1.solve(TimeLimit=20) print("done") if msol1: print("Cost will be " + str(msol1.get_objective_values()[0])) var_sol = msol1.get_var_solution(masonry) print("Masonry : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol1.get_var_solution(carpentry) print("Carpentry : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol1.get_var_solution(plumbing) print("Plumbing : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol1.get_var_solution(ceiling) print("Ceiling : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol1.get_var_solution(roofing) print("Roofing : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol1.get_var_solution(painting) print("Painting : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol1.get_var_solution(windows) print("Windows : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol1.get_var_solution(facade) print("Facade : {}..{}".format(var_sol.get_start(), var_sol.get_end())) var_sol = msol1.get_var_solution(moving) print("Moving : {}..{}".format(var_sol.get_start(), var_sol.get_end())) else: print("No solution found")
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Graphical display of the same result is available with:
import docplex.cp.utils_visu as visu import matplotlib.pyplot as plt %matplotlib inline #Change the plot size from pylab import rcParams rcParams['figure.figsize'] = 15, 3 if msol1: wt = msol1.get_var_solution(masonry) visu.interval(wt, 'lightblue', 'masonry') wt = msol1.get_var_solution(carpentry) visu.interval(wt, 'lightblue', 'carpentry') wt = msol1.get_var_solution(plumbing) visu.interval(wt, 'lightblue', 'plumbing') wt = msol1.get_var_solution(ceiling) visu.interval(wt, 'lightblue', 'ceiling') wt = msol1.get_var_solution(roofing) visu.interval(wt, 'lightblue', 'roofing') wt = msol1.get_var_solution(painting) visu.interval(wt, 'lightblue', 'painting') wt = msol1.get_var_solution(windows) visu.interval(wt, 'lightblue', 'windows') wt = msol1.get_var_solution(facade) visu.interval(wt, 'lightblue', 'facade') wt = msol1.get_var_solution(moving) visu.interval(wt, 'lightblue', 'moving') visu.show()
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 7: Create the transition times Transition times can be modeled using tuples with three elements. The first element is the interval variable type of one task, the second is the interval variable type of the other task and the third element of the tuple is the transition time from the first to the second. An integer interval variable type can be associated with each interval variable. Given an interval variable a1 that precedes (not necessarily directly) an interval variable a2 in a sequence of non-overlapping interval variables, the transition time between a1 and a2 is an amount of time that must elapse between the end of a1 and the beginning of a2.
transitionTimes = transition_matrix([[int(abs(i - j)) for j in Houses] for i in Houses])
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 11: Solve the model The search for an optimal solution in this problem can potentially take a long time. A fail limit can be placed on the solve process to limit the search. The search stops when the fail limit is reached, even if optimality of the current best solution is not guaranteed. The code for limiting the solve process is provided below:
# Solve the model print("\nSolving model....") msol2 = mdl2.solve(FailLimit=30000) print("done") if msol2: print("Cost will be " + str(msol2.get_objective_values()[0])) else: print("No solution found") # Viewing the results of sequencing problems in a Gantt chart # (double click on the gantt to see details) import docplex.cp.utils_visu as visu import matplotlib.pyplot as plt %matplotlib inline #Change the plot size from pylab import rcParams rcParams['figure.figsize'] = 15, 3 def showsequence(msol, s, setup, tp): seq = msol.get_var_solution(s) visu.sequence(name=s.get_name()) vs = seq.get_value() for v in vs: nm = v.get_name() visu.interval(v, tp[TaskNames_ids[nm]], nm) for i in range(len(vs) - 1): end = vs[i].get_end() tp1 = tp[TaskNames_ids[vs[i].get_name()]] tp2 = tp[TaskNames_ids[vs[i + 1].get_name()]] visu.transition(end, end + setup.get_value(tp1, tp2)) if msol2: visu.timeline("Solution for SchedSetup") for w in WorkerNames: types=[h for h in Houses for t in TaskNames if Worker[t]==w] showsequence(msol2, workers[w], transitionTimes, types) visu.show()
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Chapter 4. Adding calendars to the house building problem This chapter introduces calendars into the house building problem, a problem of scheduling the tasks involved in building multiple houses in such a manner that minimizes the overall completion date of the houses. There are two workers, each of whom must perform a given subset of the necessary tasks. Each worker has a calendar detailing on which days he does not work, such as weekends and holidays. On a worker's day off, he does no work on his tasks, and his tasks may not be scheduled to start or end on these days. Tasks that are in process by the worker are suspended during his days off. The following concepts are demonstrated: * use of the step functions, * use of an alternative version of the constraint no_overlap, * use of intensity expressions, * use of the constraints forbid_start and forbid_end, * use of the length and size of an interval variable. Problem to be solved The problem consists of assigning start dates to a set of tasks in such a way that the schedule satisfies temporal constraints and minimizes a criterion. The criterion for this problem is to minimize the overall completion date. For each task type in the house building project, the following table shows the size of the task in days along with the tasks that must be finished before the task can start. In addition, each type of task can be performed by a given one of the two workers, Jim and Joe. A worker can only work on one task at a time. Once started, a task may be suspended during a worker's days off, but may not be interrupted by another task. House construction tasks are detailed in the following table:

| Task | Duration | Worker | Preceding tasks |
|-----------|----------|--------|-----------------------------------|
| masonry | 35 | Joe | |
| carpentry | 15 | Joe | masonry |
| plumbing | 40 | Jim | masonry |
| ceiling | 15 | Jim | masonry |
| roofing | 5 | Joe | carpentry |
| painting | 10 | Jim | ceiling |
| windows | 5 | Jim | roofing |
| facade | 10 | Joe | roofing, plumbing |
| garden | 5 | Joe | roofing, plumbing |
| moving | 5 | Jim | windows, facade, garden, painting |

Solving the problem consists of determining starting dates for the tasks such that the overall completion date is minimized. Step 1: Describe the problem The first step in modeling the problem is to write a natural language description of the problem, identifying the decision variables and the constraints on these variables. What is the known information in this problem? There are five houses to be built by two workers. For each house, there are ten house building tasks, each with a given size. For each task, there is a list of tasks that must be completed before the task can start. Each task must be performed by a given worker, and each worker has a calendar listing his days off. What are the decision variables or unknowns in this problem? The unknowns are the start and end times of the tasks, which also determine the overall completion time. The actual length of a task depends on its position in time and on the calendar of the associated worker. What are the constraints on these variables? There are constraints that specify that a particular task may not begin until one or more given tasks have been completed. In addition, there are constraints that specify that a worker can be assigned to only one task at a time. A task cannot start or end during the associated worker's days off. What is the objective? The objective is to minimize the overall completion date.
Step 2: Prepare data A scheduling model starts with the declaration of the engine as follows:
import sys
from docplex.cp.model import *

mdl3 = CpoModel()

NbHouses = 5
WorkerNames = ["Joe", "Jim"]
TaskNames = ["masonry", "carpentry", "plumbing", "ceiling", "roofing",
             "painting", "windows", "facade", "garden", "moving"]
Duration = [35, 15, 40, 15, 5, 10, 5, 10, 5, 5]
Worker = {"masonry": "Joe", "carpentry": "Joe", "plumbing": "Jim", "ceiling": "Jim",
          "roofing": "Joe", "painting": "Jim", "windows": "Jim", "facade": "Joe",
          "garden": "Joe", "moving": "Jim"}

Precedences = {("masonry", "carpentry"), ("masonry", "plumbing"),
               ("masonry", "ceiling"), ("carpentry", "roofing"),
               ("ceiling", "painting"), ("roofing", "windows"),
               ("roofing", "facade"), ("plumbing", "facade"),
               ("roofing", "garden"), ("plumbing", "garden"),
               ("windows", "moving"), ("facade", "moving"),
               ("garden", "moving"), ("painting", "moving")}

Houses = range(NbHouses)
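The cells that actually declare the worker calendars and the interval variables are not reproduced in this excerpt. As a rough, hypothetical sketch (the Breaks day-off data and the horizon value below are invented for illustration, not the tutorial's data), a worker calendar can be expressed as a CpoStepFunction used both as an intensity function and in forbid_start / forbid_end constraints:

Breaks = {"Joe": [(5, 12), (19, 26)], "Jim": [(26, 40)]}   # assumed days off
horizon = NbHouses * sum(Duration)

Calendar = {}
for w in WorkerNames:
    step = CpoStepFunction()
    step.set_value(0, horizon, 100)        # 100% availability by default
    for b in Breaks[w]:
        step.set_value(b[0], b[1], 0)      # no work on days off
    Calendar[w] = step

itvs = {}
for h in Houses:
    for i, t in enumerate(TaskNames):
        # the intensity function stretches a task over the worker's days off
        itvs[h, t] = interval_var(size=Duration[i],
                                  intensity=Calendar[Worker[t]],
                                  name="H{}-{}".format(h, t))
        # tasks may not start or end on a day off
        mdl3.add(forbid_start(itvs[h, t], Calendar[Worker[t]]))
        mdl3.add(forbid_end(itvs[h, t], Calendar[Worker[t]]))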
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 9: Solve the model The search for an optimal solution in this problem could potentially take a long time, so a fail limit has been placed on the solve process. The search will stop when the fail limit is reached, even if optimality of the current best solution is not guaranteed. The code for limiting the solve process is provided below:
# Solve the model
print("\nSolving model....")
msol3 = mdl3.solve(FailLimit=30000)
print("done")

if msol3:
    print("Cost will be " + str(msol3.get_objective_values()[0]))

    # Allocate tasks to workers
    tasks = {w: [] for w in WorkerNames}
    for k, v in Worker.items():
        tasks[v].append(k)

    types = {t: i for i, t in enumerate(TaskNames)}

    import docplex.cp.utils_visu as visu
    import matplotlib.pyplot as plt
    %matplotlib inline
    # Change the plot size
    from pylab import rcParams
    rcParams['figure.figsize'] = 15, 3

    visu.timeline('Solution SchedCalendar')
    for w in WorkerNames:
        visu.panel()
        visu.pause(Calendar[w])
        visu.sequence(name=w,
                      intervals=[(msol3.get_var_solution(itvs[h, t]), types[t], t)
                                 for t in tasks[w] for h in Houses])
    visu.show()
else:
    print("No solution found")
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Chapter 5. Using cumulative functions in the house building problem

Some tasks must necessarily take place before other tasks, and each task has a predefined duration. Moreover, there are three workers, and each task requires any one of the three workers. A worker can be assigned to at most one task at a time. In addition, there is a cash budget with a starting balance. Each task consumes a certain amount of the budget at the start of the task, and the cash balance is increased every 60 days.

This chapter introduces:
* the modeling function cumul_function,
* the functions pulse, step, step_at_start and step_at_end.

Problem to be solved

The problem consists of assigning start dates to a set of tasks in such a way that the schedule satisfies temporal constraints and minimizes a criterion. The criterion for this problem is to minimize the overall completion date. Each task requires 200 dollars per day of the task, payable at the start of the task. Every 60 days, starting at day 0, the amount of 30,000 dollars is added to the cash balance.

For each task type in the house building project, the following table shows the duration of the task in days along with the tasks that must be finished before the task can start. Each task requires any one of the three workers. A worker can only work on one task at a time; each task, once started, may not be interrupted.

House construction tasks:

| Task      | Duration | Preceding tasks                   |
|-----------|----------|-----------------------------------|
| masonry   | 35       |                                   |
| carpentry | 15       | masonry                           |
| plumbing  | 40       | masonry                           |
| ceiling   | 15       | masonry                           |
| roofing   | 5        | carpentry                         |
| painting  | 10       | ceiling                           |
| windows   | 5        | roofing                           |
| facade    | 10       | roofing, plumbing                 |
| garden    | 5        | roofing, plumbing                 |
| moving    | 5        | windows, facade, garden, painting |

There is an earliest starting date for each of the five houses that must be built.

| House | Earliest starting date |
|-------|------------------------|
| 0     | 31                     |
| 1     | 0                      |
| 2     | 90                     |
| 3     | 120                    |
| 4     | 90                     |

Solving the problem consists of determining starting dates for the tasks such that the overall completion date is minimized.

Step 1: Describe the problem

The first step in modeling and solving the problem is to write a natural language description of the problem, identifying the decision variables and the constraints on these variables.

What is the known information in this problem?
There are five houses to be built by three workers. For each house, there are ten house building tasks, each with a given size and cost. For each task, there is a list of tasks that must be completed before the task can start. There is a starting cash balance of a given amount and, every sixty days, the cash balance is increased by a given amount.

What are the decision variables or unknowns in this problem?
The unknown is the point in time at which each task will start. Once the starting dates have been fixed, the overall completion date will also be fixed.

What are the constraints on these variables?
There are constraints that specify that a particular task may not begin until one or more given tasks have been completed. Each task requires any one of the three workers. In addition, there are constraints that specify that a worker can be assigned to only one task at a time. Before a task can start, the cash balance must be large enough to pay the cost of the task.

What is the objective?
The objective is to minimize the overall completion date.
Step 2: Prepare data In the related data file, the data provided includes the number of houses (NbHouses), the number of workers (NbWorkers), the names of the tasks (TaskNames), the sizes of the tasks (Duration), the precedence relations (Precedences), and the earliest start dates of the houses (ReleaseDate). As each house has an earliest starting date, the task interval variables are declared to have a start date no earlier than the release date of the associated house. The ending dates of the tasks are not constrained, so the upper value of the range for the variables is maxint.
NbWorkers = 3
NbHouses = 5

TaskNames = {"masonry", "carpentry", "plumbing",
             "ceiling", "roofing", "painting",
             "windows", "facade", "garden", "moving"}

Duration = [35, 15, 40, 15, 5, 10, 5, 10, 5, 5]
ReleaseDate = [31, 0, 90, 120, 90]

Precedences = [("masonry", "carpentry"), ("masonry", "plumbing"),
               ("masonry", "ceiling"), ("carpentry", "roofing"),
               ("ceiling", "painting"), ("roofing", "windows"),
               ("roofing", "facade"), ("plumbing", "facade"),
               ("roofing", "garden"), ("plumbing", "garden"),
               ("windows", "moving"), ("facade", "moving"),
               ("garden", "moving"), ("painting", "moving")]

Houses = range(NbHouses)
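The modeling cells of this chapter are not part of this excerpt. A minimal, hedged sketch of how the worker usage and the cash balance could be built as cumulative expressions with pulse, step_at and step_at_start (variable names are assumptions; the later cells use mdl4 and itvs) might look like this:

mdl4 = CpoModel()                     # created here only for the sketch

itvs = {}
workers_usage = step_at(0, 0)         # cumulative number of busy workers
cash = step_at(0, 0)                  # cumulative cash balance
for h in Houses:
    for i, t in enumerate(TaskNames):
        itvs[h, t] = interval_var(size=Duration[i], name="H{}-{}".format(h, t))
        mdl4.add(start_of(itvs[h, t]) >= ReleaseDate[h])        # earliest start of the house
        workers_usage += pulse(itvs[h, t], 1)                   # one worker busy during the task
        cash -= step_at_start(itvs[h, t], 200 * Duration[i])    # cost paid when the task starts
for p in range(5):
    cash += step_at(60 * p, 30000)                              # periodic cash injection

mdl4.add(workers_usage <= NbWorkers)
mdl4.add(cash >= 0)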
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 10: Solve the model The search for an optimal solution in this problem could potentially take a long time, so a fail limit has been placed on the solve process. The search will stop when the fail limit is reached, even if optimality of the current best solution is not guaranteed. The code for limiting the solve process is:
# Solve the model
print("\nSolving model....")
msol4 = mdl4.solve(FailLimit=30000)
print("done")

if msol4:
    print("Cost will be " + str(msol4.get_objective_values()[0]))

    import docplex.cp.utils_visu as visu
    import matplotlib.pyplot as plt
    %matplotlib inline
    # Change the plot size
    from pylab import rcParams
    rcParams['figure.figsize'] = 15, 3

    workersF = CpoStepFunction()
    cashF = CpoStepFunction()
    for p in range(5):
        cashF.add_value(60 * p, INT_MAX, 30000)
    for h in Houses:
        for i, t in enumerate(TaskNames):
            itv = msol4.get_var_solution(itvs[h, t])
            workersF.add_value(itv.get_start(), itv.get_end(), 1)
            cashF.add_value(itv.start, INT_MAX, -200 * Duration[i])

    visu.timeline('Solution SchedCumul')
    visu.panel(name="Schedule")
    for h in Houses:
        for i, t in enumerate(TaskNames):
            visu.interval(msol4.get_var_solution(itvs[h, t]), h, t)
    visu.panel(name="Workers")
    visu.function(segments=workersF, style='area')
    visu.panel(name="Cash")
    visu.function(segments=cashF, style='area', color='gold')
    visu.show()
else:
    print("No solution found")
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 9: Solve the model The search for an optimal solution in this problem could potentially take a long time, so a fail limit has been placed on the solve process. The search will stop when the fail limit is reached, even if optimality of the current best solution is not guaranteed.
# Solve the model
print("\nSolving model....")
msol5 = mdl5.solve(FailLimit=30000)
print("done")

if msol5:
    print("Cost will be " + str(msol5.get_objective_values()[0]))

    worker_idx = {w: i for i, w in enumerate(Workers)}
    worker_tasks = [[] for w in range(nbWorkers)]  # Tasks assigned to a given worker
    for h in Houses:
        for s in Skills:
            worker = s[0]
            wt = wtasks[(h, s)]
            worker_tasks[worker_idx[worker]].append(wt)

    import docplex.cp.utils_visu as visu
    import matplotlib.pyplot as plt
    %matplotlib inline
    # Change the plot size
    from pylab import rcParams
    rcParams['figure.figsize'] = 15, 3

    visu.timeline('Solution SchedOptional', 0, Deadline)
    for i, w in enumerate(Workers):
        visu.sequence(name=w)
        for t in worker_tasks[worker_idx[w]]:
            wt = msol5.get_var_solution(t)
            if wt.is_present():
                # if desc[t].skills[w] == max(desc[t].skills):
                #     # Green-like color when task is using the most skilled worker
                #     color = 'lightgreen'
                # else:
                #     # Red-like color when task does not use the most skilled worker
                #     color = 'salmon'
                color = 'salmon'
                visu.interval(wt, color, wt.get_name())
    visu.show()
else:
    print("No solution found")
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 5: Create the transition times The transition time from a dirty state to a clean state is the same for all houses. As in Chapter 3, "Adding workers and transition times to the house building problem", a transition matrix ttime is created to represent the transition times between cleanliness states.
Index = {s: i for i, s in enumerate(AllStates)}

ttvalues = [[0, 0], [0, 0]]
ttvalues[Index["dirty"]][Index["clean"]] = 1

ttime = transition_matrix(ttvalues, name='TTime')
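The cells that declare the state functions are not shown in this excerpt. As a hedged illustration (mdl6 and the task interval variables are created in cells not included here, and the dirty/clean requirements below are examples rather than the tutorial's exact mapping), the transition matrix is typically attached to one state function per house and enforced with always_equal:

state = {}
for h in Houses:
    state[h] = state_function(ttime, name="house_" + str(h))
    # e.g. masonry requires a 'dirty' house while painting requires a 'clean' one
    mdl6.add(always_equal(state[h], task[h, 'masonry'], Index['dirty']))
    mdl6.add(always_equal(state[h], task[h, 'painting'], Index['clean']))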
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 9: Solve the model The search for an optimal solution in this problem could potentially take a long time, so a fail limit has been placed on the solve process. The search will stop when the fail limit is reached, even if optimality of the current best solution is not guaranteed. The code for limiting the solve process is given below:
# Solve the model
print("\nSolving model....")
msol6 = mdl6.solve(FailLimit=30000)
print("done")

if msol6:
    print("Cost will be " + str(msol6.get_objective_values()[0]))

    import docplex.cp.utils_visu as visu
    import matplotlib.pyplot as plt
    %matplotlib inline
    # Change the plot size
    from pylab import rcParams
    rcParams['figure.figsize'] = 15, 3

    workers_function = CpoStepFunction()
    for h in Houses:
        for t in TaskNames:
            itv = msol6.get_var_solution(task[h, t])
            workers_function.add_value(itv.get_start(), itv.get_end(), 1)

    visu.timeline('Solution SchedState')
    visu.panel(name="Schedule")
    for h in Houses:
        for t in TaskNames:
            visu.interval(msol6.get_var_solution(task[h, t]), h, t)
    visu.panel(name="Houses state")
    for h in Houses:
        f = state[h]
        visu.sequence(name=f.get_name(), segments=msol6.get_var_solution(f))
    visu.panel(name="Nb of workers")
    visu.function(segments=workers_function, style='line')
    visu.show()
else:
    print("No solution found")
examples/cp/jupyter/scheduling_tuto.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Type a., then press tab to see attributes: Alternatively, use the dir(a) command to see the attributes (ignore everything starting with __):
dir(a)
workshops/Durham/reference/basics-python.ipynb
joommf/tutorial
bsd-3-clause
Imagine we want to find out what the append attribute is: use help(a.append) or a.append? to learn more about an attribute:
help(a.append)
workshops/Durham/reference/basics-python.ipynb
joommf/tutorial
bsd-3-clause
Let's try this:
print(a)
a.append("New element")
print(a)
workshops/Durham/reference/basics-python.ipynb
joommf/tutorial
bsd-3-clause
Comments Anything following a # sign is considered a comment (to the end of the line)
d = 20e-9 # distance in metres
workshops/Durham/reference/basics-python.ipynb
joommf/tutorial
bsd-3-clause
Importing libraries The core Python commands can be extended by importing additional libraries. import syntax 1
import math
math.sin(0)
workshops/Durham/reference/basics-python.ipynb
joommf/tutorial
bsd-3-clause
import syntax 2
import math as m
m.sin(0)
workshops/Durham/reference/basics-python.ipynb
joommf/tutorial
bsd-3-clause
Functions A function is defined in Python using the def keyword. For example, the greet function accepts two input arguments, and concatenates them to become a greeting:
def greet(greeting, name):
    """Optional documentation string, enclosed in triple quotes.
    Can extend over multiple lines."""
    print(greeting + " " + name)

greet("Hello", "World")
greet("Bonjour", "tout le monde")
workshops/Durham/reference/basics-python.ipynb
joommf/tutorial
bsd-3-clause
In the above examples, the input arguments to the function have been identified by their order. In general, we prefer another way of passing the input arguments, because - it provides additional clarity and - the order of the arguments no longer matters.
greet(greeting="Hello", name="World")
greet(name="World", greeting="Hello")
workshops/Durham/reference/basics-python.ipynb
joommf/tutorial
bsd-3-clause
Note that the names of the input arguments can be displayed interactively if you type greet( and then press SHIFT+TAB (the cursor needs to be just to the right of the opening parenthesis). A loop
def say_hello(name):    # function
    print("Hello " + name)

# main program starts here
names = ["Landau", "Lifshitz", "Gilbert"]

for name in names:
    say_hello(name=name)
workshops/Durham/reference/basics-python.ipynb
joommf/tutorial
bsd-3-clause
Link Prediction The idea of link prediction was first proposed by Liben-Nowell and Kleinberg in 2004 as the following question: "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future?" It's an enticing idea and has led to many interesting developments in the network literature. For our example, the question could be rephrased as: "Given a snapshot of the Grey's Anatomy relationship network, can we infer which new relationships are likely to occur in the near future?" Sounds awesome, but how does it work? Jaccard Coefficient The most popular measures for link prediction analyze the "proximity" of nodes in a network. One way to measure proximity is to see what proportion of neighbors a pair of nodes share. This can be captured succinctly with the Jaccard index. In the context of a network, we're comparing sets of neighbors: $$ Jaccard = \frac{|\Gamma(u) \cap \Gamma(v)|}{|\Gamma(u) \cup \Gamma(v)|} $$ where $\Gamma(u)$ denotes the set of neighbors of $u$.
preds_jc = nx.jaccard_coefficient(GA)

pred_jc_dict = {}
for u, v, p in preds_jc:
    pred_jc_dict[(u, v)] = p

sorted(pred_jc_dict.items(), key=lambda x: x[1], reverse=True)[:10]

extra_attrs = {'finn': ('Finn Dandridge', 'M', 'S'), 'olivia': ('Olivia Harper', 'F', 'S'),
               'steve': ('Steve Murphy', 'M', 'S'), 'torres': ('Callie Torres', 'F', 'B'),
               'colin': ('Colin Marlow', 'M', 'S'), 'grey': ('Meredith Grey', 'F', 'S'),
               'mrs. seabury': ('Dana Seabury', 'F', 'S'), 'altman': ('Teddy Altman', 'F', 'S'),
               'tucker': ('Tucker Jones', 'M', 'S'), 'ben': ('Ben Warren', 'M', 'S'),
               "o'malley": ("George O'Malley", 'M', 'S'), 'thatch grey': ('Thatcher Grey', 'M', 'S'),
               'susan grey': ('Susan Grey', 'F', 'S'), 'derek': ('Derek Shepherd', 'M', 'S'),
               'chief': ('Richard Webber', 'M', 'S'), 'addison': ('Addison Montgomery', 'F', 'S'),
               'karev': ('Alex Karev', 'M', 'S'), 'hank': ('Hank', 'M', 'S'),
               'lexi': ('Lexie Grey', 'F', 'S'), 'adele': ('Adele Webber', 'F', 'S'),
               'owen': ('Owen Hunt', 'M', 'S'), 'sloan': ('Mark Sloan', 'M', 'S'),
               'arizona': ('Arizona Robbins', 'F', 'G'), 'izzie': ('Izzie Stevens', 'F', 'S'),
               'preston': ('Preston Burke', 'M', 'S'), 'kepner': ('April Kepner', 'M', 'S'),
               'bailey': ('Miranda Bailey', 'F', 'S'), 'ellis grey': ('Ellis Grey', 'F', 'S'),
               'denny': ('Denny Duquette', 'M', 'S'), 'yang': ('Cristina Yang', 'F', 'S'),
               'nancy': ('Nancy Shepherd', 'F', 'S'), 'avery': ('Jackson Avery', 'M', 'S')}

for i in GA.nodes():
    GA.node[i]["full_name"] = extra_attrs[i][0]
    GA.node[i]["gender"] = extra_attrs[i][1]
    GA.node[i]["orientation"] = extra_attrs[i][2]

GA.node['grey']
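As a quick sanity check on the formula, the coefficient for a single pair can be computed by hand from the two neighbor sets (the pair chosen below is an arbitrary example):

# Hand check of the Jaccard formula for one arbitrary pair of nodes
u, v = 'grey', 'sloan'
nu, nv = set(GA.neighbors(u)), set(GA.neighbors(v))
print(len(nu & nv) / len(nu | nv))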
notebooks/5. Link Prediction.ipynb
rtidatascience/connected-nx-tutorial
mit
Preferential Attachment The preferential attachment method mirrors the "rich get richer" idea -- nodes with more connections are the ones more likely to get future connections. Essentially, the measure is the product of a node pair's degrees: $$ PA = |\Gamma(u)| \bullet |\Gamma(v)|$$ where $\Gamma(u)$ denotes the set of neighbors (degree) of $u$.
preds_pa = nx.preferential_attachment(GA)

pred_pa_dict = {}
for u, v, p in preds_pa:
    pred_pa_dict[(u, v)] = p

sorted(pred_pa_dict.items(), key=lambda x: x[1], reverse=True)[:10]
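Again as a hand check of the formula, using the same arbitrary pair, the score is simply the product of the two node degrees:

u, v = 'grey', 'sloan'
print(GA.degree(u) * GA.degree(v))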
notebooks/5. Link Prediction.ipynb
rtidatascience/connected-nx-tutorial
mit
So far we have imported a dataset from a CSV file into a Pandas DataFrame using the read_csv() function. Then we displayed the data, first as a table, and secondly as a histogram.

Questions About the Data
There are a near infinite number of questions we could possibly ask about this data. But to get started, here are a few example questions that could be asked:
- How does length of employee titles correlate to salary?
- How much does the White House pay in total salary?
- Who are the highest and lowest paid staffers?
- What words are the most common in titles?

How does the length of employee titles correlate to salary?
Steps for figuring this out may look like the following:
1. Calculate the length of each employee title - should be able to use apply() to get this
2. Add a column to the DataFrame containing the length of the employee title
3. Plot length of employee title versus employee salary (could also use direct correlation, but a visual plot is good)
# Calculate the length of each employee's title and add to the DataFrame
white_house['LengthOfTitle'] = white_house['Position Title'].apply(len)
white_house.head()

# Plot the length of employee title versus salary to look for correlation
plt.plot(white_house['LengthOfTitle'], white_house['Salary'])
plt.title('How does length of employee titles correlate to salary?')
plt.xlabel('Length of Employee Title')
plt.ylabel('Salary ($)')
dataquest/JupyterNotebook/Basics.ipynb
tleonhardt/CodingPlayground
mit
Uh ok, maybe I was wrong about visualizing being great for detecting correlation ;-) It looks like there may be a weak positive correlation, but it is really hard to tell. Maybe we should just calculate the correlation numerically. Also, it looks like there are some low-salary outliers. Should we check to make sure we aren't mixing in monthly salaries with yearly ones?
# Get the values in Pay Basis and figure out how many unique ones there are
types_of_pay_basis = set(white_house['Pay Basis'])
types_of_pay_basis
dataquest/JupyterNotebook/Basics.ipynb
tleonhardt/CodingPlayground
mit
Ok, only one pay basis, annually. So that wasn't an issue.
# Compute pairwise correlation of columns, excluding NA/null values
correlations = white_house.corr()
correlations

# Linear Regression using ordinary least squares
import statsmodels.api as sm
model = sm.OLS(white_house['Salary'], white_house['LengthOfTitle'])
residuals = model.fit()
print(residuals.summary())
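Note that the regression above is fit without an intercept (statsmodels' OLS does not add one automatically), which inflates the reported R-squared and coefficient. A variant that includes a constant term would look like this:

# Same regression, but with an intercept included
X = sm.add_constant(white_house['LengthOfTitle'])
model_with_const = sm.OLS(white_house['Salary'], X)
print(model_with_const.fit().summary())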
dataquest/JupyterNotebook/Basics.ipynb
tleonhardt/CodingPlayground
mit
So yea, there is a real positive correlation between length of employee title and salary! How much does the White House pay in total salary?
total_salary = sum(white_house['Salary'])
total_salary
dataquest/JupyterNotebook/Basics.ipynb
tleonhardt/CodingPlayground
mit
The White House pays about $40 million per year in total salary. Who are the highest and lowest paid staffers?
highest_paid = white_house[white_house['Salary'] == max(white_house['Salary'])]
highest_paid

lowest_paid = white_house[white_house['Salary'] == min(white_house['Salary'])]
lowest_paid
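An equivalent, slightly more direct way to pull out these rows is to index by the position of the maximum and minimum salary; note that, unlike the boolean masks above, this returns only the first row if several staffers share the extreme salary:

highest_paid = white_house.loc[[white_house['Salary'].idxmax()]]
lowest_paid = white_house.loc[[white_house['Salary'].idxmin()]]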
dataquest/JupyterNotebook/Basics.ipynb
tleonhardt/CodingPlayground
mit
Summarizing the estimated equations obtained with classical ordinary least squares: [Supply function] $\hat Q_{i} = 4.8581 + 1.5094 P_{i} - 1.5202 E_{i} $ [Demand function] $\hat Q_{i} = 16.6747 - 0.9088 P_{i} - 1.0369 A_{i}$ However, because the explanatory variable P is correlated with the error term, these estimates suffer from simultaneous-equations bias. Below, the system is therefore re-estimated with two-stage least squares (2SLS), a standard estimator for simultaneous-equation systems.
# Set up the exogenous (instrumental) variables
inst = data[['A', 'E']].as_matrix()
inst = sm.add_constant(inst)

# Run 2SLS (Two Stage Least Squares)
model1 = IV2SLS(Y, X1, inst)
model2 = IV2SLS(Y, X2, inst)
result1 = model1.fit()
result2 = model2.fit()

print(result1.summary())
print(result2.summary())
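To make explicit what IV2SLS does, the two stages can be sketched by hand with ordinary OLS (the column name 'P' for price is an assumption taken from the demand equation above; the standard errors from this manual second stage are not the correct 2SLS ones, which is why the IV2SLS results should be used in practice):

import numpy as np

# Stage 1: regress the endogenous price P on the exogenous instruments (constant, A, E)
stage1 = sm.OLS(data['P'], inst).fit()
P_hat = stage1.fittedvalues

# Stage 2: re-estimate the demand equation with the fitted prices
X2_hat = sm.add_constant(np.column_stack([P_hat, data['A']]))
stage2 = sm.OLS(Y, X2_hat).fit()
print(stage2.params)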
SimultaneousEquation.ipynb
ogaway/Econometrics
gpl-3.0
Preliminary Report
Read the following results/report. While you are reading it, think about whether the conclusions are correct, incorrect, misleading or unfounded. Think about what you would change or what additional analyses you would perform.

A. Initial observations based on the plot above
+ Overall, rate of readmissions is trending down with increasing number of discharges
+ With lower number of discharges, there is a greater incidence of excess rate of readmissions (area shaded red)
+ With higher number of discharges, there is a greater incidence of lower rates of readmissions (area shaded green)

B. Statistics
+ In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1
+ In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1

C. Conclusions
+ There is a significant correlation between hospital capacity (number of discharges) and readmission rates.
+ Smaller hospitals/facilities may be lacking necessary resources to ensure quality care and prevent complications that lead to readmissions.

D. Regulatory policy recommendations
+ Hospitals/facilities with small capacity (< 300) should be required to demonstrate upgraded resource allocation for quality care to continue operation.
+ Directives and incentives should be provided for consolidation of hospitals and facilities to have a smaller number of them with higher capacity and number of discharges.

Exercise
Include your work on the following in this notebook and submit to your Github account.

A. Do you agree with the above analysis and recommendations? Why or why not?

B. Provide support for your arguments and your own recommendations with a statistically sound analysis:
1. Set up an appropriate hypothesis test.
2. Compute and report the observed significance value (or p-value).
3. Report statistical significance for $\alpha$ = .01.
4. Discuss statistical significance and practical significance. Do they differ here? How does this change your recommendation to the client?
5. Look at the scatterplot above. What are the advantages and disadvantages of using this plot to convey information? Construct another plot that conveys the same information in a more direct manner.

You can compose in notebook cells using Markdown:
+ In the control panel at the top, choose Cell > Cell Type > Markdown
+ Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
clean_hospital_read_df.head()

clean_hospital_read_df.info()

hospital_dropna_df = clean_hospital_read_df[np.isfinite(clean_hospital_read_df['Excess Readmission Ratio'])]
hospital_dropna_df.info()
Reduce Hospital Readmissions Using EDA/sliderule_dsi_inferential_statistics_exercise_3.ipynb
llclave/Springboard-Mini-Projects
mit
A. Do you agree with the above analysis and recommendations? Why or why not?
The analysis bases its conclusion on a single scatterplot, so I cannot agree with the above analysis and recommendations: there is not enough evidence to support them. Further statistical analysis should be performed to determine whether there is a significant correlation between hospital capacity (number of discharges) and readmission rates; the apparent correlation could have arisen by chance.

B. Provide support for your arguments and your own recommendations with a statistically sound analysis:

1) Set up an appropriate hypothesis test.
$H_0$: There is no statistically significant correlation between number of discharges and readmission rates.
$H_A$: There is a statistically significant negative correlation between number of discharges and readmission rates.

2) Compute and report the observed significance value (or p-value).
number_of_discharges = hospital_dropna_df['Number of Discharges']
excess_readmission_ratio = hospital_dropna_df['Excess Readmission Ratio']

pearson_r = np.corrcoef(number_of_discharges, excess_readmission_ratio)[0, 1]
print('The Pearson correlation of the sample is', pearson_r)

permutation_replicates = np.empty(100000)
for i in range(len(permutation_replicates)):
    number_of_discharges_perm = np.random.permutation(number_of_discharges)
    permutation_replicates[i] = np.corrcoef(number_of_discharges_perm, excess_readmission_ratio)[0, 1]

p = np.sum(permutation_replicates <= pearson_r) / len(permutation_replicates)
print('p =', p)
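As a cross-check, scipy reports the correlation together with an analytic p-value (two-sided, whereas the permutation test above is one-sided):

from scipy import stats

r, p_two_sided = stats.pearsonr(number_of_discharges, excess_readmission_ratio)
print(r, p_two_sided)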
Reduce Hospital Readmissions Using EDA/sliderule_dsi_inferential_statistics_exercise_3.ipynb
llclave/Springboard-Mini-Projects
mit
The p-value computed above is extremely small. This means the null hypothesis ($H_0$) should be rejected in favour of the alternative: there is a statistically significant negative correlation between number of discharges and readmission rates. However, the correlation is weak, as shown by the Pearson correlation of -0.097.

3) Report statistical significance for α = .01.
Since the p-value is extremely small, the result remains statistically significant even at α = 0.01.

4) Discuss statistical significance and practical significance. Do they differ here? How does this change your recommendation to the client?
Statistical significance tells us how unlikely it is that the observed Pearson correlation arose from sampling error alone; it is what hypothesis testing assesses. Because the p-value is so small, we conclude that the negative correlation between number of discharges and readmission rates did not occur by chance.
Practical significance asks whether the correlation is large enough to matter. Here the Pearson correlation is very small (-0.097): a negative correlation probably exists, but it is so close to zero that it says little about the relationship between number of discharges and readmission rates.
Overall, hospital size (number of discharges) is not a good predictor of readmission rates, so the recommendations above should not be followed. Instead, further analysis should be performed to find variables with a stronger relationship to readmission rates.

5) Look at the scatterplot above. What are the advantages and disadvantages of using this plot to convey information? Construct another plot that conveys the same information in a more direct manner.
The plot above displays all of the data points at once, and a quick visual like this can sometimes reveal a relationship. However, with this data it is hard to tell anything from the scatterplot alone. Adding a regression line that reflects the (negative) correlation makes the relationship easier to see.
import seaborn as sns
sns.set()

plt.scatter(number_of_discharges, excess_readmission_ratio, alpha=0.5)

slope, intercept = np.polyfit(number_of_discharges, excess_readmission_ratio, 1)
x = np.array([0, max(number_of_discharges)])
y = slope * x + intercept
plt.plot(x, y)

plt.xlabel('Number of discharges')
plt.ylabel('Excess rate of readmissions')
plt.title('Scatterplot of number of discharges vs. excess rate of readmissions')
plt.show()
Reduce Hospital Readmissions Using EDA/sliderule_dsi_inferential_statistics_exercise_3.ipynb
llclave/Springboard-Mini-Projects
mit
<a id='ll'>2.3 Which data to use?</a> Here I combine all three datasets so that the complete history is available, and I drop 1 January, which is an outlier. The data for 1 January is removed because it was New Year's Day and the number of stockouts was far too high; since we are analysing a single month and the test period contains no holiday or festival, keeping it would distort the analysis. As you can see below, there were around 8,000 stockouts on that single day!
pre_stock.groupby(by='dt')['stockout'].sum().sort_values(ascending=False)

%matplotlib inline
import matplotlib.pyplot as plt

pre_stock.groupby(by='dt')['stockout'].sum().plot(figsize=(10, 4))
plt.xticks(rotation=40)
plt.annotate("1St Jan", xy=(1, 8000))

order['created_at'] = pd.to_datetime(order.created_at)
order['complete_at'] = pd.to_datetime(order.complete_at)
order_test['created_at'] = pd.to_datetime(order_test.created_at)
order_test['complete_at'] = pd.to_datetime(order_test.complete_at)

order['date_create'] = order.created_at.dt.date
order_test['date_create'] = order_test.created_at.dt.date
order['unique'] = 1

pre_stock_test.columns

col = ['time_stamp_utc', 'dt', 'Latitude', 'Longitude', 'stockout',
       'hour', 'minute', 'second', 'weekday']

alldata = pd.concat([train[col], pre_stock[col], pre_stock_test[col]], axis=0).reset_index(drop=True)
alldata.dt.value_counts().index

alldata = pd.concat([train[col], pre_stock[col], pre_stock_test[col]], axis=0).reset_index(drop=True)
alldata_out = alldata.loc[alldata.dt != '1-Jan-18'].reset_index(drop=True)
alldata_out.shape
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
<a id='god'>3. Feature Engineering</a> <a id='ot'>3.1 Drivers Engaged</a> Here I calculate how many drivers are engaged at each timestamp in the train and test sets. This gives the average engagement of drivers at a particular moment in time. **Note:** this computation takes 5-6 hours.
df = order.loc[(order.state == 'COMPLETE') & (order.created_at.dt.day > 29)].reset_index()
df.head()

df2 = order_test.loc[(order_test.state == 'COMPLETE') & (order_test.created_at.dt.day > 29)].reset_index()
order_test.loc[(order_test.state == 'COMPLETE') & (order_test.created_at.dt.day > 25)].reset_index().shape

from tqdm import tqdm
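The loop that actually computes the feature is not included in this excerpt. One possible implementation consistent with the description (and with the quoted 5-6 hour runtime) is to count, for every stockout timestamp, the completed orders in progress at that moment; the variable and column names below are assumptions, not necessarily the author's:

ts = pd.to_datetime(train['time_stamp_utc'], format='%d%b%Y:%H:%M:%S')
engaged = []
for t in tqdm(ts):
    # orders that had started but not yet completed at time t
    engaged.append(int(((df['created_at'] <= t) & (df['complete_at'] >= t)).sum()))
# train['drivers_engaged'] = engaged   # hypothetical column name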
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
I saved a backup here (train_driver.csv, which contains the drivers-engaged feature).
train.head()

train = pd.read_csv('train_driver.csv')
test = pd.read_csv('test/post_stockout_test_candidate.csv')

train.head()
train.head()
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
<a id='ot2'>3.2 Backbone of my Analysis (Feature Engineering)</a> Concatenate train and test and build aggregate features such as:<br> 1) Stockouts per day of week<br> 2) Stockouts per hour of the day<br> 3) Stockouts per second of the minute<br> 4) Stockouts per hour for every day of week<br> 5) Stockouts per minute of an hour for every day of week<br> 6) Stockouts per restaurant id<br> 7) Stockouts per restaurant id and hour<br> 8) Stockouts per restaurant id, hour and minute<br>
# train.head()
def upd(train, pre_stock):
    # Global (time-based) aggregations over the combined data
    train = train.merge(pd.DataFrame(alldata_out.groupby(by=['weekday'])['stockout'].sum()).reset_index(),
                        on='weekday', how='left', suffixes=('', '_week'))
    train = train.merge(pd.DataFrame(alldata_out.groupby(by=['hour'])['stockout'].sum()).reset_index(),
                        on='hour', how='left', suffixes=('', '_hour'))
    train = train.merge(pd.DataFrame(alldata_out.groupby(by=['second'])['stockout'].sum()).reset_index(),
                        on='second', how='left', suffixes=('', '_second'))
    train.head()
    train = train.merge(pd.DataFrame(alldata_out.groupby(by=['weekday', 'hour'])['stockout'].sum()).reset_index(),
                        on=['weekday', 'hour'], how='left', suffixes=('', '_week_hour'))
    train = train.merge(pd.DataFrame(alldata_out.groupby(by=['hour', 'minute'])['stockout'].sum()).reset_index(),
                        on=['hour', 'minute'], how='left', suffixes=('', '_hour_minute'))
    train.fillna(0, inplace=True)
    train = train.merge(pd.DataFrame(alldata_out.groupby(by=['weekday', 'hour', 'minute'])['stockout'].sum()).reset_index(),
                        on=['weekday', 'hour', 'minute'], how='left', suffixes=('', '_hour_week_minute'))

    # Restaurant-level aggregations (sums of stockouts)
    train = train.merge(pd.DataFrame(pre_stock.groupby(by=['res_id'])['stockout'].sum()).reset_index(),
                        on='res_id', how='left', suffixes=('', '_x'))
    train = train.merge(pd.DataFrame(pre_stock.groupby(by=['res_id', 'hour'])['stockout'].sum()).reset_index(),
                        on=['res_id', 'hour'], how='left', suffixes=('', '_hour_res'))
    train.fillna(0, inplace=True)
    train = train.merge(pd.DataFrame(pre_stock.groupby(by=['res_id', 'hour', 'minute'])['stockout'].sum()).reset_index(),
                        on=['res_id', 'hour', 'minute'], how='left', suffixes=('', '_hour_res_minute'))

    # Count-based versions of the same restaurant-level aggregations
    train = train.merge(pd.DataFrame(pre_stock.groupby(by=['res_id'])['stockout'].count()).reset_index(),
                        on='res_id', how='left', suffixes=('', '_countx'))
    train = train.merge(pd.DataFrame(pre_stock.groupby(by=['res_id', 'hour'])['stockout'].count()).reset_index(),
                        on=['res_id', 'hour'], how='left', suffixes=('', '_counthour_res'))
    train.fillna(0, inplace=True)
    train = train.merge(pd.DataFrame(pre_stock.groupby(by=['res_id', 'hour', 'minute'])['stockout'].count()).reset_index(),
                        on=['res_id', 'hour', 'minute'], how='left', suffixes=('', '_counthour_res_minute'))
    train.fillna(0, inplace=True)
    return train
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
Processing date
from datetime import datetime
import datetime


def dat1(X):
    return datetime.datetime.strptime(X, "%d%b%Y:%H:%M:%S")


tm = train.time_stamp_utc.apply(dat1)
tm2 = test.time_stamp_utc.apply(dat1)

train = upd(train, pre_stock)
train.head()

test = upd(test, pre_stock_test)
test = test.merge(pd.DataFrame(alldata_out.groupby(by=['weekday'])['stockout'].sum()).reset_index(),
                  on='weekday', how='left', suffixes=('', '_week'))
test.head()
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
<a id='ot3'>3.3 Exploring hidden hints and making features</a> This is the minute-by-minute stockout history for each day of the week.
import seaborn as sns
%matplotlib inline
import matplotlib.pyplot as plt

plt.figure(figsize=(20, 5))
sns.heatmap(pre_stock.pivot_table(index='weekday', columns='minute', values='stockout', aggfunc=sum),
            linecolor='black')

min_graph = pre_stock.pivot_table(index='weekday', columns='minute', values='stockout', aggfunc=sum)
min_graph

plt.figure(figsize=(20, 5))
min_graph.loc[1].plot()

sec_graph = pre_stock.pivot_table(index='weekday', columns='second', values='stockout', aggfunc=sum)
sec_graph
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
This is one of the most striking observations here: as you can see, the probability of a stockout is very high between seconds 11-25 and 41-55 of every minute.
plt.figure(figsize=(20, 5))
sns.heatmap(pre_stock.pivot_table(index='weekday', columns='second', values='stockout', aggfunc=sum),
            linecolor='black')
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
You can see that the distribution is roughly normal.
plt.figure(figsize=(20, 5))
sec_graph.loc[1].plot()
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
Inference: We can infer two possible explanations:<br> 1) As stockouts start occurring, Zomato's internal system may alert the part-time drivers.<br> 2) The data organisers may have generated the sample timestamps from a distribution that is not uniform but roughly normal.<br> Using this observation, let's create a new feature based on the second of the minute.
a = []
for i in train.second:
    if i < 15:
        a.append(i)
    elif i < 30:
        a.append(30 - i)
    elif i < 45:
        a.append(i - 30)
    else:
        a.append(60 - i)
train['sec_fun'] = a

a = []
for i in test.second:
    if i < 15:
        a.append(i)
    elif i < 30:
        a.append(30 - i)
    elif i < 45:
        a.append(i - 30)
    else:
        a.append(60 - i)
test['sec_fun'] = a

train.columns

cat_vars = ['res_id', 'hour', 'minute', 'second', 'weekday']

cont_vars = ['Latitude', 'Longitude', 'stockout_week', 'stockout_hour', 'stockout_second',
             'stockout_week_hour', 'stockout_hour_minute', 'stockout_hour_week_minute',
             'stockout_x', 'stockout_hour_res', 'stockout_hour_res_minute',
             'stockout_countx', 'stockout_counthour_res', 'stockout_counthour_res_minute']

for v in cat_vars:
    train[v] = train[v].astype('category').cat.as_ordered()
for v in cont_vars:
    train[v] = train[v].astype('float32')

for v in cat_vars:
    test[v] = test[v].astype('category').cat.as_ordered()
for v in cont_vars:
    test[v] = test[v].astype('category').astype('float32')
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit
<a id='ot4'>3.4 Making features using the order file</a> Using the order file, let's calculate the total number of orders and build aggregate features from it.
order_comp = order.loc[order.state == 'COMPLETE']
order_comp_test = order_test.loc[order_test.state == 'COMPLETE']
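The aggregation cells themselves are not part of this excerpt; one possible sketch (not necessarily the author's exact features) is to count completed orders per hour of the day and merge that onto train/test on the hour key:

# Completed orders per hour of day (hypothetical feature name 'orders_in_hour')
orders_per_hour = (order_comp.groupby(order_comp['created_at'].dt.hour.rename('hour'))['unique']
                   .sum()
                   .reset_index(name='orders_in_hour'))
# train = train.merge(orders_per_hour, on='hour', how='left')   # hypothetical merge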
ZOMATO_final.ipynb
kanavanand/kanavanand.github.io
mit