Let's run a tournament, playing each plan against every other, and returning a list of `[(plan, mean_game_points),...]`. I will also define `show` to pretty-print these results and display a histogram:
```python
def tournament(plans):
    "Play each plan against each other; return a sorted list of [(plan, mean_points)]."
    rankdict = {A: mean_points(A, plans) for A in plans}
    return Counter(rankdict).most_common()

def mean_points(A, opponents):
    "Mean points for A playing against all opponents (but not against itself)."
    return mean(play(A, B) for B in opponents if B is not A)

def show(rankings, n=10):
    "Pretty-print the n best plans, and display a histogram of all plans."
    print('Top', n, 'of', len(rankings), 'plans:')
    for (plan, points) in rankings[:n]:
        print(pplan(plan), pct(points))
    plt.hist([s for (p, s) in rankings], bins=20)

def pct(x): return '{:6.1%}'.format(x)

def pplan(plan): return '(' + ', '.join('{:2}'.format(c) for c in plan) + ')'

# This is what the result of a tournament looks like:
tournament({(26, 5, 5, 5, 6, 7, 26, 0, 0, 0),
            (25, 0, 0, 0, 0, 0, 0, 25, 25, 25),
            (0, 25, 0, 0, 0, 0, 0, 25, 25, 25)})

# A tournament with all 1202 plans:
rankings = tournament(plans)
show(rankings)
```
```
Top 10 of 1202 plans:
( 0,  3,  4,  7, 16, 24,  4, 34,  4,  4)  85.6%
( 5,  7,  9, 11, 15, 21, 25,  2,  2,  3)  84.1%
( 3,  5,  8, 10, 13,  1, 26, 30,  2,  2)  83.3%
( 2,  2,  6, 12,  2, 18, 24, 30,  2,  2)  83.3%
( 2,  8,  2,  2, 10, 18, 26, 26,  3,  3)  83.2%
( 3,  6,  7,  9, 11,  2, 27, 31,  2,  2)  83.2%
( 1,  1,  1,  5, 11, 16, 28, 29,  3,  5)  82.8%
( 1,  3,  1,  1, 17, 20, 21, 30,  3,  3)  82.6%
( 3,  6, 10, 12, 16, 21, 26,  2,  2,  2)  82.4%
( 6,  6,  6, 11, 20, 21, 21,  3,  3,  3)  82.2%
```
It looks like there are a few really bad plans in there. Let's just keep the top 1000 plans (out of 1202), and re-run the rankings:
```python
plans = {A for (A, _) in rankings[:1000]}
rankings = tournament(plans)
show(rankings)
```
```
Top 10 of 1000 plans:
( 0,  3,  4,  7, 16, 24,  4, 34,  4,  4)  87.4%
( 5,  5,  5,  5,  5,  5, 27, 30,  6,  7)  84.8%
( 5,  5,  5,  5,  5,  5, 30, 30,  5,  5)  84.2%
( 3,  3,  5,  5,  7,  7, 30, 30,  5,  5)  84.1%
( 1,  2,  3,  4,  6, 16, 25, 33,  4,  6)  82.5%
( 2,  2,  2,  5,  5, 26, 26, 26,  3,  3)  82.4%
( 1,  1,  1,  5, 11, 16, 28, 29,  3,  5)  82.0%
( 0,  1,  3,  3, 11, 18, 25, 33,  3,  3)  82.0%
( 5,  7,  9, 11, 15, 21, 25,  2,  2,  3)  81.7%
( 0,  0,  5,  5, 25,  3, 25,  3, 31,  3)  81.5%
```
The top 10 plans are still winning over 80%, and the top plan remains `(0, 3, 4, 7, 16, 24, 4, 34, 4, 4)`. This is an interesting plan: it places most of the soldiers on castles 4+5+6+8, which together are worth only 23 points, so it needs to pick up 5 more points from the other castles (that have mostly 4 soldiers attacking each one). Is this a good strategy? Where should we optimally allocate soldiers? To gain some insight, I'll create a plot with 10 curves, one for each castle. Each curve maps the number of soldiers sent to the castle (on the x-axis) to the expected points won (against the 1000 plans) on the y-axis:
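As a quick check of that point arithmetic, here is a self-contained sketch (not part of the original notebook): the castles are worth 1 through 10 points, so 55 points are in play each game and 28 wins outright, while castles 4, 5, 6, and 8 supply only 23 of those.

```python
# Quick arithmetic check (illustrative; not from the original notebook).
castle_values = range(1, 11)         # castles are worth 1..10 points
total = sum(castle_values)           # 55 points in play each game
win_threshold = total // 2 + 1       # 28 points wins outright
big_four = 4 + 5 + 6 + 8             # the heavily-garrisoned castles above
print(total, win_threshold, big_four, win_threshold - big_four)  # 55 28 23 5
```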
```python
def plotter(plans, X=range(41)):
    X = list(X)
    def mean_reward(c, s): return mean(reward(s, p[c], c+1) for p in plans)
    for c in range(10):
        plt.plot(X, [mean_reward(c, s) for s in X], '.-')
    plt.xlabel('Number of soldiers (on each of the ten castles)')
    plt.ylabel('Expected points won')
    plt.grid()

plotter(plans)
```
For example, this says that for castle 10 (the orange line at top), there is a big gain in expected return as we increase from 0 to 4 soldiers, and after that the gains are relatively less steep. This plot is interesting, but I can't see how to directly read off a best plan from it.

## Hillclimbing

Instead I'll see if I can improve the existing plans, using a simple *hillclimbing* strategy: take a plan A, and change it by randomly moving some soldiers from one castle to another. If that yields more `mean_points`, then keep the updated plan; otherwise discard it. Repeat.
```python
def hillclimb(A, plans=plans, steps=1000):
    "Try to improve Plan A, repeat `steps` times; return new plan and total."
    m = mean_points(A, plans)
    for _ in range(steps):
        B = mutate(A)
        m, A = max((m, A), (mean_points(B, plans), B))
    return A, m

def mutate(plan):
    "Return a new plan that is a slight mutation."
    plan = list(plan)  # So we can modify it.
    i, j = random.sample(castles, 2)
    plan[i], plan[j] = random_split(plan[i] + plan[j])
    return Plan(plan)

def random_split(n):
    "Split the integer n into two integers that sum to n."
    r = random.randint(0, n)
    return r, n - r
```
Let's see how well this works. Remember, the best plan so far had a score of `87.4%`. Can we improve on that?
```python
hillclimb((0, 3, 4, 7, 16, 24, 4, 34, 4, 4))
```
We got an improvement. Let's see what happens if we start with other plans:
```python
hillclimb((10, 10, 10, 10, 10, 10, 10, 10, 10, 10))
hillclimb((0, 1, 2, 3, 4, 18, 18, 18, 18, 18))
hillclimb((2, 3, 5, 5, 5, 20, 20, 20, 10, 10))
hillclimb((0, 0, 5, 5, 25, 3, 25, 3, 31, 3))
```
What if we hillclimb 20 times longer?
```python
hillclimb((0, 3, 4, 7, 16, 24, 4, 34, 4, 4), steps=20000)
```
## Opponent modelling

To have a chance of winning the second round of this contest, we have to predict what the other entries will be like. Nobody knows for sure, but I can hypothesize that the entries will be slightly better than the first round, and try to approximate that by hillclimbing from each of the first-round plans for a small number of steps:
```python
def hillclimbers(plans, steps=100):
    "Return a sorted list of [(improved_plan, mean_points), ...]"
    pairs = {hillclimb(plan, plans, steps) for plan in plans}
    return sorted(pairs, key=lambda pair: pair[1], reverse=True)

# For example:
hillclimbers({(26, 5, 5, 5, 6, 7, 26, 0, 0, 0),
              (25, 0, 0, 0, 0, 0, 0, 25, 25, 25),
              (0, 25, 0, 0, 0, 0, 0, 25, 25, 25)})
```
I will define `plans2` (and `rankings2`) to be my estimate of the entries for round 2:
```python
%time rankings2 = hillclimbers(plans)

plans2 = {A for (A, _) in rankings2}
show(rankings2)
```
```
CPU times: user 6min 11s, sys: 3.21 s, total: 6min 14s
Wall time: 6min 17s
Top 10 of 1000 plans:
( 1,  4,  5, 15,  6, 21,  3, 31,  3, 11)  90.8%
( 0,  3,  5, 14,  7, 21,  3, 30,  4, 13)  90.6%
( 0,  4,  6, 15,  9, 21,  4, 31,  5,  5)  90.2%
( 2,  4,  3, 13,  5, 22,  3, 32,  4, 12)  90.1%
( 0,  3,  5, 15,  8, 21,  4, 32,  6,  6)  90.0%
( 0,  3,  5, 15,  6, 24,  3, 31,  5,  8)  90.0%
( 0,  3,  6, 13,  6, 21,  5, 30,  4, 12)  90.0%
( 3,  4,  5, 15,  7, 21,  2, 31,  6,  6)  89.9%
( 2,  3,  3, 13,  6, 21,  3, 30,  5, 14)  89.8%
( 0,  2,  2, 12,  2, 23,  4, 31,  3, 21)  89.8%
```
Even though we only took 100 steps, the `plans2` plans are greatly improved: almost all of them defeat 75% or more of the first-round `plans`. The top 10 plans are all very similar, targeting castles 4+6+8+10 (for 28 points), but reserving 20 or so soldiers to spread among the other castles. Let's look more carefully at every 40th plan, plus the last one:
```python
for (p, m) in rankings2[::40] + [rankings2[-1]]:
    print(pplan(p), pct(m))
```
```
( 1,  4,  5, 15,  6, 21,  3, 31,  3, 11)  90.8%
( 0,  6,  3, 13,  3, 22,  2, 32,  4, 15)  88.9%
( 1,  3,  6, 13,  9, 22,  1, 30,  4, 11)  88.3%
( 2,  2,  1, 13,  3, 21,  2, 32,  3, 21)  87.9%
( 0,  2,  5,  5, 15,  2, 28, 31,  5,  7)  87.6%
( 2,  2,  4, 14,  9,  1, 27, 30,  6,  5)  87.3%
( 3,  2,  3, 12,  3, 28,  3, 32,  6,  8)  87.0%
( 1,  3,  2,  5, 18,  3, 26,  3, 33,  6)  86.7%
( 0,  4,  4,  6, 15,  3, 29, 30,  5,  4)  86.5%
( 5,  5,  4,  5, 13, 22,  2, 29,  3, 12)  86.2%
( 5,  6,  5,  6, 16, 24, 26,  1,  5,  6)  85.9%
( 0,  2,  5, 15,  8,  3, 20, 36,  6,  5)  85.7%
( 5,  1,  6, 12,  2, 24,  5, 32,  4,  9)  85.4%
( 2,  5,  8, 16, 11,  3,  2, 36,  5, 12)  85.1%
( 2,  7,  3, 15, 14,  2,  3, 31,  9, 12)  84.8%
( 6,  5,  8,  6,  7, 22, 30,  3,  7,  6)  84.6%
( 5,  3,  3,  5,  3, 21, 26, 26,  3,  5)  84.4%
( 0,  2,  4, 13,  2, 22, 17, 33,  2,  5)  84.0%
( 0,  7, 12,  6,  8, 21,  2, 29, 12,  3)  83.5%
( 5,  5,  4, 13, 18,  2, 26,  2,  6, 19)  83.0%
( 5,  6,  3, 15, 17, 24,  4,  2,  5, 19)  82.5%
( 5,  6,  5,  9,  6, 22, 34,  1,  7,  5)  81.8%
( 4,  3,  7, 17, 17, 22,  3,  3,  5, 19)  81.0%
( 0,  1,  2, 11, 12, 13, 28, 27,  2,  4)  80.4%
( 5,  6, 13, 16, 15, 26,  2,  4,  7,  6)  78.9%
( 0,  0,  1, 13,  0,  1, 24, 21, 36,  4)  70.3%
```
We see a wider variety in plans as we go farther down the rankings. Now for the plot:
```python
plotter(plans2)
```
We see that many castles (e.g. 9 (green), 8 (blue), 7 (black), 6 (yellowish)) have two plateaus. Castle 7 (black) has a plateau at 3.5 points for 6 to 20 soldiers (suggesting that 6 soldiers is a good investment and 20 soldiers a bad investment), and then another plateau at 7 points for everything above 30 soldiers.

Now that we have an estimate of the opponents, we can use `hillclimbers` to try to find a plan that does well against all the others:
```python
%time rankings3 = hillclimbers(plans2)
show(rankings3)
```
```
CPU times: user 5min 40s, sys: 1 s, total: 5min 41s
Wall time: 5min 42s
Top 10 of 1000 plans:
( 3,  8, 10, 18, 21,  3,  5,  6, 10, 16)  99.9%
( 1,  9, 10, 17, 21,  6,  4,  6,  9, 17)  99.9%
( 1,  8, 10, 18, 21,  4,  4,  6, 11, 17)  99.9%
( 0, 10, 10, 17, 20,  4,  5,  6,  7, 21)  99.9%
( 2, 11,  1, 16, 18,  7,  6,  6,  8, 25)  99.8%
( 1,  8, 11, 19, 20,  4,  6,  5,  7, 19)  99.8%
( 0,  1, 11, 15, 18,  7,  6,  5, 13, 24)  99.8%
( 2, 10,  1, 17, 18,  9,  5,  6,  8, 24)  99.8%
( 1,  9, 10, 17, 19,  4,  6,  6,  9, 19)  99.8%
( 0,  2, 11, 18, 21,  4,  6,  8,  8, 22)  99.8%
```
We can try even harder to improve the champ:
```python
champ, _ = rankings3[0]
hillclimb(champ, plans2, 10000)
```
Here are some champion plans from previous runs of this notebook:
```python
champs = {
    (0, 1, 3, 16, 20, 3, 4, 5, 32, 16),
    (0, 1, 9, 16, 15, 24, 5, 5, 8, 17),
    (0, 1, 9, 16, 16, 24, 5, 5, 7, 17),
    (0, 2, 9, 16, 15, 24, 5, 5, 8, 16),
    (0, 2, 9, 16, 15, 25, 5, 4, 7, 17),
    (0, 3, 4, 7, 16, 24, 4, 34, 4, 4),
    (0, 3, 5, 6, 20, 4, 4, 33, 8, 17),
    (0, 4, 5, 7, 20, 4, 4, 33, 7, 16),
    (0, 4, 6, 7, 19, 4, 4, 31, 8, 17),
    (0, 4, 12, 18, 21, 7, 6, 4, 8, 20),
    (0, 4, 12, 19, 25, 4, 5, 6, 8, 17),
    (0, 5, 6, 7, 18, 4, 5, 32, 7, 16),
    (0, 5, 7, 3, 18, 4, 4, 34, 8, 17),
    (1, 2, 9, 16, 15, 24, 5, 4, 7, 17),
    (1, 2, 9, 16, 15, 24, 5, 4, 8, 16),
    (1, 2, 11, 16, 15, 24, 5, 4, 7, 15),
    (1, 3, 14, 18, 24, 4, 5, 6, 8, 17),
    (1, 6, 3, 16, 16, 24, 5, 5, 7, 17),
    (2, 3, 7, 16, 16, 25, 5, 5, 8, 13),
    (2, 3, 8, 16, 12, 25, 5, 4, 8, 17),
    (2, 3, 8, 16, 15, 24, 5, 4, 7, 16),
    (2, 3, 8, 16, 15, 25, 4, 5, 8, 14),
    (2, 3, 8, 16, 16, 24, 5, 5, 8, 13),
    (2, 3, 9, 15, 12, 25, 4, 5, 8, 17),
    (2, 3, 9, 16, 12, 24, 5, 5, 8, 16),
    (2, 4, 12, 18, 24, 4, 6, 5, 8, 17),
    (3, 3, 7, 16, 16, 24, 5, 5, 8, 13),
    (3, 3, 8, 16, 12, 25, 4, 4, 8, 17),
    (3, 3, 8, 16, 15, 25, 5, 4, 7, 14),
    (3, 4, 12, 18, 23, 4, 6, 5, 8, 17),
    (3, 4, 15, 18, 23, 4, 5, 6, 8, 14),
    (3, 5, 7, 16, 5, 4, 5, 34, 7, 14),
    (3, 6, 13, 17, 23, 4, 6, 5, 8, 15),
    (4, 3, 12, 18, 23, 4, 5, 6, 8, 17),
    (4, 5, 3, 15, 11, 23, 5, 5, 10, 19),
    (4, 6, 3, 16, 14, 25, 5, 5, 8, 14),
    (4, 6, 3, 16, 16, 24, 5, 5, 7, 14),
    (4, 6, 3, 16, 16, 24, 5, 5, 8, 13),
    (5, 3, 12, 17, 23, 4, 5, 6, 8, 17),
    (5, 5, 3, 16, 12, 25, 4, 5, 8, 17),
    (5, 6, 3, 16, 16, 24, 5, 5, 7, 13),
    (5, 6, 7, 3, 21, 4, 27, 5, 8, 14),
    (5, 6, 8, 3, 18, 4, 27, 5, 8, 16),
    (5, 6, 8, 3, 20, 4, 27, 5, 8, 14),
    (5, 6, 8, 3, 21, 4, 27, 5, 8, 13)}
```
We can evaluate each of them against the original `plans`, against the improved `plans2`, against their fellow champs, and against all of those put together:
```python
def μ(plan, plans): return pct(mean_points(plan, plans))

all = plans | plans2 | champs

print('Plan                                     plans  plans2  champs     all')
for p in sorted(champs, key=lambda p: -mean_points(p, all)):
    print(pplan(p), μ(p, plans), μ(p, plans2), μ(p, champs), μ(p, all))
```
```
Plan                                     plans  plans2  champs     all
( 0,  5,  7,  3, 18,  4,  4, 34,  8, 17)  85.5%  96.0%  68.5%  90.2%
( 0,  4,  6,  7, 19,  4,  4, 31,  8, 17)  84.7%  95.0%  63.0%  89.2%
( 0,  1,  3, 16, 20,  3,  4,  5, 32, 16)  85.6%  95.2%  31.5%  89.0%
( 0,  3,  5,  6, 20,  4,  4, 33,  8, 17)  84.1%  95.2%  60.9%  89.0%
( 0,  5,  6,  7, 18,  4,  5, 32,  7, 16)  84.3%  96.3%  28.3%  88.9%
( 3,  5,  7, 16,  5,  4,  5, 34,  7, 14)  85.2%  95.7%  18.5%  88.8%
( 5,  6,  8,  3, 18,  4, 27,  5,  8, 16)  81.8%  96.4%  64.1%  88.6%
( 0,  4,  5,  7, 20,  4,  4, 33,  7, 16)  84.7%  95.0%  18.5%  88.2%
( 5,  6,  8,  3, 20,  4, 27,  5,  8, 14)  82.0%  96.2%  48.9%  88.2%
( 0,  1,  9, 16, 15, 24,  5,  5,  8, 17)  78.2%  98.6%  72.8%  88.0%
( 5,  6,  7,  3, 21,  4, 27,  5,  8, 14)  81.8%  96.0%  51.1%  88.0%
( 0,  1,  9, 16, 16, 24,  5,  5,  7, 17)  79.1%  98.5%  46.7%  87.8%
( 5,  6,  8,  3, 21,  4, 27,  5,  8, 13)  82.0%  95.2%  45.7%  87.6%
( 2,  3,  9, 15, 12, 25,  4,  5,  8, 17)  78.5%  97.9%  58.7%  87.5%
( 4,  5,  3, 15, 11, 23,  5,  5, 10, 19)  76.8%  97.8%  97.8%  87.5%
( 2,  3,  8, 16, 12, 25,  5,  4,  8, 17)  78.2%  98.1%  57.6%  87.5%
( 2,  3,  8, 16, 15, 25,  4,  5,  8, 14)  79.7%  97.9%  31.5%  87.5%
( 0,  2,  9, 16, 15, 24,  5,  5,  8, 16)  78.5%  98.2%  50.0%  87.4%
( 2,  3,  7, 16, 16, 25,  5,  5,  8, 13)  79.3%  97.5%  44.6%  87.4%
( 4,  6,  3, 16, 14, 25,  5,  5,  8, 14)  79.0%  97.4%  48.9%  87.3%
( 5,  5,  3, 16, 12, 25,  4,  5,  8, 17)  78.1%  97.8%  60.9%  87.3%
( 4,  6,  3, 16, 16, 24,  5,  5,  7, 14)  80.3%  97.2%  21.7%  87.3%
( 2,  3,  8, 16, 15, 24,  5,  4,  7, 16)  80.2%  97.8%  10.9%  87.2%
( 1,  2,  9, 16, 15, 24,  5,  4,  7, 17)  79.8%  97.8%  19.6%  87.2%
( 0,  2,  9, 16, 15, 25,  5,  4,  7, 17)  79.1%  97.9%  31.5%  87.2%
( 2,  3,  8, 16, 16, 24,  5,  5,  8, 13)  79.5%  97.5%  29.3%  87.2%
( 2,  3,  9, 16, 12, 24,  5,  5,  8, 16)  78.0%  98.2%  45.7%  87.1%
( 3,  3,  8, 16, 15, 25,  5,  4,  7, 14)  80.3%  97.6%   6.5%  87.1%
( 1,  2,  9, 16, 15, 24,  5,  4,  8, 16)  79.2%  97.6%  27.2%  87.0%
( 3,  3,  7, 16, 16, 24,  5,  5,  8, 13)  79.8%  97.1%  26.1%  87.0%
( 4,  6,  3, 16, 16, 24,  5,  5,  8, 13)  80.0%  96.7%  28.3%  87.0%
( 3,  3,  8, 16, 12, 25,  4,  4,  8, 17)  78.8%  97.8%  28.3%  87.0%
( 5,  6,  3, 16, 16, 24,  5,  5,  7, 13)  80.9%  96.5%   8.7%  86.9%
( 1,  2, 11, 16, 15, 24,  5,  4,  7, 15)  79.9%  97.2%   6.5%  86.7%
( 1,  6,  3, 16, 16, 24,  5,  5,  7, 17)  75.2%  97.9%  41.3%  85.5%
( 5,  3, 12, 17, 23,  4,  5,  6,  8, 17)  64.3%  99.5%  84.8%  82.0%
( 4,  3, 12, 18, 23,  4,  5,  6,  8, 17)  64.0%  99.5%  88.0%  81.9%
( 3,  4, 12, 18, 23,  4,  6,  5,  8, 17)  63.2%  99.5%  88.0%  81.5%
( 2,  4, 12, 18, 24,  4,  6,  5,  8, 17)  63.0%  99.5%  91.3%  81.5%
( 3,  4, 15, 18, 23,  4,  5,  6,  8, 14)  63.4%  99.5%  76.1%  81.3%
( 3,  6, 13, 17, 23,  4,  6,  5,  8, 15)  63.2%  99.4%  78.3%  81.2%
( 1,  3, 14, 18, 24,  4,  5,  6,  8, 17)  62.4%  99.5%  93.5%  81.2%
( 0,  4, 12, 19, 25,  4,  5,  6,  8, 17)  62.1%  99.5%  95.7%  81.1%
( 1,  9, 11, 17, 21,  6,  5,  4, 10, 16)  62.1% 100.0%  76.1%  80.9%
( 0,  4, 12, 18, 21,  7,  6,  4,  8, 20)  61.4%  99.6% 100.0%  80.9%
( 1,  7, 13, 17, 23,  6,  6,  5,  8, 14)  62.4%  99.5%  78.3%  80.9%
( 0,  3,  4,  7, 16, 24,  4, 34,  4,  4)  87.4%  37.6%   0.0%  61.1%
```
# Individual EDA

- Separate the states into 4 regions: Western, Southern, Eastern, and Northern.
- Filter the data based on the assigned regions and explore it with support from visualization.
- The North East and the South are the main focus of this EDA.

___

## Data Filtering
```python
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt

# Add the scripts module's directory to sys.path
import sys, os
sys.path.append(os.path.join(os.getcwd(), ".."))
from scripts import project_functions as pf

# Load 4 parts of raw data on State Names
state_df = pf.load_and_process_many("../../data/raw/state")

# Note that the project_functions module includes lists of state abbreviations grouped by region.
# Let's slice out only the North East and South.
n_df = state_df.loc[state_df["State"].isin(pf.NORTH_EAST)].reset_index(drop=True)
s_df = state_df.loc[state_df["State"].isin(pf.SOUTH)].reset_index(drop=True)
```
___

## Initial inspection

Let's have a general look at the data set for each region.

### North East region
```python
n_df.head(10)
n_df.shape
```
For the North East, we see that there are **more than 1 million collected records** and **5 variables for each observation**.
```python
n_df.columns
```
Indeed, we have 5 variables for each observation. **The state column is not important since we care only about regions.**
```python
n_df.info()
```
```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1077888 entries, 0 to 1077887
Data columns (total 5 columns):
 #   Column  Non-Null Count    Dtype 
---  ------  --------------    ----- 
 0   Name    1077888 non-null  object
 1   Year    1077888 non-null  int64 
 2   Gender  1077888 non-null  object
 3   State   1077888 non-null  object
 4   Count   1077888 non-null  int64 
dtypes: int64(2), object(3)
memory usage: 41.1+ MB
```
We see that Year and Count are 64-bit integer types, while the other columns are categorical (object) types.
```python
n_df.describe(include=[object]).T
```
For categorical data:

- There are 3 categorical variables in the dataframe, along with 2 numerical variables (Year and Count).
- There are 15817 unique names in this region (the quick check below recomputes these counts).
- There are 11 states recorded, which equals the total number of states in this region. This means all states participate in this survey.
- It is not clear whether John is the most popular name of all time, since we also have a Count column.
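Since the `describe()` output itself is not reproduced here, the two counts cited above can be recomputed directly. A minimal sketch using the same dataframe:

```python
# Recompute the counts cited in the observations above (illustrative check).
print(n_df["Name"].nunique())   # unique names in the region (15817 per the text)
print(n_df["State"].nunique())  # states recorded (11 per the text)
```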
```python
n_df.describe().T
```
The summary of the numerical values does not give any useful information.
n_df["Year"].unique()
The data set spans from 1910 to 2014 without any missing years.
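One way to back up that claim is to check that the year range is contiguous. A minimal sketch, assuming `Year` is an integer column as shown by `info()` above:

```python
# Verify there are no missing years (illustrative check).
years = sorted(n_df["Year"].unique())
assert years == list(range(1910, 2015)), "there is a gap in the years"
print(years[0], years[-1], len(years))  # 1910 2014 105
```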
```python
len(n_df.loc[n_df["Count"] <= 0])
```
This shows that we do not have negative (or zero) values for name counts.

### South region
```python
s_df.head(10)
s_df.shape
```
For the South, we see that there are **more than 2 million collected records** and **5 variables for each observation**.
```python
# We have 5 variables for each observation
s_df.columns
```
This is similar to that of the North East region.
```python
s_df.info()
```
```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2173021 entries, 0 to 2173020
Data columns (total 5 columns):
 #   Column  Dtype 
---  ------  ----- 
 0   Name    object
 1   Year    int64 
 2   Gender  object
 3   State   object
 4   Count   int64 
dtypes: int64(2), object(3)
memory usage: 82.9+ MB
```
The type of each column is also similar to that of the North East dataset.
```python
s_df.describe(include=[object]).T
```
For categorical data:

- There are 3 categorical variables in the dataframe, along with 2 numerical variables (Year and Count).
- There are 20860 unique names in this region.
- There are 17 states recorded, which equals the total number of states in this region. This means all states participate in this survey.
- It is not clear whether Jessie is the most popular name of all time, since we also have a Count column.
```python
s_df.describe().T
```
The summary of the numerical values does not give any useful information.
```python
s_df.loc[s_df["Count"] <= 0]
```
This shows that we do not have negative (or zero) values for name counts.
s_df["Year"].unique()
The data set also spans from 1910 to 2014 without gaps!

___

## Analysis

### Top 5 of all time in the South and North

We start by aggregating the sum of counts of every name in each region across all years.
```python
# Define processing function
def get_top5_all_time(data=None):
    if data is None:
        return data
    return (data.groupby(by="Name")
                .aggregate("sum")
                .drop(columns=["Year"])  # We do not analyze with time
                .reset_index()
                .sort_values(by="Count", ascending=False)
                .head())

# For the North East
top5_n = get_top5_all_time(n_df)
# For the South
top5_s = get_top5_all_time(s_df)

top5_n
top5_s
```
The code works properly, returning the top 5 of all time in these two regions. Now we can build plots. For counting the number of occurrences of each discrete entry, bar plots are ideal.
```python
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 7), sharex=True)  # Check similarity between the 2 regions
sns.set_theme(context="notebook", style="ticks", font_scale=1.3)

def plot_top5_all_time(data, ax, region):  # Renamed so it doesn't shadow the aggregation helper above
    plot = sns.barplot(y="Name", x="Count",
                       data=data,
                       order=data["Name"],
                       ax=ax)
    plot.set_title(f"{region} Top 5 Names All Time")
    return plot

# North graph
north = plot_top5_all_time(top5_n, ax[0], "North East")
# South graph
south = plot_top5_all_time(top5_s, ax[1], "South")

# Show plot
sns.despine()
fig.tight_layout(pad=3.0)
fig.suptitle("Top 5 names of all time in the North East and South regions")
plt.show()
```
### Observations

- The top 5 names in these 2 regions are quite similar, with **James, William, Robert and John** appearing in both. The difference is that **Michael** is in the top 5 in the North East while **Mary** is in the top 5 in the South.
- All names in the top 5 lists of both regions pass the mark of **1 million** counts of all time. The maximum count in the *North East* is almost **1.6 million**, while that of the *South* surpasses **2 million**.
- In the **North East** region, **John is the most popular name of all time**, followed by Robert. James is at the bottom of the list.
- In the **South** region, however, **James appears to be the most popular name of all time**, followed by John, who takes the top spot in the North East.

___

### Top 5 of all time for each gender in the South and North

We start by filtering the data set based on gender.
```python
# Function to filter data based on gender
def get_top5_gender(data, region, gender):
    return (data.loc[data["Gender"] == gender]
                .groupby(by="Name")
                .agg(np.sum)
                .sort_values(by="Count", ascending=False)
                .head()
                .drop(columns="Year")  # We do not care about year
                .assign(Region=region, Gender=gender)
                .reset_index())

# In the North East
top5_male_n, top5_female_n = (get_top5_gender(n_df, "NE", "M"),
                              get_top5_gender(n_df, "NE", "F"))

# In the South
top5_male_s, top5_female_s = (get_top5_gender(s_df, "S", "M"),
                              get_top5_gender(s_df, "S", "F"))
```
Now we can plot. In this case, we will use bar plots to indicate counts, drawn as side-by-side subplots so the plots are categorized by region and gender.
```python
# Settings
sns.set_theme(style="ticks", font_scale=1.3)
fig, ax = plt.subplots(1, 2, figsize=(12, 7), sharex=True)

def draw_gender_plot(axes, data_list, result_axes=None):
    if result_axes is None:
        result_axes = list()
    for i in range(len(axes)):  # Use the axes parameter rather than the global ax
        data = data_list[i]
        ax_i = sns.barplot(x="Count", y="Name",
                           data=data,
                           ax=axes[i])
        region, gender = data["Region"][0], data["Gender"][0]
        ax_i.set_title(f"Region = {region} | Gender = {gender}")
        result_axes.append(ax_i)
    return result_axes

draw_gender_plot(ax, [top5_male_n, top5_male_s])

# Configure the figure object
sns.despine()
fig.tight_layout(pad=2.7)
fig.suptitle("Top 5 male names of all time in the North East and South regions")
plt.show()
```
### Observations

- Interestingly, the two regions have the same names in their top 5 male names of all time. This might result from the fact that these two regions are close to each other.
- However, the pattern is different. In the **North East**, **John is the most frequent name**, with over 1.5 million counts. In the **South**, **James is at the top**, with over 2 million counts.
- Generally, **counts in the South are higher than those in the North East for the top 5 male names.**
```python
fig, ax = plt.subplots(1, 2, figsize=(12, 7), sharex=True)

draw_gender_plot(ax, [top5_female_n, top5_female_s])

# Configure the figure object
sns.despine()
fig.tight_layout(pad=2.7)
fig.suptitle("Top 5 female names of all time in the North East and South regions")
plt.show()
```
### Observations

- Even more interestingly, the top 5 female names see Mary at the top for both regions. **Mary's count is almost double that of the other names in the list.**
- The two lists seem similar, with Mary, Patricia, and Elizabeth appearing in both. Unlike the male top 5, **this list differs by 2 names between the two regions**: in the North East, Barbara and Margaret make the list, while in the South, Linda and Betty do.
- Generally, **counts in the South are higher than those in the North East for the top 5 female names.**

### Summary

- There are particular differences between the male and female top 5. The male top 5 has the same names in both regions; the female top 5 sees the same name at the top.
- The difference in count between the top male name and the other names in the list is not significant. In the female list, however, that difference is almost double.

___

### Proportion of each name from 1910 to 2014
```python
def get_proportion_df(data, region):
    p_df = (data.pivot_table(index="Year", columns="Name", values="Count", aggfunc="sum")
                .fillna(0))
    y = data.groupby(by="Year").sum()
    for year in range(1910, 2015):
        p_df.loc[year, :] = p_df.loc[year, :] / y.loc[year, "Count"]
    l = list()
    for i in range(1, len(p_df.columns)):
        l.append(p_df.iloc[:, i])
    df = pd.DataFrame(pd.concat(l), columns=["Percentage"]).reset_index()
    return df

threshold = 0.005  # Only consider percentages above this point

n_prop = get_proportion_df(n_df, "NE").loc[lambda df: df.Percentage > threshold].reset_index(drop=True)
n_prop

fig = plt.figure(figsize=(12, 8))
sns.set_theme(style="whitegrid", font_scale=1.4)

first = n_prop.loc[n_prop.Year <= 1936].assign(Group="1910-1936")
second = n_prop.loc[(n_prop.Year > 1936) & (n_prop.Year <= 1962)].assign(Group="1937-1962")
third = n_prop.loc[(n_prop.Year > 1962) & (n_prop.Year <= 1988)].assign(Group="1962-1988")
last = n_prop.loc[n_prop.Year > 1988].assign(Group="1988-2014")
merge = pd.concat([first, second, third, last])

n_plot = sns.boxplot(x=merge.Group, y=merge.Percentage)
n_plot.set(title="Percentage distribution of names in each year group of the North East Region",
           ylabel="Percentage", xlabel="Year group")
plt.show()

s_prop = get_proportion_df(s_df, "S").loc[lambda df: df.Percentage > threshold].reset_index(drop=True)
s_prop

fig = plt.figure(figsize=(12, 8))
sns.set_theme(style="whitegrid", font_scale=1.4)

first = s_prop.loc[s_prop.Year <= 1936].assign(Group="1910-1936")
second = s_prop.loc[(s_prop.Year > 1936) & (s_prop.Year <= 1962)].assign(Group="1937-1962")
third = s_prop.loc[(s_prop.Year > 1962) & (s_prop.Year <= 1988)].assign(Group="1962-1988")
last = s_prop.loc[s_prop.Year > 1988].assign(Group="1988-2014")
merge = pd.concat([first, second, third, last])

s_plot = sns.boxplot(x=merge.Group, y=merge.Percentage)
s_plot.set(title="Percentage distribution of names in each year group of the South Region",
           ylabel="Percentage", xlabel="Year group")
plt.show()
```
# PyTorch Training and Serving in SageMaker "Script Mode"

Script mode is a training script format for PyTorch that lets you execute any PyTorch training script in SageMaker with minimal modification. The [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk) handles transferring your script to a SageMaker training instance. On the training instance, SageMaker's native PyTorch support sets up training-related environment variables and executes your training script. In this tutorial, we use the SageMaker Python SDK to launch a training job and deploy the trained model.

Script mode supports training with a Python script, a Python module, or a shell script. In this example, we use a Python script to train a classification model on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/), showing how easily you can train a model on SageMaker using PyTorch scripts with the SageMaker Python SDK. In addition, this notebook demonstrates how to perform real-time inference with the [SageMaker PyTorch Serving container](https://github.com/aws/sagemaker-pytorch-serving-container). The PyTorch Serving container is the default inference method for script mode. For full documentation on deploying PyTorch models, please visit [here](https://github.com/aws/sagemaker-python-sdk/blob/master/doc/using_pytorch.rst#deploy-pytorch-models).

## Contents

1. [Background](#Background)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Train](#Train)
1. [Host](#Host)

---

## Background

MNIST is a widely used dataset for handwritten digit classification. It consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). This tutorial shows how to train and test an MNIST model on SageMaker using PyTorch.

For more information about PyTorch in SageMaker, please visit the [sagemaker-pytorch-containers](https://github.com/aws/sagemaker-pytorch-containers) and [sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk) GitHub repositories.

---

## Setup

_This notebook was created and tested on an ml.m4.xlarge notebook instance._

### Install the SageMaker Python SDK
```python
!pip install sagemaker --upgrade --ignore-installed --no-cache --user
!pip install torch==1.3.1 torchvision==0.4.2 --upgrade --ignore-installed --no-cache --user
```
Forcing `pillow==6.2.1` due to https://discuss.pytorch.org/t/cannot-import-name-pillow-version-from-pil/66096
```python
!pip uninstall -y pillow
!pip install pillow==6.2.1 --upgrade --ignore-installed --no-cache --user
```
### Restart the Kernel to Recognize New Dependencies Above
```python
from IPython.display import display_html
display_html("<script>Jupyter.notebook.kernel.restart()</script>", raw=True)

!pip3 list
```
### Create the SageMaker Session
```python
import os
import sagemaker
from sagemaker import get_execution_role

sagemaker_session = sagemaker.Session()
```
### Set Up the Service Execution Role and Region

Get the IAM role ARN used to give training and hosting access to your data. See the documentation for how to create these. Note: if more than one role is required for notebook instances, training, and/or hosting, please replace `sagemaker.get_execution_role()` with the appropriate full IAM role ARN string(s).
```python
role = get_execution_role()
print('RoleARN: {}\n'.format(role))

region = sagemaker_session.boto_session.region_name
print('Region: {}'.format(region))
```
## Training Data

### Copy the Training Data to Your Notebook Disk
```python
local_data_path = './data'

from torchvision import datasets, transforms

normalization_mean = 0.1307
normalization_std = 0.3081

# Download the dataset.
# This will not only download the data to the ./data folder, but also load and transform (normalize) it.
datasets.MNIST(local_data_path, download=True,
               transform=transforms.Compose([
                   transforms.ToTensor(),
                   transforms.Normalize((normalization_mean,), (normalization_std,))
               ]))

!ls -R {local_data_path}
```
### Upload the Data to S3 for Distributed Training Across Many Workers

We are going to use the `sagemaker.Session.upload_data` function to upload our datasets to an S3 location. The return value identifies the location; we will use it later when we start the training job.

This is the S3 bucket and prefix that you want to use for training and model data. It should be within the same region as the notebook instance, training, and hosting.
```python
bucket = sagemaker_session.default_bucket()
data_prefix = 'sagemaker/pytorch-mnist/data'

training_data_uri = sagemaker_session.upload_data(path=local_data_path,
                                                  bucket=bucket,
                                                  key_prefix=data_prefix)
print('Input spec (S3 path): {}'.format(training_data_uri))

!aws s3 ls --recursive {training_data_uri}
```
## Train

### Training Script

The `mnist_pytorch.py` script provides all the code we need for training and hosting a SageMaker model (the `model_fn` function to load a model).

The training script is very similar to one you might run outside of SageMaker, but you can access useful properties about the training environment through various environment variables, such as:

* `SM_MODEL_DIR`: A string representing the path to the directory to write model artifacts to. These artifacts are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: The number of GPUs available in the current container.
* `SM_CURRENT_HOST`: The name of the current container on the container network.
* `SM_HOSTS`: A JSON-encoded list containing all the hosts.

Supposing one input channel, 'training', was used in the call to the PyTorch estimator's `fit()` method, the following will be set, following the format `SM_CHANNEL_[channel_name]`:

* `SM_CHANNEL_TRAINING`: A string representing the path to the directory containing data in the 'training' channel.

For more information about training environment variables, please visit [SageMaker Containers](https://github.com/aws/sagemaker-containers).

A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves the model to `model_dir` so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an `argparse.ArgumentParser` instance.

Because SageMaker imports the training script, you should put your training code in a main guard (`if __name__ == '__main__':`) if you are using the same script to host your model, as we do in this example, so that SageMaker does not inadvertently run your training code at the wrong point in execution.

For example, the script run by this notebook:
```python
!ls ./src/mnist_pytorch.py
```
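The script itself is not reproduced in this notebook. Below is a hypothetical sketch of the overall shape such a script-mode entry point typically has; the argument names and defaults are illustrative, not the actual contents of `mnist_pytorch.py`:

```python
# Hypothetical skeleton of a script-mode training script (illustrative only;
# not the actual contents of mnist_pytorch.py).
import argparse
import os

def model_fn(model_dir):
    # Required for hosting: load and return the trained model from model_dir.
    ...

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # Hyperparameters arrive as command-line arguments.
    parser.add_argument('--epochs', type=int, default=5)
    parser.add_argument('--backend', type=str, default='gloo')
    # The environment variables described above supply the paths.
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--data-dir', type=str, default=os.environ['SM_CHANNEL_TRAINING'])
    parser.add_argument('--num-gpus', type=int, default=int(os.environ['SM_NUM_GPUS']))
    args = parser.parse_args()
    # ... load data from args.data_dir, train for args.epochs,
    # and save the model to args.model_dir ...
```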
You can add custom Python modules to the `src/requirements.txt` file. They will automatically be installed and made available to your training script.
```python
!cat ./src/requirements.txt
```
### Train with the SageMaker `PyTorch` Estimator

The `PyTorch` class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script, an IAM role, the number of training instances, the training instance type, and hyperparameters. In this case we are going to run our training job on a single `ml.c5.2xlarge` (CPU) instance; alternatively, you could specify GPU instances such as `ml.p3.2xlarge`. This example can be run on one or multiple instances, CPU or GPU ([full list of available instances](https://aws.amazon.com/sagemaker/pricing/instance-types/)).

The `hyperparameters` parameter is a dict of values that will be passed to your training script; you can see how to access these values in the `mnist_pytorch.py` script above.

After we've constructed our `PyTorch` object, we can fit it using the data we uploaded to S3. SageMaker makes sure our data is available in the local filesystem of each worker, so our training script can simply read the data from disk.

### `fit` the Model (Approx. 15 mins)

To start a training job, we call `estimator.fit(training_data_uri)`.
```python
from sagemaker.pytorch import PyTorch
import time

model_output_path = 's3://{}/sagemaker/pytorch-mnist/training-runs'.format(bucket)

mnist_estimator = PyTorch(
    entry_point='mnist_pytorch.py',
    source_dir='./src',
    output_path=model_output_path,
    role=role,
    framework_version='1.3.1',
    train_instance_count=1,
    train_instance_type='ml.c5.2xlarge',
    enable_sagemaker_metrics=True,
    hyperparameters={
        'epochs': 5,
        'backend': 'gloo'
    },
    # Assuming the logline from the PyTorch training job is as follows:
    #   Test set: Average loss: 0.3230, Accuracy: 9103/10000 (91%)
    metric_definitions=[
        {'Name': 'test:loss',     'Regex': 'Test set: Average loss: (.*?),'},
        {'Name': 'test:accuracy', 'Regex': '(.*?)%;'}
    ]
)

mnist_estimator.fit(inputs={'training': training_data_uri}, wait=False)

training_job_name = mnist_estimator.latest_training_job.name
print('training_job_name: {}'.format(training_job_name))
```
Attach to the training job to monitor the logs.

_Note: Each instance in the training job will appear as a different color in the logs, one color per instance._
```python
mnist_estimator = PyTorch.attach(training_job_name=training_job_name)
```
## Option 1: Perform Batch Predictions Directly in the Notebook

### Use PyTorch Core to Load the Model from `model_output_path`
```python
!aws --region {region} s3 ls --recursive {model_output_path}/{training_job_name}/output/
!aws --region {region} s3 cp {model_output_path}/{training_job_name}/output/model.tar.gz ./model/model.tar.gz
!ls ./model
!tar -xzvf ./model/model.tar.gz -C ./model

# Based on https://github.com/pytorch/examples/blob/master/mnist/main.py
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

import torch

loaded_model = Net().to('cpu')
# Single-machine multi-gpu case, or single-machine / multi-machine cpu case
loaded_model = torch.nn.DataParallel(loaded_model)
print(loaded_model)

loaded_model.load_state_dict(torch.load('./model/model.pth', map_location='cpu'))

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('./data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=256, shuffle=True)

single_loaded_img = test_loader.dataset.data[0]
single_loaded_img = single_loaded_img.to('cpu')
single_loaded_img = single_loaded_img[None, None]
single_loaded_img = single_loaded_img.type('torch.FloatTensor')  # instead of DoubleTensor
print(single_loaded_img.numpy())

from matplotlib import pyplot as plt
plt.imshow(single_loaded_img.numpy().reshape(28, 28), cmap='Greys')

result = loaded_model(single_loaded_img)
prediction = result.max(1, keepdim=True)[1][0][0].numpy()
print(prediction)
```
## Option 2: Create a SageMaker Endpoint and Perform REST-based Predictions

### Deploy the Trained Model to a SageMaker Endpoint (Approx. 10 mins)

After training, we use the `PyTorch` estimator object to build and deploy a `PyTorchPredictor`. This creates a SageMaker Endpoint, a hosted prediction service that we can use to perform inference.

As mentioned above, we have an implementation of `model_fn` in the `mnist_pytorch.py` script, which is required. We are going to use the default implementations of `input_fn`, `predict_fn`, `output_fn` and `transform_fn` defined in [sagemaker-pytorch-containers](https://github.com/aws/sagemaker-pytorch-containers).

The arguments to the `deploy` function allow us to set the number and type of instances that will be used for the Endpoint. These do not need to be the same as the values we used for the training job. For example, you can train a model on a set of GPU-based instances and then deploy the Endpoint on a fleet of CPU-based instances, but you need to make sure that you return or save your model as a CPU model, similar to what we did in `mnist_pytorch.py`.
```python
predictor = mnist_estimator.deploy(initial_instance_count=1, instance_type='ml.c5.2xlarge')
```
### Invoke the Endpoint

We can now use this predictor to classify hand-written digits. Drawing into the image box loads the pixel data into a `data` variable in this notebook, which we can then pass to the `predictor`.
```python
from IPython.display import HTML
HTML(open("input.html").read())
```
The value of `data` is retrieved from the HTML above.
```python
print(data)

import numpy as np

image = np.array([data], dtype=np.float32)
response = predictor.predict(image)
prediction = response.argmax(axis=1)[0]
print(prediction)
```
### (Optional) Clean Up the Endpoint

After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.
```python
sagemaker.Session().delete_endpoint(predictor.endpoint)
```
# Train a Plane Detection Model from a Voxel51 Dataset

This notebook trains a plane detection model using transfer learning. Depending on the label used, it can just detect a plane, or it can try to detect the model of the plane.

A pre-trained model is used as a starting point. This means that fewer example images are needed and the training process is faster.

Images are exported from a Voxel51 dataset into TensorFlow Records. The examples in the TFRecord are based on a selected field from the samples in the Voxel51 dataset. The V51 sample field you choose should have 1 or more "detections", which are bounding boxes with a label.

From: https://colab.research.google.com/drive/1sLqFKVV94wm-lglFq_0kGo2ciM0kecWD#scrollTo=wHfsJ5nWLWh9&uniqifier=1

Good stuff here too: https://www.inovex.de/blog/deep-learning-mobile-tensorflow-lite/

## Configure the Training
training_name="881images-efficientdet-d0-model" # The name for the model. All of the different directories will be based on this label_field = "detections" # The field from the V51 Samples around which will be used for the Labels for training. dataset_name = "jsm-test-dataset" # The name of the V51 dataset that will be used # Available Model Configs (You can add more from the TF2 Model Zoo) MODELS_CONFIG = { 'ssd_mobilenet_v2': { 'model_name': 'ssd_mobilenet_v2_320x320_coco17_tpu-8', 'base_pipeline_file': 'ssd_mobilenet_v2_320x320_coco17_tpu-8.config', 'pretrained_checkpoint': 'ssd_mobilenet_v2_320x320_coco17_tpu-8.tar.gz', 'batch_size': 24 }, 'ssd_mobilenet_v2_fpnlite': { 'model_name': 'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8', 'base_pipeline_file': 'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8.config', 'pretrained_checkpoint': 'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8.tar.gz', 'batch_size': 18 }, 'efficientdet-d0': { 'model_name': 'efficientdet_d0_coco17_tpu-32', 'base_pipeline_file': 'ssd_efficientdet_d0_512x512_coco17_tpu-8.config', 'pretrained_checkpoint': 'efficientdet_d0_coco17_tpu-32.tar.gz', 'batch_size': 18 }, 'efficientdet-d1': { 'model_name': 'efficientdet_d1_coco17_tpu-32', 'base_pipeline_file': 'ssd_efficientdet_d1_640x640_coco17_tpu-8.config', 'pretrained_checkpoint': 'efficientdet_d1_coco17_tpu-32.tar.gz', 'batch_size': 18 }, 'efficientdet-d2': { 'model_name': 'efficientdet_d2_coco17_tpu-32', 'base_pipeline_file': 'ssd_efficientdet_d2_768x768_coco17_tpu-8.config', 'pretrained_checkpoint': 'efficientdet_d2_coco17_tpu-32.tar.gz', 'batch_size': 18 }, 'efficientdet-d3': { 'model_name': 'efficientdet_d3_coco17_tpu-32', 'base_pipeline_file': 'ssd_efficientdet_d3_896x896_coco17_tpu-32.config', 'pretrained_checkpoint': 'efficientdet_d3_coco17_tpu-32.tar.gz', 'batch_size': 18 } } # change chosen model to deploy different models chosen_model = 'efficientdet-d0' #'ssd_mobilenet_v2' num_steps = 40000 # The more steps, the longer the training. Increase if your loss function is still decreasing and validation metrics are increasing. num_eval_steps = 500 # Perform evaluation after so many steps # The different directories and filenames to use train_record_fname = "/tf/dataset-export/" + training_name + "/train/tf.records" val_record_fname = "/tf/dataset-export/" + training_name + "/val/tf.records" val_export_dir = "/tf/dataset-export/" + training_name + "/val/" train_export_dir = "/tf/dataset-export/" + training_name + "/train/" model_export_dir = "/tf/model-export/" + training_name +"/" label_map_file = "/tf/dataset-export/" + training_name + "/label_map.pbtxt" model_name = MODELS_CONFIG[chosen_model]['model_name'] pretrained_checkpoint = MODELS_CONFIG[chosen_model]['pretrained_checkpoint'] base_pipeline_file = MODELS_CONFIG[chosen_model]['base_pipeline_file'] batch_size = MODELS_CONFIG[chosen_model]['batch_size'] #if you can fit a large batch in memory, it may speed up your training pipeline_fname = '/tf/models/research/deploy/' + base_pipeline_file fine_tune_checkpoint = '/tf/models/research/deploy/' + model_name + '/checkpoint/ckpt-0' pipeline_file = '/tf/models/research/deploy/pipeline_file.config' model_dir = '/tf/training/'+training_name+'/' # Install the different packages needed #! apt install -y protobuf-compiler libgl1-mesa-glx wget
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
protobuf-compiler is already the newest version (3.0.0-9.1ubuntu1).
libgl1-mesa-glx is already the newest version (20.0.8-0ubuntu1~18.04.1).
wget is already the newest version (1.19.4-1ubuntu2.2).
0 upgraded, 0 newly installed, 0 to remove and 21 not upgraded.
```
## Download and Install TF Models

The TF Object Detection API is available here: https://github.com/tensorflow/models
```python
import os
import pathlib

# Clone the tensorflow models repository if it doesn't already exist
if "models" in pathlib.Path.cwd().parts:
    while "models" in pathlib.Path.cwd().parts:
        os.chdir('..')
elif not pathlib.Path('models').exists():
    # Pull v2.5.0 of tensorflow/models to make this deterministic
    # (clone the tag with --branch; a /tree/ URL is not cloneable)
    !git clone --depth 1 --branch v2.5.0 https://github.com/tensorflow/models /tf/models
```

```bash
%%bash
cd /tf/models/research
ls
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
```

```python
import matplotlib
import matplotlib.pyplot as plt

import os
import random
import io
import imageio
import scipy.misc
import numpy as np

from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from IPython.display import display, Javascript
from IPython.display import Image as IPyImage

import tensorflow as tf

from object_detection.protos.string_int_label_map_pb2 import StringIntLabelMap, StringIntLabelMapItem
from google.protobuf import text_format
from object_detection.utils import label_map_util
from object_detection.utils import config_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.builders import model_builder

%matplotlib inline
```
## Export the Training and Val Datasets from Voxel51
```python
import fiftyone as fo
import math

dataset = fo.load_dataset(dataset_name)
```
### Explore the dataset content

Here are some basic stats on the Voxel51 dataset you are going to train the model on. An example sample is also printed out. In the sample, make sure the *label_field* you selected has some detections in it.
print("\t\tDataset\n-----------------------------------") view = dataset.match_tags("training").shuffle(seed=51) # You can add additional things to the query to further refine it. eg .match_tags("good_box") print(view) print("\n\n\tExample Sample\n-----------------------------------") print(view.first())
```
		Dataset
-----------------------------------
Dataset:        jsm-test-dataset
Media type:     image
Num samples:    881
Tags:           ['capture-3-29', 'capture-3-30', 'capture-5-13', 'training']
Sample fields:
    id:                   fiftyone.core.fields.ObjectIdField
    filepath:             fiftyone.core.fields.StringField
    tags:                 fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)
    metadata:             fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.Metadata)
    external_id:          fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)
    bearing:              fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)
    elevation:            fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)
    distance:             fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)
    icao24:               fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)
    model:                fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)
    manufacturer:         fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)
    norm_model:           fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)
    labelbox_id:          fiftyone.core.fields.StringField
    detections:           fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    operatorcallsign:     fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)
    predict_model:        fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    dolt_predict:         fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    dolt_40k_predict:     fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    dolt_bg_predict:      fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    dolt_400_predict:     fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    400_predict:          fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    400_aug_5k_predict:   fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    914_mega_predict:     fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    914_40k_predict:      fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    914_40k_predict_full: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    eval_tp:              fiftyone.core.fields.IntField
    eval_fp:              fiftyone.core.fields.IntField
    eval_fn:              fiftyone.core.fields.IntField
View stages:
    1. MatchTags(tags=['training'], bool=True)
    2. Shuffle(seed=51)


	Example Sample
-----------------------------------
<SampleView: {
    'id': '60a3bf2ef3610f2b7a828f88',
    'media_type': 'image',
    'filepath': '/tf/media/capture-5-13/Airbus Industrie A321-211/a11eb7_275_77_9303_2021-05-13-11-17-33.jpg',
    'tags': BaseList(['training', 'capture-5-13']),
    'metadata': <ImageMetadata: {'size_bytes': 489161, 'mime_type': 'image/jpeg', 'width': 1920, 'height': 1080, 'num_channels': 3}>,
    'external_id': <Classification: {'id': '60a3bf2ef3610f2b7a828f83', 'tags': BaseList([]), 'label': 'a11eb7_275_77_9303_2021-05-13-11-17-33', 'confidence': None, 'logits': None}>,
    'bearing': <Classification: {'id': '60a3bf2ef3610f2b7a828f84', 'tags': BaseList([]), 'label': '275', 'confidence': None, 'logits': None}>,
    'elevation': <Classification: {'id': '60a3bf2ef3610f2b7a828f85', 'tags': BaseList([]), 'label': '77', 'confidence': None, 'logits': None}>,
    'distance': <Classification: {'id': '60a3bf2ef3610f2b7a828f86', 'tags': BaseList([]), 'label': '9303', 'confidence': None, 'logits': None}>,
    'icao24': <Classification: {'id': '60a3bf2ef3610f2b7a828f87', 'tags': BaseList([]), 'label': 'a11eb7', 'confidence': None, 'logits': None}>,
    'model': <Classification: {'id': '60d5e7045e08d80243cba70b', 'tags': BaseList([]), 'label': 'A321-211', 'confidence': None, 'logits': None}>,
    'manufacturer': <Classification: {'id': '60d5e7045e08d80243cba70c', 'tags': BaseList([]), 'label': 'Airbus Industrie', 'confidence': None, 'logits': None}>,
    'norm_model': <Classification: {'id': '60d5e7ba5e08d80243ce5c32', 'tags': BaseList([]), 'label': 'A321', 'confidence': None, 'logits': None}>,
    'labelbox_id': 'ckou3l1fk9ns80y99bzz3fusq',
    'detections': <Detections: {'detections': BaseList([<Detection: {'id': '60e702f215f87e1a607696c7', 'attributes': BaseDict({}), 'tags': BaseList([]), 'label': 'plane', 'bounding_box': BaseList([0.5755208333333334, 0.40185185185185185, 0.121875, 0.16296296296296298]), 'mask': None, 'confidence': None, 'index': None}>])}>,
    'operatorcallsign': <Classification: {'id': '60d5e7045e08d80243cba70d', 'tags': BaseList([]), 'label': 'AMERICAN', 'confidence': None, 'logits': None}>,
    'predict_model': <Detections: {'detections': BaseList([<Detection: {'id': '60d0ccaf218c23e19b1f9e85', 'label': 'plane', 'bounding_box': BaseList([0.5768705606460571, 0.4023531973361969, 0.12218141555786133, 0.16684052348136902]), 'confidence': 0.9999737739562988}>])}>,
    'dolt_predict': <Detections: {'detections': BaseList([<Detection: {'id': '60d35f37218c23e19b1fb5d0', 'label': 'plane', 'bounding_box': BaseList([0.5762593746185303, 0.40124379263983834, 0.12189579010009766, 0.16604603661431205]), 'confidence': 0.9999697208404541}>])}>,
    'dolt_40k_predict': <Detections: {'detections': BaseList([<Detection: {'id': '60d543ac20cf8e383b417810', 'label': 'plane', 'bounding_box': BaseList([0.5774428248405457, 0.3997461001078288, 0.11862397193908691, 0.1671259138319227]), 'confidence': 1.0}>])}>,
    'dolt_bg_predict': None,
    'dolt_400_predict': None,
    '400_predict': <Detections: {'detections': BaseList([<Detection: {'id': '60d7ee853f3353cca1d7c600', 'label': 'plane', 'bounding_box': BaseList([0.5755680799484253, 0.4013982084062364, 0.12205994129180908, 0.16460511419508195]), 'confidence': 1.0}>])}>,
    '400_aug_5k_predict': None,
    '914_mega_predict': None,
    '914_40k_predict': <Detections: {'detections': BaseList([<Detection: {'id': '60e4805010424668fddefc67', 'label': 'plane', 'bounding_box': BaseList([0.5735075355817875, 0.40489327907562256, 0.12403148040175438, 0.1557837724685669]), 'confidence': 0.9985151886940002}>])}>,
    '914_40k_predict_full': None,
    'eval_tp': None,
    'eval_fp': None,
    'eval_fn': None,
}>
```
### Export the dataset into TFRecords

The selected dataset samples will be exported to TensorFlow Records (TFRecords), split between training and validation. The ratio can be adjusted below. You only need to do this once to build the dataset. If you run this a second time with the same **training_name**, additional samples will be appended to the end.
```python
# The Dataset or DatasetView to export
sample_len = len(view)
val_len = math.floor(sample_len * 0.2)
train_len = math.floor(sample_len * 0.8)
print("Total: {} Val: {} Train: {}".format(sample_len, val_len, train_len))

val_view = view.take(val_len)
train_view = view.skip(val_len).take(train_len)

# Export the datasets
val_view.export(
    export_dir=val_export_dir,
    dataset_type=fo.types.TFObjectDetectionDataset,  # or fo.types.COCODetectionDataset
    label_field=label_field,
)
train_view.export(
    export_dir=train_export_dir,
    dataset_type=fo.types.TFObjectDetectionDataset,  # or fo.types.COCODetectionDataset
    label_field=label_field,
)
```
Total: 881 Val: 176 Train: 704 100% |█████████████████| 176/176 [4.1s elapsed, 0s remaining, 52.9 samples/s] 100% |█████████████████| 704/704 [13.2s elapsed, 0s remaining, 54.4 samples/s]
Apache-2.0
ml-model/notebooks/Train Plane Detection Model.ipynb
wiseman/SkyScan
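A quick sanity check after exporting is to count the records actually written. This is a minimal sketch, not part of the original notebook; it assumes each export directory contains a tf.records file, matching the train_record_fname/val_record_fname paths used in the pipeline config later on:

import tensorflow as tf

# Count the records in each exported TFRecord file (assumed "tf.records" layout)
for name, path in [("train", train_export_dir + "/tf.records"),
                   ("val", val_export_dir + "/tf.records")]:
    count = sum(1 for _ in tf.data.TFRecordDataset(path))
    print("{}: {} records".format(name, count))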
Create a file with the Labels for the objects

The TF2 Object Detection API looks for a map of the labels used and their corresponding IDs. You can build the list of unique class names by iterating over the dataset, or just hardcode it if there are only a few.
# Build a TF label map file from a list of class names
from object_detection.protos.string_int_label_map_pb2 import (
    StringIntLabelMap, StringIntLabelMapItem)
from google.protobuf import text_format

def convert_classes(classes, start=1):
    msg = StringIntLabelMap()
    for id, name in enumerate(classes, start=start):
        msg.item.append(StringIntLabelMapItem(id=id, name=name))
    text = str(text_format.MessageToBytes(msg, as_utf8=True), 'utf-8')
    return text

# If label_field is a Classification
class_names = []
for sample in view.select_fields(label_field):
    if sample[label_field].label not in class_names:
        class_names.append(sample[label_field].label)
print(class_names)

# If label_field is Detections
class_names = []
for sample in view.select_fields(label_field):
    if sample[label_field] is not None:
        for detection in sample[label_field].detections:
            label = detection["label"]
            if label not in class_names:
                class_names.append(label)
print(class_names)

# You can hardwire it too
class_names = ["plane"]

txt = convert_classes(class_names)
print(txt)
with open(label_map_file, 'w') as f:
    f.write(txt)
item { name: "plane" id: 1 }
Apache-2.0
ml-model/notebooks/Train Plane Detection Model.ipynb
wiseman/SkyScan
Download a pretrained Model & default Config

A list of the models can be found here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
The configs are here: https://raw.githubusercontent.com/tensorflow/models/master/research/object_detection/configs/tf2/
#download pretrained weights %mkdir /tf/models/research/deploy/ %cd /tf/models/research/deploy/ import tarfile download_tar = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/' + pretrained_checkpoint !wget {download_tar} tar = tarfile.open(pretrained_checkpoint) tar.extractall() tar.close() #download base training configuration file %cd /tf/models/research/deploy download_config = 'https://raw.githubusercontent.com/tensorflow/models/master/research/object_detection/configs/tf2/' + base_pipeline_file !wget {download_config}
/tf/models/research/deploy --2021-07-08 20:52:50-- https://raw.githubusercontent.com/tensorflow/models/master/research/object_detection/configs/tf2/ssd_efficientdet_d0_512x512_coco17_tpu-8.config Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.111.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 4630 (4.5K) [text/plain] Saving to: ‘ssd_efficientdet_d0_512x512_coco17_tpu-8.config.8’ ssd_efficientdet_d0 100%[===================>] 4.52K --.-KB/s in 0s 2021-07-08 20:52:50 (43.3 MB/s) - ‘ssd_efficientdet_d0_512x512_coco17_tpu-8.config.8’ saved [4630/4630]
Apache-2.0
ml-model/notebooks/Train Plane Detection Model.ipynb
wiseman/SkyScan
Build the Config for training

The default config for the model being trained needs to be updated with the correct parameters and paths to the data. This just applies some standard settings; you may need to do additional tuning if training is not working well.
# Gets the total number of classes from the Label Map def get_num_classes(pbtxt_fname): from object_detection.utils import label_map_util label_map = label_map_util.load_labelmap(pbtxt_fname) categories = label_map_util.convert_label_map_to_categories( label_map, max_num_classes=90, use_display_name=True) category_index = label_map_util.create_category_index(categories) return len(category_index.keys()) num_classes = get_num_classes(label_map_file) print("working with {} classes".format(num_classes))
working with 1 classes
Apache-2.0
ml-model/notebooks/Train Plane Detection Model.ipynb
wiseman/SkyScan
You may need to adjust the learning rate section below. The numbers used here are from the EfficientDet config. I noticed that this learning rate worked well for the small bounding boxes I was using when planes were at a high altitude; you can try increasing it if the planes take up more of the image. If the initial loss values are high, that is probably a sign that you should adjust the learning rate.

You may also want to look at other aspects of the config file. They set the parameters for model training and may need to be adjusted based on the model architecture you are using and the images you are training on.
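If you do decide to change the base learning rate, the same re.sub approach used in the next cell works. This is an illustrative sketch only (the 8e-3 value is made up, and the character class includes e and - so that scientific-notation values like 8e-2 are matched in full):

# Hypothetical example: lower the cosine-decay base learning rate.
# Run this inside the config-rewriting cell below, where `s` holds the
# pipeline file contents.
s = re.sub(r'learning_rate_base: [.0-9e-]+',
           'learning_rate_base: {}'.format('8e-3'), s)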
# write custom configuration file by slotting our dataset, model checkpoint, and training parameters into the base pipeline file import re %cd /tf/models/research/deploy print('writing custom configuration file') with open(pipeline_fname) as f: s = f.read() with open('pipeline_file.config', 'w') as f: # fine_tune_checkpoint s = re.sub('fine_tune_checkpoint: ".*?"', 'fine_tune_checkpoint: "{}"'.format(fine_tune_checkpoint), s) # tfrecord files train and test. s = re.sub( '(input_path: ".*?)(PATH_TO_BE_CONFIGURED/train)(.*?")', 'input_path: "{}"'.format(train_record_fname), s) s = re.sub( '(input_path: ".*?)(PATH_TO_BE_CONFIGURED/val)(.*?")', 'input_path: "{}"'.format(val_record_fname), s) # label_map_path s = re.sub( 'label_map_path: ".*?"', 'label_map_path: "{}"'.format(label_map_file), s) # Set training batch_size. s = re.sub('batch_size: [0-9]+', 'batch_size: {}'.format(batch_size), s) # Set training steps, num_steps s = re.sub('num_steps: [0-9]+', 'num_steps: {}'.format(num_steps), s) # Set learning_rate_base in learning_rate, sane default # s = re.sub('learning_rate_base: [.0-9]+', # 'learning_rate_base: {}'.format("8e-2"), s) # Set warmup_learning_rate in learning_rate, sane default s = re.sub('warmup_learning_rate: [.0-9]+', 'warmup_learning_rate: {}'.format(.001), s) # Set warmup_steps in learning_rate, sane default s = re.sub('warmup_steps: [.0-9]+', 'warmup_steps: {}'.format(2500), s) # Set total_steps in learning_rate, num_steps s = re.sub('total_steps: [0-9]+', 'total_steps: {}'.format(num_steps), s) # Set number of classes num_classes. s = re.sub('num_classes: [0-9]+', 'num_classes: {}'.format(num_classes), s) # Setup the data augmentation preprocessor - not sure if this is a good one to use, commenting out for now and going with defaults. 
#s = re.sub('random_scale_crop_and_pad_to_square {\s+output_size: 896\s+scale_min: 0.1\s+scale_max: 2.0\s+}', # 'random_crop_image {\n\tmin_object_covered: 1.0\n\tmin_aspect_ratio: 0.75\n\tmax_aspect_ratio: 1.5\n\tmin_area: 0.25\n\tmax_area: 0.875\n\toverlap_thresh: 0.5\n\trandom_coef: 0.125\n}',s, flags=re.MULTILINE) #s = re.sub('ssd_random_crop {\s+}', # 'random_crop_image {\n\tmin_object_covered: 1.0\n\tmin_aspect_ratio: 0.75\n\tmax_aspect_ratio: 1.5\n\tmin_area: 0.10\n\tmax_area: 0.75\n\toverlap_thresh: 0.5\n\trandom_coef: 0.125\n}',s, flags=re.MULTILINE) # replacing the default data augmentation with something more comprehensive # the available options are listed here: https://github.com/tensorflow/models/blob/master/research/object_detection/protos/preprocessor.proto data_augmentation = ("data_augmentation_options {\n random_distort_color: { \n } \n}\n\n" "data_augmentation_options {\n random_horizontal_flip: { \n } \n}\n\n" "data_augmentation_options {\n random_vertical_flip: { \n } \n}\n\n" "data_augmentation_options {\n random_rotation90: { \n } \n}\n\n" "data_augmentation_options {\n random_jitter_boxes: { \n } \n}\n\n" "data_augmentation_options {\n random_crop_image {\n\tmin_object_covered: 1.0\n\tmin_aspect_ratio: 0.95\n\tmax_aspect_ratio: 1.05\n\tmin_area: 0.25\n\tmax_area: 0.875\n\toverlap_thresh: 0.9\n\trandom_coef: 0.5\n}\n}\n\n" "data_augmentation_options {\n random_jpeg_quality: {\n\trandom_coef: 0.5\n\tmin_jpeg_quality: 40\n\tmax_jpeg_quality: 90\n } \n}\n\n" ) s = re.sub('data_augmentation_options {[\s\w]*{[\s\w\:\.]*}\s*}\s* data_augmentation_options {[\s\w]*{[\s\w\:\.]*}\s*}', data_augmentation,s, flags=re.MULTILINE) #fine-tune checkpoint type s = re.sub( 'fine_tune_checkpoint_type: "classification"', 'fine_tune_checkpoint_type: "{}"'.format('detection'), s) f.write(s) %cat /tf/models/research/deploy/pipeline_file.config
# SSD with EfficientNet-b0 + BiFPN feature extractor, # shared box predictor and focal loss (a.k.a EfficientDet-d0). # See EfficientDet, Tan et al, https://arxiv.org/abs/1911.09070 # See Lin et al, https://arxiv.org/abs/1708.02002 # Trained on COCO, initialized from an EfficientNet-b0 checkpoint. # # Train on TPU-8 model { ssd { inplace_batchnorm_update: true freeze_batchnorm: false num_classes: 1 add_background_class: false box_coder { faster_rcnn_box_coder { y_scale: 10.0 x_scale: 10.0 height_scale: 5.0 width_scale: 5.0 } } matcher { argmax_matcher { matched_threshold: 0.5 unmatched_threshold: 0.5 ignore_thresholds: false negatives_lower_than_unmatched: true force_match_for_each_row: true use_matmul_gather: true } } similarity_calculator { iou_similarity { } } encode_background_as_zeros: true anchor_generator { multiscale_anchor_generator { min_level: 3 max_level: 7 anchor_scale: 4.0 aspect_ratios: [1.0, 2.0, 0.5] scales_per_octave: 3 } } image_resizer { keep_aspect_ratio_resizer { min_dimension: 512 max_dimension: 512 pad_to_max_dimension: true } } box_predictor { weight_shared_convolutional_box_predictor { depth: 64 class_prediction_bias_init: -4.6 conv_hyperparams { force_use_bias: true activation: SWISH regularizer { l2_regularizer { weight: 0.00004 } } initializer { random_normal_initializer { stddev: 0.01 mean: 0.0 } } batch_norm { scale: true decay: 0.99 epsilon: 0.001 } } num_layers_before_predictor: 3 kernel_size: 3 use_depthwise: true } } feature_extractor { type: 'ssd_efficientnet-b0_bifpn_keras' bifpn { min_level: 3 max_level: 7 num_iterations: 3 num_filters: 64 } conv_hyperparams { force_use_bias: true activation: SWISH regularizer { l2_regularizer { weight: 0.00004 } } initializer { truncated_normal_initializer { stddev: 0.03 mean: 0.0 } } batch_norm { scale: true, decay: 0.99, epsilon: 0.001, } } } loss { classification_loss { weighted_sigmoid_focal { alpha: 0.25 gamma: 1.5 } } localization_loss { weighted_smooth_l1 { } } classification_weight: 1.0 localization_weight: 1.0 } normalize_loss_by_num_matches: true normalize_loc_loss_by_codesize: true post_processing { batch_non_max_suppression { score_threshold: 1e-8 iou_threshold: 0.5 max_detections_per_class: 100 max_total_detections: 100 } score_converter: SIGMOID } } } train_config: { fine_tune_checkpoint: "/tf/models/research/deploy/efficientdet_d0_coco17_tpu-32/checkpoint/ckpt-0" fine_tune_checkpoint_version: V2 fine_tune_checkpoint_type: "detection" batch_size: 18 sync_replicas: true startup_delay_steps: 0 replicas_to_aggregate: 8 use_bfloat16: true num_steps: 5000 data_augmentation_options { random_distort_color: { } } data_augmentation_options { random_horizontal_flip: { } } data_augmentation_options { random_vertical_flip: { } } data_augmentation_options { random_rotation90: { } } data_augmentation_options { random_jitter_boxes: { } } data_augmentation_options { random_crop_image { min_object_covered: 1.0 min_aspect_ratio: 0.75 max_aspect_ratio: 1.5 min_area: 0.25 max_area: 0.875 overlap_thresh: 0.5 random_coef: 0.125 } } optimizer { momentum_optimizer: { learning_rate: { cosine_decay_learning_rate { learning_rate_base: 8e-2 total_steps: 5000 warmup_learning_rate: 0.001 warmup_steps: 2500 } } momentum_optimizer_value: 0.9 } use_moving_average: false } max_number_of_boxes: 100 unpad_groundtruth_tensors: false } train_input_reader: { label_map_path: "/tf/dataset-export/lb-400images-efficientdet-d0-augment-model/label_map.pbtxt" tf_record_input_reader { input_path: 
"/tf/dataset-export/lb-400images-efficientdet-d0-augment-model/train/tf.records" } } eval_config: { metrics_set: "coco_detection_metrics" use_moving_averages: false batch_size: 18; } eval_input_reader: { label_map_path: "/tf/dataset-export/lb-400images-efficientdet-d0-augment-model/label_map.pbtxt" shuffle: false num_epochs: 1 tf_record_input_reader { input_path: "/tf/dataset-export/lb-400images-efficientdet-d0-augment-model/val/tf.records" } }
Apache-2.0
ml-model/notebooks/Train Plane Detection Model.ipynb
wiseman/SkyScan
Train Custom TF2 Object Detector

This step will launch the TF2 Object Detection training. It can take a while to start up. If you get an error about not finding the GPU, try shutting down the Jupyter kernel and restarting it. While it is running, it should print out the current loss and which step it is on; progress can also be watched in TensorBoard (see the sketch after this list).

* pipeline_file: defined above when writing the custom training configuration
* model_dir: the location TensorBoard logs and saved model checkpoints will be written to
* num_train_steps: how long to train for
* num_eval_steps: perform eval on the validation set after this many steps
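While training runs, the loss curves can be monitored from another notebook cell. This is a minimal sketch, not part of the original notebook; it assumes the tensorboard package's notebook helper is available in this container:

# Launch TensorBoard inside Jupyter, pointed at the training directory
from tensorboard import notebook
notebook.start("--logdir " + model_dir)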
# 2:48 PM ET Tuesday, May 25, 2021 !python /tf/models/research/object_detection/model_main_tf2.py \ --pipeline_config_path={pipeline_file} \ --model_dir={model_dir} \ --alsologtostderr \ --num_train_steps={num_steps} \ --sample_1_of_n_eval_examples=1 \ --num_eval_steps={num_eval_steps}
2021-07-08 20:53:56.154660: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 2021-07-08 20:53:59.234024: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1 2021-07-08 20:53:59.259096: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-07-08 20:53:59.259998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: pciBusID: 0000:00:1e.0 name: Tesla K80 computeCapability: 3.7 coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s 2021-07-08 20:53:59.260066: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 2021-07-08 20:53:59.264586: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11 2021-07-08 20:53:59.264668: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11 2021-07-08 20:53:59.265942: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10 2021-07-08 20:53:59.266271: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10 2021-07-08 20:53:59.267456: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11 2021-07-08 20:53:59.268525: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11 2021-07-08 20:53:59.268757: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8 2021-07-08 20:53:59.268885: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-07-08 20:53:59.269705: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-07-08 20:53:59.270442: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0 2021-07-08 20:53:59.270765: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 
2021-07-08 20:53:59.271185: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-07-08 20:53:59.271986: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: pciBusID: 0000:00:1e.0 name: Tesla K80 computeCapability: 3.7 coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s 2021-07-08 20:53:59.272113: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-07-08 20:53:59.272913: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-07-08 20:53:59.273640: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0 2021-07-08 20:53:59.273696: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 2021-07-08 20:53:59.833997: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix: 2021-07-08 20:53:59.834052: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0 2021-07-08 20:53:59.834078: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N 2021-07-08 20:53:59.834369: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-07-08 20:53:59.835221: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-07-08 20:53:59.836071: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-07-08 20:53:59.836837: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10661 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7) WARNING:tensorflow:Collective ops is not configured at program startup. Some performance features may not be enabled. W0708 20:53:59.839558 139809137821504 mirrored_strategy.py:379] Collective ops is not configured at program startup. Some performance features may not be enabled. 
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',) I0708 20:54:00.073076 139809137821504 mirrored_strategy.py:369] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',) INFO:tensorflow:Maybe overwriting train_steps: 40000 I0708 20:54:00.080240 139809137821504 config_util.py:552] Maybe overwriting train_steps: 40000 INFO:tensorflow:Maybe overwriting use_bfloat16: False I0708 20:54:00.080397 139809137821504 config_util.py:552] Maybe overwriting use_bfloat16: False I0708 20:54:00.098092 139809137821504 ssd_efficientnet_bifpn_feature_extractor.py:143] EfficientDet EfficientNet backbone version: efficientnet-b0 I0708 20:54:00.098223 139809137821504 ssd_efficientnet_bifpn_feature_extractor.py:144] EfficientDet BiFPN num filters: 64 I0708 20:54:00.098315 139809137821504 ssd_efficientnet_bifpn_feature_extractor.py:146] EfficientDet BiFPN num iterations: 3 I0708 20:54:00.111051 139809137821504 efficientnet_model.py:147] round_filter input=32 output=32 INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',). I0708 20:54:00.157430 139809137821504 cross_device_ops.py:621] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',). INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',). I0708 20:54:00.162168 139809137821504 cross_device_ops.py:621] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',). INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',). I0708 20:54:00.166090 139809137821504 cross_device_ops.py:621] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',). INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',). I0708 20:54:00.167571 139809137821504 cross_device_ops.py:621] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',). INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',). I0708 20:54:00.176658 139809137821504 cross_device_ops.py:621] Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
Apache-2.0
ml-model/notebooks/Train Plane Detection Model.ipynb
wiseman/SkyScan
Evaluate trained model

After the model has finished training, try running it against some data to see if it at least works.
import matplotlib import matplotlib.pyplot as plt import io, os, glob import scipy.misc import numpy as np from six import BytesIO from PIL import Image, ImageDraw, ImageFont import tensorflow as tf from object_detection.utils import label_map_util from object_detection.utils import config_util from object_detection.utils import visualization_utils as viz_utils from object_detection.builders import model_builder %matplotlib inline def load_image_into_numpy_array(path): """Load an image from file into a numpy array. Puts image into numpy array to feed into tensorflow graph. Note that by convention we put it into a numpy array with shape (height, width, channels), where channels=3 for RGB. Args: path: the file path to the image Returns: uint8 numpy array with shape (img_height, img_width, 3) """ img_data = tf.io.gfile.GFile(path, 'rb').read() image = Image.open(BytesIO(img_data)) (im_width, im_height) = image.size return np.array(image.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8) %ls {model_dir}
_____no_output_____
Apache-2.0
ml-model/notebooks/Train Plane Detection Model.ipynb
wiseman/SkyScan
Load model from a training checkpoint

Select a checkpoint index from above.
# generally you want to put the last ckpt index from training in here checkpoint_index=41 # recover our saved model pipeline_config = pipeline_file checkpoint = model_dir + "ckpt-" + str(checkpoint_index) configs = config_util.get_configs_from_pipeline_file(pipeline_config) model_config = configs['model'] detection_model = model_builder.build(model_config=model_config, is_training=False) # Restore checkpoint ckpt = tf.compat.v2.train.Checkpoint(model=detection_model) ckpt.restore(os.path.join(checkpoint)).expect_partial() def get_model_detection_function(model): """Get a tf.function for detection.""" @tf.function def detect_fn(image): """Detect objects in image.""" image, shapes = model.preprocess(image) prediction_dict = model.predict(image, shapes) detections = model.postprocess(prediction_dict, shapes) return detections, prediction_dict, tf.reshape(shapes, [-1]) return detect_fn detect_fn = get_model_detection_function(detection_model) # map labels for inference decoding label_map_path = configs['eval_input_config'].label_map_path label_map = label_map_util.load_labelmap(label_map_path) categories = label_map_util.convert_label_map_to_categories( label_map, max_num_classes=label_map_util.get_max_label_map_index(label_map), use_display_name=True) category_index = label_map_util.create_category_index(categories) label_map_dict = label_map_util.get_label_map_dict(label_map, use_display_name=True) #run detector on test image #it takes a little longer on the first run and then runs at normal speed. import random TEST_IMAGE_PATHS = glob.glob('/tf/media/capture-5-13/Textron Aviation Inc 680A/*.jpg') #/tf/dataset-export/pet/images/keeshond_171.jpg') #'/tf/testing/Dassault Aviation FALCON 2000/*.jpg') image_path = random.choice(TEST_IMAGE_PATHS) image_np = load_image_into_numpy_array(image_path) input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32) detections, predictions_dict, shapes = detect_fn(input_tensor) print(detections['detection_scores']) label_id_offset = 1 # Depending on whether your LabelMap starts at 0 or 1 image_np_with_detections = image_np.copy() viz_utils.visualize_boxes_and_labels_on_image_array( image_np_with_detections, detections['detection_boxes'][0].numpy(), (detections['detection_classes'][0].numpy() + label_id_offset).astype(int), detections['detection_scores'][0].numpy(), category_index, use_normalized_coordinates=True, max_boxes_to_draw=200, min_score_thresh=.2, agnostic_mode=False, ) plt.figure(figsize=(20,25)) plt.imshow(image_np_with_detections) plt.show()
_____no_output_____
Apache-2.0
ml-model/notebooks/Train Plane Detection Model.ipynb
wiseman/SkyScan
Export the model

When you have a working model, use the TF2 Object Detection API to export it to a SavedModel.

Export a Saved Model that uses Image Tensors
image_tensor_model_export_dir = model_export_dir + "image_tensor_saved_model" print(image_tensor_model_export_dir) !python /tf/models/research/object_detection/exporter_main_v2.py \ --input_type image_tensor \ --trained_checkpoint_dir={model_dir} \ --pipeline_config_path={pipeline_file} \ --output_directory {image_tensor_model_export_dir}
2021-06-28 23:00:37.233618: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 2021-06-28 23:00:39.839076: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1 2021-06-28 23:00:39.864436: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-06-28 23:00:39.865310: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: pciBusID: 0000:00:1e.0 name: Tesla K80 computeCapability: 3.7 coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s 2021-06-28 23:00:39.865362: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 2021-06-28 23:00:39.869447: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11 2021-06-28 23:00:39.869533: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11 2021-06-28 23:00:39.870937: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10 2021-06-28 23:00:39.871318: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10 2021-06-28 23:00:39.872613: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11 2021-06-28 23:00:39.873689: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11 2021-06-28 23:00:39.873938: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8 2021-06-28 23:00:39.874103: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-06-28 23:00:39.874960: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-06-28 23:00:39.875740: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0 2021-06-28 23:00:39.876140: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 
2021-06-28 23:00:39.876588: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-06-28 23:00:39.877376: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: pciBusID: 0000:00:1e.0 name: Tesla K80 computeCapability: 3.7 coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s 2021-06-28 23:00:39.877497: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-06-28 23:00:39.878292: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-06-28 23:00:39.879020: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0 2021-06-28 23:00:39.879080: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 2021-06-28 23:00:40.521033: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix: 2021-06-28 23:00:40.521120: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0 2021-06-28 23:00:40.521143: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N 2021-06-28 23:00:40.521465: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-06-28 23:00:40.522340: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-06-28 23:00:40.523146: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-06-28 23:00:40.523906: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10661 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7) I0628 23:00:40.785022 140532255090496 ssd_efficientnet_bifpn_feature_extractor.py:143] EfficientDet EfficientNet backbone version: efficientnet-b0 I0628 23:00:40.785330 140532255090496 ssd_efficientnet_bifpn_feature_extractor.py:144] EfficientDet BiFPN num filters: 64 I0628 23:00:40.785445 140532255090496 ssd_efficientnet_bifpn_feature_extractor.py:146] EfficientDet BiFPN num iterations: 3 I0628 23:00:40.792782 140532255090496 efficientnet_model.py:147] round_filter input=32 output=32 I0628 23:00:40.825772 140532255090496 efficientnet_model.py:147] round_filter input=32 output=32 I0628 23:00:40.825912 140532255090496 efficientnet_model.py:147] round_filter input=16 output=16 I0628 23:00:40.894950 140532255090496 efficientnet_model.py:147] round_filter input=16 output=16 I0628 23:00:40.895127 140532255090496 efficientnet_model.py:147] round_filter input=24 output=24 I0628 23:00:41.066718 140532255090496 efficientnet_model.py:147] round_filter input=24 output=24 I0628 23:00:41.066934 140532255090496 efficientnet_model.py:147] round_filter input=40 output=40 I0628 23:00:41.239912 
140532255090496 efficientnet_model.py:147] round_filter input=40 output=40 I0628 23:00:41.240132 140532255090496 efficientnet_model.py:147] round_filter input=80 output=80 I0628 23:00:41.503206 140532255090496 efficientnet_model.py:147] round_filter input=80 output=80 I0628 23:00:41.503429 140532255090496 efficientnet_model.py:147] round_filter input=112 output=112 I0628 23:00:41.763689 140532255090496 efficientnet_model.py:147] round_filter input=112 output=112 I0628 23:00:41.763906 140532255090496 efficientnet_model.py:147] round_filter input=192 output=192 I0628 23:00:42.249581 140532255090496 efficientnet_model.py:147] round_filter input=192 output=192 I0628 23:00:42.249801 140532255090496 efficientnet_model.py:147] round_filter input=320 output=320 I0628 23:00:42.333547 140532255090496 efficientnet_model.py:147] round_filter input=1280 output=1280 I0628 23:00:42.368739 140532255090496 efficientnet_model.py:458] Building model efficientnet with params ModelConfig(width_coefficient=1.0, depth_coefficient=1.0, resolution=224, dropout_rate=0.2, blocks=(BlockConfig(input_filters=32, output_filters=16, kernel_size=3, num_repeat=1, expand_ratio=1, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=16, output_filters=24, kernel_size=3, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=24, output_filters=40, kernel_size=5, num_repeat=2, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=40, output_filters=80, kernel_size=3, num_repeat=3, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=80, output_filters=112, kernel_size=5, num_repeat=3, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=112, output_filters=192, kernel_size=5, num_repeat=4, expand_ratio=6, strides=(2, 2), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise'), BlockConfig(input_filters=192, output_filters=320, kernel_size=3, num_repeat=1, expand_ratio=6, strides=(1, 1), se_ratio=0.25, id_skip=True, fused_conv=False, conv_type='depthwise')), stem_base_filters=32, top_base_filters=1280, activation='simple_swish', batch_norm='default', bn_momentum=0.99, bn_epsilon=0.001, weight_decay=5e-06, drop_connect_rate=0.2, depth_divisor=8, min_depth=None, use_se=True, input_channels=3, num_classes=1000, model_name='efficientnet', rescale_input=False, data_format='channels_last', dtype='float32')
Apache-2.0
ml-model/notebooks/Train Plane Detection Model.ipynb
wiseman/SkyScan
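Once the export finishes, a quick way to confirm it worked is to load the SavedModel back and run it on one image. This is a minimal sketch, not part of the original notebook; it reuses image_np from the evaluation cells above and relies on the TF2 OD API convention that an exported detection model is directly callable on a uint8 image batch:

import numpy as np
import tensorflow as tf

# Smoke test: load the exported SavedModel and run a single inference
saved = tf.saved_model.load(image_tensor_model_export_dir + "/saved_model")
input_tensor = tf.convert_to_tensor(image_np[np.newaxis, ...], dtype=tf.uint8)
detections = saved(input_tensor)
print(detections["detection_scores"][0][:5])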
Export a Saved Model that uses TF Examples
# Ignore for now - we do not need to use the TF Example approach. #tf_example_model_export_dir = model_export_dir + "tf_example_saved_model" #!python /tf/models/research/object_detection/exporter_main_v2.py \ # --input_type=tf_example \ # --trained_checkpoint_dir={model_dir} \ # --pipeline_config_path={pipeline_file} \ # --output_directory {tf_example_model_export_dir}
_____no_output_____
Apache-2.0
ml-model/notebooks/Train Plane Detection Model.ipynb
wiseman/SkyScan
Export a TFLite compatible model

Remember that only detection models that use SSDs are supported.
!python /tf/models/research/object_detection/export_tflite_graph_tf2.py \ --pipeline_config_path={pipeline_file} \ --trained_checkpoint_dir={model_dir} \ --output_directory={model_export_dir}tflite-compatible # I think we skip this step... #! tflite_convert \ # --saved_model_dir="{model_export_dir}tflite-compatible/saved_model" \ # --output_file="{model_export_dir}output.tflite" #https://github.com/tensorflow/models/issues/9033#issuecomment-706573546 import cv2 import glob import numpy as np train_images = [] def representative_data_gen(): path = '/tf/testing/Airbus A319-115' dataset_list = tf.data.Dataset.list_files(path + '/*.jpg') for i in range(100): image = next(iter(dataset_list)) image = tf.io.read_file(image) image = tf.io.decode_jpeg(image, channels=3) image = tf.image.resize(image, [300, 300]) image = tf.cast(image / 255., tf.float32) image = tf.expand_dims(image, 0) yield [image] converter = tf.lite.TFLiteConverter.from_saved_model(model_export_dir+"tflite-compatible/saved_model") converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8, tf.lite.OpsSet.TFLITE_BUILTINS] #converter.optimizations = [tf.lite.Optimize.DEFAULT] #converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8, tf.lite.OpsSet.SELECT_TF_OPS] #converter.inference_input_type = tf.int8 #converter.inference_output_type = tf.int8 converter.representative_dataset = representative_data_gen # Ensure that if any ops can't be quantized, the converter throws an error #converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] # Set the input and output tensors to uint8 (APIs added in r2.3) #converter.inference_input_type = tf.uint8 #converter.inference_output_type = tf.uint8 tflite_model = converter.convert() # Save the model. with open(model_export_dir+'model.tflite', 'wb') as f: f.write(tflite_model) !curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - !echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list !apt-get update !apt-get -y install edgetpu-compiler !edgetpu_compiler -s {model_export_dir}model.tflite -o {model_export_dir}
_____no_output_____
Apache-2.0
ml-model/notebooks/Train Plane Detection Model.ipynb
wiseman/SkyScan
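Before deploying the converted model, it is worth verifying that the .tflite file loads and checking what input it expects. A minimal sketch using the standard tf.lite.Interpreter API:

# Smoke test: load the converted model and print its input spec
interpreter = tf.lite.Interpreter(model_path=model_export_dir + "model.tflite")
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    print(detail["shape"], detail["dtype"])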
Export a TensorFlow.js compatible model

From: https://www.tensorflow.org/js/tutorials/conversion/import_saved_model
!pip install tensorflowjs ! tensorflowjs_converter \ --input_format=tf_saved_model \ {model_export_dir}image_tensor_saved_model/saved_model \ {model_export_dir}web_model !saved_model_cli show --dir /tf/models/research/deploy/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model --all
_____no_output_____
Apache-2.0
ml-model/notebooks/Train Plane Detection Model.ipynb
wiseman/SkyScan
Python Homework 1 - The challenge

Take the Python challenge found on www.pythonchallenge.com/. You will copy this notebook and rename it as YOURLASTNAME-FIRSTINITIAL-python-challenge-xx-Sept-2017, with your name replacing your last name and first initial and the xx replaced by the date you started or submitted. Do the first 10 challenges and put each question, your Python solution, and the final resulting URL in this notebook or a series of connected notebooks. Discuss your attempt and what parts of Python you were using. Upload your completed Jupyter notebook zip file to the elearning site as your homework submission. Do not put this notebook on your GitHub. Note: 3 points for 10 correct answers, 4 points for 15, and 5 points for all 33.

Python challenge question 1

This challenge is straightforward. It hints at changing the URL, so I tried "http://www.pythonchallenge.com/pc/def/1.html", which showed the message "2**38 is much much larger", and that was the clue.
#http://www.pythonchallenge.com/pc/def/0.html print(2**38) print(pow(2,38)) #http://www.pythonchallenge.com/pc/def/map.html
274877906944 274877906944
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 2

I changed the URL to "http://www.pythonchallenge.com/pc/def/274877906944.html", which redirected to "http://www.pythonchallenge.com/pc/def/map.html". This challenge has a picture with letters on it and some text below. The letters on the right are two characters after the letters on the left (K->M, O->Q, E->G), which gave the clue that every letter in the text must be shifted forward by two. I googled "mapping characters in python" and found the "str.maketrans" function (the Python 3 counterpart of string.maketrans).
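The same shift can also be written by hand with ord/chr, which makes the two-letter wrap-around explicit. A minimal sketch, separate from the maketrans solution below:

def shift2(text):
    # Shift each lowercase letter forward by 2, wrapping z -> b
    return "".join(chr((ord(c) - ord('a') + 2) % 26 + ord('a'))
                   if c.islower() else c
                   for c in text)

print(shift2("map"))  # -> ocr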
#http://www.pythonchallenge.com/pc/def/274877906944.html #http://www.pythonchallenge.com/pc/def/map.html import string inp="abcdefghijklmnopqrstuvwxyz" outp="cdefghijklmnopqrstuvwxyzab" trans=str.maketrans(inp, outp) strg = "g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj." print(strg.translate(trans)) print ("map".translate(trans)) #Apply the function on "map" #http://www.pythonchallenge.com/pc/def/ocr.html
i hope you didnt translate it by hand. thats what computers are for. doing it in by hand is inefficient and that's why this text is so long. using string.maketrans() is recommended. now apply on the url. ocr
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 3

I then changed the URL to "http://www.pythonchallenge.com/pc/def/ocr.html". This challenge shows a picture of a book and says to recognize the characters, hinting to check the page source. There I found a big block of characters, which can be read into Python with the "urllib" library; the "re" module already provides functions that look for specific patterns using regular expressions.
#http://www.pythonchallenge.com/pc/def/ocr.html import urllib.request url_ocr = urllib.request.urlopen("http://www.pythonchallenge.com/pc/def/ocr.html").read().decode() #print(url_ocr) import re content = re.findall("<!--(.*?)-->", url_ocr, re.S)[-1] #findall() matches all occurrences of a pattern #re.S makes the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. print(re.findall("[A-Za-z]", content)) #http://www.pythonchallenge.com/pc/def/equality.html
['e', 'q', 'u', 'a', 'l', 'i', 't', 'y']
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 4

Then I changed the URL to "http://www.pythonchallenge.com/pc/def/equality.html", where the text says "One small letter, surrounded by EXACTLY three big bodyguards on each of its sides". I checked the page source for other clues and found a big block of text, just as in the previous challenge. With a little thought, I guessed that I should look for one lowercase letter surrounded by exactly three capital letters on each side.
#http://www.pythonchallenge.com/pc/def/equality.html import urllib.request url_eq = urllib.request.urlopen("http://www.pythonchallenge.com/pc/def/equality.html").read().decode() import re data = re.findall("<!--(.*?)-->", url_eq, re.S)[-1] #findall() matches all occurrences of a pattern #re.S makes the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. print(re.findall("[^A-Z]+[A-Z]{3}([a-z])[A-Z]{3}[^A-Z]+", data)) #The first part is ^A-Z means that the first character should be anything but a capital A through Z. The next three characters must be a capital letter A thorugh Z, denoted by the {3}. #The next element must be a lower case a through z. Again three more upper case A through Z elements. And finally the first part repeats. print("".join(re.findall("[^A-Z]+[A-Z]{3}([a-z])[A-Z]{3}[^A-Z]+", data))) #Joins the list with no space inbetween. #http://www.pythonchallenge.com/pc/def/linkedlist.php
['l', 'i', 'n', 'k', 'e', 'd', 'l', 'i', 's', 't'] linkedlist
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 5

For the next challenge, I changed the URL to "http://www.pythonchallenge.com/pc/def/linkedlist.html", but it only showed the text "linkedlist.php". So I changed the URL to "http://www.pythonchallenge.com/pc/def/linkedlist.php". The page source contains "urllib may help. DON'T TRY ALL NOTHINGS, since it will never end. 400 times is more than enough." and a link "linkedlist.php?nothing=12345", which took me to a page with the text "and the next nothing is 44827". Each time I changed the "next nothing" in the URL to the number it suggested, it gave me another number, so I guessed there is a long chain of pages to follow with urllib. I printed all the numbers it was generating, and at one point it stopped at "and the next nothing is 16044" with "Yes. Divide by two and keep going". So I divided 16044 by 2 and kept following the chain until it gave "peak.html".
#http://www.pythonchallenge.com/pc/def/linkedlist.php import urllib import re url_ll = ("http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing=%s") num="12345" #num=16044/2 while num!="": data = urllib.request.urlopen(url_ll % num).read().decode() #print(data) num = "".join(re.findall("and the next nothing is (\d+)",data)) else : print("Came to an End") num=16044/2 while num!="": data = urllib.request.urlopen(url_ll % num).read().decode() print(data) num = "".join(re.findall("and the next nothing is (\d+)",data)) else : print("Came to an End") #http://www.pythonchallenge.com/pc/def/peak.html
Came to an End and the next nothing is 25357 and the next nothing is 89879 and the next nothing is 80119 and the next nothing is 50290 and the next nothing is 9297 and the next nothing is 30571 and the next nothing is 7414 and the next nothing is 30978 and the next nothing is 16408 and the next nothing is 80109 and the next nothing is 55736 and the next nothing is 15357 and the next nothing is 80887 and the next nothing is 35014 and the next nothing is 16523 and the next nothing is 50286 and the next nothing is 34813 and the next nothing is 77562 and the next nothing is 54746 and the next nothing is 22680 and the next nothing is 19705 and the next nothing is 77000 and the next nothing is 27634 and the next nothing is 21008 and the next nothing is 64994 and the next nothing is 66109 and the next nothing is 37855 and the next nothing is 36383 and the next nothing is 68548 and the next nothing is 96070 and the next nothing is 83051 and the next nothing is 58026 and the next nothing is 44726 and the next nothing is 35748 and the next nothing is 61287 and the next nothing is 559 and the next nothing is 81318 and the next nothing is 50443 and the next nothing is 1570 and the next nothing is 75244 and the next nothing is 56265 and the next nothing is 17694 and the next nothing is 48033 and the next nothing is 56523 and the next nothing is 51253 and the next nothing is 85750 and the next nothing is 42760 and the next nothing is 11877 and the next nothing is 15962 and the next nothing is 75494 and the next nothing is 87283 and the next nothing is 40396 and the next nothing is 49574 and the next nothing is 82682 There maybe misleading numbers in the text. One example is 82683. Look only for the next nothing and the next nothing is 63579 and the next nothing is 37278 and the next nothing is 53548 and the next nothing is 66081 and the next nothing is 67753 and the next nothing is 56337 and the next nothing is 3356 and the next nothing is 94525 and the next nothing is 89574 and the next nothing is 4413 and the next nothing is 82294 and the next nothing is 56060 and the next nothing is 95493 and the next nothing is 80865 and the next nothing is 66242 and the next nothing is 16065 and the next nothing is 62145 and the next nothing is 23147 and the next nothing is 83763 and the next nothing is 62381 and the next nothing is 76841 and the next nothing is 91706 and the next nothing is 9268 and the next nothing is 64814 and the next nothing is 80809 and the next nothing is 14039 and the next nothing is 73355 and the next nothing is 81905 and the next nothing is 36402 and the next nothing is 27221 and the next nothing is 79607 and the next nothing is 91763 and the next nothing is 11631 and the next nothing is 76396 and the next nothing is 69905 and the next nothing is 11073 and the next nothing is 71281 and the next nothing is 54345 and the next nothing is 19047 and the next nothing is 34376 and the next nothing is 3193 and the next nothing is 74258 and the next nothing is 62712 and the next nothing is 1823 and the next nothing is 21232 and the next nothing is 87890 and the next nothing is 21545 and the next nothing is 37136 and the next nothing is 23060 and the next nothing is 5385 and the next nothing is 4620 and the next nothing is 39111 and the next nothing is 35914 and the next nothing is 60310 and the next nothing is 19178 and the next nothing is 44671 and the next nothing is 45736 and the next nothing is 9216 and the next nothing is 12585 and the next nothing is 11302 and the next nothing is 33096 and 
the next nothing is 13967 and the next nothing is 57004 and the next nothing is 64196 and the next nothing is 73929 and the next nothing is 24800 and the next nothing is 25081 and the next nothing is 90033 and the next nothing is 45919 and the next nothing is 54827 and the next nothing is 73950 and the next nothing is 56978 and the next nothing is 8133 and the next nothing is 61900 and the next nothing is 47769 and the next nothing is 631 and the next nothing is 2284 and the next nothing is 60074 and the next nothing is 35959 and the next nothing is 57158 and the next nothing is 90990 and the next nothing is 27935 and the next nothing is 99927 and the next nothing is 41785 and the next nothing is 32660 and the next nothing is 4328 and the next nothing is 42067 and the next nothing is 8743 and the next nothing is 38613 and the next nothing is 21100 and the next nothing is 77864 and the next nothing is 6523 and the next nothing is 6927 and the next nothing is 82930 and the next nothing is 35846 and the next nothing is 31785 and the next nothing is 41846 and the next nothing is 72387 and the next nothing is 59334 and the next nothing is 65520 and the next nothing is 93781 and the next nothing is 55840 and the next nothing is 80842 and the next nothing is 59022 and the next nothing is 23298 and the next nothing is 27709 and the next nothing is 96791 and the next nothing is 75635 and the next nothing is 52899 and the next nothing is 66831 peak.html Came to an End
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 6

For the next challenge, I changed the URL to "http://www.pythonchallenge.com/pc/def/peak.html", which showed a picture of a hill with the text "pronounce it". The page source contains the text "peak hell sounds familiar ?" and references a file named "banner.p" at "http://www.pythonchallenge.com/pc/def/banner.p", which holds some text. I tried changing the URL to "peakhell.html" but nothing showed up. Googling "peakhell" turned up results about the Python Challenge itself and pointed to the Python object-serialization module "pickle". Changing the URL to "http://www.pythonchallenge.com/pc/def/pickle.html" showed the text "yes! pickle!", confirming the module.

I learnt quite a few concepts about pickling when I googled ".p files in Python". Pickle is used for serializing and de-serializing a Python object structure; almost any object in Python can be pickled. It "serialises" the object before writing it to a file, converting a Python object (list, dict, etc.) into a character stream that contains all the information necessary to reconstruct the object in another Python script.

The "banner.p" file looks like the output of something that has been pickled. I used "urllib" and "pickle" to load it and found a list of tuples, each holding a character and the number of times it is repeated.
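As a tiny illustration of the round trip pickle performs, any picklable object can be serialised to bytes and restored unchanged. A minimal sketch:

import pickle

obj = [(2, '#'), (1, ' '), (3, '#')]   # same shape as the rows in banner.p
blob = pickle.dumps(obj)               # serialise the object to bytes
print(pickle.loads(blob) == obj)       # -> True: reconstructed intact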
#http://www.pythonchallenge.com/pc/def/peak.html import urllib.request url_ban = urllib.request.urlopen("http://www.pythonchallenge.com/pc/def/banner.p") import pickle data = pickle.load(url_ban) #Reads a pickled object representation from the open file object given in the constructor, and return the reconstituted object hierarchy specified therein. #print(data) #Printed a list of tuples. for row in data: print("".join([r[1] * r[0] for r in row])) #http://www.pythonchallenge.com/pc/def/channel.html
##### ##### #### #### #### #### #### #### #### #### #### #### #### #### #### #### ### #### ### ### ##### ### ##### ### ### #### ### ## #### ####### ## ### #### ####### #### ####### ### ### #### ### ### ##### #### ### #### ##### #### ##### #### ### ### #### ### #### #### ### ### #### #### #### #### ### #### #### ### #### #### ### #### #### #### #### ### ### #### #### #### #### ## ### #### #### #### #### #### ### #### #### #### #### ########## #### #### #### #### ############## #### #### #### #### ### #### #### #### #### #### #### #### #### #### #### #### ### #### #### #### #### #### #### ### #### #### #### ### #### #### #### #### ### #### ### ## #### #### ### #### #### #### #### #### ### ## #### ### ## #### #### ########### #### #### #### #### ### ## #### ### ###### ##### ## #### ###### ########### ##### ### ######
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 7

For the next challenge, I changed the URL to "http://www.pythonchallenge.com/pc/def/channel.html", which showed a picture of a zipper, so I guessed it was something related to zip files. Changing the URL extension to ".zip" downloaded an archive containing a lot of text files. I checked a couple of them and each showed some text; a readme file at the end contains:

Welcome to my zipped list.
hint1: start from 90052
hint2: answer is inside the zip

I did some reading on the "zipfile" module. Opening 90052.txt said "Next nothing is 94191", so I followed the chain of text files as in the previous challenge. It eventually stopped and asked me to collect the comments inside the zip file; when I printed the collected comments, they spelled out "HOCKEY".
#http://www.pythonchallenge.com/pc/def/channel.html import urllib import zipfile import re url_ll = "http://www.pythonchallenge.com/pc/def/channel.html" zf = zipfile.ZipFile("channel.zip", 'r') print(zf.read("readme.txt").decode()) num = "90052" comments = "" while num != "" : data = zf.read(num + ".txt").decode() comments += zf.getinfo(num+".txt").comment.decode() num = "".join(re.findall("Next nothing is (\d+)",data)) #print(data) else : print(data) print(comments) #http://www.pythonchallenge.com/pc/def/oxygen.html
welcome to my zipped list. hint1: start from 90052 hint2: answer is inside the zip Collect the comments. **************************************************************** **************************************************************** ** ** ** OO OO XX YYYY GG GG EEEEEE NN NN ** ** OO OO XXXXXX YYYYYY GG GG EEEEEE NN NN ** ** OO OO XXX XXX YYY YY GG GG EE NN NN ** ** OOOOOOOO XX XX YY GGG EEEEE NNNN ** ** OOOOOOOO XX XX YY GGG EEEEE NN ** ** OO OO XXX XXX YYY YY GG GG EE NN ** ** OO OO XXXXXX YYYYYY GG GG EEEEEE NN ** ** OO OO XX YYYY GG GG EEEEEE NN ** ** ** **************************************************************** **************************************************************
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 8

For the next challenge, I tried the URL "http://www.pythonchallenge.com/pc/def/hockey.html", but it only said "it's in the air. look at the letters". Then I tried "http://www.pythonchallenge.com/pc/def/oxygen.html", which shows a picture with a greyscaled horizontal strip running across its middle from the left, so something is probably encoded along that line. I had done some work on image analysis before, so I got the clue quickly.

We can read the pixels with the Python Imaging Library (PIL). I printed the pixel values along the width of the image at exactly half its height, where the strip is, because a greyscale pixel has equal R, G, and B values. The pixel values changed every seven pixels, and decoding them gave "smart guy, you made it. the next level is [105, 110, 116, 101, 103, 114, 105, 116, 121]". Since the blocks were seven pixels wide, I stepped across the strip in increments of seven, took the first channel value of each block, and printed the character it represents.
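The greyscale test used here is simply R == G == B. A minimal sketch of checking a single pixel:

def is_grey(pixel):
    # A pixel is greyscale when its R, G and B channels are equal
    r, g, b = pixel[:3]
    return r == g == b

print(is_grey((115, 115, 115)))  # -> True
print(is_grey((115, 110, 90)))   # -> False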
#http://www.pythonchallenge.com/pc/def/oxygen.html
from PIL import Image
import requests
from io import BytesIO

url = "http://www.pythonchallenge.com/pc/def/oxygen.png"
img_oxy = requests.get(url)
img = Image.open(BytesIO(img_oxy.content))

width, height = img.size
print(width)
print(height)

#for w in range(width):
#    print(img.getpixel((w, height//2)))   # prints the pixel values along the greyscale band

# The grey blocks are 7 pixels wide: take the first channel (R) of every
# 7th pixel at half height and decode it as a character.
for w in range(0, width, 7):
    print(chr(img.getpixel((w, height//2))[0]), end='')

# chr() returns the string representing the character whose Unicode code point is the integer.
print(''.join(map(chr, [105, 110, 116, 101, 103, 114, 105, 116, 121])))

#http://www.pythonchallenge.com/pc/def/integrity.html
629 95 smart guy, you made it. the next level is [105, 110, 116, 101, 103, 114, 105, 116, 121]pe_integrity
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 9 For the next challenge, I changed the URL to "http://www.pythonchallenge.com/pc/def/integrity.html", which showed a picture of a bee with the text "Where is the missing link?". The bee is clickable, and clicking it asked for a username and password. The page source also contained this text: "un: 'BZh91AY&SYA\xaf\x82\r\x00\x00\x01\x01\x80\x02\xc0\x02\x00 \x00!\x9ah3M\x07<]\xc9\x14\xe1BA\x06\xbe\x084' and pw: 'BZh91AY&SY\x94$|\x0e\x00\x00\x00\x81\x00\x03$ \x00!\x9ah3M\x13<]\xc9\x14\xe1BBP\x91\xf08'". I googled this challenge as I had no idea what it was asking. I quickly found the term "bz2", googled "bz2 in Python", and briefly studied what the module does. My hunch was that the strings in the page source needed to be decompressed. Decompressing them gave 'huge' and 'file', which are the username and password.
#http://www.pythonchallenge.com/pc/def/integrity.html import bz2 usr = b"BZh91AY&SYA\xaf\x82\r\x00\x00\x01\x01\x80\x02\xc0\x02\x00 \x00!\x9ah3M\x07<]\xc9\x14\xe1BA\x06\xbe\x084" pwd = b"BZh91AY&SY\x94$|\x0e\x00\x00\x00\x81\x00\x03$ \x00!\x9ah3M\x13<]\xc9\x14\xe1BBP\x91\xf08" print(bz2.BZ2Decompressor().decompress(usr)) #Decompress data (a bytes-like object), returns uncompressed data as bytes. print(bz2.BZ2Decompressor().decompress(pwd)) #http://www.pythonchallenge.com/pc/return/good.html
b'huge' b'file'
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 10 For this challenge, the username and password obtained previously took me to URL "http://www.pythonchallenge.com/pc/return/good.html". It has a picture of a stem with black dots, and it seemed like we need to connect the dots to get the answer. The page source confirmed my intuition: it contains lists of numbers named 'first' and 'second', and the comment "first+second=?" seemed like a clue. So, I joined first and second and drew them as a connected line on an image of the size mentioned in the page source, 640 by 480. I got an image of a bull.
#http://www.pythonchallenge.com/pc/return/good.html from PIL import Image, ImageDraw first=[ 146,399,163,403,170,393,169,391,166,386,170,381,170,371,170,355,169,346,167,335,170,329,170,320,170, 310,171,301,173,290,178,289,182,287,188,286,190,286,192,291,194,296,195,305,194,307,191,312,190,316, 190,321,192,331,193,338,196,341,197,346,199,352,198,360,197,366,197,373,196,380,197,383,196,387,192, 389,191,392,190,396,189,400,194,401,201,402,208,403,213,402,216,401,219,397,219,393,216,390,215,385, 215,379,213,373,213,365,212,360,210,353,210,347,212,338,213,329,214,319,215,311,215,306,216,296,218, 290,221,283,225,282,233,284,238,287,243,290,250,291,255,294,261,293,265,291,271,291,273,289,278,287, 279,285,281,280,284,278,284,276,287,277,289,283,291,286,294,291,296,295,299,300,301,304,304,320,305, 327,306,332,307,341,306,349,303,354,301,364,301,371,297,375,292,384,291,386,302,393,324,391,333,387, 328,375,329,367,329,353,330,341,331,328,336,319,338,310,341,304,341,285,341,278,343,269,344,262,346, 259,346,251,349,259,349,264,349,273,349,280,349,288,349,295,349,298,354,293,356,286,354,279,352,268, 352,257,351,249,350,234,351,211,352,197,354,185,353,171,351,154,348,147,342,137,339,132,330,122,327, 120,314,116,304,117,293,118,284,118,281,122,275,128,265,129,257,131,244,133,239,134,228,136,221,137, 214,138,209,135,201,132,192,130,184,131,175,129,170,131,159,134,157,134,160,130,170,125,176,114,176, 102,173,103,172,108,171,111,163,115,156,116,149,117,142,116,136,115,129,115,124,115,120,115,115,117, 113,120,109,122,102,122,100,121,95,121,89,115,87,110,82,109,84,118,89,123,93,129,100,130,108,132,110, 133,110,136,107,138,105,140,95,138,86,141,79,149,77,155,81,162,90,165,97,167,99,171,109,171,107,161, 111,156,113,170,115,185,118,208,117,223,121,239,128,251,133,259,136,266,139,276,143,290,148,310,151, 332,155,348,156,353,153,366,149,379,147,394,146,399] second=[ 156,141,165,135,169,131,176,130,187,134,191,140,191,146,186,150,179,155,175,157,168,157,163,157,159, 157,158,164,159,175,159,181,157,191,154,197,153,205,153,210,152,212,147,215,146,218,143,220,132,220, 125,217,119,209,116,196,115,185,114,172,114,167,112,161,109,165,107,170,99,171,97,167,89,164,81,162, 77,155,81,148,87,140,96,138,105,141,110,136,111,126,113,129,118,117,128,114,137,115,146,114,155,115, 158,121,157,128,156,134,157,136,156,136] all_d= first + second img = Image.new("RGB", (640,480), "rgb(60%,60%,90%)") pic = ImageDraw.Draw(img) pic.line(all_d, fill='black') img #http://www.pythonchallenge.com/pc/return/bull.html
_____no_output_____
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 11 For the next challenge, I tried URL "http://www.pythonchallenge.com/pc/return/bull.html" and it showed a picture of a bull. The text below it says 'len(a[30]) = ?'. The bull is clickable, and clicking it opened a page showing a sequence 'a = [1, 11, 21, 1211, 111221, ...]'. When I googled this sequence, I learned it is called the look-and-say sequence: each term describes the previous one as runs of digits (for example, "1211" is read as "one 1, one 2, two 1s", giving "111221"). So the challenge is to find the length of the 30th element of the sequence.
#http://www.pythonchallenge.com/pc/return/bull.html
from itertools import groupby

def lookandsay(n):
    # Read off runs of identical digits: e.g. "111221" -> "312211"
    return ''.join(str(len(list(g))) + k for k, g in groupby(n))

n = '1'
for i in range(30):
    print("Term", i, "--", n)
    n = lookandsay(n)

len(n)   # length of the 30th term: the answer (5808)

#http://www.pythonchallenge.com/pc/return/5808.html
Term 0 -- 1 Term 1 -- 11 Term 2 -- 21 Term 3 -- 1211 Term 4 -- 111221 Term 5 -- 312211 Term 6 -- 13112221 Term 7 -- 1113213211 Term 8 -- 31131211131221 Term 9 -- 13211311123113112211 Term 10 -- 11131221133112132113212221 Term 11 -- 3113112221232112111312211312113211 Term 12 -- 1321132132111213122112311311222113111221131221 Term 13 -- 11131221131211131231121113112221121321132132211331222113112211 Term 14 -- 311311222113111231131112132112311321322112111312211312111322212311322113212221 Term 15 -- 132113213221133112132113311211131221121321131211132221123113112221131112311332111213211322211312113211 Term 16 -- 11131221131211132221232112111312212321123113112221121113122113111231133221121321132132211331121321231231121113122113322113111221131221 Term 17 -- 31131122211311123113321112131221123113112211121312211213211321322112311311222113311213212322211211131221131211132221232112111312111213111213211231131122212322211331222113112211 Term 18 -- 1321132132211331121321231231121113112221121321132122311211131122211211131221131211132221121321132132212321121113121112133221123113112221131112311332111213122112311311123112111331121113122112132113213211121332212311322113212221 Term 19 -- 11131221131211132221232112111312111213111213211231132132211211131221131211221321123113213221123113112221131112311332211211131221131211132211121312211231131112311211232221121321132132211331121321231231121113112221121321133112132112312321123113112221121113122113121113123112112322111213211322211312113211 Term 20 -- 311311222113111231133211121312211231131112311211133112111312211213211312111322211231131122211311122122111312211213211312111322211213211321322113311213212322211231131122211311123113223112111311222112132113311213211221121332211211131221131211132221232112111312111213111213211231132132211211131221232112111312211213111213122112132113213221123113112221131112311311121321122112132231121113122113322113111221131221 Term 21 -- 132113213221133112132123123112111311222112132113311213211231232112311311222112111312211311123113322112132113213221133122112231131122211211131221131112311332211211131221131211132221232112111312111213322112132113213221133112132113221321123113213221121113122123211211131221222112112322211231131122211311123113321112131221123113111231121113311211131221121321131211132221123113112211121312211231131122211211133112111311222112111312211312111322211213211321322113311213211331121113122122211211132213211231131122212322211331222113112211 Term 22 -- 111312211312111322212321121113121112131112132112311321322112111312212321121113122112131112131221121321132132211231131122211331121321232221121113122113121113222123112221221321132132211231131122211331121321232221123113112221131112311332111213122112311311123112112322211211131221131211132221232112111312211322111312211213211312111322211231131122111213122112311311221132211221121332211213211321322113311213212312311211131122211213211331121321123123211231131122211211131221131112311332211213211321223112111311222112132113213221123123211231132132211231131122211311123113322112111312211312111322212321121113122123211231131122113221123113221113122112132113213211121332212311322113212221 Term 23 -- 
3113112221131112311332111213122112311311123112111331121113122112132113121113222112311311221112131221123113112221121113311211131122211211131221131211132221121321132132212321121113121112133221123113112221131112311332111213213211221113122113121113222112132113213221232112111312111213322112132113213221133112132123123112111311222112132113311213211221121332211231131122211311123113321112131221123113112221132231131122211211131221131112311332211213211321223112111311222112132113212221132221222112112322211211131221131211132221232112111312111213111213211231132132211211131221232112111312211213111213122112132113213221123113112221133112132123222112111312211312112213211231132132211211131221131211132221121311121312211213211312111322211213211321322113311213212322211231131122211311123113321112131221123113112211121312211213211321222113222112132113223113112221121113122113121113123112112322111213211322211312113211 Term 24 -- 132113213221133112132123123112111311222112132113311213211231232112311311222112111312211311123113322112132113212231121113112221121321132132211231232112311321322112311311222113111231133221121113122113121113221112131221123113111231121123222112132113213221133112132123123112111312111312212231131122211311123113322112111312211312111322111213122112311311123112112322211211131221131211132221232112111312111213111213211231132132211211131221232112111312212221121123222112132113213221133112132123123112111311222112132113213221132213211321322112311311222113311213212322211211131221131211221321123113213221121113122113121132211332113221122112133221123113112221131112311332111213122112311311123112111331121113122112132113121113222112311311221112131221123113112221121113311211131122211211131221131211132221121321132132212321121113121112133221123113112221131112212211131221121321131211132221123113112221131112311332211211133112111311222112111312211311123113322112111312211312111322212321121113121112133221121321132132211331121321231231121113112221121321132122311211131122211211131221131211322113322112111312211322132113213221123113112221131112311311121321122112132231121113122113322113111221131221 Term 25 -- 
1113122113121113222123211211131211121311121321123113213221121113122123211211131221121311121312211213211321322112311311222113311213212322211211131221131211221321123113213221121113122113121113222112131112131221121321131211132221121321132132211331121321232221123113112221131112311322311211131122211213211331121321122112133221121113122113121113222123211211131211121311121321123113111231131122112213211321322113311213212322211231131122211311123113223112111311222112132113311213211221121332211231131122211311123113321112131221123113111231121113311211131221121321131211132221123113112211121312211231131122113221122112133221121113122113121113222123211211131211121311121321123113213221121113122113121113222113221113122113121113222112132113213221232112111312111213322112311311222113111221221113122112132113121113222112311311222113111221132221231221132221222112112322211213211321322113311213212312311211131122211213211331121321123123211231131122211211131221131112311332211213211321223112111311222112132113213221123123211231132132211231131122211311123113322112111312211312111322111213122112311311123112112322211213211321322113312211223113112221121113122113111231133221121321132132211331121321232221123123211231132132211231131122211331121321232221123113112221131112311332111213122112311311123112112322211211131221131211132221232112111312111213111213211231132132211211131221131211221321123113213221123113112221131112211322212322211231131122211322111312211312111322211213211321322113311213211331121113122122211211132213211231131122212322211331222113112211 Term 26 -- 31131122211311123113321112131221123113111231121113311211131221121321131211132221123113112211121312211231131122211211133112111311222112111312211312111322211213211321322123211211131211121332211231131122211311122122111312211213211312111322211231131122211311123113322112111331121113112221121113122113111231133221121113122113121113222123211211131211121332211213211321322113311213211322132112311321322112111312212321121113122122211211232221123113112221131112311332111213122112311311123112111331121113122112132113311213211321222122111312211312111322212321121113121112133221121321132132211331121321132213211231132132211211131221232112111312212221121123222112132113213221133112132123123112111311222112132113311213211231232112311311222112111312211311123113322112132113212231121113112221121321132122211322212221121123222112311311222113111231133211121312211231131112311211133112111312211213211312111322211231131122211311123113322113223113112221131112311332211211131221131211132211121312211231131112311211232221121321132132211331221122311311222112111312211311123113322112132113213221133122211332111213112221133211322112211213322112111312211312111322212321121113121112131112132112311321322112111312212321121113122112131112131221121321132132211231131122211331121321232221121113122113121122132112311321322112111312211312111322211213111213122112132113121113222112132113213221133112132123222112311311222113111231132231121113112221121321133112132112211213322112111312211312111322212311222122132113213221123113112221133112132123222112111312211312111322212321121113121112133221121311121312211213211312111322211213211321322123211211131211121332211213211321322113311213212312311211131122211213211331121321122112133221123113112221131112311332111213122112311311123112111331121113122112132113121113222112311311222113111221221113122112132113121113222112132113213221133122211332111213322112132113213221132231131122211311123113322112111312211312111322212321121113122123211231131122113221123113221113122112132113213211121332212311322
113212221 Term 27 -- 13211321322113311213212312311211131122211213211331121321123123211231131122211211131221131112311332211213211321223112111311222112132113213221123123211231132132211231131122211311123113322112111312211312111322111213122112311311123112112322211213211321322113312211223113112221121113122113111231133221121321132132211331121321232221123123211231132132211231131122211331121321232221123113112221131112311332111213122112311311123112112322211211131221131211132221232112111312211322111312211213211312111322211231131122111213122112311311221132211221121332211213211321322113311213212312311211131122211213211331121321123123211231131122211211131221232112111312211312113211223113112221131112311332111213122112311311123112112322211211131221131211132221232112111312211322111312211213211312111322211231131122111213122112311311221132211221121332211211131221131211132221232112111312111213111213211231132132211211131221232112111312211213111213122112132113213221123113112221133112132123222112111312211312112213211231132132211211131221131211322113321132211221121332211213211321322113311213212312311211131122211213211331121321123123211231131122211211131221131112311332211213211321322113311213212322211322132113213221133112132123222112311311222113111231132231121113112221121321133112132112211213322112111312211312111322212311222122132113213221123113112221133112132123222112111312211312111322212311322123123112111321322123122113222122211211232221123113112221131112311332111213122112311311123112111331121113122112132113121113222112311311221112131221123113112221121113311211131122211211131221131211132221121321132132212321121113121112133221123113112221131112212211131221121321131211132221123113112221131112311332211211133112111311222112111312211311123113322112111312211312111322212321121113121112133221121321132132211331121321132213211231132132211211131221232112111312212221121123222112311311222113111231133211121321321122111312211312111322211213211321322123211211131211121332211231131122211311123113321112131221123113111231121123222112111331121113112221121113122113111231133221121113122113121113221112131221123113111231121123222112111312211312111322212321121113121112131112132112311321322112111312212321121113122122211211232221121321132132211331121321231231121113112221121321133112132112312321123113112221121113122113111231133221121321132132211331221122311311222112111312211311123113322112111312211312111322212311322123123112112322211211131221131211132221132213211321322113311213212322211231131122211311123113321112131221123113112211121312211213211321222113222112132113223113112221121113122113121113123112112322111213211322211312113211 Term 28 -- 
11131221131211132221232112111312111213111213211231132132211211131221232112111312211213111213122112132113213221123113112221133112132123222112111312211312112213211231132132211211131221131211132221121311121312211213211312111322211213211321322113311213212322211231131122211311123113223112111311222112132113311213211221121332211211131221131211132221231122212213211321322112311311222113311213212322211211131221131211132221232112111312111213322112131112131221121321131211132221121321132132212321121113121112133221121321132132211331121321231231121113112221121321133112132112211213322112311311222113111231133211121312211231131122211322311311222112111312211311123113322112132113212231121113112221121321132122211322212221121123222112111312211312111322212321121113121112131112132112311321322112111312212321121113122112131112131221121321132132211231131122111213122112311311222113111221131221221321132132211331121321231231121113112221121321133112132112211213322112311311222113111231133211121312211231131122211322311311222112111312211311123113322112132113212231121113112221121321132122211322212221121123222112311311222113111231133211121312211231131112311211133112111312211213211312111322211231131122111213122112311311222112111331121113112221121113122113121113222112132113213221232112111312111213322112311311222113111221221113122112132113121113222112311311222113111221132221231221132221222112112322211211131221131211132221232112111312111213111213211231132132211211131221232112111312211213111213122112132113213221123113112221133112132123222112111312211312111322212321121113121112133221132211131221131211132221232112111312111213322112132113213221133112132113221321123113213221121113122123211211131221222112112322211231131122211311123113321112132132112211131221131211132221121321132132212321121113121112133221123113112221131112311332111213211322111213111213211231131211132211121311222113321132211221121332211213211321322113311213212312311211131122211213211331121321123123211231131122211211131221131112311332211213211321223112111311222112132113213221123123211231132132211231131122211311123113322112111312211312111322111213122112311311123112112322211213211321322113312211223113112221121113122113111231133221121321132132211331121321232221123123211231132132211231131122211331121321232221123113112221131112311332111213122112311311123112112322211211131221131211132221232112111312211322111312211213211312111322211231131122111213122112311311221132211221121332211213211321322113311213212312311211131211131221223113112221131112311332211211131221131211132211121312211231131112311211232221121321132132211331121321231231121113112221121321133112132112211213322112312321123113213221123113112221133112132123222112311311222113111231132231121113112221121321133112132112211213322112311311222113111231133211121312211231131112311211133112111312211213211312111322211231131122111213122112311311221132211221121332211211131221131211132221232112111312111213111213211231132132211211131221232112111312211213111213122112132113213221123113112221133112132123222112111312211312111322212311222122132113213221123113112221133112132123222112311311222113111231133211121321132211121311121321122112133221123113112221131112311332211322111312211312111322212321121113121112133221121321132132211331121321231231121113112221121321132122311211131122211211131221131211322113322112111312211322132113213221123113112221131112311311121321122112132231121113122113322113111221131221 Term 29 -- 
311311222113111231133211121312211231131112311211133112111312211213211312111322211231131122111213122112311311222112111331121113112221121113122113121113222112132113213221232112111312111213322112311311222113111221221113122112132113121113222112311311222113111231133221121113311211131122211211131221131112311332211211131221131211132221232112111312111213322112132113213221133112132113221321123113213221121113122123211211131221222112112322211231131122211311123113321112132132112211131221131211132221121321132132212321121113121112133221123113112221131112311332111213122112311311123112112322211211133112111311222112111312211311123113322112111312211312111322111213122112311311123112112322211211131221131211132221232112111312111213111213211231132132211211131221232112111312212221121123222112132113213221133112132123123112111311222112132113213221132213211321322112311311222113311213212322211211131221131211221321123113213221121113122113121132211332113221122112133221123113112221131112311332111213122112311311123112111331121113122112132113121113222112311311221112131221123113112221121113311211131122211211131221131211132221121321132122311211131122211213211321322113312221131122112211131221131211132221232112111312111213111213211231132132211211131221232112111312212221121123222112132113213221133112132123123112111311222112132113213221132213211321322112311311222113311213212322211211131221131211221321123113213221121113122113121132211332113221122112133221121321132132211331121321231231121113112221121321133112132112312321123113112221121113122113111231133221121321132122311211131122211213211321322112312321123113213221123113112221131112311332211211131221131211132211121312211231131112311211232221121321132132211331221122311311222112111312211311123113322112132113213221133122211332111213112221133211322112211213322112311311222113111231133211121312211231131112311211133112111312211213211312111322211231131122111213122112311311222112111331121113112221121113122113121113222112132113213221232112111312111213322112311311222113111231133211121312211231131112311211232221132231131122211311123113321112131221123113111231121123222112111312211312111322212321121113122113221113122112132113121113222112311311221112131221123113112211322112211213322112132113213221133112132123123112111312111312212231131122211311123113322112111312211312111322111213122112311311123112112322211213211321322113311213212312311211131221132231121113311211131221121321131112311322311211132132212312211322212221121123222112111312211312111322212321121113121112131112132112311321322112111312212321121113122112131112131221121321132132211231131122211331121321232221121113122113121122132112311321322112111312211312111322211213111213122112132113121113222112132113213221133112132123222112311311222113111231132231121113112221121321133112132112211213322112111312211312111322212311222122132113213221123113112221133112132123222112111312211312111322212321121113121112133221121311121312211213211312111322211213211321322123211211131211121332211213211321322113311213212312311211131122211213211331121321122112133221123113112221131112311332111213122112311311222113223113112221121113122113111231133221121321132122311211131122211213211321222113222122211211232221121113122113121113222123211211131211121311121321123113111231131122112213211321322113311213212322211231131122211311123113223112111311222112132113311213211221121332211211131221131211132221232112111312111213111213211231132132211211131221232112111312212221121123222112131112131221121321131211132221121321132132212321121113121112133221121321132132211331121321132213211231132
1322112111312212321121113122122211211232221121321132132211331121321231231121113112221121321133112132112312321123113112221121113122113111231133221121321132122311211131122211213211321222113222122211211232221123113112221131112311332111213122112311311123112111331121113122112132113121113222112311311221112131221123113112221121113311211131122211211131221131211132221121321132132212321121113121112133221123113112221131112311332111213213211221113122113121113222112132113213221232112111312111213322112132113213221133112132123123112111312211322311211133112111312212221121123222112132113213221133112132123222113223113112221131112311332111213122112311311123112112322211211131221131211132221232112111312111213111213211231132132211211131221131211221321123113213221123113112221131112211322212322211231131122211322111312211312111322211213211321322113311213211331121113122122211211132213211231131122212322211331222113112211
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 12 For the next challenge, I tried URL "http://www.pythonchallenge.com/pc/return/5808.html" and it showed a blurry picture with the page title 'odd even'. The page source has nothing much except cave.jpg, which when clicked led to the same image. I searched for a module named 'cave' in Python but found nothing. Opening the image in Paint, I noticed that it has black pixels at alternating positions. I had no idea where to go with this challenge, so I googled for hints and then checked the pixels to confirm what I had seen in Paint. Using im.getpixel() showed that the odd pixels hold one image and the even pixels hold another. I blanked the even and the odd pixels in turn to check the changes; blanking the odd pixels revealed a new image with "evil" on it.
#http://www.pythonchallenge.com/pc/return/5808.html
from PIL import Image

# cave.jpg was saved locally (the page requires the huge/file login)
im = Image.open('cave.jpg')
w, h = im.size

# Neighbouring pixels differ strongly: two images are interleaved
print(im.getpixel((0, 0)))
print(im.getpixel((0, 1)))
print(im.getpixel((1, 0)))
print(im.getpixel((1, 1)))

for i in range(w):
    for j in range(h):
        #if (i + j) % 2 == 0:    # blank the even pixels instead
        if (i + j) % 2 == 1:     # blank the odd pixels
            im.putpixel((i, j), (0, 0, 0))
im

#http://www.pythonchallenge.com/pc/return/evil.html
_____no_output_____
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 13 For the next challenge, I tried URL "http://www.pythonchallenge.com/pc/return/evil.html" and it showed a picture of a man dealing cards. The page source has a link that redirected me to "http://www.pythonchallenge.com/pc/return/evil1.jpg". Changing the URL to "http://www.pythonchallenge.com/pc/return/evil2.jpg" showed another image that said "not jpg - .gfx", and "http://www.pythonchallenge.com/pc/return/evil3.jpg" showed an image saying "no more evils". Changing the "evil2" URL to .gfx downloaded a file named "evil2.gfx". Since the earlier image showed a hand dealing 5 cards, the bytes of the .gfx file need to be de-interleaved into 5 images (I googled to get this clue). The 5 images said "dis", "pro", "port", "ional" and "ity". I first tried "disproportionality" for the next challenge, but it did not work. On examining the images, I noticed that "ity" is struck out, so "http://www.pythonchallenge.com/pc/return/disproportional.html" is the URL for the next challenge.
#http://www.pythonchallenge.com/pc/return/evil.html
from PIL import Image

#url_evl = "http://www.pythonchallenge.com/pc/return/evil2.gfx"
#un, pw = 'huge', 'file'
#d = requests.get(url_evl, auth=(un, pw)).content

# evil2.gfx was saved locally; de-interleave its bytes into five images
data = open("evil2.gfx", "rb").read()
for i in range(5):
    with open('%d.png' % i, 'wb') as f:
        f.write(data[i::5])

im0 = Image.open('0.png')   # "dis"
im1 = Image.open('1.png')   # "pro"
im2 = Image.open('2.png')   # "port"
im3 = Image.open('3.png')   # "ional"
im4 = Image.open('4.png')   # "ity" (struck out)
im4

#http://www.pythonchallenge.com/pc/return/disproportional.html
_____no_output_____
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 14 For the next challenge, I tried URL "http://www.pythonchallenge.com/pc/return/disproportional.html" and it gave an image of a phone keypad with the text "phone that evil". The number "5" is clickable and took me to "http://www.pythonchallenge.com/pc/phonebook.php", which is an XML file. The page source again says "phone that evil". I googled "remote module python" and, with some digging, found the xmlrpc client. The clickable "5" also suggested there was something left to do in the previous challenge that links to this one, so I read evil4.jpg, which could not be opened as an image, and got 'Bert is evil! go back!'. With xmlrpc I listed the server's methods, and given the phone picture and the "phonebook.php" clue, I decided to use the "phone" method. Calling it with the name "Bert" obtained above returned the number.
#http://www.pythonchallenge.com/pc/return/disproportional.html
import requests
import xmlrpc.client

url_evl = "http://www.pythonchallenge.com/pc/return/evil4.jpg"
un, pw = 'huge', 'file'
d = requests.get(url_evl, auth=(un, pw)).content
print(d)   # not an image: "Bert is evil! go back!"

url_pb = 'http://www.pythonchallenge.com/pc/phonebook.php'
with xmlrpc.client.ServerProxy(url_pb) as proxy:
    print(proxy.system.listMethods())
    print(proxy.system.methodHelp('phone'))
    print(proxy.system.methodSignature('phone'))
    print(proxy.phone('Bert'))

#http://www.pythonchallenge.com/pc/return/italy.html
b'Bert is evil! go back!\n' ['phone', 'system.listMethods', 'system.methodHelp', 'system.methodSignature', 'system.multicall', 'system.getCapabilities'] Returns the phone of a person [['string', 'string']] 555-ITALY
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 15 For the next challenge, I tried URL "http://www.pythonchallenge.com/pc/return/italy.html" and it gave an image of a roll in spiral form and another square image with vertical lines. The page title is "walk around". The page source links to "http://www.pythonchallenge.com/pc/return/wire.png", which I saved. The link title says (10000 by 1), and indeed wire.png is 10000 pixels wide and 1 pixel tall; zoomed in, it is just a thin line. There is also the hint "remember: 100*100 = (100+99+99+98) + (...", which suggests unrolling the 10000-pixel wire into a 100 by 100 spiral (a sketch of that idea follows below).
#http://www.pythonchallenge.com/pc/return/italy.html #http://www.pythonchallenge.com/pc/return/uzi.html
_____no_output_____
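Since the cell above was left empty, here is a minimal sketch of the approach the hint pins down (my reconstruction, not code from the original log): segments of length 100, 99, 99, 98, 98, ..., 1, 1 sum to exactly 10000, so the wire can be walked into a 100 by 100 spiral, turning after each segment. It assumes wire.png has been saved locally; the resulting spiral image should reveal the next clue.

from PIL import Image

# Sketch only: unroll the 10000x1 wire.png into a 100x100 spiral.
wire = Image.open('wire.png').convert('RGB')   # assumed saved locally
spiral = Image.new('RGB', (100, 100))

directions = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # right, down, left, up
x, y = -1, 0     # start just left of (0, 0); the first step lands on it
idx = 0          # position along the wire
d = 0            # number of turns taken so far
length = 100     # segment lengths: 100, then 99, 99, 98, 98, ...

while idx < 10000:
    dx, dy = directions[d % 4]
    for _ in range(length):
        x, y = x + dx, y + dy
        spiral.putpixel((x, y), wire.getpixel((idx, 0)))
        idx += 1
    d += 1
    if d % 2 == 1:   # shorten after the 1st, 3rd, 5th, ... segment
        length -= 1
spiral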
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 16 For this challenge, I had to follow the URL from the previous one: "http://www.pythonchallenge.com/pc/return/uzi.html". It gave an image of a calendar for a year of the form 1_6 with January 26th circled, which is a Monday. The page source has the text "todo: buy flowers for tomorrow", so the next day is an important one, and "he ain't the youngest, he is the second" means he is the second youngest. The calendar also shows February with 29 days, so I figured it is a leap year. The year might be anything between 1006 and 1996. With a little digging in Python, I found two useful modules for this challenge, "datetime" and "calendar". Searching for leap years where January 26 falls on a Monday gave 5 candidates: 1176, 1356, 1576, 1756 and 1976. By the clues above, the second youngest is 1756, so I tried that for the next challenge, but it returned nothing. Then, following the "buy flowers" clue, I googled the year and date and found, after a couple of tries including "Benjamin Franklin", that January 27, 1756 is Mozart's birthday.
#http://www.pythonchallenge.com/pc/return/uzi.html
import datetime
import calendar

# Candidate years ending in 6: leap years where January 26 is a Monday
for year in range(1006, 2000, 10):
    if calendar.isleap(year) and datetime.date(year, 1, 26).weekday() == 0:
        print(year)

#http://www.pythonchallenge.com/pc/return/mozart.html
1176 1356 1576 1756 1976
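As a quick sanity check of the "flowers for tomorrow" clue (my own addition, not part of the original solution): the day after the circled date in the second-youngest candidate year is indeed Mozart's birthday.

import datetime

# The circled date is January 26; "tomorrow" in the year 1756:
print(datetime.date(1756, 1, 26) + datetime.timedelta(days=1))   # 1756-01-27, Mozart's birthday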
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Python challenge question 17
#http://www.pythonchallenge.com/pc/return/mozart.html
_____no_output_____
MIT
Homeworks/Homework1/VANGUMALLI-D-python-challenge-04-sept-2017.ipynb
DineshVangumalli/big-data-python-class
Testing a 1D case
import numpy as np
from tqdm import tqdm, trange
from scipy.interpolate import interp1d
from scipy.optimize import bisect

# 4th-order Runge-Kutta
def rk4(x, t, h, f):
    # x is coordinates (as a vector)
    # h is timestep
    # f(x) is a function that returns the derivative
    # "Slopes"
    k1 = f(x, t)
    k2 = f(x + k1*h/2, t + h/2)
    k3 = f(x + k2*h/2, t + h/2)
    k4 = f(x + k3*h, t + h)
    # Update time and position
    x_ = x + h*(k1 + 2*k2 + 2*k3 + k4)/6
    return x_

def trajectory(X0, Tmax, h, f, integrator, progressbar = False):
    # Number of timesteps
    Nt = int(Tmax / h)
    # Add 2 for initial position and fractional step at end
    # X0.size lets X hold Nt+2 arrays of the same size as X0
    X0 = np.array(X0)
    X = np.zeros((Nt+2, X0.size))
    T = np.zeros(Nt+2)
    # Initialise
    X[0,:] = X0
    T[0] = 0
    if progressbar:
        iterator = trange
    else:
        iterator = range
    # Loop over timesteps
    t = 0
    for i in iterator(Nt+1):
        # Make sure the last step stops exactly at Tmax
        h = min(h, Tmax - t)
        # Calculate next position
        X[i+1,:] = integrator(X[i,:], t, h, f)
        T[i+1] = T[i] + h
        # Increment time
        t += h
    return X, T

# 4th-order Runge-Kutta, additionally returning k1 (the derivative at the
# start of the step), which the Hermite dense output below needs
def rk4_dense(x, t, h, f):
    # "Slopes"
    k1 = f(x, t)
    k2 = f(x + k1*h/2, t + h/2)
    k3 = f(x + k2*h/2, t + h/2)
    k4 = f(x + k3*h, t + h)
    # Update time and position
    x_ = x + h*(k1 + 2*k2 + 2*k3 + k4)/6
    return x_, k1

def hermite(x0, k0, t0, x1, k1, t, h):
    # Cubic Hermite interpolant matching position and derivative at both
    # ends of the step. theta is a number in [0, 1] indicating position
    # within the interval.
    theta = (t - t0) / h
    return (1-theta)*x0 + theta*x1 + theta*(theta-1)*((1-2*theta)*(x1-x0) + (theta-1)*h*k0 + theta*h*k1)

def trajectory_special(X0, Tmax, h0, f, integrator, discontinuities, progressbar = False):
    # Initialise
    X = [X0]
    T = [0.0]
    # keep track of the position relative to
    # the discontinuities.
    j = np.searchsorted(discontinuities, X0)
    # Loop over timesteps
    t = 0
    x = X0
    # iteration counter
    i = 0
    # Progress bar for long simulations
    if progressbar:
        pbar = tqdm(total = Tmax)
    while t < Tmax:
        # Make sure the last step stops exactly at Tmax
        h = min(h0, Tmax - t)
        # tentatively calculate next position
        x_, k = integrator(X[i], t, h, f)
        t_ = t + h
        # check for crossing of discontinuity
        j_ = np.searchsorted(discontinuities, x_)
        if j_ != j:
            # We have crossed one or more discontinuities,
            # find the time at which we crossed the first.
            if j_ > j:
                x_cross = discontinuities[j]
            else:
                x_cross = discontinuities[j-1]
            # if we are exactly at boundary, accept and move on
            if x_cross != x:
                # Get derivative at end of step
                # (k is already the derivative at the start of the step)
                k_ = f(x_, t_)
                # create hermite interpolator to use in bisection
                dense = lambda t_: hermite(x, k, t, x_, k_, t_, h) - x_cross
                # find time of crossing
                t_cross = bisect(dense, t, t + h)
                # Step to that time instead of the original time
                # (but never step across Tmax)
                h = min(t_cross - t, Tmax - t)
                x_, k = integrator(X[i], t, h, f)
                t_ = t + h
        # Update variables
        x = x_
        t = t_
        i += 1
        j = np.searchsorted(discontinuities, x)
        # Store progress
        X.append(x)
        T.append(t)
        if progressbar:
            # Update progress
            pbar.update(h)
        # Break to prevent infinite loop
        # (should never happen, but convenient in debugging)
        if i > 10*(Tmax/h0):
            print('Seems to get stuck in infinite loop')
            print('(or at least a very long loop)')
            print(X, T)
            break
    if progressbar:
        pbar.close()
    return X, T
_____no_output_____
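For reference (my annotation, not from the original notebook), the `hermite` function above is the standard cubic Hermite dense-output interpolant: with $\theta = (t - t_0)/h$,

$$u(\theta) = (1-\theta)\,x_0 + \theta\,x_1 + \theta(\theta-1)\Big[(1-2\theta)(x_1 - x_0) + (\theta-1)\,h\,k_0 + \theta\,h\,k_1\Big],$$

which matches the positions $x_0, x_1$ and the derivatives $k_0, k_1$ at the two ends of the step, so it can be bisected to locate the time at which a trajectory crosses a grid-cell boundary.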
MIT
notebooks/1D_test.ipynb
nordam/Discontinuities
Run a quick test to verify that results don't look crazy
# Problem properties
X0 = 50
Tmax = 10
dt = 0.01

# Interpolation points
xc = np.linspace(0, 100, 1001)

# kind of interpolation
#kind = 'linear'
kind = 'quadratic'
#kind = 'cubic'

fig = plt.figure(figsize = (9, 5))

# Positive derivative
interpolator = interp1d(xc, 1.2 + np.sin(2*np.pi*xc), kind = kind)
f = lambda x, t: interpolator(x)

X_, T_ = trajectory_special(X0, Tmax, dt, f, rk4_dense, xc)
X, T = trajectory(X0, Tmax, dt, f, rk4)

plt.plot(T, X, label = 'RK4 (positive drift)')
plt.plot(T_, X_, '--', label = 'RK4 event detection (positive drift)')

# Negative derivative
interpolator = interp1d(xc, -1.2 - np.sin(2*np.pi*xc), kind = kind)
f = lambda x, t: interpolator(x)

X_, T_ = trajectory_special(X0, Tmax, dt, f, rk4_dense, xc)
X, T = trajectory(X0, Tmax, dt, f, rk4)

plt.plot(T, X, label = 'RK4 (negative drift)')
plt.plot(T_, X_, '--', label = 'RK4 event detection (negative drift)')

plt.xlabel('Time')
plt.ylabel('X')
plt.legend()
plt.tight_layout()
_____no_output_____
MIT
notebooks/1D_test.ipynb
nordam/Discontinuities
Run convergence test
X0 = 0
Tmax = 10

# Interpolation points
xc = np.linspace(0, 100, 1001)

# kind of interpolation
kind = 'linear'
#kind = 'quadratic'
#kind = 'cubic'

# create interpolator, and wrap with lambda to get f(x, t)
interpolator = interp1d(xc, 2 + np.sin(2*np.pi*xc), kind = kind)
f = lambda x, t: interpolator(x)

# Reference solution
# (calculating the reference solution with the special integrator
# was found to work better)
dt_ref = 0.0002
X_ref_, T_ref = trajectory_special(X0, Tmax, dt_ref, f, rk4_dense, xc, progressbar = True)

# List of timesteps to investigate
dt_list = np.logspace(-3, -1, 100)

# Arrays to keep track of errors
errors = np.zeros(len(dt_list))
errors_special = np.zeros(len(dt_list))

# Loop over timesteps and calculate error
for i, dt in tqdm(enumerate(dt_list), total = len(dt_list)):
    X, T = trajectory(X0, Tmax, dt, f, rk4)
    errors[i] = np.abs(X_ref_[-1] - X[-1])
    X_, T_ = trajectory_special(X0, Tmax, dt, f, rk4_dense, xc)
    errors_special[i] = np.abs(X_ref_[-1] - X_[-1])

fig = plt.figure(figsize = (7, 4))

# Plot errors
plt.plot(dt_list, errors, label = 'RK4')
plt.plot(dt_list, errors_special, label = 'RK4 event detection')

# Plot trendlines
plt.plot(dt_list, 1e-1*dt_list**2, '--', c = 'k', label = '$h^2$')
plt.plot(dt_list, 1e-0*dt_list**3, '-.', c = 'k', label = '$h^3$')
plt.plot(dt_list, 1e+1*dt_list**4, ':', c = 'k', label = '$h^4$')

# scales and labels, etc.
plt.xscale('log')
plt.yscale('log')
plt.ylabel('Global error')
plt.xlabel('Timestep, $h$')
plt.legend(fontsize = 12, loc = 'lower right')
plt.tight_layout()
_____no_output_____
MIT
notebooks/1D_test.ipynb
nordam/Discontinuities
SSD512 Training If the training goes well, it should look something like this: [SSD300 "07+12" training summary](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md)
from tensorflow.python.keras.optimizers import Adam, SGD from tensorflow.python.keras.callbacks import ModelCheckpoint, LearningRateScheduler, TerminateOnNaN, CSVLogger from tensorflow.python.keras import backend as K from tensorflow.python.keras.models import load_model from math import ceil import numpy as np from matplotlib import pyplot as plt from models.keras_ssd512 import ssd_512 from keras_loss_function.keras_ssd_loss import SSDLoss from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes from keras_layers.keras_layer_DecodeDetections import DecodeDetections from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast from keras_layers.keras_layer_L2Normalization import L2Normalization from ssd_encoder_decoder.ssd_input_encoder import SSDInputEncoder from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast from data_generator.object_detection_2d_data_generator import DataGenerator from data_generator.object_detection_2d_geometric_ops import Resize from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms from make_annotation import Make_PicXML from make_annotation import Make_txt %matplotlib inline home_path = 'C:/tensorflow1/ssd_512_2/' img_height = 512 # Height of the model input images img_width = 512 # Width of the model input images img_channels = 3 # Number of color channels of the model input images mean_color = [123, 117, 104] # The per-channel mean of the images in the dataset. Do not change this value if you're using any of the pre-trained weights. swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR, so we'll have the model reverse the color channel order of the input images. n_classes = 34 # Number of positive classes, e.g. 20 for Pascal VOC, 80 for MS COCO scales = [0.07, 0.15, 0.3, 0.45, 0.6, 0.75, 0.9, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets aspect_ratios = [[1.0, 2.0, 0.5], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5], [1.0, 2.0, 0.5]] two_boxes_for_ar1 = True steps = [8, 16, 32, 64, 128, 256, 512] # The space between two adjacent anchor box center points for each predictor layer. offsets=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer. clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are divided as in the original implementation normalize_coords = True # 1: Build the Keras model. K.clear_session() # Clear previous models from memory. model = ssd_512(image_size=(img_height, img_width, img_channels), n_classes=n_classes, mode='training', l2_regularization=0.0005, scales=scales, aspect_ratios_per_layer= aspect_ratios, two_boxes_for_ar1=two_boxes_for_ar1, steps=steps, offsets=offsets, clip_boxes=clip_boxes, variances=variances, normalize_coords=normalize_coords, subtract_mean=mean_color, swap_channels=swap_channels) # 2: Load some weights into the model. 
weights_path = home_path + 'VGG_ILSVRC_16_layers_fc_reduced.h5'

model.load_weights(weights_path, by_name=True)

# 3: Instantiate an optimizer and the SSD loss function and compile the model.

# The Adam optimizer is recommended.
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
#sgd = SGD(lr=0.001, momentum=0.9, decay=0.0, nesterov=False)

ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)

model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
_____no_output_____
Apache-2.0
ssd512_training _2.ipynb
hidekazu300/ssd_512_2
0. Make annotation data
datasize = 10

Make_PicXML(sample_filename='sample/home',
            save_pic_filename='DATASET/JPEGImages',
            save_xml_filename='DATASET/Annotations',
            robust=0,
            datasize=datasize)

Make_txt(save_file='DATASET',
         datasize=datasize,
         percent=0.2)
_____no_output_____
Apache-2.0
ssd512_training _2.ipynb
hidekazu300/ssd_512_2
1. Set the model configuration parameters Set the parameters here. 2. Build or load the model If this is your first training run, execute section 2.1; from the second run onwards, execute 2.2 instead. Never run both. 2.1 Create a new model and load trained VGG-16 weights into it (or trained SSD weights) If you want to create a new SSD512 model, this is the relevant section for you. If you want to load a previously saved SSD512 model, skip ahead to section 2.2. The code cell below does the following things: 1. It calls the function `ssd_512()` to build the model. 2. It then loads the weights file that is found at `weights_path` into the model. You could load the trained VGG-16 weights or you could load the weights of a trained model. If you want to reproduce the original SSD training, load the pre-trained VGG-16 weights. In any case, you need to set the path to the weights file you want to load on your local machine. Download links to all the trained weights are provided in the [README](https://github.com/pierluigiferrari/ssd_keras/blob/master/README.md) of this repository. 3. Finally, it compiles the model for the training. In order to do so, we're defining an optimizer (Adam) and a loss function (SSDLoss) to be passed to the `compile()` method. The original implementation uses plain SGD with momentum (commented out below), so use that instead if you want to reproduce the original training exactly; Adam is generally the superior optimizer, and this notebook uses it. You might need to adjust the learning rate scheduler below slightly in case you use Adam. Note that the learning rate that is being set here doesn't matter, because further below we'll pass a learning rate scheduler to the training function, which will overwrite any learning rate set here, i.e. what matters are the learning rates that are defined by the learning rate scheduler. `SSDLoss` is a custom Keras loss function that implements the multi-task loss that consists of a log loss for classification and a smooth L1 loss for localization. `neg_pos_ratio` and `alpha` are set as in the paper. 2.2 Load a previously created model If you have previously created and saved a model and would now like to load it, execute the next code cell. The only thing you need to do here is to set the path to the saved model HDF5 file that you would like to load. The SSD model contains custom objects: neither the loss function nor the anchor box or L2-normalization layer types are contained in the Keras core library, so we need to provide them to the model loader. This next code cell assumes that you want to load a model that was created in 'training' mode. If you want to load a model that was created in 'inference' or 'inference_fast' mode, you'll have to add the `DecodeDetections` or `DecodeDetectionsFast` layer type to the `custom_objects` dictionary below.
""" # TODO: Set the path to the `.h5` file of the model to be loaded. model_path = 'path/to/trained/model.h5' # We need to create an SSDLoss object in order to pass that to the model loader. ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0) K.clear_session() # Clear previous models from memory. model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes, 'L2Normalization': L2Normalization, 'compute_loss': ssd_loss.compute_loss}) """
_____no_output_____
Apache-2.0
ssd512_training _2.ipynb
hidekazu300/ssd_512_2
3. Set up the data generators for the training The code cells below set up the data generators for the training and validation datasets to train the model. The settings below reproduce the original SSD training on Pascal VOC 2007 `trainval` plus 2012 `trainval` and validation on Pascal VOC 2007 `test`. The only thing you need to change here are the filepaths to the datasets on your local machine. Note that parsing the labels from the XML annotations files can take a while. Note that the generator provides two options to speed up the training. By default, it loads the individual images for a batch from disk. This has two disadvantages. First, for compressed image formats like JPG, this is a huge computational waste, because every image needs to be decompressed again and again every time it is being loaded. Second, the images on disk are likely not stored in a contiguous block of memory, which may also slow down the loading process. The first option that `DataGenerator` provides to deal with this is to load the entire dataset into memory, which reduces the access time for any image to a negligible amount, but of course this is only an option if you have enough free memory to hold the whole dataset. As a second option, `DataGenerator` provides the possibility to convert the dataset into a single HDF5 file. This HDF5 file stores the images as uncompressed arrays in a contiguous block of memory, which dramatically speeds up the loading time. It's not as good as having the images in memory, but it's a lot better than the default option of loading them from their compressed JPG state every time they are needed. Of course such an HDF5 dataset may require significantly more disk space than the compressed images (around 9 GB total for Pascal VOC 2007 `trainval` plus 2012 `trainval` and another 2.6 GB for 2007 `test`). You can later load these HDF5 datasets directly in the constructor. The original SSD implementation uses a batch size of 32 for the training. In case you run into GPU memory issues, reduce the batch size accordingly. You need at least 7 GB of free GPU memory to train an SSD300 with 20 object classes with a batch size of 32. The `DataGenerator` itself is fairly generic. It doesn't contain any data augmentation or bounding box encoding logic. Instead, you pass a list of image transformations and an encoder for the bounding boxes in the `transformations` and `label_encoder` arguments of the data generator's `generate()` method, and the data generator will then apply those given transformations and the encoding to the data. Everything here is preset already, but if you'd like to learn more about the data generator and its data augmentation capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository. The data augmentation settings defined further down reproduce the data augmentation pipeline of the original SSD training. The training generator receives an object `ssd_data_augmentation`, which is a transformation object that is itself composed of a whole chain of transformations that replicate the data augmentation procedure used to train the original Caffe implementation. The validation generator receives an object `resize`, which simply resizes the input images. An `SSDInputEncoder` object, `ssd_input_encoder`, is passed to both the training and validation generators. As explained above, it matches the ground truth labels to the model's anchor boxes and encodes the box coordinates into the format that the model needs. In order to train the model on a dataset other than Pascal VOC, either choose `DataGenerator`'s appropriate parser method that corresponds to your data format, or, if `DataGenerator` does not provide a suitable parser for your data format, you can write an additional parser and add it. Out of the box, `DataGenerator` can handle datasets that use the Pascal VOC format (use `parse_xml()`), the MS COCO format (use `parse_json()`) and a wide range of CSV formats (use `parse_csv()`).
# 1: Instantiate two `DataGenerator` objects: One for training, one for validation.

# Optional: If you have enough memory, consider loading the images into memory for the reasons explained above.

train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)

# 2: Parse the image and label lists for the training and validation datasets. This can take a while.

# The directories that contain the images.
images_dir = home_path + 'DATASET/JPEGImages/'

# The directories that contain the annotations.
annotations_dir = home_path + 'DATASET/Annotations/'

# The paths to the image sets.
val_image_set_filename = home_path + 'DATASET/val.txt'
trainval_image_set_filename = home_path + 'DATASET/trainval.txt'

# The XML parser needs to know what object class names to look for and in which order to map them to integers.
classes = ['1m','2m','3m','4m','5m','6m','7m','8m','9m','1p','2p','3p','4p','5p','6p',
           '7p','8p','9p','1s','2s','3s','4s','5s','6s','7s','8s','9s',
           'east','south','west','north','white','hatsu','tyun']

train_dataset.parse_xml(images_dirs=[images_dir],
                        image_set_filenames=[trainval_image_set_filename],
                        annotations_dirs=[annotations_dir],
                        classes=classes,
                        include_classes='all',
                        exclude_truncated=False,
                        exclude_difficult=False,
                        ret=False)

val_dataset.parse_xml(images_dirs=[images_dir],
                      image_set_filenames=[val_image_set_filename],
                      annotations_dirs=[annotations_dir],
                      classes=classes,
                      include_classes='all',
                      exclude_truncated=False,
                      exclude_difficult=True,
                      ret=False)

# Optional: Convert the dataset into an HDF5 dataset. This will require more disk space, but will
# speed up the training. Doing this is not relevant in case you activated the `load_images_into_memory`
# option in the constructor, because in that case the images are in memory already anyway. If you don't
# want to create HDF5 datasets, comment out the subsequent two function calls.

train_dataset.create_hdf5_dataset(file_path='DATASET_trainval.h5',
                                  resize=False,
                                  variable_image_size=True,
                                  verbose=True)

val_dataset.create_hdf5_dataset(file_path='DATASET_test.h5',
                                resize=False,
                                variable_image_size=True,
                                verbose=True)

# 3: Set the batch size.
batch_size = 2 # Change the batch size if you like, or if you run into GPU memory issues.

# 4: Set the image transformations for pre-processing and data augmentation options.

# For the training generator:
ssd_data_augmentation = SSDDataAugmentation(img_height=img_height,
                                            img_width=img_width,
                                            background=mean_color)

# For the validation generator:
convert_to_3_channels = ConvertTo3Channels()
resize = Resize(height=img_height, width=img_width)

# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.

# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
# Note: the SSD512 model has seven predictor layers; the last one is conv10_2.
predictor_sizes = [model.get_layer('conv4_3_norm_mbox_conf').output_shape[1:3],
                   model.get_layer('fc7_mbox_conf').output_shape[1:3],
                   model.get_layer('conv6_2_mbox_conf').output_shape[1:3],
                   model.get_layer('conv7_2_mbox_conf').output_shape[1:3],
                   model.get_layer('conv8_2_mbox_conf').output_shape[1:3],
                   model.get_layer('conv9_2_mbox_conf').output_shape[1:3],
                   model.get_layer('conv10_2_mbox_conf').output_shape[1:3]]

ssd_input_encoder = SSDInputEncoder(img_height=img_height,
                                    img_width=img_width,
                                    n_classes=n_classes,
                                    predictor_sizes=predictor_sizes,
                                    scales=scales,
                                    aspect_ratios_per_layer=aspect_ratios,
                                    two_boxes_for_ar1=two_boxes_for_ar1,
                                    steps=steps,
                                    offsets=offsets,
                                    clip_boxes=clip_boxes,
                                    variances=variances,
                                    matching_type='multi',
                                    pos_iou_threshold=0.5,
                                    neg_iou_limit=0.5,
                                    normalize_coords=normalize_coords)

# 6: Create the generator handles that will be passed to Keras' `fit_generator()` function.

train_generator = train_dataset.generate(batch_size=batch_size,
                                         shuffle=True,
                                         transformations=[ssd_data_augmentation],
                                         label_encoder=ssd_input_encoder,
                                         returns={'processed_images', 'encoded_labels'},
                                         keep_images_without_gt=False)

val_generator = val_dataset.generate(batch_size=batch_size,
                                     shuffle=False,
                                     transformations=[convert_to_3_channels, resize],
                                     label_encoder=ssd_input_encoder,
                                     returns={'processed_images', 'encoded_labels'},
                                     keep_images_without_gt=False)

# Get the number of samples in the training and validation datasets.
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()

print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))
_____no_output_____
Apache-2.0
ssd512_training _2.ipynb
hidekazu300/ssd_512_2
4. Set the remaining training parameters We've already chosen an optimizer and set the batch size above; now let's set the remaining training parameters. I'll set one epoch to consist of 1,000 training steps. The next code cell defines a learning rate schedule that replicates the learning rate schedule of the original Caffe implementation for the training of the SSD300 Pascal VOC "07+12" model. That model was trained for 120,000 steps with a learning rate of 0.001 for the first 80,000 steps, 0.0001 for the next 20,000 steps, and 0.00001 for the last 20,000 steps. If you're training on a different dataset, define the learning rate schedule however you see fit. I'll set only a few essential Keras callbacks below; feel free to add more callbacks if you want TensorBoard summaries or whatever. We obviously need the learning rate scheduler and we want to save the best models during the training. It also makes sense to continuously stream our training history to a CSV log file after every epoch, because if we didn't do that, and the training terminates with an exception at some point or the kernel of this Jupyter notebook dies for some reason, we would lose the entire history for the trained epochs. Finally, we'll also add a callback that makes sure that the training terminates if the loss becomes `NaN`. Depending on the optimizer you use, it can happen that the loss becomes `NaN` during the first iterations of the training; in later iterations it's less of a risk. For example, I've never seen a `NaN` loss when I trained SSD using an Adam optimizer, but I've seen a `NaN` loss a couple of times during the very first couple of hundred training steps of training a new model when I used an SGD optimizer.
# Define a learning rate schedule.

def lr_schedule(epoch):
    if epoch < 80:
        return 0.001
    elif epoch < 100:
        return 0.0001
    else:
        return 0.00001

# Define model callbacks.

# TODO: Set the filepath under which you want to save the model.
model_checkpoint = ModelCheckpoint(filepath='trained_model/ssd512_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
                                   monitor='val_loss',
                                   verbose=1,
                                   save_best_only=True,
                                   save_weights_only=False,
                                   mode='auto',
                                   period=1)
# When resuming after a kernel restart, set this to the best val_loss of the previous run (see section 5):
#model_checkpoint.best =

csv_logger = CSVLogger(filename='trained_model/ssd512_training_log.csv',
                       separator=',',
                       append=True)

learning_rate_scheduler = LearningRateScheduler(schedule=lr_schedule,
                                                verbose=1)

terminate_on_nan = TerminateOnNaN()

callbacks = [model_checkpoint,
             csv_logger,
             learning_rate_scheduler,
             terminate_on_nan]
_____no_output_____
Apache-2.0
ssd512_training _2.ipynb
hidekazu300/ssd_512_2
5. Train

In order to reproduce the training of the "07+12" model mentioned above, at 1,000 training steps per epoch you'd have to train for 120 epochs. That would take a really long time, so you might not want to do all 120 epochs in one go and instead train only for a few epochs at a time. You can find a summary of a full training [here](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md).

In order to only run a partial training and resume smoothly later on, there are a few things you should note (a sketch of the resume procedure follows this list):

1. Always load the full model if you can, rather than building a new model and loading previously saved weights into it. Optimizers like SGD or Adam keep running averages of past gradient moments internally. If you always save and load full models when resuming a training, then the state of the optimizer is maintained and the training picks up exactly where it left off. If you build a new model and load weights into it, the optimizer is initialized from scratch, which, especially in the case of Adam, leads to small but unnecessary setbacks every time you resume the training with previously saved weights.

2. In order for the learning rate scheduler callback above to work properly, `fit_generator()` needs to know which epoch we're in, otherwise it will start with epoch 0 every time you resume the training. Set `initial_epoch` to the next epoch of your training. Note that this parameter is zero-based, i.e. the first epoch is epoch 0. If you had trained for 10 epochs previously and now wanted to resume from there, you'd set `initial_epoch = 10` (since epoch 10 is the eleventh epoch). Furthermore, set `final_epoch` to the last epoch you want to run: to stick with the example, if you wanted to train for another 10 epochs, you'd set `initial_epoch = 10` and `final_epoch = 20`.

3. In order for the model checkpoint callback above to work correctly after a kernel restart, set `model_checkpoint.best` to the best validation loss from the previous training. If you don't, a new `ModelCheckpoint` object created after the restart won't know what the last best validation loss was, so it will always save the weights of the first epoch of your new training and record that loss as its new best. This isn't super-important, I just wanted to mention it.
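To make points 1 and 3 concrete, here is a minimal resume sketch. It assumes the module layout of the `ssd_keras` repository this notebook is based on; the checkpoint filename and the loss value below are made-up examples, not outputs of an actual run:

```python
# A sketch of resuming from a saved *full* model rather than from weights alone.
# Assumptions: the ssd_keras module layout (keras_layers, keras_loss_function);
# the checkpoint filename and the 5.1312 value are hypothetical examples.
from keras.models import load_model
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_L2Normalization import L2Normalization

ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)

# Loading the full model restores the optimizer state (point 1).
model = load_model('trained_model/ssd512_epoch-10_loss-4.8421_val_loss-5.1312.h5',
                   custom_objects={'AnchorBoxes': AnchorBoxes,
                                   'L2Normalization': L2Normalization,
                                   'compute_loss': ssd_loss.compute_loss})

# Ten epochs were trained previously, so the next (zero-based) epoch is 10 (point 2).
initial_epoch = 10
final_epoch = 20

# Restore the checkpoint callback's best validation loss so it doesn't treat the
# first epoch of the resumed run as a new best (point 3).
model_checkpoint.best = 5.1312
```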
# If you're resuming a previous training, set `initial_epoch` and `final_epoch` accordingly. initial_epoch = 0 final_epoch = 120 steps_per_epoch = 1000 history = model.fit_generator(generator=train_generator, steps_per_epoch=steps_per_epoch, epochs=final_epoch, callbacks=callbacks, validation_data=val_generator, validation_steps=ceil(val_dataset_size/batch_size), initial_epoch=initial_epoch)
_____no_output_____
Apache-2.0
ssd512_training _2.ipynb
hidekazu300/ssd_512_2
6. Make predictions

Now let's make some predictions on the validation dataset with the trained model. For convenience we'll use the validation generator that we've already set up above. Feel free to change the batch size.

You can set the `shuffle` option to `False` if you would like to check the model's progress on the same image(s) over the course of the training.
# 1: Set the generator for the predictions. predict_generator = val_dataset.generate(batch_size=1, shuffle=True, transformations=[convert_to_3_channels, resize], label_encoder=None, returns={'processed_images', 'filenames', 'inverse_transform', 'original_images', 'original_labels'}, keep_images_without_gt=False) # 2: Generate samples. batch_images, batch_filenames, batch_inverse_transforms, batch_original_images, batch_original_labels = next(predict_generator) i = 0 # Which batch item to look at print("Image:", batch_filenames[i]) print() print("Ground truth boxes:\n") print(np.array(batch_original_labels[i])) # 3: Make predictions. y_pred = model.predict(batch_images)
_____no_output_____
Apache-2.0
ssd512_training _2.ipynb
hidekazu300/ssd_512_2
Now let's decode the raw predictions in `y_pred`.

Had we created the model in 'inference' or 'inference_fast' mode, then the model's final layer would be a `DecodeDetections` layer and `y_pred` would already contain the decoded predictions, but since we created the model in 'training' mode, the model outputs raw predictions that still need to be decoded and filtered. This is what the `decode_detections()` function is for. It does exactly what the `DecodeDetections` layer would do, but using Numpy instead of TensorFlow (i.e. on the CPU instead of the GPU).

`decode_detections()` with default argument values follows the procedure of the original SSD implementation: First, a very low confidence threshold of 0.01 is applied to filter out the majority of the predicted boxes, then greedy non-maximum suppression is performed per class with an intersection-over-union threshold of 0.45, and out of what is left after that, the top 200 highest-confidence boxes are returned. Those settings are for precision-recall scoring purposes though. In order to get some usable final predictions, we'll set the confidence threshold much higher, e.g. to 0.5, since we're only interested in the very confident predictions.
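For intuition, here is a minimal NumPy sketch of the greedy per-class NMS step described above. This illustrates the procedure; it is not the library's actual implementation:

```python
import numpy as np

def greedy_nms_sketch(boxes, scores, iou_threshold=0.45):
    """Greedy NMS over (N, 4) boxes in (xmin, ymin, xmax, ymax) format."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]          # the most confident remaining box survives...
        keep.append(i)
        rest = order[1:]
        # ...and suppresses all remaining boxes that overlap it too strongly.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_threshold]  # drop boxes that overlap too much
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 150, 150]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(greedy_nms_sketch(boxes, scores))  # [0, 2]: the second box overlaps the first too much
```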
# 4: Decode the raw predictions in `y_pred`. y_pred_decoded = decode_detections(y_pred, confidence_thresh=0.5, iou_threshold=0.4, top_k=200, normalize_coords=normalize_coords, img_height=img_height, img_width=img_width)
_____no_output_____
Apache-2.0
ssd512_training _2.ipynb
hidekazu300/ssd_512_2
We made the predictions on the resized images, but we'd like to visualize the outcome on the original input images, so we'll convert the coordinates accordingly. Don't worry about that opaque `apply_inverse_transforms()` function below; in this simple case it just applies `(* original_image_size / resized_image_size)` to the box coordinates.
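As a sanity check, here is what that rescaling amounts to for a plain resize (illustrative only; `apply_inverse_transforms()` below handles this for us, including chained transformations):

```python
def rescale_box(box, resized_size, original_size):
    """Map a (xmin, ymin, xmax, ymax) box from the resized image back to the original."""
    (res_h, res_w), (orig_h, orig_w) = resized_size, original_size
    xmin, ymin, xmax, ymax = box
    return (xmin * orig_w / res_w, ymin * orig_h / res_h,
            xmax * orig_w / res_w, ymax * orig_h / res_h)

# e.g. a box predicted on a 512x512 input, mapped back to a 768x1024 original:
print(rescale_box((100, 100, 200, 200), (512, 512), (768, 1024)))  # (200.0, 150.0, 400.0, 300.0)
```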
# 5: Convert the predictions for the original image. y_pred_decoded_inv = apply_inverse_transforms(y_pred_decoded, batch_inverse_transforms) np.set_printoptions(precision=2, suppress=True, linewidth=90) print("Predicted boxes:\n") print(' class conf xmin ymin xmax ymax') print(y_pred_decoded_inv[i])
_____no_output_____
Apache-2.0
ssd512_training _2.ipynb
hidekazu300/ssd_512_2
Finally, let's draw the predicted boxes onto the image. Each predicted box shows its confidence next to the category name. The ground truth boxes are also drawn onto the image in green for comparison.
# 6: Draw the predicted boxes onto the image

# Set the colors for the bounding boxes
colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist()
classes = ['1m','2m','3m','4m','5m','6m','7m','8m','9m','1p','2p','3p','4p','5p','6p',
           '7p','8p','9p','1s','2s','3s','4s','5s','6s','7s','8s','9s',
           'east','south','west','north','white','hatsu','tyun']

plt.figure(figsize=(20,12))
plt.imshow(batch_original_images[i])

current_axis = plt.gca()

# Draw the ground truth boxes in green (label format: [class_id, xmin, ymin, xmax, ymax]).
for box in batch_original_labels[i]:
    xmin = box[1]
    ymin = box[2]
    xmax = box[3]
    ymax = box[4]
    label = '{}'.format(classes[int(box[0])])
    current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='green', fill=False, linewidth=2))
    current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':'green', 'alpha':1.0})

# Draw the predicted boxes (prediction format: [class_id, confidence, xmin, ymin, xmax, ymax]).
for box in y_pred_decoded_inv[i]:
    xmin = box[2]
    ymin = box[3]
    xmax = box[4]
    ymax = box[5]
    color = colors[int(box[0])]
    label = '{}: {:.2f}'.format(classes[int(box[0])], box[1])
    current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color=color, fill=False, linewidth=2))
    current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':color, 'alpha':1.0})
_____no_output_____
Apache-2.0
ssd512_training _2.ipynb
hidekazu300/ssd_512_2
Build a machine learning workflow using Step Functions and SageMaker

1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Build a machine learning workflow](#Build-a-machine-learning-workflow)

Introduction

This notebook describes using the AWS Step Functions Data Science SDK to create and manage workflows. The Step Functions SDK is an open source library that allows data scientists to easily create and execute machine learning workflows using AWS Step Functions and Amazon SageMaker. For more information, see the following:

* [AWS Step Functions](https://aws.amazon.com/step-functions/)
* [AWS Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html)
* [AWS Step Functions Data Science SDK](https://aws-step-functions-data-science-sdk.readthedocs.io)

In this notebook we will use the SDK to create steps, link them together to create a workflow, and execute the workflow in AWS Step Functions. The first tutorial shows how to create an ML pipeline workflow, and the second shows how to run multiple experiments in parallel.
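To preview the pattern this notebook builds toward, here is a minimal sketch of defining, chaining, and executing a workflow with the SDK. It assumes the SDK is installed (next cell) and the execution roles are configured (see Setup below); `estimator` stands for a SageMaker Estimator defined elsewhere, and all names and S3 URIs are hypothetical:

```python
# Illustrative sketch only: `estimator` and the S3 URIs are assumed to exist.
from stepfunctions.steps import TrainingStep, Chain
from stepfunctions.workflow import Workflow

training_step = TrainingStep(
    'Train model',                     # state name shown in the Step Functions console
    estimator=estimator,               # a sagemaker.estimator.Estimator defined elsewhere
    data={'train': 's3://my-bucket/train',          # hypothetical S3 URIs
          'validation': 's3://my-bucket/validation'},
    job_name='example-training-job'
)

workflow = Workflow(name='example-ml-workflow',
                    definition=Chain([training_step]),   # steps execute in order
                    role=workflow_execution_role)

workflow.create()              # registers the state machine in AWS Step Functions
execution = workflow.execute() # starts a run; progress is visible in the console
```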
%%sh
pip -q install --upgrade stepfunctions
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Setup

Add a policy to your SageMaker role in IAM

**If you are running this notebook on an Amazon SageMaker notebook instance**, the IAM role assumed by your notebook instance needs permission to create and run workflows in AWS Step Functions. To provide this permission to the role, do the following:

1. Open the Amazon [SageMaker console](https://console.aws.amazon.com/sagemaker/).
2. Select **Notebook instances** and choose the name of your notebook instance.
3. Under **Permissions and encryption**, select the role ARN to view the role on the IAM console.
4. Choose **Attach policies** and search for `AWSStepFunctionsFullAccess`.
5. Select the check box next to `AWSStepFunctionsFullAccess` and choose **Attach policy**.

If you are running this notebook in a local environment, the SDK will use your configured AWS CLI configuration. For more information, see [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).

Next, create an execution role in IAM for Step Functions.

Create an execution role for Step Functions

You need an execution role so that you can create and execute workflows in Step Functions.

1. Go to the [IAM console](https://console.aws.amazon.com/iam/).
2. Select **Roles** and then **Create role**.
3. Under **Choose the service that will use this role**, select **Step Functions**.
4. Choose **Next** until you can enter a **Role name**.
5. Enter a name such as `StepFunctionsWorkflowExecutionRole` and then select **Create role**.

Attach a policy to the role you created. The following steps attach a policy that provides full access to Step Functions; as a good practice, however, you should only provide access to the resources you need.

1. Under the **Permissions** tab, click **Add inline policy**.
2. Enter the following in the **JSON** tab:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreateTransformJob",
                "sagemaker:DescribeTransformJob",
                "sagemaker:StopTransformJob",
                "sagemaker:CreateTrainingJob",
                "sagemaker:DescribeTrainingJob",
                "sagemaker:StopTrainingJob",
                "sagemaker:CreateHyperParameterTuningJob",
                "sagemaker:DescribeHyperParameterTuningJob",
                "sagemaker:StopHyperParameterTuningJob",
                "sagemaker:CreateModel",
                "sagemaker:CreateEndpointConfig",
                "sagemaker:CreateEndpoint",
                "sagemaker:DeleteEndpointConfig",
                "sagemaker:DeleteEndpoint",
                "sagemaker:UpdateEndpoint",
                "sagemaker:ListTags",
                "lambda:InvokeFunction",
                "sqs:SendMessage",
                "sns:Publish",
                "ecs:RunTask",
                "ecs:StopTask",
                "ecs:DescribeTasks",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
                "batch:SubmitJob",
                "batch:DescribeJobs",
                "batch:TerminateJob",
                "glue:StartJobRun",
                "glue:GetJobRun",
                "glue:GetJobRuns",
                "glue:BatchStopJobRun"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "sagemaker.amazonaws.com"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "events:PutTargets",
                "events:PutRule",
                "events:DescribeRule"
            ],
            "Resource": [
                "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTrainingJobsRule",
                "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTransformJobsRule",
                "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTuningJobsRule",
                "arn:aws:events:*:*:rule/StepFunctionsGetEventsForECSTaskRule",
                "arn:aws:events:*:*:rule/StepFunctionsGetEventsForBatchJobsRule"
            ]
        }
    ]
}
```

3. Choose **Review policy** and give the policy a name such as `StepFunctionsWorkflowExecutionPolicy`.
4. Choose **Create policy**. You will be redirected to the details page for the role.
5. Copy the **Role ARN** at the top of the **Summary**.

Configure execution roles
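For reference, the same role setup can be scripted with boto3 instead of the console. This is an illustrative sketch: the role and policy names are examples, and `policy_json` stands for the JSON policy text shown above:

```python
import json
import boto3

iam = boto3.client('iam')

# Trust policy letting AWS Step Functions assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "states.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

iam.create_role(RoleName='StepFunctionsWorkflowExecutionRole',
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Attach the inline policy; paste the policy JSON from above into `policy_json`.
policy_json = '...'
iam.put_role_policy(RoleName='StepFunctionsWorkflowExecutionRole',
                    PolicyName='StepFunctionsWorkflowExecutionPolicy',
                    PolicyDocument=policy_json)

print(iam.get_role(RoleName='StepFunctionsWorkflowExecutionRole')['Role']['Arn'])
```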
import sagemaker

# SageMaker execution role.
# `sagemaker.get_execution_role()` works when running inside a SageMaker notebook instance;
# if you are running elsewhere, replace it with your SageMaker role's ARN.
sagemaker_execution_role = sagemaker.get_execution_role()

# Paste the StepFunctionsWorkflowExecutionRole ARN from the setup steps above.
workflow_execution_role = 'arn:aws:iam::ACCOUNT_NUMBER:role/StepFunctionsWorkflowExecutionRole'
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples