Endpoint Interface
Using the Endpoint interface to request IPA content is fairly straightforward:
1. Identify the required IPA Endpoint (URL)
2. Use the Endpoint interface to send a request to the Endpoint
3. Decode the response and extract the IPA data
Identifying the Surfaces Endpoint
To ascertain the Endpoint, we can use the Refinitiv Data Platform's API Playground - an interactive documentation site you can access once you have a valid Refinitiv Data Platform account.
vs_endpoint = rdp.Endpoint(session,
    "https://api.refinitiv.com/data/quantitative-analytics-curves-and-surfaces/v1/surfaces")
Build our JSON Request
Using the reference documentation or by referring to the example queries shown on the above API playground page, I can build up my Request. Currently there are four Underlying Types of Volatility Surface supported:
* Eti : exchange-traded instruments like equities, equity indices, and futures
* Fx : FX instruments
* Swaption : rate swaption volatility cubes
* Cap : rate cap volatilities
For example, the JSON request below will allow me to generate Volatility Surfaces:
* for Renault, Peugeot, BMW and VW
* express the axes in Dates and Moneyness
* and return the data in a matrix format
Note from the request below how I can obtain data for multiple entities in a single request.
eti_request_body={
"universe": [
{ "surfaceTag": "RENAULT",
"underlyingType": "Eti",
"underlyingDefinition": {
"instrumentCode": "RENA.PA"
},
"surfaceParameters": {
"xAxis": "Date",
"yAxis": "Moneyness",
"timeStamp":"Close"
},
"surfaceLayout": { "format": "Matrix" }
},
{ "surfaceTag": "PEUGEOT",
"underlyingType": "Eti",
"underlyingDefinition": {
"instrumentCode": "PEUP.PA"
},
"surfaceParameters": {
"xAxis": "Date",
"yAxis": "Moneyness",
"timeStamp":"Close"
},
"surfaceLayout": {"format": "Matrix" }
},
{ "surfaceTag": "BMW",
"underlyingType": "Eti",
"underlyingDefinition": {
"instrumentCode": "BMWG.DE"
},
"surfaceParameters": {
"xAxis": "Date",
"yAxis": "Moneyness",
"timeStamp":"Close"
},
"surfaceLayout": {"format": "Matrix" }
},
{ "surfaceTag": "VW",
"underlyingType": "Eti",
"underlyingDefinition": {
"instrumentCode": "VOWG_p.DE"
},
"surfaceParameters": {
"xAxis": "Date",
"yAxis": "Moneyness",
"timeStamp":"Close"
},
"surfaceLayout": {"format": "Matrix" }
}]
}
I then send the request to the Platform using the instance of the Endpoint interface I created:
eti_response = vs_endpoint.send_request(
method = rdp.Endpoint.RequestMethod.POST,
body_parameters = eti_request_body
)
print(json.dumps(eti_response.data.raw, indent=2)) | {
"data": [
{
"surfaceTag": "RENAULT",
"surface": [
[
null,
"0.5",
"0.6",
"0.7",
"0.75",
"0.8",
"0.85",
"0.9",
"0.95",
"0.975",
"1",
"1.025",
"1.05",
"1.1",
"1.15",
"1.2",
"1.25",
"1.3",
"1.4",
"1.5"
],
[
"2021-09-17",
147.08419792403998,
124.887128486199,
102.502608735503,
90.7725390709383,
78.3224754024164,
64.7347605843219,
49.5452700945521,
34.7540272345762,
31.4667054231154,
32.0453039114781,
34.512655172437704,
37.541128880630296,
43.6039134806104,
49.0859621495022,
53.9581726356531,
58.3160482626392,
62.25107287197859,
69.1277973238572,
74.9979157202164
],
[
"2021-10-15",
53.069388222896194,
48.5924854712817,
44.4596013501265,
42.481538137102596,
40.545928644530996,
38.642879888308,
36.764926387466204,
34.910013376827095,
33.995033240751596,
33.0976677871358,
32.243729081637,
31.5385617522967,
32.9317587757963,
36.540667097254,
39.8547710463726,
42.8206310435714,
45.4972388560462,
50.1755078982006,
54.1734706301841
],
[
"2021-11-19",
51.010168194334305,
47.2586137425893,
43.8369524931666,
42.215783288497,
40.6407820868742,
39.1035551143142,
37.5966976300453,
36.113513046636804,
35.3788420621087,
34.6477842934691,
33.9196052012799,
33.4083257579711,
33.970220758767496,
34.4985849874748,
34.9969846589742,
35.4684559278907,
35.915604065343,
36.745644518348705,
37.5018840990136
],
[
"2021-12-17",
57.233242294493905,
51.164053019880896,
45.701088650208,
43.1961463100387,
40.8664673512826,
38.7492162510077,
36.8984869723095,
35.3826377109506,
34.773166630264605,
34.2737384577619,
33.8906209420233,
33.6275219669106,
33.460272549395796,
33.7371002456965,
34.3813221293777,
35.298242246575,
36.3977365918439,
38.8705864290812,
41.4290579735572
],
[
"2022-03-18",
44.6596592255869,
42.0460856501159,
39.7202965404191,
38.644801723719105,
37.622080867111,
36.6519175260592,
35.7389270526863,
34.896264117111905,
34.5095990171748,
34.1537003957255,
33.8373185227046,
33.5720427853049,
33.2482704002882,
33.2492925176735,
33.5159983803838,
33.928850537993796,
34.4042590290927,
35.3962100605949,
36.3580111985554
],
[
"2022-06-17",
49.1870500594481,
44.9031179692494,
41.2680211636964,
39.69311034954,
38.288670999534006,
37.0639050681352,
36.0273893913099,
35.184454053269995,
34.8357727616879,
34.5348824790694,
34.2806770212382,
34.0716215264747,
33.7809379861908,
33.6439199904793,
33.6387371081713,
33.7429317144256,
33.9352165833992,
34.5107887236788,
35.2464953006523
],
[
"2022-12-16",
45.915430866261296,
41.568666875793696,
38.0589155632095,
36.7374739303178,
35.756679762792,
35.092477323812396,
34.6758141327614,
34.431121339744294,
34.354025406618796,
34.2993866650758,
34.2628112321313,
34.2407940558594,
34.229839222543504,
34.2502985967091,
34.291723154303796,
34.3472511126111,
34.412282078027204,
34.559230167779,
34.7171235514547
]
]
},
{
"surfaceTag": "PEUGEOT",
"error": {
"id": "232f6765-a920-4584-8b54-77f39b315da1/388f77ae-866e-44fe-989f-b954595da7f6",
"status": "Error",
"message": "Unknown underlying : PEUP.PA@RIC",
"code": "VolSurf.10008"
}
},
{
"surfaceTag": "BMW",
"surface": [
[
null,
"0.5",
"0.6",
"0.7",
"0.75",
"0.8",
"0.85",
"0.9",
"0.95",
"0.975",
"1",
"1.025",
"1.05",
"1.1",
"1.15",
"1.2",
"1.25",
"1.3",
"1.4",
"1.5"
],
[
"2021-09-17",
116.845285831284,
98.4955466076639,
79.8759711420056,
70.0693756839943,
59.648390837661005,
48.3928694609162,
36.659259523477,
27.8917636007667,
26.112867383184202,
25.7473437323123,
26.2111895589014,
27.1032791041882,
29.3810577196575,
31.7834555215809,
34.0931448372836,
36.2565214779287,
38.268806180789596,
41.8833019536874,
45.039930489630194
],
[
"2021-10-15",
66.4722438122797,
57.5692211952808,
48.8876367694066,
44.526206198251,
40.106867647071894,
35.6199006830361,
31.124524575682,
26.8858686281106,
25.0663745235899,
23.6223245491879,
22.6790698362936,
22.2830851870678,
22.838913504619,
24.3941728766485,
26.2802553341711,
28.2041512791805,
30.05887196084,
33.4545816221734,
36.4385434208208
],
[
"2021-11-19",
59.6548040147561,
51.676797795483594,
43.9545932082731,
40.1290827936518,
36.334848434780795,
32.647505885872405,
29.271374496206498,
26.5607752421821,
25.5533213548146,
24.7846805441186,
24.2280587712883,
23.8449233153254,
23.4499564127721,
23.364860568818898,
23.4512533780394,
23.6332095618005,
23.8686642855439,
24.4137497636682,
24.988755094251598
],
[
"2021-12-17",
57.000970257372096,
49.1530128772988,
41.6746242855461,
38.061545312743,
34.582295989538494,
31.342785189096702,
28.528564514887698,
26.3775814320784,
25.611275627130297,
25.061410076782497,
24.7181541429707,
24.560415940794698,
24.6897568420552,
25.2279090586339,
26.000041490541097,
26.891655530662,
27.8339749368404,
29.7296077956909,
31.538236502954497
],
[
"2022-03-18",
34.6517801480973,
33.025354724913605,
31.6136221315428,
30.9749328298299,
30.377048394941504,
29.818577916088003,
29.299317389159,
28.8202403595205,
28.5963970847672,
28.383495230161,
28.181976544217203,
27.9923398485676,
27.6509074086288,
27.363675150843903,
27.1345570901111,
26.9657285226333,
26.8565726713302,
26.799338623303896,
26.9072105062878
],
[
"2022-06-17",
41.7685383611724,
35.765823375570896,
31.4025984950861,
29.843831526977,
28.6450177020749,
27.743950355993903,
27.078791153078203,
26.5958597181018,
26.4088241855986,
26.251943454376097,
26.1213386702238,
26.0136362793075,
25.855616826955703,
25.7588339299166,
25.708958559441502,
25.6951736343913,
25.7092656860147,
25.7974083262819,
25.9384282659597
],
[
"2022-12-16",
32.4567965023246,
30.3259923139122,
28.465159965014802,
27.6227740152669,
26.836305731271498,
26.1059801603091,
25.433785668915498,
24.8234563051983,
24.5431246580115,
24.2804200751625,
24.0362566646918,
23.8116193408349,
23.4250812803751,
23.1291124084223,
22.9310563772648,
22.8357291149445,
22.8439015971646,
23.1491439341391,
23.763006647344103
],
[
"2023-06-16",
32.8275913709902,
29.825454499724902,
27.1644029353807,
26.069388721946602,
25.3369159794596,
25.0224640729829,
24.9362882169239,
24.9393990255266,
24.9561139814056,
24.9784593215957,
25.004568455885902,
25.0332082655576,
25.0949882246438,
25.1596879533267,
25.2252008561141,
25.290393165269297,
25.354639439640604,
25.4790779043358,
25.5973370893097
]
]
},
{
"surfaceTag": "VW",
"surface": [
[
null,
"0.5",
"0.6",
"0.7",
"0.75",
"0.8",
"0.85",
"0.9",
"0.95",
"0.975",
"1",
"1.025",
"1.05",
"1.1",
"1.15",
"1.2",
"1.25",
"1.3",
"1.4",
"1.5"
],
[
"2021-09-17",
114.9314664709,
95.9894396686403,
76.6135440169291,
66.3472579190723,
55.4554726431587,
44.0344023956383,
33.7968257283678,
28.6851007510666,
28.1593306280496,
28.425046914714404,
29.140319709019202,
30.0946506746575,
32.2895785056828,
34.5475040494519,
36.7228908488976,
38.773967692826,
40.6945799450279,
44.1729901278008,
47.236493355838
],
[
"2021-10-15",
57.375246904915,
49.7825190662113,
42.5110323707841,
38.9611462854511,
35.4972590663025,
32.2134120429363,
29.3269545789032,
27.220565532367402,
26.593124698883102,
26.2860116277915,
26.284168490728398,
26.542959907222002,
27.6095695147525,
29.073599537630702,
30.678396354383196,
32.2960412468414,
33.8690560728042,
36.8038642621664,
39.4408213508689
],
[
"2021-11-19",
40.5164104901332,
37.5595161431696,
34.8643942746955,
33.5881627181228,
32.3487808701543,
31.1396611441992,
29.9550064044728,
28.789590185127402,
28.212574963201497,
27.6385789383842,
27.072623752272403,
27.4596183067678,
28.191684121696696,
28.8738666327561,
29.512238570718203,
30.111833833553497,
30.676872532730897,
31.7170519014931,
32.6556679383878
],
[
"2021-12-17",
55.608286147016194,
46.3040334948096,
40.4869770451672,
38.7152177248228,
37.488555251203195,
36.660398926279804,
36.117712281089794,
35.7785936997772,
35.666533484297204,
35.5851259433475,
35.5297584981015,
35.4965526522386,
35.4840584191147,
35.5270834682839,
35.6107986288532,
35.7243885274329,
35.8598807184815,
36.1743109491648,
36.522097375719596
],
[
"2022-03-18",
36.218856024754,
34.6807394151219,
33.324940583323794,
32.6999237680908,
32.1042464631892,
31.534446006490903,
30.987629291031897,
30.4613523293718,
30.2052526732421,
29.9535301598657,
29.7059690407111,
29.6409172737924,
31.121220791685,
32.472724324283,
33.7159724734816,
34.8668336413211,
35.9378533093031,
37.8789782799983,
39.600689996803
],
[
"2022-06-17",
40.6967382519956,
38.311602443083395,
36.1725087878902,
35.1730025482411,
34.2116033179677,
33.2832220760772,
32.3835544387683,
31.5089096515214,
31.0799529653275,
30.656082115330403,
30.236956333202503,
29.8222549207731,
29.0050062933094,
29.462628985363796,
30.1645965867815,
30.822896827282996,
31.442397039942797,
32.580810970083604,
33.6060043256551
],
[
"2022-12-16",
38.1576610298997,
36.1560428699018,
34.3728829559457,
33.5441055270122,
32.7498556584893,
31.985813170711403,
31.2483497326988,
30.5343812634024,
30.1853648866574,
29.841257308345497,
29.501781160512902,
29.166676726352396,
28.5086227430281,
28.1754694907583,
28.835064113784497,
29.4538568882352,
30.0363718203165,
31.107276612058598,
32.0721402018659
],
[
"2023-06-16",
39.3045381949598,
37.2057489339186,
35.3341218932086,
34.4635311488169,
33.6287516786616,
32.8252654304549,
32.0492747228427,
31.2975482300669,
30.9299001178461,
30.5673054909468,
30.209472801624198,
29.8561287927744,
29.161894864456002,
28.482721133135403,
28.005970047486002,
28.5473004789567,
29.057899229047703,
29.998966102327202,
30.8492854578305
],
[
"2025-12-19",
39.6026078187924,
36.8041510266428,
34.2602356279919,
33.058316528484696,
31.8930279193662,
30.7582320458014,
29.648578883315203,
28.5593035829799,
28.0209387344818,
27.4860669744102,
26.9541910066398,
26.4248249083007,
25.371716824556202,
24.3229682621768,
23.2748105524925,
22.223454158775798,
21.1653611786271,
19.459881926613,
20.043464945436902
]
]
}
]
}
Once I get the response back, I extract the payload and use the Matplotlib library to plot my surface. For example, below I extract and plot the Volatility Surface data for 'VW'.
surfaces = eti_response.data.raw['data']
plot_surface(surfaces, 'VW')
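plot_surface itself lives in the webinar's plotting_helper file and is not reproduced in this notebook. As a rough sketch of what such a helper might look like - assuming only the matrix layout returned above (a header row of moneyness/delta values, a first column of expiry dates) - something like the following would work; the function body here is illustrative, not the original:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection on older matplotlib

def plot_surface(surfaces, surface_tag, delta_axis=False):
    # pick the requested surface, skipping entries that came back with an error
    data = next(s['surface'] for s in surfaces
                if s.get('surfaceTag') == surface_tag and 'surface' in s)
    strikes = [float(v) for v in data[0][1:]]          # moneyness (or delta) header row
    vols = np.array([row[1:] for row in data[1:]], dtype=float)
    x, y = np.meshgrid(strikes, np.arange(len(vols)))  # expiries as integer indices
    fig = plt.figure(figsize=(10, 7))
    ax = fig.add_subplot(111, projection='3d')
    ax.plot_surface(x, y, vols, cmap='viridis')
    ax.set_xlabel('Delta' if delta_axis else 'Moneyness')
    ax.set_ylabel('Expiry index')
    ax.set_zlabel('Volatility (%)')
    ax.set_title(surface_tag)
    plt.show()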
Smile Curve
I can also use the same surfaces response data to plot a Smile Curve. For example, to compare the volatility smiles of the 4 equities at the chosen expiry time (where the maturity value of 1 is the first expiry):
plot_smile(surfaces, 1)
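plot_smile is another plotting_helper function; a minimal sketch under the same matrix-layout assumption (note it has to skip entries, such as PEUGEOT above, that returned an error instead of a surface):
import matplotlib.pyplot as plt

def plot_smile(surfaces, maturity):
    plt.figure(figsize=(9, 5))
    for s in surfaces:
        if 'surface' not in s:    # skip error entries such as PEUGEOT above
            continue
        moneyness = [float(v) for v in s['surface'][0][1:]]
        row = s['surface'][maturity]    # maturity 1 = first expiry row
        plt.plot(moneyness, row[1:], label='{} ({})'.format(s['surfaceTag'], row[0]))
    plt.xlabel('Moneyness')
    plt.ylabel('Volatility (%)')
    plt.legend()
    plt.show()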
Volatility Terms
We can also use the same surfaces response data to plot the Term Structure (the full code for all the plots can be found in the plotting_helper file). Let the user choose the Moneyness index - an **integer value** - to use for the chart:
moneyness=1
plot_term_volatility(surfaces, moneyness)
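And a sketch of the term-structure helper, which slices one moneyness column across all expiries (again illustrative - the real implementation is in plotting_helper):
import matplotlib.pyplot as plt

def plot_term_volatility(surfaces, moneyness):
    plt.figure(figsize=(9, 5))
    for s in surfaces:
        if 'surface' not in s:
            continue
        expiries = [row[0] for row in s['surface'][1:]]
        vols = [row[moneyness] for row in s['surface'][1:]]
        plt.plot(expiries, vols, marker='o', label=s['surfaceTag'])
    plt.xticks(rotation=45)
    plt.xlabel('Expiry')
    plt.ylabel('Volatility (%)')
    plt.legend()
    plt.show()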
Equity Volatility Surface - advanced usage
Let's dig deeper into some advanced parameters for ETI volatility surfaces. The request is highly configurable and the various parameters & options are listed on the API playground. For example, the options are:
* Timestamp : Default, Close, Settle
* Volatility Model : SVI or SSVI (Surface SVI)
* Input : Quoted or Implied
eti_advanced_request_body={
"universe": [
{ "surfaceTag": "ANZ",
"underlyingType": "Eti",
"underlyingDefinition": {
"instrumentCode": "ANZ.AX"
},
"surfaceParameters": {
"timestamp" :"Close",
"volatilityModel": "SSVI",
"inputVolatilityType": "Quoted",
"xAxis": "Date",
"yAxis": "Moneyness"
},
"surfaceLayout": { "format": "Matrix" }
}]
}
eti_advanced_response = vs_endpoint.send_request(
method = rdp.Endpoint.RequestMethod.POST,
body_parameters = eti_advanced_request_body
)
print(json.dumps(eti_advanced_response.data.raw, indent=2)) | {
"data": [
{
"surfaceTag": "ANZ",
"surface": [
[
null,
"0.5",
"0.6",
"0.7",
"0.75",
"0.8",
"0.85",
"0.9",
"0.95",
"0.975",
"1",
"1.025",
"1.05",
"1.1",
"1.15",
"1.2",
"1.25",
"1.3",
"1.4",
"1.5"
],
[
"2021-09-16",
44.215580320551,
39.384400912296705,
34.7800228493512,
32.5086783876381,
30.229884187990702,
27.920386602104003,
25.5524713455681,
23.0902099740693,
21.8086373419636,
20.4825037874273,
19.1006636259614,
17.648210352753598,
15.2912919064396,
15.333638881446602,
15.3750705984084,
15.4147805281901,
15.452858085471599,
15.524573298155001,
15.591053307580099
],
[
"2021-10-21",
40.268373906731505,
35.971852044603295,
31.8907446318008,
29.8841717706608,
27.876707178773902,
25.8493687163475,
23.78017277318,
21.641567407859,
20.535071232697398,
19.3959478810723,
18.216486708197,
16.9867351771227,
15.0192370013619,
15.0544493472079,
15.0887087652118,
15.121543021561001,
15.153035690817502,
15.2123773500846,
15.267422836358
],
[
"2021-11-18",
38.891959445774695,
34.7764395122118,
30.8717117027911,
28.953996753112897,
27.0372424180578,
25.103775279093398,
23.133323782130898,
21.1007671885408,
20.051133465927702,
18.9722802862759,
17.8574083657368,
16.69783501446,
14.8502496869929,
14.883359077157701,
14.915450686012798,
14.9462016862988,
14.975696862606998,
15.031280594443,
15.0828488985881
],
[
"2021-12-16",
37.9683249064433,
33.9729760324035,
30.1851625381529,
28.326252330969897,
26.469443329715,
24.597898423086402,
22.6924292580793,
20.7294358633867,
19.716981241133798,
18.6774245726983,
17.6045170857599,
16.490332938911102,
14.7196777513595,
14.7514649589374,
14.782175339061299,
14.8115974438598,
14.8398178007343,
14.8930020874205,
14.9423491713074
],
[
"2022-01-20",
37.1335353956471,
33.2473570108961,
29.5657755076431,
27.7603010232178,
25.9579627277524,
24.1426774209663,
22.296229600496,
20.39637807934,
19.4176433110459,
18.4136971680357,
17.378773173444902,
16.3056031300525,
14.604227916297798,
14.634834433150301,
14.6643046880924,
14.692533084512899,
14.719608065553599,
14.770636108223101,
14.817986691757701
],
[
"2022-02-17",
36.6200786297623,
32.8024623820977,
29.1877209113227,
27.4159216924563,
25.6479526407429,
23.868212725756,
22.059109453855598,
20.1992714910722,
19.2419301605687,
18.260593736133902,
17.249801854361298,
16.2027018988401,
14.545367762292999,
14.575221007484402,
14.6039028780167,
14.631372576920802,
14.6577196210635,
14.707377296836398,
14.753459024798099
],
[
"2022-03-17",
36.1987677775504,
32.4391220476939,
28.8810487583197,
27.1378685935572,
25.399162367132,
23.6497494730486,
21.8725826467905,
20.0470533340941,
19.1080942880255,
18.146214522521,
17.1562214288044,
16.131625162074002,
14.5123765139664,
14.5415710123286,
14.5695699302906,
14.596382850651901,
14.622099884802001,
14.6705717125003,
14.715555545683301
],
[
"2022-06-16",
35.228003490300104,
31.614150937188,
28.1996528929695,
26.529451972484804,
24.865735191124898,
23.1944664605749,
21.5000962827198,
19.7641222449269,
18.873410736636,
17.9627976420532,
17.0278270774212,
16.063004470001,
14.545423371757698,
14.5727944279693,
14.5989603613333,
14.624014605642998,
14.6480458816378,
14.6933461294565,
14.7353945653852
],
[
"2022-09-15",
34.6155015055886,
31.1117641037652,
27.8070770596431,
26.1932886040008,
24.587995237156502,
22.9781268833767,
21.349409929778602,
19.685141538644498,
18.833349933457598,
17.9642997421071,
17.0741548203317,
16.1582526961339,
14.7243025202511,
14.750073947352698,
14.7747203344747,
14.798322942691499,
14.8209646137536,
14.8636524632119,
14.9032841561248
],
[
"2022-12-15",
34.2029802947993,
30.7898415587424,
27.576442313314498,
26.009940043882,
24.453903158529698,
22.896109001181,
21.3234042782336,
19.720659497893102,
18.902388182044,
18.0692110770993,
17.2178185982692,
16.344249530721598,
14.9826594152141,
15.0069589398063,
15.0302838891618,
15.052629775604402,
15.0740699962055,
15.1145006616754,
15.1520448352742
],
[
"2023-03-16",
33.9128002270195,
30.5773479856755,
27.442787061818702,
25.9173395397919,
24.4042147556661,
22.891932493095197,
21.3683164963514,
19.8195973627979,
19.0307816691092,
18.229122789976802,
17.411750740203598,
16.5752851060859,
15.276893298893098,
15.299828594837,
15.321995299260799,
15.3432448022779,
15.363638246876398,
15.4021035328229,
15.4378304315921
],
[
"2023-06-15",
33.700997234424804,
30.4336238110701,
27.3684495461472,
25.879237926780302,
24.4040475094587,
22.9320435137161,
21.4518926344678,
19.950986147035298,
19.1882057777644,
18.4143630208485,
17.6269497955623,
16.8230609791982,
15.579878690246199,
15.6015476608239,
15.6226990147947,
15.6429922882934,
15.6624740911848,
15.6992288627604,
15.7333744766756
],
[
"2023-12-21",
33.406368886015805,
30.257402243593802,
27.313206500552102,
25.887235940069296,
24.4782488172535,
23.0764831627008,
21.67198757769,
20.2540233847856,
19.5362361161057,
18.810296872019,
18.0742528201199,
17.3259263469246,
16.1761589819322,
16.1954610987016,
16.2148820701821,
16.2335604030759,
16.251506612858897,
16.2853837864848,
16.3168704283149
],
[
"2024-06-20",
33.2199288041686,
30.162983150063898,
27.3127374892735,
25.9357910864233,
24.5780143391923,
23.2304099686929,
21.8839925630813,
20.5293014236692,
19.8456265838656,
19.155819273184697,
18.458286291441002,
17.751326467616,
16.6704569809582,
16.6877294529446,
16.705854547663602,
16.7233419028153,
16.740160881573498,
16.7719306658501,
16.8014720087808
]
]
}
]
}
Once again, I extract the payload and plot my surface for 'ANZ'.
surfaces = eti_advanced_response.data.raw['data']
plot_surface(surfaces, 'ANZ')
Equity Volatility Surface - Weights and Goodness of Fit
In this section, I will apply my own weights per moneyness range and check the goodness of fit. I will keep the same ANZ request and simply add my weighting assumptions:
* Options with moneyness below 50% will have a 0.5 weight
* Options with moneyness above 150% will have a 0.1 weight
* All other options will have a higher weight of 1
eti_weights_request_body={
"universe": [
{ "surfaceTag": "ANZ_withWeights",
"underlyingType": "Eti",
"underlyingDefinition": {
"instrumentCode": "ANZ.AX"
},
"surfaceParameters": {
"timestamp" :"Close",
"volatilityModel": "SSVI",
"inputVolatilityType": "Quoted",
"xAxis": "Date",
"yAxis": "Moneyness",
"weights":[
{
"minMoneyness": 0,
"maxMoneyness": 50,
"weight":0.5
},
{
"minMoneyness": 50,
"maxMoneyness": 150,
"weight":1
},
{
"minMoneyness": 150,
"maxMoneyness": 200,
"weight":0.1
}
]
},
"surfaceLayout": { "format": "Matrix" }
}],
"outputs":["GoodnessOfFit"]
}
eti_weights_response = vs_endpoint.send_request(
method = rdp.Endpoint.RequestMethod.POST,
body_parameters = eti_weights_request_body
)
print(json.dumps(eti_weights_response.data.raw, indent=2)) | {
"data": [
{
"surfaceTag": "ANZ_withWeights",
"surface": [
[
null,
"0.5",
"0.6",
"0.7",
"0.75",
"0.8",
"0.85",
"0.9",
"0.95",
"0.975",
"1",
"1.025",
"1.05",
"1.1",
"1.15",
"1.2",
"1.25",
"1.3",
"1.4",
"1.5"
],
[
"2021-09-16",
41.315495980594,
36.5253696257479,
31.9557391811034,
29.7066013809671,
27.4616980356841,
25.21142782207,
22.955576812435798,
20.7162815042377,
19.622718689895997,
18.567888134949502,
17.5775937193865,
16.6857223980305,
15.341115919464501,
14.697517905566801,
14.6353291667322,
14.9195722795092,
15.3757115326768,
16.4633953668468,
17.5669389175231
],
[
"2021-10-21",
39.2216136397337,
34.9053691465696,
30.8014355515297,
28.784657807506196,
26.7709705639395,
24.746975102867598,
22.7029205976156,
20.640398334469403,
19.6113748452545,
18.5985504642363,
17.6261663110522,
16.736705373379902,
15.4613134233612,
15.0815716957889,
15.296007593521798,
15.745802217226402,
16.2706501587475,
17.340326001204,
18.3435460478668
],
[
"2021-11-18",
38.4879008299554,
34.336970894388,
30.3956066199626,
28.460069716193697,
26.527366963999,
24.582741108212698,
22.6126310522936,
20.608891124493102,
19.5970809197701,
18.587251245453,
17.597944504141598,
16.6702666012245,
15.3495873875858,
15.120927801956,
15.4805111757866,
15.9999209070145,
16.549095179661798,
17.6098672787138,
18.5778725146864
],
[
"2021-12-16",
37.9421552195927,
33.8967820017211,
30.058448285781804,
28.1741232811926,
26.2923152867652,
24.3974523231311,
22.4735859008732,
20.5060037486518,
19.5035864126491,
18.4917955640797,
17.481717428622503,
16.5058493057563,
15.0904046395378,
15.017818880622,
15.495270302042,
16.0612941366775,
16.625046521395202,
17.6794544788745,
18.6260000483635
],
[
"2022-01-20",
37.3835012373585,
33.4281074436749,
29.6768122022955,
27.8355001522332,
25.9963450153757,
24.1432266747749,
22.2584732386441,
20.322326768409802,
19.3286585851551,
18.3156791623199,
17.2854951686495,
16.2531868023427,
14.6565299587187,
14.8132154528031,
15.401932877930099,
16.0033180223134,
16.576265590456497,
17.6230741598152,
18.5517072680784
],
[
"2022-02-17",
37.0007651293515,
33.0983426411029,
29.3979609599716,
27.5817498352298,
25.7675325827609,
23.939025913978902,
22.0779017645407,
20.1622872399854,
19.175882474317,
18.1655555713383,
17.128266780597002,
16.0656116783051,
14.2743924445729,
14.6522640152314,
15.296674591237,
15.911533679693601,
16.4861525458593,
17.5256530997815,
18.4432108427342
],
[
"2022-03-17",
36.6623223233816,
32.8026300286961,
29.1432592264402,
27.347317795327402,
25.5533658093529,
23.7451496212653,
21.9041716998503,
20.0077629171988,
19.0298235218302,
18.0259984327897,
16.9906499939984,
15.917411127183101,
13.8971864807301,
14.5244653349413,
15.1929486734354,
15.8104834138289,
16.3825985486068,
17.4130396182272,
18.3206453265016
],
[
"2022-06-16",
35.7981299103931,
32.0402253847461,
28.479548009001697,
26.7332680279399,
24.9901480877225,
23.234986231792902,
21.450993940889198,
19.6188782019596,
18.6780967321083,
17.717576791242802,
16.7369373797048,
15.744201213013302,
14.153759189612899,
14.397934872577501,
14.9822915357957,
15.557797789094,
16.1008147159367,
17.0887478823804,
17.9639317693529
],
[
"2022-09-15",
35.1928667394721,
31.510481828563304,
28.025437226183804,
26.3187608179081,
24.6178859894785,
22.9095311849001,
21.1804756860674,
19.419181070171,
18.5251453175884,
17.6253264978474,
16.7298296280421,
15.8674145279838,
14.622898901585902,
14.5650924807574,
14.985718819157201,
15.4830648084742,
15.9790947680092,
16.909654808757598,
17.748194381643
],
[
"2022-12-15",
34.7541235732357,
31.1352217455262,
27.7158653133326,
26.044758285894197,
24.3829485151109,
22.7194174134355,
21.0450859568991,
19.356920755831002,
18.5116767098232,
17.6741146203684,
16.8606158678916,
16.1053956508601,
15.042153968583499,
14.8492685844727,
15.1292418160549,
15.5442022984051,
15.988113038310301,
16.8549474192877,
17.6539529570262
],
[
"2023-03-16",
34.4234931856566,
30.860569709735604,
27.500607009271498,
25.862409265998398,
24.2373831716031,
22.6168130391744,
20.9956746695873,
19.3785961778735,
18.5797928497816,
17.79922451453,
17.0554751724925,
16.3807130700788,
15.4305604921757,
15.1691090433219,
15.343637612518702,
15.684236042331301,
16.0775185033467,
16.880293959407,
17.6392531557133
],
[
"2023-06-15",
34.1652414557869,
30.6527349359139,
27.347255703781,
25.7397136027552,
24.1493073838355,
22.5695090459534,
20.998966761461,
19.4486130256881,
18.6921511806067,
17.9615475201608,
17.2752971300539,
16.661415384387098,
15.790032199648198,
15.4906764716507,
15.5875608964465,
15.8646795338133,
16.2115534276183,
16.9531162016424,
17.6730760367693
],
[
"2023-12-21",
33.7740848149711,
30.352948025053,
27.148082580009902,
25.597950570549,
24.0728714220735,
22.5702436871798,
21.094789304787,
19.6663052182271,
18.9838414025144,
18.3363418559689,
17.7395360677505,
17.213237339050398,
16.4500190903426,
16.1146145763006,
16.1067900559485,
16.2832105494534,
16.5494140529078,
17.1777200022157,
17.8229296610213
],
[
"2024-06-20",
33.5104252876891,
30.164560454337202,
27.0439388514777,
25.542360144188,
24.0726953908179,
22.6352872647925,
21.2388751890971,
19.9076997493886,
19.281316057188,
18.693649360439597,
18.157381347932102,
17.686764339106702,
16.991213733526,
16.6437823715975,
16.5762128926257,
16.6848853641344,
16.8910800025105,
17.4284478971494,
18.0106492386048
]
],
"goodnessOfFit": [
[
"Expiry",
"Is Calibrated",
"Average Spread Explained",
"Min Strike",
"Max Strike"
],
[
"2021-09-16",
1,
-0.0236784532251888,
25.0,
32.0
],
[
"2021-10-21",
1,
0.7659749064959646,
20.0,
33.0
],
[
"2021-11-18",
1,
0.660221662992283,
22.0,
33.0
],
[
"2021-12-16",
1,
0.8037144814184869,
14.5,
36.0
],
[
"2022-01-20",
1,
0.7231027163746602,
23.5,
33.0
],
[
"2022-02-17",
1,
0.670650304907773,
24.5,
32.0
],
[
"2022-03-17",
1,
0.8559676539236349,
13.0,
33.0
],
[
"2022-06-16",
1,
0.7625571978107928,
12.0,
37.0
],
[
"2022-09-15",
1,
0.8282767342491224,
13.5,
34.0
],
[
"2022-12-15",
1,
0.86728339986809,
10.0,
34.0
],
[
"2023-03-16",
1,
0.7068975659007458,
23.0,
34.0
],
[
"2023-06-15",
1,
0.7289880741072924,
13.5,
34.0
],
[
"2023-12-21",
1,
0.6232327789069758,
19.0,
34.0
],
[
"2024-06-20",
1,
0.5356213655234314,
23.0,
34.0
]
]
}
]
}
Once again, I extract the payload and plot my new surface for 'ANZ'.
surfaces = eti_weights_response.data.raw['data']
plot_surface(surfaces, 'ANZ_withWeights')
Since we changed the weights, I might want to view the Goodness Of Fit for this newly generated surface.
pd.DataFrame(data=surfaces[0]["goodnessOfFit"])
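Note that the first row of goodnessOfFit is a header ('Expiry', 'Is Calibrated', and so on), so the frame is a little friendlier if that row is promoted to column names:
gof = surfaces[0]["goodnessOfFit"]
pd.DataFrame(data=gof[1:], columns=gof[0])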
FX Volatility Surface
I can also use the same IPA Endpoint to request FX Volatility Surfaces. For example, the request below will allow me to generate an FX volatility surface:
* for the USDSGD cross rate
* express the axes in Dates and Delta
* and return the data in a matrix format
As I mentioned earlier, the request is configurable and the parameters & options are listed on the API playground. For example, some of the parameters I could have used and their current options:
* Volatility Model : SVI, SABR or CubicSpline
* Axes : Delta/Strike and Tenor/Date
* Data format : Matrix or List
The ***calculationDate*** defaults to today's date and can be overridden - as I have done below:
fx_request_body={
"universe": [
{
"underlyingType": "Fx",
"surfaceTag": "FxVol-USDSGD",
"underlyingDefinition": {
"fxCrossCode": "USDSGD"
},
"surfaceLayout": {
"format": "Matrix",
"yValues": [ "-0.1","-0.15","-0.2","-0.25","-0.3","-0.35","-0.4","-0.45","0.5","0.45","0.4","0.35","0.3","0.25","0.2","0.15","0.1"]
},
"surfaceParameters": {
"xAxis": "Date",
"yAxis": "Delta",
"calculationDate": "2018-08-20T00:00:00Z",
"returnAtm": "False",
}
}
]
}
fx_response = vs_endpoint.send_request(
method = rdp.Endpoint.RequestMethod.POST,
body_parameters = fx_request_body
)
print(json.dumps(fx_response.data.raw, indent=2)) | {
"data": [
{
"surfaceTag": "FxVol-USDSGD",
"surface": [
[
null,
-0.1,
-0.15,
-0.2,
-0.25,
-0.3,
-0.35,
-0.4,
-0.45,
0.5,
0.45,
0.4,
0.35,
0.3,
0.25,
0.2,
0.15,
0.1
],
[
"2018-08-27T00:00:00Z",
4.974518945137208,
4.9247903047436346,
4.895074247096458,
4.877100340780957,
4.867337079224067,
4.864070228079728,
4.866473476189378,
4.874251953956021,
4.887496247068092,
4.907066584956367,
4.9337187499424795,
4.96900763257067,
5.0154418883190885,
5.077176643403251,
5.161547115598151,
5.2829683285775895,
5.475356718910382
],
[
"2018-09-20T00:00:00Z",
4.838527122989458,
4.843787440893674,
4.85126411709241,
4.862869538975778,
4.88174669231309,
4.911209267581311,
4.951400201648122,
4.999152668623619,
5.051421974374983,
5.109240579052077,
5.170386918643754,
5.235417623568648,
5.30553223129875,
5.382727225421624,
5.470338334976124,
5.574467589409287,
5.708360109968956
],
[
"2018-10-19T00:00:00Z",
4.976602059455342,
4.984217546752533,
4.994602710984536,
5.009816191457399,
5.032864816536398,
5.06678549938586,
5.112211026548413,
5.16680385348491,
5.227436144759485,
5.296726810289366,
5.370770734168017,
5.45009246134266,
5.53608229396217,
5.631174279062314,
5.73951487958861,
5.868753028874092,
6.035558088853484
],
[
"2018-11-21T00:00:00Z",
4.899023926398146,
4.9066510729213935,
4.91712311166735,
4.932687503516131,
4.956893271086363,
4.993804532150235,
5.044788813051125,
5.107081015495158,
5.176479804937106,
5.257104928631679,
5.343340755265668,
5.435785785387591,
5.536087471886052,
5.647131379255422,
5.773827458832362,
5.925229487531591,
6.121074812610049
],
[
"2019-02-21T00:00:00Z",
4.882792528557175,
4.88443065902285,
4.889861934685688,
4.900608319229801,
4.920051264997402,
4.953708959523724,
5.006726143183973,
5.078668144684444,
5.163100927677579,
5.267708752609406,
5.38151275127147,
5.504690853542167,
5.639233246500674,
5.789007148254301,
5.960769320383284,
6.167089945185912,
6.435476838213143
],
[
"2019-05-21T00:00:00Z",
4.921136276223659,
4.928211628073138,
4.93731525310799,
4.950024671162654,
4.96907049170929,
4.999157002578717,
5.046672648889839,
5.115603260797006,
5.201436067968562,
5.315685339215837,
5.443024281808281,
5.582535265823437,
5.735976174505293,
5.907576608724943,
6.105067580198182,
6.343023555317753,
6.653477186893951
],
[
"2019-08-21T00:00:00Z",
4.986184898093804,
4.974531372775044,
4.976443944311139,
4.989812066716617,
5.014724679375405,
5.052169096551939,
5.103414322320825,
5.169558360792649,
5.248596169645055,
5.360589621598737,
5.493973399909737,
5.649978399602314,
5.831353145811134,
6.043543286008076,
6.2968106631478475,
6.611247441333315,
7.032021277180051
],
[
"2020-08-20T00:00:00Z",
5.310322240047066,
5.209752269547287,
5.1604849810684295,
5.146492315706495,
5.1606186465052595,
5.198656911965351,
5.257509815184085,
5.334636041614901,
5.421433548361906,
5.553255563972978,
5.706954087994622,
5.883821344089949,
6.08731438569529,
6.324022026284657,
6.605884708786896,
6.955774374718987,
7.42459700544887
]
]
}
]
}
Once again, I extract the payload and plot my surface - below I extract and plot the Volatility Surface for 'Singapore Dollar / US Dollar'.
fx_surfaces = fx_response.data.raw['data']
plot_surface(fx_surfaces, 'FxVol-USDSGD', True)
Let's use the Surfaces to price OTC options
Now that we know how to build a surface, we will see how we can use them to price OTC contracts.
fc_endpoint = rdp.Endpoint(session,
"https://api.refinitiv.com/data/quantitative-analytics/v1/financial-contracts") | _____no_output_____ | Apache-2.0 | Vol Surfaces Webinar.ipynb | Refinitiv-API-Samples/Article.RDPLibrary.Python.VolatilitySurfaces_Curves |
In the request below, I will price two OTC 'BNPP' options:
option_request_body = {
"fields": ["InstrumentTag","ExerciseType","OptionType","ExerciseStyle","EndDate","StrikePrice",\
"MarketValueInDealCcy","VolatilityPercent","DeltaPercent","ErrorMessage"],
"universe":[{
"instrumentType": "Option",
"instrumentDefinition": {
"instrumentTag" :"BNPP 15Jan 20",
"underlyingType": "Eti",
"strike": 20,
"endDate": "2022-01-15",
"callPut": "Call",
"underlyingDefinition": {
"instrumentCode": "BNPP.PA"
}
}
},
{
"instrumentType": "Option",
"instrumentDefinition": {
"instrumentTag" :"BNPP 15Jan 21",
"underlyingType": "Eti",
"strike": 21,
"endDate": "2022-01-15",
"callPut": "Call",
"underlyingDefinition": {
"instrumentCode": "BNPP.PA"
}
}
}],
"pricingParameters": {
"timeStamp" : "Close"
},
"outputs": ["Data","Headers"]
}
fc_response = fc_endpoint.send_request(
method = rdp.Endpoint.RequestMethod.POST,
body_parameters = option_request_body
)
print(json.dumps(fc_response.data.raw, indent=2))
headers_name = [h['name'] for h in fc_response.data.raw['headers']]
pd.DataFrame(data=fc_response.data.raw['data'], columns=headers_name)
Creating a Pipeline using a Corpus Class
When working with a corpus of texts it can quickly become confusing to keep track of which step in an NLP pipeline you are on. Say you want to run a Frequency Distribution: did you remember to tokenize the text? To pull out the stopwords? While this is simple enough if you are working with a small group of texts in a discrete time period, this quickly becomes challenging when working with a large body of texts or when working over a longer period of time. Matters become more complicated if you want to switch between corpus-level analysis and text-level analysis. The realities of your project may quickly mean that manually performing each step in your pipeline becomes redundant, hard to keep track of, or a waste of time. This is where objects and classes can come in.
This can get confusing so we'll start with an example. I own a cat. Cats have certain qualities:
* furry
* color
* four legs
* personality
And they do certain things:
* eat
* sleep
* scratch
* generally enrich the lives of all around them
Any one cat might be different than another. Your cat might not have fur. It might have fewer than four legs. It might not enrich your life (hard to believe). What we have here is a set of characteristics and verbs that describe the thing that is a cat. Not all cats, but one type of cat.
One more example. Consider a house. We might assume that it has certain qualities:
* a roof
* a front door
* walls
* a window
And you can do certain things to, with, or in a house:
* open the front door
* sell it
* paint a wall
You could debate these features and these actions, particularly their regional and socioeconomic specificity. Not all houses look like this. It's perhaps better to think of these lists as the template for a certain kind of house rather than all houses.
**Object-oriented programming** is a way of organizing your code into patterns like this, separating the qualities of your data (its "attributes") from the instructions for things you want to execute on those attributes (its "methods"). The result is that, rather than thinking of your code as a directional sequence of events, we are instead thinking about the underlying collections of data and the characteristics that define them. And we arrange the code itself accordingly. To take a more technical example, you might consider an Email object.
**Email Object**
Attributes
* has a sender
* has a date
* has a body
* may have some attachments
Methods
* can be sent
* can be received
* can be trashed
It is not too difficult to imagine associated pieces of code meant to store these pieces of information or to do each of these particular things. You might have a function that defines the sender of an email based on some input, and you might have another that looks to a mail server to send out that note when instructed to do so. Ultimately, thinking in objects allows you to more easily organize text-level and corpus-level functions, is easier to grasp when working at scale, and allows you to store your parameters so they can be imported as a module (a file that contains Python definitions and statements). There are other ways of organizing your code, with their own sets of advantages and disadvantages, but this particular way can often help humanists better grasp what they are working on. To go back to our house example, if the house is the object then a **class** is the blueprint for how one of those objects is built.
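To make the Email example concrete, here is a minimal sketch of what such an object could look like in Python. The names and method bodies are purely illustrative - they are not drawn from any real mail library:
class Email(object):
    def __init__(self, sender, date, body, attachments=None):
        # attributes: the qualities of the thing
        self.sender = sender
        self.date = date
        self.body = body
        self.attachments = attachments or []

    # methods: the things the object can do (stubbed out for illustration)
    def send(self):
        print('pretending to send a message from', self.sender)

    def trash(self):
        print('pretending to move the message to the trash')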
A class is the template that we write in Python that helps to pull everything together using the attributes and methods we specify. Classes can be as simple or as complex as you want them to be. In the following template, we will define a "Corpus" class as well as a "Text" class and assign to each class the different attributes we want it to contain and sample methods that might commonly be executed within an NLP project on those attributes. You might want to create your own classes for different use cases. But we find that thinking about the corpus and the individual texts within it as distinct objects can be a helpful way to organize things. In the example below, we describe our corpus like so:
**Corpus Object**
Attributes
* has a corresponding directory
* has a series of filenames corresponding to the text files contained within that folder
* has a list of stopwords associated with it
* contains many different Text objects
And we describe our texts like so:
**Text Object**
Attributes
* has a filename associated with it
* has a raw version of the text
* has a tokenized version
* has a cleaned tokenized version
* has an NLTK version of the text for some quick functionality
Methods
* the text can be converted from a file into a raw version
* the raw version can be tokenized
And so on. In what follows below, the two large code blocks contain our classes. This script could be saved as a file in your working directory and updated as necessary. The subsequent code block allows us to import the script directly into the Python interpreter to play with our classes directly. Working with classes in the way we describe below enables you to move back-and-forth between modifying your code and interacting with it within the interpreter.
import os
import nltk
import string
class Corpus(object):
# rather than enter the data bit by bit, we create a constructor that takes in the data at one time
# all the attributes we want the class to have follow the __init__ syntax
def __init__(self, corpus_dir):
# all the attributes we want the class to have
self.dir = corpus_dir # where corpus_dir is - the corpus' filepath
# classes may contain functions we define ourselves, the all_files() function is defined below
self.filenames = self.all_files()
# this attribute calls nltk's built in English stopwords list and supplements it with punctuation and some extra characters we defined.
self.stopwords = nltk.corpus.stopwords.words('english') + [char for char in string.punctuation] + ['``', "''"]
self.texts = [Text(fn, self.stopwords) for fn in self.filenames]
def all_files(self):
"""given a directory, return the filenames in it"""
texts = []
for (root, _, files) in os.walk(self.dir):
for fn in files:
print(fn)
if fn[0] == '.': # ignore dot files
pass
else:
path = os.path.join(root, fn)
texts.append(path)
return texts
# the Text class works the same as the Corpus, but will contain text-level only attributes
class Text(object):
# now create the blueprint for our text object
def __init__(self, fn, stopwords):
# given a filename, store it
self.filename = fn
# a text has raw_text associated with it
self.raw_text = self.get_text()
# a text has raw tokens
self.raw_tokens = nltk.word_tokenize(self.raw_text)
# a text will have a clean version of those tokens
self.cleaned_tokens = self.clean_tokens(stopwords)
# we also want, in this case, to make an NLTK text object
self.nltk_text = nltk.Text(self.cleaned_tokens)
def get_text(self):
with open(self.filename) as fin:
return fin.read()
def clean_tokens(self, stopwords):
return [token.lower() for token in self.raw_tokens if token not in stopwords]
# this is what runs if you run the file as a one-off event - $ python3 class_practice.py
def main():
corpus_dir = 'corpus/sonnets/'
print('As mentioned above, this output presents as though it is being run from the command line.') # anything that you might want to jump to, such as a graph, FreqDist, etc. would go here
# this allows you to import the classes as a module. it uses the special built-in variable __name__ set to the value "__main__" if the module is being run as the main program
if __name__ == "__main__":
main() | As mentioned above, this output presents as though it is being run from the command line.
The payoff of organizing your project within classes is that you can run them as a module from the interpreter or as a Python file from the terminal. For the remainder of this section, we have inserted the above code into a file called class_practice.py. The following code blocks show how you might go about importing the class and working with it in the terminal. To work with our class in the Python interpreter, we first import our script and instantiate our Corpus class.
# import the script as a module--file name without the extension
import class_practice
# store the path to the corpus directory
corpus_dir = "corpus/sonnets/"
# create a new corpus using our class template
this_corpus = class_practice.Corpus(corpus_dir)
# now we can access elements of our corpus by accessing this_corpus
print(this_corpus.dir) # will show the directory of the corpus
print(this_corpus.filenames) # returns all the filenames in the corpus
# to work with the text class, instantiate the particular text you want to use
| corpus/sonnets/
['corpus/sonnets/sonnet_two.txt', 'corpus/sonnets/sonnet_five.txt', 'corpus/sonnets/sonnet_four.txt', 'corpus/sonnets/sonnets_three.txt', 'corpus/sonnets/sonnet_one.txt']
Now that our corpus is in the interpreter, we can confirm that it contains many texts:
this_corpus.texts
That is a little confusing. As a humanist, we might expect to see the titles of the texts or something similar. But we haven't told our class anything like that. Instead, our corpus points to particular texts, represented by their locations in our computer's memory. But, since this is just a list, we can pull out individual texts just as we would any other item in a list:
first_text = this_corpus.texts[0]
# from here, any of our text level attributes will be available to us:
print(first_text.filename)
print(first_text.raw_text) | corpus/sonnets/sonnet_two.txt
When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.
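Because each Text object prints as an opaque memory address, one small refinement - not part of the original class_practice.py - is to give the Text class a __repr__ method so that it prints as its filename instead:
# added inside the Text class
def __repr__(self):
    return '<Text: {}>'.format(self.filename)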
We could loop over our corpus to pull out information from each text:
for text in this_corpus.texts:
print(text.filename)
# get the first few characters from each line
for text in this_corpus.texts:
print(text.raw_text[0:40]) | corpus/sonnets/sonnet_two.txt
corpus/sonnets/sonnet_five.txt
corpus/sonnets/sonnet_four.txt
corpus/sonnets/sonnets_three.txt
corpus/sonnets/sonnet_one.txt
When forty winters shall besiege thy bro
Those hours, that with gentle work did f
Unthrifty loveliness, why dost thou spen
Look in thy glass and tell the face thou
FROM fairest creatures we desire increas
Depending on how complex you've made your Text class, you can get to some interesting analysis right away. Here, we take advantage of the fact that we use NLTK's more robust Text class to look at the top words in each text. Even though this small example and NLTK have both created classes named "Text", they contain different functions.
for text in this_corpus.texts:
print(text.nltk_text.vocab().most_common(10))
print('=======') | [('thy', 7), ('beauty', 4), ("'s", 3), ('thou', 3), ('shall', 2), ('and', 2), ('deep', 2), ("'d", 2), ('thine', 2), ('praise', 2)]
=======
[('beauty', 3), ('every', 2), ('doth', 2), ('summer', 2), ('winter', 2), ("'s", 2), ('those', 1), ('hours', 1), ('gentle', 1), ('work', 1)]
=======
[('thy', 6), ('thou', 5), ('dost', 4), ('self', 4), ('thee', 3), ('beauty', 2), ("'s", 2), ('nature', 2), ('then', 2), ('canst', 2)]
=======
[('thou', 6), ('thy', 4), ('glass', 2), ('face', 2), ('time', 2), ('whose', 2), ('mother', 2), ('thee', 2), ('thine', 2), ('look', 1)]
=======
[('thy', 4), ("'s", 3), ('world', 3), ('might', 2), ('but', 2), ('tender', 2), ('thou', 2), ('thine', 2), ('and', 2), ('from', 1)]
=======
Theoretically, the process is agnostic of what texts are actually in the corpus folder. So you could use this as a starting point for analysis without having to reinvent the wheel each time. We could, for example, create a new corpus from a different set of texts and quickly grab the most common words from those texts. Let's do this with a small Woolf corpus.
corpus_dir = "corpus/woolf/"
new_corpus = class_practice.Corpus(corpus_dir)
print(new_corpus.texts)
for text in new_corpus.texts:
print(text.filename)
print(text.nltk_text.vocab().most_common(10))
print('======') | [<class_practice.Text object at 0x10d2e0310>, <class_practice.Text object at 0x12a02f910>, <class_practice.Text object at 0x12a2815b0>]
corpus/woolf/1922_jacobs_room.txt
[('--', 546), ('said', 425), ("'s", 411), ('jacob', 390), ('the', 360), ('one', 291), ('i', 236), ('mrs.', 225), ('like', 165), ('but', 153)]
======
corpus/woolf/1915_the_voyage_out.txt
[('i', 1609), ("'s", 1007), ('--', 976), ('said', 874), ('one', 801), ('she', 646), ('rachel', 579), ('the', 531), ("n't", 513), ('mrs.', 437)]
======
corpus/woolf/1919_night_and_day.txt
[('i', 1967), ('katharine', 1193), ("'s", 1139), ('she', 935), ('--', 841), ('said', 796), ('one', 774), ("n't", 720), ('he', 615), ('upon', 582)]
======
We don't have to rework all of the basic details of what a corpus looks like. And, looking at these results, we might very quickly notice some changes we want to make to our pipeline! Thanks to how we've organized things, this should not be too challenging. However, this reproducibility is also a potential challenge. Each corpus is different and likely to present its own difficulties. For example, if we wanted to use a TEI-encoded text, this class would not be able to accommodate such a thing. But organizing things with classes means that we could add that to our pipeline fairly easily if we wished.
A Note on Making Changes while working in the Terminal
As you make changes to your class_practice.py file, it's important to know how these changes will or will not be represented in your working copy of the objects you've created. If, as suggested above, you are working with a class to examine your corpus in the terminal, you must be mindful of one extra step. Once you import your module and create a new instance of your class, any changes to the underlying files for that work will not be represented in the terminal. In order to update your object with new changes, you have to re-import the module into Python and re-instantiate your classes. This makes sure you are running the most up-to-date version of your file. You would do that like so, using the above example:
import importlib
importlib.reload(class_practice)
#re-instantiate the corpus or text
this_corpus = class_practice.Corpus(corpus_dir)
Outlier Detection
The goal is to remove outlier cells from the ```hpacellseg``` output.
Outliers on the training set:
* (Shape) Cells where the minimum bounding rectangle has a (h,w) ratio outside of 95% of the data range.
* (Shape) Cells that are very large compared to the image size or the other cells in the image. (?)
* (Shape) TBD: Cells where the nucleus is outside the 95% quantile of distance to the center. (deformed cells?)
* (Color) Cells that have atypical mean and std in their image channels.
* (Position) Cells that are touching the edge of the image.
* (Position) TBD: Cells where the nucleus is missing, or intersecting with the edge of the image.
Outliers on the testing set:
* (Position) TBD: Cells where the nucleus is missing, or intersecting with the edge of the image.
import os
import importlib
import numpy
import pandas
import sklearn
import matplotlib.pyplot as plt
import cv2
import skimage
import pycocotools
import json
import ast
import src.utils
importlib.reload(src.utils)
from tqdm import tqdm
import multiprocessing, logging
from joblib import Parallel, delayed
train = pandas.read_csv("./data/train_cells.csv")
train.head()
Functions for parsing precomputed and compressed train and test dataset RLEs.
def get_rle_from_df(row):
string = row.RLEmask
h = row.ImageHeight
w = row.ImageWidth
rle = src.utils.decode_b64_string(string, h, w)
return rle
def get_mask_from_rle(rle):
mask = pycocotools._mask.decode([rle])[:,:,0]
return mask
rles = train.apply(get_rle_from_df, axis=1)
rles.head()
masks = rles.apply(get_mask_from_rle)
masks.head()
Generate Outlier Metrics
Calculate the **bounding box**.
def get_bbox_from_rle(rle):
    """x,y = bottom left!"""
    bbox = pycocotools._mask.toBbox([rle])[0]
    x, y, w, h = (int(l) for l in bbox)
    return x, y, w, h
Calculate the **minimum bounding rectangle** (rotated bounding box).
def get_mbr_from_mask(mask):
    # cv2.minAreaRect returns ((cx, cy), (l1, l2), phi) for the rotated rectangle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    (x, y), (l1, l2), phi = cv2.minAreaRect(numpy.concatenate(contours))
    return x, y, l1, l2, phi
def get_hw_from_mbr(mbr):
    x, y, l1, l2, phi = mbr
    h, w = max(l1, l2), min(l1, l2)  # longer side first, so the ratio is always >= 1
    return h, w
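A sketch of how these helpers could feed the (h,w)-ratio shape metric listed at the top of the notebook; the variable names here are illustrative:
mbrs = masks.apply(get_mbr_from_mask)
hws = mbrs.apply(get_hw_from_mbr)
hw_ratio = hws.apply(lambda hw: hw[0] / hw[1])
# flag cells whose aspect ratio falls outside the central 95% of the data
lo, hi = hw_ratio.quantile(0.025), hw_ratio.quantile(0.975)
shape_outlier = (hw_ratio < lo) | (hw_ratio > hi)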
def run_parallel(images, n_workers=None):
    # generic joblib template for fanning a per-image function out across cores;
    # segment_image, segmentator, images_frame and test come from the segmentation notebook
    if not n_workers:
        n_workers = multiprocessing.cpu_count()
    return Parallel(n_jobs=int(n_workers))(
        delayed(segment_image)(i, segmentator, images_frame, test) for i in tqdm(images)
    )
touch = train.touches.apply(ast.literal_eval)
Policy Iteration
# In-place updates
game = GridWorld(4, (2,1), inplace=True, policy_iteration = True)
print(game.play(epochs=50, threshold=0.00000000000001))
game = GridWorld(4, (2,1), inplace=False, policy_iteration = True)
print(game.play(epochs=50, threshold=0.00000000000001)) | Initial V
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
Initial Policy
[3 2 1 1]
[1 2 2 2]
[3 3 3 3]
[3 3 1 1]
Final Policy:
[1 3 0 0]
[1 3 0 0]
[1 0 0 0]
[1 2 0 0]
Final Value Function:
[-0.25578487 -0.23139462 -0.25578487 -0.25639462]
[-0.23139462 0.74421513 -0.23139462 -0.25578487]
[ 0.74421513 -0.23139462 0.74421513 -0.23139462]
[-0.23139462 0.74421513 -0.23139462 -0.25578487]
(8.881784197001252e-16, 12)
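The inplace flag toggled above controls whether value updates are applied asynchronously (later states within a sweep see freshly updated values) or synchronously (a whole new array is computed from the previous sweep). GridWorld's internals are not shown in this notebook, so the following is only a sketch of that distinction, assuming a generic Bellman backup helper:
import numpy as np

def evaluation_sweep(V, bellman_backup, inplace=True):
    # bellman_backup(V, s) -> updated value for state s (assumed helper)
    if inplace:
        for s in range(V.size):
            V.flat[s] = bellman_backup(V, s)   # later states reuse the new values
        return V
    new_V = np.empty_like(V)
    for s in range(V.size):
        new_V.flat[s] = bellman_backup(V, s)   # every state sees only old values
    return new_V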
Value Iteration
# In-place updates
game = GridWorld(4, (2,1), inplace=True, policy_iteration = False)
print(game.play(epochs=50, threshold=0.00000000000001))
game = GridWorld(4, (2,1), inplace=False, policy_iteration = False)
print(game.play(epochs=50, threshold=0.00000000000001)) | Initial V
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
Initial Policy
[2 2 2 3]
[1 2 3 2]
[3 1 2 2]
[3 2 2 3]
Final Policy:
[1 3 0 0]
[1 3 0 0]
[1 0 0 0]
[1 2 0 0]
Final Value Function:
[-0.25578487 -0.23139462 -0.25578487 -0.25639462]
[-0.23139462 0.74421513 -0.23139462 -0.25578487]
[ 0.74421513 -0.23139462 0.74421513 -0.23139462]
[-0.23139462 0.74421513 -0.23139462 -0.25578487]
(6.38378239159465e-16, 11)
Yandex geocoding API
This notebook works with the [Yandex Geocoding API](https://tech.yandex.ru/maps/geocoder/doc/desc/concepts/about-docpage/); you have to get an API key to replicate the process. The free limit for HTTP GET requests is now only 1000 addresses per day. [Open data about housing](https://www.reformagkh.ru/opendata?gid=2353101&cids=house_management&page=1&pageSize=10) is used for the example.
import pandas as pd
import requests
import geopandas as gpd
from pyproj import CRS
from shapely import wkt
import matplotlib.pyplot as plt
import contextily as ctx
import warnings
warnings.filterwarnings("ignore")
# reading api keys
api_keys = pd.read_excel('../api_keys.xlsx')
api_keys.set_index('key_name', inplace=True)
# API Yandex organization search
geocoding_api_key = api_keys.loc['yandex_geocoding']['key']
# importing housing data
tab = pd.read_csv('./input_data/opendata_reform_tatarstan.csv', sep = ';')
# selecting a city with a number of houses within our geocoder limit
df_sample = tab[tab['formalname_city']=='Верхний Услон']
len(df_sample)
URL = 'https://geocode-maps.yandex.ru/1.x'
# retrieving the coordinates in wkt format
def geocode(address):
params = {
"geocode" : address,
"apikey": geocoding_api_key,
"format": "json"
}
response = requests.get(URL, params=params)
response_json = response.json()
try:
point = response_json['response']['GeoObjectCollection']['featureMember'][0]['GeoObject']['Point']['pos']
wkt_point = 'POINT ({})'.format(point)
return wkt_point
except Exception as e:
print("for address", address)
print("result is", response_json)
print("which raises", e)
return ""
# applying geocoding
df_sample['coordinates'] = df_sample['address'].apply(geocode)
# shapely wkt submodule to parse wkt format
def wkt_loads(x):
try:
return wkt.loads(x)
except Exception:
return None
df_sample['coords_wkt'] = df_sample['coordinates'].apply(wkt_loads)
df_sample = df_sample.dropna(subset=['coords_wkt'])
print ('Number of geocoded houses - ', len(df_sample))
# transform to geodataframe
housing_sample = gpd.GeoDataFrame(df_sample, geometry='coords_wkt')
housing_sample = housing_sample.set_crs(epsg=4326)
# write the result to Shapefile
housing_sample.to_file('./output/housing_test.shp')
# Control the figure size here
fig, ax = plt.subplots(figsize=(15,15))
# Plot the data
housing_sample.to_crs(epsg=3857).plot(ax=ax)
# Add basemap with basic OpenStreetMap visualization
ctx.add_basemap(ax, source=ctx.providers.OpenStreetMap.Mapnik) | _____no_output_____ | MIT | geocoding_yandex_api.ipynb | skaryaeva/urban_studies_notebooks |
Setup directory variables | print(os.environ['PIPELINEDIR'])
if not os.path.exists(os.environ['PIPELINEDIR']): os.makedirs(os.environ['PIPELINEDIR'])
figdir = os.path.join(os.environ['OUTPUTDIR'], 'figs')
print(figdir)
if not os.path.exists(figdir): os.makedirs(figdir)
phenos = ['Overall_Psychopathology','Psychosis_Positive','Psychosis_NegativeDisorg','AnxiousMisery','Externalizing','Fear']
phenos_short = ['Ov. Psych.', 'Psy. (pos.)', 'Psy. (neg.)', 'Anx.-mis.', 'Ext.', 'Fear']
phenos_label = ['Overall psychopathology','Psychosis (positive)','Psychosis (negative)','Anxious-misery','Externalizing','Fear']
print(phenos)
metrics = ['ct', 'vol']
metrics_label = ['Thickness', 'Volume']
algs = ['rr',]
scores = ['corr', 'rmse', 'mae']
seeds = np.arange(0,100)
num_algs = len(algs)
num_metrics = len(metrics)
num_phenos = len(phenos)
num_scores = len(scores) | _____no_output_____ | MIT | 1_code/10_results_model_performance.ipynb | lindenmp/normative_neurodev_cs_t1 |
Setup plots | if not os.path.exists(figdir): os.makedirs(figdir)
os.chdir(figdir)
sns.set(style='white', context = 'paper', font_scale = 0.8)
cmap = my_get_cmap('psych_phenos') | _____no_output_____ | MIT | 1_code/10_results_model_performance.ipynb | lindenmp/normative_neurodev_cs_t1 |
Load data | def load_data(indir, phenos, alg, score, metric):
accuracy_mean = np.zeros((100, len(phenos)))
accuracy_std = np.zeros((100, len(phenos)))
y_pred_var = np.zeros((100, len(phenos)))
p_vals = pd.DataFrame(columns = phenos)
sig_points = pd.DataFrame(columns = phenos)
for p, pheno in enumerate(phenos):
accuracy_mean[:,p] = np.loadtxt(os.path.join(indir, alg + '_' + score + '_' + metric + '_' + pheno, 'accuracy_mean.txt'))
accuracy_std[:,p] = np.loadtxt(os.path.join(indir, alg + '_' + score + '_' + metric + '_' + pheno, 'accuracy_std.txt'))
y_pred_out_repeats = np.loadtxt(os.path.join(indir, alg + '_' + score + '_' + metric + '_' + pheno, 'y_pred_out_repeats.txt'))
y_pred_var[:,p] = y_pred_out_repeats.var(axis = 0)
in_file = os.path.join(indir, alg + '_' + score + '_' + metric + '_' + pheno, 'permuted_acc.txt')
if os.path.isfile(in_file):
permuted_acc = np.loadtxt(in_file)
acc = np.mean(accuracy_mean[:,p])
p_vals.loc[metric,pheno] = np.sum(permuted_acc >= acc) / len(permuted_acc)
sig_points.loc[metric,pheno] = np.percentile(permuted_acc,95)
# if score == 'rmse' or score == 'mae':
# accuracy_mean = np.abs(accuracy_mean)
# accuracy_std = np.abs(accuracy_std)
return accuracy_mean, accuracy_std, y_pred_var, p_vals, sig_points
s = 0; score = scores[s]; print(score)
a = 0; alg = algs[a]; print(alg)
m = 1; metric = metrics[m]; print(metric)
covs = ['ageAtScan1_Years', 'sex_adj']
# covs = ['ageAtScan1_Years', 'sex_adj', 'medu1']
# predictiondir = os.path.join(os.environ['PIPELINEDIR'], '8_prediction', 'out', outfile_prefix)
predictiondir = os.path.join(os.environ['PIPELINEDIR'], '8_prediction_fixedpcs', 'out', outfile_prefix)
print(predictiondir)
modeldir = predictiondir+'predict_symptoms_rcv_nuis_'+'_'.join(covs)
print(modeldir) | /Users/lindenmp/Google-Drive-Penn/work/research_projects/normative_neurodev_cs_t1/2_pipeline/8_prediction_fixedpcs/out/t1Exclude_schaefer_400_
/Users/lindenmp/Google-Drive-Penn/work/research_projects/normative_neurodev_cs_t1/2_pipeline/8_prediction_fixedpcs/out/t1Exclude_schaefer_400_predict_symptoms_rcv_nuis_ageAtScan1_Years_sex_adj
| MIT | 1_code/10_results_model_performance.ipynb | lindenmp/normative_neurodev_cs_t1 |
Load whole-brain results | accuracy_mean, accuracy_std, _, p_vals, sig_points = load_data(modeldir, phenos, alg, score, metric)
p_vals = get_fdr_p_df(p_vals)
p_vals[p_vals < 0.05]
accuracy_mean_z, accuracy_std_z, _, p_vals_z, sig_points_z = load_data(modeldir+'_z', phenos, alg, score, metric)
p_vals_z = get_fdr_p_df(p_vals_z)
p_vals_z[p_vals_z < 0.05] | _____no_output_____ | MIT | 1_code/10_results_model_performance.ipynb | lindenmp/normative_neurodev_cs_t1 |
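`get_fdr_p_df` is a helper from this project's codebase; as a rough sketch of what a Benjamini–Hochberg style correction over a DataFrame could look like (an assumption for illustration, not the repo's actual implementation):
```python
import numpy as np
import pandas as pd

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values for a flat array."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    adj = p[order] * m / (np.arange(m) + 1)
    adj = np.minimum.accumulate(adj[::-1])[::-1]  # enforce monotonicity over ranks
    out = np.empty(m)
    out[order] = np.clip(adj, 0, 1)
    return out

def bh_fdr_df(df):
    return pd.DataFrame(bh_fdr(df.values.ravel()).reshape(df.shape),
                        index=df.index, columns=df.columns)
```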
Plot | stats = pd.DataFrame(index = phenos, columns = ['meanx', 'meany', 'test_stat', 'pval'])
for i, pheno in enumerate(phenos):
df = pd.DataFrame(columns = ['model','pheno'])
for model in ['wb','wbz']:
df_tmp = pd.DataFrame(columns = df.columns)
if model == 'wb':
df_tmp.loc[:,'score'] = accuracy_mean[:,i]
elif model == 'wbz':
df_tmp.loc[:,'score'] = accuracy_mean_z[:,i]
df_tmp.loc[:,'pheno'] = pheno
df_tmp.loc[:,'model'] = model
df = pd.concat((df, df_tmp), axis = 0)
x = df.loc[df.loc[:,'model'] == 'wb','score']
y = df.loc[df.loc[:,'model'] == 'wbz','score']
stats.loc[pheno,'meanx'] = np.round(np.mean(x),3)
stats.loc[pheno,'meany'] = np.round(np.mean(y),3)
stats.loc[pheno,'test_stat'] = stats.loc[pheno,'meanx']-stats.loc[pheno,'meany']
stats.loc[pheno,'pval'] = get_exact_p(x, y)
stats.loc[:,'pval_corr'] = get_fdr_p(stats.loc[:,'pval'])
stats.loc[:,'sig'] = stats.loc[:,'pval_corr'] < 0.05
stats
sig_points_plot = (sig_points + sig_points_z)/2
idx = np.argsort(accuracy_mean_z.mean(axis = 0))[::-1][:]
if metric == 'ct':
idx = np.array([5, 1, 0, 3, 4, 2])
elif metric == 'vol':
idx = np.array([0, 1, 5, 4, 2, 3])
f, ax = plt.subplots(len(phenos),1)
f.set_figwidth(2.25)
f.set_figheight(4)
# for i, pheno in enumerate(phenos):
for i, ii in enumerate(idx):
pheno = phenos[ii]
for model in ['wb','wbz']:
# ax[i].axvline(x=sig_points_plot.values.mean(), ymax=1.2, clip_on=False, color='gray', alpha=0.5, linestyle='--', linewidth=1.5)
# if i == 0:
# ax[i].text(sig_points_plot.values.mean(), 40, '$p$ < 0.05', fontweight="regular", color='gray',
# ha="left", va="center", rotation=270)
if model == 'wb':
if p_vals.loc[:,pheno].values[0]<.05:
sns.kdeplot(x=accuracy_mean[:,ii], ax=ax[i], bw_adjust=.75, clip_on=False, color=cmap[ii], alpha=0.5, linewidth=2)
# add point estimate
ax[i].axvline(x=accuracy_mean[:,ii].mean(), ymax=0.25, clip_on=False, color=cmap[ii], linewidth=2)
else:
sns.kdeplot(x=accuracy_mean[:,ii], ax=ax[i], bw_adjust=.75, clip_on=False, color=cmap[ii], linewidth=.25)
# add point estimate
ax[i].axvline(x=accuracy_mean[:,ii].mean(), ymax=0.25, clip_on=False, color=cmap[ii], linewidth=0.5)
# ax[i].axvline(x=sig_points.loc[:,pheno].values[0], ymax=1, clip_on=False, color='gray', alpha=0.5, linestyle='--', linewidth=1.5)
elif model == 'wbz':
if p_vals_z.loc[:,pheno].values[0]<.05:
sns.kdeplot(x=accuracy_mean_z[:,ii], ax=ax[i], bw_adjust=.75, clip_on=False, color=cmap[ii], alpha=0.75, linewidth=0, fill=True)
# sns.kdeplot(x=accuracy_mean_z[:,ii], ax=ax[i], bw_adjust=.75, clip_on=False, color="w", alpha=1, linewidth=1)
# add point estimate
ax[i].axvline(x=accuracy_mean_z[:,ii].mean(), ymax=0.25, clip_on=False, color='w', linewidth=2)
else:
sns.kdeplot(x=accuracy_mean_z[:,ii], ax=ax[i], bw_adjust=.75, clip_on=False, color=cmap[ii], alpha=0.2, linewidth=0, fill=True)
# sns.kdeplot(x=accuracy_mean_z[:,ii], ax=ax[i], bw_adjust=.75, clip_on=False, color="w", alpha=1, linewidth=1)
# add point estimate
ax[i].axvline(x=accuracy_mean_z[:,ii].mean(), ymax=0.25, clip_on=False, color='w', linewidth=1)
# ax[i].axvline(x=sig_points_z.loc[:,pheno].values[0], ymax=1, clip_on=False, color='gray', alpha=0.5, linestyle='--', linewidth=1.5)
# ax[i].text(sig_points_z.loc[:,pheno].values[0], 40, '$p$<.05', fontweight="regular", color='gray',
# ha="left", va="bottom", rotation=270)
# note between model significant performance difference
if stats.loc[pheno,'sig']:
ax[i].plot([accuracy_mean[:,ii].mean(),accuracy_mean_z[:,ii].mean()],[ax[i].get_ylim()[1],ax[i].get_ylim()[1]], color='gray', linewidth=1)
# ax[i].text(accuracy_mean[:,ii].mean()+[accuracy_mean_z[:,ii].mean()-accuracy_mean[:,ii].mean()],
# ax[i].get_ylim()[1], '$p$<.05', fontweight="regular", color='gray', ha="left", va="center")
# ax[i].axvline(x=accuracy_mean[:,ii].mean(), ymin=ax[i].get_ylim()[1], clip_on=False, color='gray', linewidth=1)
# ax[i].axvline(x=accuracy_mean_z[:,ii].mean(), ymin=ax[i].get_ylim()[1], clip_on=False, color='gray', linewidth=1)
# ax[i].axhline(y=25, linewidth=2, xmin=accuracy_mean[:,ii].mean(), xmax=accuracy_mean_z[:,ii].mean(), color = 'gray')
# ax[i].axhline(y=25, linewidth=2, color = 'black')
if score == 'corr':
ax[i].set_xlim([accuracy_mean_z.min(),
accuracy_mean_z.max()])
ax[i].axhline(y=0, linewidth=2, clip_on=False, color=cmap[ii])
for spine in ax[i].spines.values():
spine.set_visible(False)
ax[i].set_ylabel('')
ax[i].set_yticklabels([])
ax[i].set_yticks([])
# if score == 'corr':
# if i != len(idx)-1:
# ax[i].set_xticklabels([])
if i == len(idx)-1:
if score == 'corr': ax[i].set_xlabel('corr(y_true,y_pred)')
elif score == 'rmse': ax[i].set_xlabel('neg[RMSE] (higher = better)')
elif score == 'mae': ax[i].set_xlabel('neg[MAE] (higher = better)')
ax[i].tick_params(pad = -2)
if score == 'corr':
ax[i].text(0, 0.75, phenos_label[ii], fontweight="regular", color=cmap[ii],
ha="left", va="center", transform=ax[i].transAxes)
f.subplots_adjust(hspace=1)
# f.suptitle(alg+'_'+score+'_'+metric+' | '+'_'.join(covs))
f.savefig(outfile_prefix+'performance_comparison_'+alg+'_'+score+'_'+metric+'.svg', dpi = 600, bbox_inches = 'tight') | _____no_output_____ | MIT | 1_code/10_results_model_performance.ipynb | lindenmp/normative_neurodev_cs_t1 |
Code for comparison tests | final_stations[final_stations.ZIP=='10019']
final_stations[final_stations.STATION=='5 AV']
final_stations[final_stations.STATION.str.contains('5 AV')]
final_stations.dropna(subset=['NAME'])[final_stations.dropna(subset=['NAME']).NAME.str.contains('2 AV')]
station_test[station_test.STATION.str.contains('5 AV')]
geo_test[geo_test.NAME.str.contains('7th Ave')]
stations_geo_nr[stations_geo_nr.NAME.str.contains('7 AV')]
station_borough[station_borough.STATION.str.contains('/')]
unmerged_station[unmerged_station.STATION.str.contains('2 AV')]
stations_geo[stations_geo.NAME.str.contains('AVE')] | _____no_output_____ | SGI-B-2.0 | fuzzywuzzy match geo and turnstile.ipynb | Lwaggaman/EDA_Project |
Advanced Lane Finding Project. The goals / steps of this project are the following: * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images. * Apply a distortion correction to raw images. * Use color transforms, gradients, etc., to create a thresholded binary image. * Apply a perspective transform to rectify the binary image ("birds-eye view"). * Detect lane pixels and fit to find the lane boundary. * Determine the curvature of the lane and vehicle position with respect to center. * Warp the detected lane boundaries back onto the original image. * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position. --- First, I'll compute the camera calibration using chessboard images | import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
%matplotlib qt
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('../camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for fname in images:
img = cv2.imread(fname)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
img = cv2.drawChessboardCorners(img, (9,6), corners, ret)
cv2.imshow('img',img)
cv2.waitKey(500)
cv2.destroyAllWindows() | _____no_output_____ | MIT | examples/.ipynb_checkpoints/example-checkpoint.ipynb | sLakshmiprasad/advanced-lane-lines |
Calibrate the camera using the objpoints and imgpoints | %matplotlib inline
img = cv2.imread('../test_images/test2.jpg')
img_size = (img.shape[1], img.shape[0])
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size,None,None)
dst = cv2.undistort(img, mtx, dist, None, mtx)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(dst)
ax2.set_title('Undistorted Image', fontsize=30) | Camera calibration matrix [[1.15777942e+03 0.00000000e+00 6.67111050e+02]
[0.00000000e+00 1.15282305e+03 3.86129068e+02]
[0.00000000e+00 0.00000000e+00 1.00000000e+00]]
Distortion coefficient [[-0.24688832 -0.02372817 -0.00109843 0.00035105 -0.00259133]]
| MIT | examples/.ipynb_checkpoints/example-checkpoint.ipynb | sLakshmiprasad/advanced-lane-lines |
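The cells above cover the first two goals (calibration and distortion correction); as a sketch of the next step, the perspective ("birds-eye") warp — the source/destination quadrilaterals below are placeholder values, not tuned ones:
```python
import cv2
import numpy as np

# Hypothetical trapezoid around the lane and its rectangular target;
# real values would be hand-picked from a straight-lane test image.
src = np.float32([[585, 460], [700, 460], [1050, 680], [260, 680]])
dst_pts = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])

M = cv2.getPerspectiveTransform(src, dst_pts)
warped = cv2.warpPerspective(dst, M, img_size, flags=cv2.INTER_LINEAR)  # `dst`, `img_size` from above
```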
■ Deep learning review: Ch.1 numpy, Ch.2 the perceptron, Ch.3 implementing a 3-layer neural network, Ch.4 implementing a 2-layer network (numerical differentiation), Ch.5 implementing a 2-layer network (backpropagation), tensorflow 1.x -> tensorflow 2.x -------------------------------------------------------------------- Ch.6 techniques for training neural networks, Ch.7 implementing a neural network with a CNN -------------------------------------------------------------------- how to ride the bicycle. Ch.6 techniques for training neural networks: 1. how to prevent underfitting - choosing weight initial values (① Xavier ② He) - batch normalization; 2. how to prevent overfitting. ■ How to choose weight initial values in TensorFlow 1. Xavier$$ \frac{1}{\sqrt{n}} \cdot \rm{np.random.randn(r,c)}$$ | W1 = tf.get_variable(name='W1', shape=[784,50], initializer = tf.contrib.layers.xavier_initializer()) | _____no_output_____ | MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor
2. Constructing the He weight initial values$$ \sqrt{\frac{2}{n}} \cdot \rm{np.random.randn(r,c)} $$ | W1 = tf.get_variable(name="W1", shape=[784,50], initializer = tf.contrib.layers.variance_scaling_initializer()) | _____no_output_____ | MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor
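As a plain-NumPy sketch of what these two initializers compute (here n is the fan-in of the layer; this mirrors the formulas above rather than TensorFlow's exact implementation):
```python
import numpy as np

n_in, n_out = 784, 50
W_xavier = np.random.randn(n_in, n_out) / np.sqrt(n_in)        # 1/sqrt(n) scaling
W_he     = np.random.randn(n_in, n_out) * np.sqrt(2.0 / n_in)  # sqrt(2/n) scaling, suited to ReLU
```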
Example 1. Take the neural-network code built with TensorFlow in yesterday's final problem and implement it with Xavier weight initialization. | import tensorflow as tf
import numpy as np
import warnings
import os
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')
tf.reset_default_graph()
# hidden layer 1
x = tf.placeholder('float',[None,784])
W1 = tf.get_variable(name='W1', shape=[784,50], initializer = tf.contrib.layers.xavier_initializer())
b1 = tf.Variable(tf.ones([1,50]))
y = tf.matmul(x, W1) + b1
y_hat = tf.nn.relu(y)
# output layer
W2 = tf.get_variable(name='W2', shape=[50,10], initializer = tf.contrib.layers.xavier_initializer())
b2 = tf.Variable(tf.ones([1,10]))
z = tf.matmul(y_hat,W2) + b2
z_hat = tf.nn.softmax(z)
y_predict = tf.argmax(z_hat, axis=1)
# check accuracy
y_onehot = tf.placeholder('float',[None,10]) # array that will hold the label data
y_label = tf.argmax(y_onehot, axis=1)
correction_prediction = tf.equal(y_predict, y_label)
accuracy = tf.reduce_mean(tf.cast(correction_prediction,'float'))
# check the loss
loss = -tf.reduce_sum(y_onehot * tf.log(z_hat+0.0000001), axis=1)
rs = tf.reduce_mean(loss)
# training
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)
# initialize variables
init = tf.global_variables_initializer()
# run the graph
sess = tf.Session()
sess.run(init)
for i in range(1,601*20):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    batch_x_test, batch_y_test = mnist.test.next_batch(100)
    sess.run(train, feed_dict={x:batch_xs, y_onehot:batch_ys})
    if not i % 600:
        print(i//600,"epoch train accuracy : ",sess.run(accuracy, feed_dict={x:batch_xs, y_onehot:batch_ys}),"\t","test accuracy:", sess.run(accuracy, feed_dict={x:batch_x_test, y_onehot:batch_y_test})) | WARNING:tensorflow:From <ipython-input-9-a5d4a56c599c>:6: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From C:\Users\knitwill\anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From C:\Users\knitwill\anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting MNIST_data/train-images-idx3-ubyte.gz
WARNING:tensorflow:From C:\Users\knitwill\anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From C:\Users\knitwill\anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From C:\Users\knitwill\anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
1 epoch train accuracy :  0.98    test accuracy: 0.92
2 epoch train accuracy :  0.98    test accuracy: 0.98
3 epoch train accuracy :  0.99    test accuracy: 0.99
4 epoch train accuracy :  0.99    test accuracy: 0.94
5 epoch train accuracy :  0.98    test accuracy: 0.97
6 epoch train accuracy :  1.0    test accuracy: 0.99
7 epoch train accuracy :  0.96    test accuracy: 0.97
8 epoch train accuracy :  1.0    test accuracy: 0.95
9 epoch train accuracy :  0.98    test accuracy: 0.96
10 epoch train accuracy :  0.99    test accuracy: 0.98
11 epoch train accuracy :  0.99    test accuracy: 0.95
12 epoch train accuracy :  1.0    test accuracy: 0.96
13 epoch train accuracy :  0.99    test accuracy: 0.97
14 epoch train accuracy :  0.98    test accuracy: 0.95
15 epoch train accuracy :  0.98    test accuracy: 0.94
16 epoch train accuracy :  0.98    test accuracy: 0.96
17 epoch train accuracy :  1.0    test accuracy: 0.97
18 epoch train accuracy :  0.97    test accuracy: 0.97
19 epoch train accuracy :  1.0    test accuracy: 0.99
20 epoch train accuracy :  1.0    test accuracy: 1.0
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
※ Problem 137. This time, run it with He weight initialization. | import tensorflow as tf
import numpy as np
import warnings
import os
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')
tf.reset_default_graph() # reset the tensor graph
# hidden layer 1
x = tf.placeholder('float',[None,784])
W1 = tf.get_variable(name="W1", shape=[784,50], initializer = tf.contrib.layers.variance_scaling_initializer())
b1 = tf.Variable(tf.ones([1,50]))
y = tf.matmul(x, W1) + b1
y_hat = tf.nn.relu(y)
# output layer
W2 = tf.get_variable(name="W2", shape=[50,10], initializer = tf.contrib.layers.variance_scaling_initializer())
b2 = tf.Variable(tf.ones([1,10]))
z = tf.matmul(y_hat,W2) + b2
z_hat = tf.nn.softmax(z)
y_predict = tf.argmax(z_hat, axis=1)
# check accuracy
y_onehot = tf.placeholder('float',[None,10]) # array that will hold the label data
y_label = tf.argmax(y_onehot, axis=1)
correction_prediction = tf.equal(y_predict, y_label)
accuracy = tf.reduce_mean(tf.cast(correction_prediction,'float'))
# check the loss
loss = -tf.reduce_sum(y_onehot * tf.log(z_hat+0.0000001), axis=1)
rs = tf.reduce_mean(loss)
# training
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)
# initialize variables
init = tf.global_variables_initializer()
# run the graph
sess = tf.Session()
sess.run(init)
for i in range(1,601*20):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    batch_x_test, batch_y_test = mnist.test.next_batch(100)
    sess.run(train, feed_dict={x:batch_xs, y_onehot:batch_ys})
    if not i % 600:
        print(i//600,"epoch train accuracy : ",sess.run(accuracy, feed_dict={x:batch_xs, y_onehot:batch_ys}),"\t","test accuracy:", sess.run(accuracy, feed_dict={x:batch_x_test, y_onehot:batch_y_test})) | Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
1 epoch train accuracy :  0.99    test accuracy: 0.96
2 epoch train accuracy :  1.0    test accuracy: 1.0
3 epoch train accuracy :  1.0    test accuracy: 0.99
4 epoch train accuracy :  0.99    test accuracy: 0.99
5 epoch train accuracy :  1.0    test accuracy: 0.97
6 epoch train accuracy :  0.97    test accuracy: 0.93
7 epoch train accuracy :  0.99    test accuracy: 0.98
8 epoch train accuracy :  0.99    test accuracy: 0.97
9 epoch train accuracy :  0.99    test accuracy: 0.97
10 epoch train accuracy :  0.99    test accuracy: 0.98
11 epoch train accuracy :  1.0    test accuracy: 0.97
12 epoch train accuracy :  1.0    test accuracy: 0.98
13 epoch train accuracy :  0.99    test accuracy: 0.95
14 epoch train accuracy :  1.0    test accuracy: 0.99
15 epoch train accuracy :  1.0    test accuracy: 0.98
16 epoch train accuracy :  0.99    test accuracy: 0.96
17 epoch train accuracy :  1.0    test accuracy: 0.99
18 epoch train accuracy :  1.0    test accuracy: 0.98
19 epoch train accuracy :  0.97    test accuracy: 0.97
20 epoch train accuracy :  1.0    test accuracy: 0.96
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
※ Problem 138. Change the 2-layer network above into a 3-layer network. Before: input layer -------> hidden layer 1 ------> output layer (784, 100, 10). After: input layer -------> hidden layer 1 ------> hidden layer 2 -------> output layer (784, 100, 50, 10). | import tensorflow as tf
import numpy as np
import warnings
import os
from tensorflow.examples.tutorials.mnist import input_data
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
tf.reset_default_graph() # reset the tensor graph
# hidden layer 1
x = tf.placeholder('float',[None,784])
W1 = tf.get_variable(name="W1", shape=[784,100], initializer = tf.contrib.layers.variance_scaling_initializer())
b1 = tf.Variable(tf.ones([1,100]))
y = tf.matmul(x, W1) + b1
y_hat = tf.nn.relu(y)
# hidden layer 2
W2 = tf.get_variable(name="W2", shape=[100,50], initializer = tf.contrib.layers.variance_scaling_initializer())
b2 = tf.Variable(tf.ones([1,50]))
y2 = tf.matmul(y_hat,W2) + b2
y2_hat = tf.nn.relu(y2)
# output layer
W3 = tf.get_variable(name="W3", shape=[50,10], initializer = tf.contrib.layers.variance_scaling_initializer())
b3 = tf.Variable(tf.ones([1,10]))
z = tf.matmul(y2_hat,W3) + b3
z_hat = tf.nn.softmax(z)
y_predict = tf.argmax(z_hat, axis=1)
# check accuracy
y_onehot = tf.placeholder('float',[None,10]) # array that will hold the label data
y_label = tf.argmax(y_onehot, axis=1)
correction_prediction = tf.equal(y_predict, y_label)
accuracy = tf.reduce_mean(tf.cast(correction_prediction,'float'))
# check the loss
loss = -tf.reduce_sum(y_onehot * tf.log(z_hat+0.0000001), axis=1)
rs = tf.reduce_mean(loss)
# training
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)
# initialize variables
init = tf.global_variables_initializer()
# run the graph
sess = tf.Session()
sess.run(init)
for i in range(1,601*20):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    batch_x_test, batch_y_test = mnist.test.next_batch(100)
    sess.run(train, feed_dict={x:batch_xs, y_onehot:batch_ys})
    if not i % 600:
        print(i//600,"epoch train accuracy : ",sess.run(accuracy, feed_dict={x:batch_xs, y_onehot:batch_ys}),"\t","test accuracy:", sess.run(accuracy, feed_dict={x:batch_x_test, y_onehot:batch_y_test})) | Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
1 epoch train accuracy :  0.91    test accuracy: 0.95
2 epoch train accuracy :  0.98    test accuracy: 0.97
3 epoch train accuracy :  0.97    test accuracy: 0.98
4 epoch train accuracy :  0.99    test accuracy: 0.93
5 epoch train accuracy :  0.97    test accuracy: 0.97
6 epoch train accuracy :  1.0    test accuracy: 0.96
7 epoch train accuracy :  0.97    test accuracy: 0.96
8 epoch train accuracy :  0.99    test accuracy: 0.96
9 epoch train accuracy :  0.98    test accuracy: 0.96
10 epoch train accuracy :  0.99    test accuracy: 0.97
11 epoch train accuracy :  0.98    test accuracy: 0.97
12 epoch train accuracy :  1.0    test accuracy: 0.98
13 epoch train accuracy :  1.0    test accuracy: 0.97
14 epoch train accuracy :  1.0    test accuracy: 0.97
15 epoch train accuracy :  1.0    test accuracy: 0.94
16 epoch train accuracy :  0.99    test accuracy: 0.93
17 epoch train accuracy :  0.99    test accuracy: 0.98
18 epoch train accuracy :  0.99    test accuracy: 0.98
19 epoch train accuracy :  0.98    test accuracy: 0.96
20 epoch train accuracy :  1.0    test accuracy: 0.99
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
■ How to implement batch normalization in TensorFlow. Batch normalization - a mechanism that forces the weight-value data to stay evenly spread during training, so that the distribution at each layer stays close to normal even as the network gets deeper; it is enforced layer by layer. ```python batch_y1 = tf.contrib.layers.batch_norm(y1, True)``` Affine1 ------> batch normalization ------> ReLU Example 1. Add batch normalization to hidden layer 1 of the network completed so far. | import tensorflow as tf
import numpy as np
import warnings
import os
from tensorflow.examples.tutorials.mnist import input_data
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
tf.reset_default_graph() # reset the tensor graph
# hidden layer 1
x = tf.placeholder('float',[None,784])
W1 = tf.get_variable(name="W1", shape=[784,100], initializer = tf.contrib.layers.variance_scaling_initializer())
b1 = tf.Variable(tf.ones([1,100]))
y1 = tf.matmul(x, W1) + b1
batch_y1 = tf.contrib.layers.batch_norm(y1, True)
y1_hat = tf.nn.relu(batch_y1)
# hidden layer 2
W2 = tf.get_variable(name="W2", shape=[100,50], initializer = tf.contrib.layers.variance_scaling_initializer())
b2 = tf.Variable(tf.ones([1,50]))
y2 = tf.matmul(y1_hat,W2) + b2
y2_hat = tf.nn.relu(y2)
# output layer
W3 = tf.get_variable(name="W3", shape=[50,10], initializer = tf.contrib.layers.variance_scaling_initializer())
b3 = tf.Variable(tf.ones([1,10]))
z = tf.matmul(y2_hat,W3) + b3
z_hat = tf.nn.softmax(z)
y_predict = tf.argmax(z_hat, axis=1)
# check accuracy
y_onehot = tf.placeholder('float',[None,10]) # array that will hold the label data
y_label = tf.argmax(y_onehot, axis=1)
correction_prediction = tf.equal(y_predict, y_label)
accuracy = tf.reduce_mean(tf.cast(correction_prediction,'float'))
# check the loss
loss = -tf.reduce_sum(y_onehot * tf.log(z_hat+0.0000001), axis=1)
rs = tf.reduce_mean(loss)
# training
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)
# initialize variables
init = tf.global_variables_initializer()
# run the graph
sess = tf.Session()
sess.run(init)
for i in range(1,601*20):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    batch_x_test, batch_y_test = mnist.test.next_batch(100)
    sess.run(train, feed_dict={x:batch_xs, y_onehot:batch_ys})
    if not i % 600:
        print(i//600,"epoch train accuracy : ",sess.run(accuracy, feed_dict={x:batch_xs, y_onehot:batch_ys}),"\t","test accuracy:", sess.run(accuracy, feed_dict={x:batch_x_test, y_onehot:batch_y_test})) | Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
1 epoch train accuracy :  0.99    test accuracy: 0.97
2 epoch train accuracy :  1.0    test accuracy: 0.98
3 epoch train accuracy :  0.98    test accuracy: 0.98
4 epoch train accuracy :  0.96    test accuracy: 0.98
5 epoch train accuracy :  1.0    test accuracy: 0.99
6 epoch train accuracy :  1.0    test accuracy: 0.97
7 epoch train accuracy :  0.99    test accuracy: 0.94
8 epoch train accuracy :  1.0    test accuracy: 0.94
9 epoch train accuracy :  1.0    test accuracy: 0.99
10 epoch train accuracy :  1.0    test accuracy: 0.97
11 epoch train accuracy :  0.99    test accuracy: 0.98
12 epoch train accuracy :  1.0    test accuracy: 0.98
13 epoch train accuracy :  0.99    test accuracy: 0.93
14 epoch train accuracy :  1.0    test accuracy: 0.98
15 epoch train accuracy :  1.0    test accuracy: 0.95
16 epoch train accuracy :  1.0    test accuracy: 1.0
17 epoch train accuracy :  1.0    test accuracy: 0.99
18 epoch train accuracy :  1.0    test accuracy: 0.98
19 epoch train accuracy :  1.0    test accuracy: 1.0
20 epoch train accuracy :  1.0    test accuracy: 0.97
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
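Before moving on, a minimal NumPy sketch of the normalization that `batch_norm` applies inside the graph — normalize each feature over the mini-batch, then scale and shift (gamma and beta are the learnable parameters; the values here are illustrative):
```python
import numpy as np

def batch_norm_forward(z, gamma=1.0, beta=0.0, eps=1e-5):
    """Per-feature batch normalization for a (batch, features) array."""
    mu = z.mean(axis=0)
    var = z.var(axis=0)
    z_hat = (z - mu) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * z_hat + beta
```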
※ Problem 139. Apply batch normalization to hidden layer 2 as well. | import tensorflow as tf
import numpy as np
import warnings
import os
from tensorflow.examples.tutorials.mnist import input_data
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
tf.reset_default_graph() # reset the tensor graph
# hidden layer 1
x = tf.placeholder('float',[None,784])
W1 = tf.get_variable(name="W1", shape=[784,100], initializer = tf.contrib.layers.variance_scaling_initializer())
b1 = tf.Variable(tf.ones([1,100]))
y1 = tf.matmul(x, W1) + b1
batch_y1 = tf.contrib.layers.batch_norm(y1, True)
y1_hat = tf.nn.relu(batch_y1)
# hidden layer 2
W2 = tf.get_variable(name="W2", shape=[100,50], initializer = tf.contrib.layers.variance_scaling_initializer())
b2 = tf.Variable(tf.ones([1,50]))
y2 = tf.matmul(y1_hat,W2) + b2
batch_y2 = tf.contrib.layers.batch_norm(y2, True)
y2_hat = tf.nn.relu(batch_y2)
# output layer
W3 = tf.get_variable(name="W3", shape=[50,10], initializer = tf.contrib.layers.variance_scaling_initializer())
b3 = tf.Variable(tf.ones([1,10]))
z = tf.matmul(y2_hat,W3) + b3
z_hat = tf.nn.softmax(z)
y_predict = tf.argmax(z_hat, axis=1)
# check accuracy
y_onehot = tf.placeholder('float',[None,10]) # array that will hold the label data
y_label = tf.argmax(y_onehot, axis=1)
correction_prediction = tf.equal(y_predict, y_label)
accuracy = tf.reduce_mean(tf.cast(correction_prediction,'float'))
# check the loss
loss = -tf.reduce_sum(y_onehot * tf.log(z_hat+0.0000001), axis=1)
rs = tf.reduce_mean(loss)
# training
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)
# initialize variables
init = tf.global_variables_initializer()
# run the graph
sess = tf.Session()
sess.run(init)
for i in range(1,601*20):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    batch_x_test, batch_y_test = mnist.test.next_batch(100)
    sess.run(train, feed_dict={x:batch_xs, y_onehot:batch_ys})
    if not i % 600:
        print(i//600,"epoch train accuracy : ",sess.run(accuracy, feed_dict={x:batch_xs, y_onehot:batch_ys}),"\t","test accuracy:", sess.run(accuracy, feed_dict={x:batch_x_test, y_onehot:batch_y_test}))
import tensorflow as tf
import numpy as np
import warnings
import os
from tensorflow.examples.tutorials.mnist import input_data
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
tf.reset_default_graph() # reset the tensor graph
# input layer
x = tf.placeholder('float',[None,784])
x1 = tf.reshape(x,[-1,28,28,1]) # grayscale image, 1 channel; batch size unknown so -1 (reshape 2-D -> 4-D)
# convolution layer 1
W1 = tf.Variable(tf.random_normal([5,5,1,32], stddev=0.01)) # create 32 filters
b1 = tf.Variable(tf.ones([32])) # create a bias filled with ones
y1 = tf.nn.conv2d(x1, W1, strides=[1,1,1,1], padding='SAME')
y1 = y1 + b1
y1 = tf.nn.relu(y1)
y1 = tf.nn.max_pool(y1, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME') # ksize : pooling window size
y1 = tf.reshape(y1, [-1,14*14*32]) # y1 4-D -> 2-D
# fully connected layer 1 (layer 2)
W2 = tf.get_variable(name="W2", shape=[14*14*32,100], initializer = tf.contrib.layers.variance_scaling_initializer())
b2 = tf.Variable(tf.ones([1,100]))
y2 = tf.matmul(y1, W2) + b2
batch_y2 = tf.contrib.layers.batch_norm(y2, True)
y2_hat = tf.nn.relu(batch_y2)
# fully connected layer 2 (layer 3)
W3 = tf.get_variable(name="W3", shape=[100,50], initializer = tf.contrib.layers.variance_scaling_initializer())
b3 = tf.Variable(tf.ones([1,50]))
y3 = tf.matmul(y2_hat, W3) + b3
batch_y3 = tf.contrib.layers.batch_norm(y3, True)
y3_hat = tf.nn.relu(batch_y3)
# output layer (layer 4)
W4 = tf.get_variable(name="W4", shape=[50,10], initializer = tf.contrib.layers.variance_scaling_initializer())
b4 = tf.Variable(tf.ones([1,10]))
z = tf.matmul(y3_hat,W4) + b4
z_hat = tf.nn.softmax(z)
y_predict = tf.argmax(z_hat, axis=1)
# check accuracy
y_onehot = tf.placeholder('float',[None,10]) # array that will hold the label data
y_label = tf.argmax(y_onehot, axis=1)
correction_prediction = tf.equal(y_predict, y_label)
accuracy = tf.reduce_mean(tf.cast(correction_prediction,'float'))
# check the loss
loss = -tf.reduce_sum(y_onehot * tf.log(z_hat+0.0000001), axis=1)
rs = tf.reduce_mean(loss)
# training
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)
# initialize variables
init = tf.global_variables_initializer()
# run the graph
sess = tf.Session()
sess.run(init)
for i in range(1,601*20):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    batch_x_test, batch_y_test = mnist.test.next_batch(100)
    sess.run(train, feed_dict={x:batch_xs, y_onehot:batch_ys})
    if not i % 600:
        print(i//600,"epoch train accuracy : ",sess.run(accuracy, feed_dict={x:batch_xs, y_onehot:batch_ys}),"\t","test accuracy:", sess.run(accuracy, feed_dict={x:batch_x_test, y_onehot:batch_y_test}))
| _____no_output_____ | MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
■ How to apply dropout in TensorFlow. Why dropout should be used - to prevent overfitting. Implementation sketch: ```python keep_prob = tf.placeholder('float') # 0.8 -> keep 80% of all neurons and randomly drop 20%; 1.0 -> keep every neuron as-is. y3_drop = tf.nn.dropout(y3, keep_prob)``` Neurons are dropped during training but not during testing, which is why keep_prob is left as a placeholder. |
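A minimal NumPy sketch of the inverted-dropout behaviour described above (tf.nn.dropout likewise rescales the kept activations by 1/keep_prob, which is why nothing needs to change at test time):
```python
import numpy as np

def dropout_forward(a, keep_prob=0.8, train=True):
    """Drop (1 - keep_prob) of the units at train time and rescale the rest."""
    if not train:                        # test time: keep every neuron as-is
        return a
    mask = np.random.rand(*a.shape) < keep_prob
    return a * mask / keep_prob          # rescale so the expected activation is unchanged
```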
import tensorflow as tf
import numpy as np
import warnings
import os
from tensorflow.examples.tutorials.mnist import input_data
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
tf.reset_default_graph() # reset the tensor graph
# input layer
x = tf.placeholder('float',[None,784])
x1 = tf.reshape(x,[-1,28,28,1]) # grayscale image, 1 channel; batch size unknown so -1 (reshape 2-D -> 4-D)
# convolution layer 1
W1 = tf.Variable(tf.random_normal([5,5,1,32], stddev=0.01)) # create 32 filters
b1 = tf.Variable(tf.ones([32])) # create a bias filled with ones
y1 = tf.nn.conv2d(x1, W1, strides=[1,1,1,1], padding='SAME')
y1 = y1 + b1
y1 = tf.nn.relu(y1)
y1 = tf.nn.max_pool(y1, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME') # ksize : pooling window size
y1 = tf.reshape(y1, [-1,14*14*32]) # y1 4-D -> 2-D
# fully connected layer 1 (layer 2)
W2 = tf.get_variable(name="W2", shape=[14*14*32,100], initializer = tf.contrib.layers.variance_scaling_initializer())
b2 = tf.Variable(tf.ones([1,100]))
y2 = tf.matmul(y1, W2) + b2
batch_y2 = tf.contrib.layers.batch_norm(y2, True)
y2_hat = tf.nn.relu(batch_y2)
# drop out
keep_prob = tf.placeholder('float')
y2_hat_drop = tf.nn.dropout(y2_hat, keep_prob)
# fully connected layer 2 (layer 3)
W3 = tf.get_variable(name="W3", shape=[100,50], initializer = tf.contrib.layers.variance_scaling_initializer())
b3 = tf.Variable(tf.ones([1,50]))
y3 = tf.matmul(y2_hat_drop, W3) + b3
batch_y3 = tf.contrib.layers.batch_norm(y3, True)
y3_hat = tf.nn.relu(batch_y3)
# output layer (layer 4)
W4 = tf.get_variable(name="W4", shape=[50,10], initializer = tf.contrib.layers.variance_scaling_initializer())
b4 = tf.Variable(tf.ones([1,10]))
z = tf.matmul(y3_hat,W4) + b4
z_hat = tf.nn.softmax(z)
y_predict = tf.argmax(z_hat, axis=1)
# check accuracy
y_onehot = tf.placeholder('float',[None,10]) # array that will hold the label data
y_label = tf.argmax(y_onehot, axis=1)
correction_prediction = tf.equal(y_predict, y_label)
accuracy = tf.reduce_mean(tf.cast(correction_prediction,'float'))
# check the loss
loss = -tf.reduce_sum(y_onehot * tf.log(z_hat+0.0000001), axis=1)
rs = tf.reduce_mean(loss)
# training
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)
# initialize variables
init = tf.global_variables_initializer()
# run the graph
sess = tf.Session()
sess.run(init)
for i in range(1,601*20):
batch_xs, batch_ys = mnist.train.next_batch(100)
batch_x_test, batch_y_test = mnist.test.next_batch(100)
sess.run(train, feed_dict={x:batch_xs, y_onehot:batch_ys})
if not i % 600:
print(i//600,"에폭 훈련데이터 정확도 : ",sess.run(accuracy, feed_dict={x:batch_xs, y_onehot:batch_ys}),"\t","테스트 데이터 정확도:", sess.run(accuracy, feed_dict={x:batch_x_test, y_onehot:batch_y_test}))
import tensorflow as tf
import numpy as np
import warnings
import os
from tensorflow.examples.tutorials.mnist import input_data
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
tf.reset_default_graph() # reset the tensor graph
# input layer
x = tf.placeholder('float',[None,784])
x1 = tf.reshape(x,[-1,28,28,1]) # grayscale image, 1 channel; batch size unknown so -1 (reshape 2-D -> 4-D)
# convolution layer 1
W1 = tf.Variable(tf.random_normal([5,5,1,32], stddev=0.01)) # create 32 filters
b1 = tf.Variable(tf.ones([32])) # create a bias filled with ones
y1 = tf.nn.conv2d(x1, W1, strides=[1,1,1,1], padding='SAME')
y1 = y1 + b1
y1 = tf.nn.relu(y1)
y1 = tf.nn.max_pool(y1, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME') # ksize : pooling window size
y1 = tf.reshape(y1, [-1,14*14*32]) # y1 4-D -> 2-D
# fully connected layer 1 (layer 2)
W2 = tf.get_variable(name="W2", shape=[14*14*32,100], initializer = tf.contrib.layers.variance_scaling_initializer())
b2 = tf.Variable(tf.ones([1,100]))
y2 = tf.matmul(y1, W2) + b2
batch_y2 = tf.contrib.layers.batch_norm(y2, True)
y2_hat = tf.nn.relu(batch_y2)
# drop out
keep_prob = tf.placeholder('float')
y2_hat_drop = tf.nn.dropout(y2_hat, keep_prob)
# fully connected layer 2 (layer 3)
W3 = tf.get_variable(name="W3", shape=[100,50], initializer = tf.contrib.layers.variance_scaling_initializer())
b3 = tf.Variable(tf.ones([1,50]))
y3 = tf.matmul(y2_hat_drop, W3) + b3
batch_y3 = tf.contrib.layers.batch_norm(y3, True)
y3_hat = tf.nn.relu(batch_y3)
# output layer (layer 4)
W4 = tf.get_variable(name="W4", shape=[50,10], initializer = tf.contrib.layers.variance_scaling_initializer())
b4 = tf.Variable(tf.ones([1,10]))
z = tf.matmul(y3_hat,W4) + b4
z_hat = tf.nn.softmax(z)
y_predict = tf.argmax(z_hat, axis=1)
# check accuracy
y_onehot = tf.placeholder('float',[None,10]) # array that will hold the label data
y_label = tf.argmax(y_onehot, axis=1)
correction_prediction = tf.equal(y_predict, y_label)
accuracy = tf.reduce_mean(tf.cast(correction_prediction,'float'))
# check the loss
loss = -tf.reduce_sum(y_onehot * tf.log(z_hat+0.0000001), axis=1)
rs = tf.reduce_mean(loss)
# training
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)
# initialize variables
init = tf.global_variables_initializer()
train_acc_list = []
test_acc_list = []
# run the graph
with tf.Session() as sess:
    sess.run(init)
    for j in range(20):
        for i in range(600):
            batch_xs, batch_ys = mnist.train.next_batch(100)
            test_xs, test_ys = mnist.test.next_batch(100)
            sess.run(train, feed_dict={x: batch_xs, y_onehot: batch_ys, keep_prob: 0.9})
            if i == 0: # check accuracy once per epoch
                train_acc = sess.run(accuracy, feed_dict={x: batch_xs, y_onehot: batch_ys, keep_prob: 1.0}) # accuracy on the training data
                test_acc = sess.run(accuracy, feed_dict={x: test_xs, y_onehot: test_ys, keep_prob: 1.0}) # accuracy on the test data
                # store the accuracies in lists for plotting
                train_acc_list.append(train_acc)
                test_acc_list.append(test_acc)
                print('train', str(j + 1) + ' epoch accuracy :', train_acc)
                print('test', str(j + 1) + ' epoch accuracy :', test_acc)
                print('-----------------------------------------------') | Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From <ipython-input-15-159da87ef0a0>:38: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
train 1 epoch accuracy : 0.65
test 1 epoch accuracy : 0.46
-----------------------------------------------
train 2 epoch accuracy : 0.99
test 2 epoch accuracy : 0.98
-----------------------------------------------
train 3 epoch accuracy : 0.98
test 3 epoch accuracy : 0.99
-----------------------------------------------
train 4 epoch accuracy : 1.0
test 4 epoch accuracy : 0.98
-----------------------------------------------
train 5 epoch accuracy : 0.99
test 5 epoch accuracy : 1.0
-----------------------------------------------
train 6 epoch accuracy : 0.98
test 6 epoch accuracy : 0.98
-----------------------------------------------
train 7 epoch accuracy : 1.0
test 7 epoch accuracy : 1.0
-----------------------------------------------
train 8 epoch accuracy : 0.99
test 8 epoch accuracy : 0.98
-----------------------------------------------
train 9 epoch accuracy : 1.0
test 9 epoch accuracy : 0.98
-----------------------------------------------
train 10 epoch accuracy : 1.0
test 10 epoch accuracy : 1.0
-----------------------------------------------
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
■ Add code so that the train and test accuracy can be visualized |
import tensorflow as tf
import numpy as np
import warnings
import os
from tensorflow.examples.tutorials.mnist import input_data
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
tf.reset_default_graph() # reset the tensor graph
# input layer
x = tf.placeholder('float',[None,784])
x1 = tf.reshape(x,[-1,28,28,1]) # grayscale image, 1 channel; batch size unknown so -1 (reshape 2-D -> 4-D)
# convolution layer 1
W1 = tf.Variable(tf.random_normal([5,5,1,32], stddev=0.01)) # create 32 filters
b1 = tf.Variable(tf.ones([32])) # create a bias filled with ones
y1 = tf.nn.conv2d(x1, W1, strides=[1,1,1,1], padding='SAME')
y1 = y1 + b1
y1 = tf.nn.relu(y1)
y1 = tf.nn.max_pool(y1, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME') # ksize : pooling window size
y1 = tf.reshape(y1, [-1,14*14*32]) # y1 4-D -> 2-D
# fully connected layer 1 (layer 2)
W2 = tf.get_variable(name="W2", shape=[14*14*32,100], initializer = tf.contrib.layers.variance_scaling_initializer())
b2 = tf.Variable(tf.ones([1,100]))
y2 = tf.matmul(y1, W2) + b2
batch_y2 = tf.contrib.layers.batch_norm(y2, True)
y2_hat = tf.nn.relu(batch_y2)
# drop out
keep_prob = tf.placeholder('float')
y2_hat_drop = tf.nn.dropout(y2_hat, keep_prob)
# fully connected layer 2 (layer 3)
W3 = tf.get_variable(name="W3", shape=[100,50], initializer = tf.contrib.layers.variance_scaling_initializer())
b3 = tf.Variable(tf.ones([1,50]))
y3 = tf.matmul(y2_hat_drop, W3) + b3
batch_y3 = tf.contrib.layers.batch_norm(y3, True)
y3_hat = tf.nn.relu(batch_y3)
# output layer (layer 4)
W4 = tf.get_variable(name="W4", shape=[50,10], initializer = tf.contrib.layers.variance_scaling_initializer())
b4 = tf.Variable(tf.ones([1,10]))
z = tf.matmul(y3_hat,W4) + b4
z_hat = tf.nn.softmax(z)
y_predict = tf.argmax(z_hat, axis=1)
# check accuracy
y_onehot = tf.placeholder('float',[None,10]) # array that will hold the label data
y_label = tf.argmax(y_onehot, axis=1)
correction_prediction = tf.equal(y_predict, y_label)
accuracy = tf.reduce_mean(tf.cast(correction_prediction,'float'))
# check the loss
loss = -tf.reduce_sum(y_onehot * tf.log(z_hat+0.0000001), axis=1)
rs = tf.reduce_mean(loss)
# training
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)
# initialize variables
init = tf.global_variables_initializer()
train_acc_list = []
test_acc_list = []
# run the graph
with tf.Session() as sess:
    sess.run(init)
    for j in range(10):
        for i in range(600):
            batch_xs, batch_ys = mnist.train.next_batch(100)
            test_xs, test_ys = mnist.test.next_batch(100)
            sess.run(train, feed_dict={x: batch_xs, y_onehot: batch_ys, keep_prob: 0.9})
            if i == 0: # check accuracy once per epoch
                train_acc = sess.run(accuracy, feed_dict={x: batch_xs, y_onehot: batch_ys, keep_prob: 1.0}) # accuracy on the training data
                test_acc = sess.run(accuracy, feed_dict={x: test_xs, y_onehot: test_ys, keep_prob: 1.0}) # accuracy on the test data
                # store the accuracies in lists for plotting
                train_acc_list.append(train_acc)
                test_acc_list.append(test_acc)
                print('train', str(j + 1) + ' epoch accuracy :', train_acc)
                print('test', str(j + 1) + ' epoch accuracy :', test_acc)
                print('-----------------------------------------------')
# draw the graph
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (20,10)
plt.rcParams.update({'font.size':20})
markers = {'train': 'o', 'test': 's'}
x = np.arange(len(train_acc_list))
plt.plot()
plt.plot(x, train_acc_list, label='train acc')
plt.plot(x, test_acc_list, label='test acc', linestyle='--')
plt.xlabel("epochs")
plt.ylabel("accuracy")
plt.ylim(min(min(train_acc_list),min(test_acc_list))-0.1, 1.005)
plt.legend(loc='lower right')
plt.show()
| Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
train 1 epoch accuracy : 0.65
test 1 epoch accuracy : 0.42
-----------------------------------------------
train 2 epoch accuracy : 0.99
test 2 epoch accuracy : 0.99
-----------------------------------------------
train 3 epoch accuracy : 1.0
test 3 epoch accuracy : 0.99
-----------------------------------------------
train 4 epoch accuracy : 1.0
test 4 epoch accuracy : 1.0
-----------------------------------------------
train 5 epoch accuracy : 1.0
test 5 epoch accuracy : 1.0
-----------------------------------------------
train 6 epoch accuracy : 1.0
test 6 epoch accuracy : 0.99
-----------------------------------------------
train 7 epoch accuracy : 0.99
test 7 epoch accuracy : 1.0
-----------------------------------------------
train 8 epoch accuracy : 0.98
test 8 epoch accuracy : 0.99
-----------------------------------------------
train 9 epoch accuracy : 1.0
test 9 epoch accuracy : 1.0
-----------------------------------------------
train 10 epoch accuracy : 1.0
test 10 epoch accuracy : 0.99
-----------------------------------------------
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
※ Problem 140. Implement the CNN above as follows. Before: input layer ----> Conv1 ----> pooling ----> FC layer 1 ----> FC layer 2 ----> output layer (784, 32, 100, 50, 10). After: input layer ----> Conv1 ----> pooling ----> Conv2 ----> pooling ----> FC layer 1 ----> FC layer 2 ----> output layer (784, 32, 64, 100, 50, 10). |
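A quick check of the feature-map sizes implied by this change — with 'SAME' padding the convolutions preserve height/width, and each 2x2/stride-2 pooling halves them:
```python
# 28x28x1  -> conv1 (5x5, 32 filters, SAME) -> 28x28x32 -> pool -> 14x14x32
# 14x14x32 -> conv2 (5x5, 64 filters, SAME) -> 14x14x64 -> pool -> 7x7x64
# flatten: 7*7*64 = 3136 inputs feed fully connected layer 1
def pooled_size(size, n_pools):
    for _ in range(n_pools):
        size = (size + 1) // 2  # 'SAME' pooling with stride 2 (ceiling division)
    return size

assert pooled_size(28, 2) == 7
```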
import tensorflow as tf
import numpy as np
import warnings
import os
from tensorflow.examples.tutorials.mnist import input_data
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
warnings.filterwarnings('ignore')
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
tf.reset_default_graph() # reset the tensor graph
# input layer
x = tf.placeholder('float',[None,784])
x1 = tf.reshape(x,[-1,28,28,1]) # grayscale image, 1 channel; batch size unknown so -1 (reshape 2-D -> 4-D)
# convolution layer 1
W1 = tf.Variable(tf.random_normal([5,5,1,32], stddev=0.01)) # create 32 filters
b1 = tf.Variable(tf.ones([32])) # create a bias filled with ones
y1 = tf.nn.conv2d(x1, W1, strides=[1,1,1,1], padding='SAME') + b1
y1 = tf.nn.relu(y1)
y1 = tf.nn.max_pool(y1, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME') # ksize : pooling window size
# y1 = tf.reshape(y1, [-1,14*14*32]) # y1 4-D -> 2-D
# convolution layer 2
W2 = tf.Variable(tf.random_normal([5,5,32,64], stddev=0.01))
b2 = tf.Variable(tf.ones([64])) # create a bias filled with ones
y2 = tf.nn.conv2d(y1, W2, strides=[1,1,1,1], padding='SAME') + b2
y2 = tf.nn.relu(y2)
y2 = tf.nn.max_pool(y2, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
y2 = tf.reshape(y2, [-1,7*7*64]) # y2 4-D -> 2-D
# fully connected layer 1 (layer 2)
W3 = tf.get_variable(name="W3", shape=[7*7*64,100], initializer = tf.contrib.layers.variance_scaling_initializer())
b3 = tf.Variable(tf.ones([1,100]))
y3 = tf.matmul(y2, W3) + b3
batch_y2 = tf.contrib.layers.batch_norm(y3, True)
y2_hat = tf.nn.relu(batch_y2)
# drop out
keep_prob = tf.placeholder('float')
y2_hat_drop = tf.nn.dropout(y2_hat, keep_prob)
# fully connected layer 2 (layer 3)
W4 = tf.get_variable(name="W4", shape=[100,50], initializer = tf.contrib.layers.variance_scaling_initializer())
b4 = tf.Variable(tf.ones([1,50]))
y4 = tf.matmul(y2_hat_drop, W4) + b4
batch_y3 = tf.contrib.layers.batch_norm(y4, True)
y3_hat = tf.nn.relu(batch_y3)
# output layer (layer 4)
W5 = tf.get_variable(name="W5", shape=[50,10], initializer = tf.contrib.layers.variance_scaling_initializer())
b5 = tf.Variable(tf.ones([1,10]))
z = tf.matmul(y3_hat,W5) + b5
z_hat = tf.nn.softmax(z)
y_predict = tf.argmax(z_hat, axis=1)
# check accuracy
y_onehot = tf.placeholder('float',[None,10]) # array that will hold the label data
y_label = tf.argmax(y_onehot, axis=1)
correction_prediction = tf.equal(y_predict, y_label)
accuracy = tf.reduce_mean(tf.cast(correction_prediction,'float'))
# check the loss
loss = -tf.reduce_sum(y_onehot * tf.log(z_hat+0.0000001), axis=1)
rs = tf.reduce_mean(loss)
# training
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)
# initialize variables
init = tf.global_variables_initializer()
train_acc_list = []
test_acc_list = []
# run the graph
with tf.Session() as sess:
    sess.run(init)
    for j in range(10):
        for i in range(600):
            batch_xs, batch_ys = mnist.train.next_batch(100)
            test_xs, test_ys = mnist.test.next_batch(100)
            sess.run(train, feed_dict={x: batch_xs, y_onehot: batch_ys, keep_prob: 0.9})
            if i == 0: # check accuracy once per epoch
                train_acc = sess.run(accuracy, feed_dict={x: batch_xs, y_onehot: batch_ys, keep_prob: 1.0}) # accuracy on the training data
                test_acc = sess.run(accuracy, feed_dict={x: test_xs, y_onehot: test_ys, keep_prob: 1.0}) # accuracy on the test data
                # store the accuracies in lists for plotting
                train_acc_list.append(train_acc)
                test_acc_list.append(test_acc)
                print('train', str(j + 1) + ' epoch accuracy :', train_acc)
                print('test', str(j + 1) + ' epoch accuracy :', test_acc)
                print('-----------------------------------------------')
# # draw the graph
# import matplotlib.pyplot as plt
# plt.rcParams['figure.figsize'] = (20,10)
# plt.rcParams.update({'font.size':20})
# markers = {'train': 'o', 'test': 's'}
# x = np.arange(len(train_acc_list))
# plt.plot()
# plt.plot(x, train_acc_list, label='train acc')
# plt.plot(x, test_acc_list, label='test acc', linestyle='--')
# plt.xlabel("epochs")
# plt.ylabel("accuracy")
# plt.ylim(min(min(train_acc_list),min(test_acc_list))-0.1, 1.005)
# plt.legend(loc='lower right')
# plt.show()
| Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
train 1 epoch accuracy : 0.5
test 1 epoch accuracy : 0.21
-----------------------------------------------
train 2 epoch accuracy : 1.0
test 2 epoch accuracy : 0.98
-----------------------------------------------
train 3 epoch accuracy : 0.99
test 3 epoch accuracy : 1.0
-----------------------------------------------
train 4 epoch accuracy : 0.99
test 4 epoch accuracy : 0.98
-----------------------------------------------
train 5 epoch accuracy : 0.99
test 5 epoch accuracy : 0.98
-----------------------------------------------
train 6 epoch accuracy : 0.99
test 6 epoch accuracy : 0.97
-----------------------------------------------
train 7 epoch accuracy : 0.99
test 7 epoch accuracy : 1.0
-----------------------------------------------
train 8 epoch accuracy : 1.0
test 8 epoch accuracy : 0.99
-----------------------------------------------
train 9 epoch accuracy : 1.0
test 9 epoch accuracy : 1.0
-----------------------------------------------
train 10 epoch accuracy : 1.0
test 10 epoch accuracy : 0.98
-----------------------------------------------
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
■ Building a neural network with the cifar10 data. cifar10 consists of 60,000 images in total, of which 50,000 are training data and 10,000 are test data. There are 10 classes, from airplane to truck: 1. airplane 2. automobile 3. bird 4. cat 5. deer 6. dog 7. frog 8. horse 9. ship 10. truck. ■ Code needed to build an image-classification web page: 1. code that loads the photo data into the network, 2. the neural-network code that classifies the images --> Google, 3. code that builds the web page (RShiny). ■ 1. Code that loads the photo data into the network. Example 1. Load the photos and create a function that prints the image file names as below. | import os
def image_load(path):
file_list = os.listdir(path)
return file_list
train_image = 'd:/tensor/cifar10/train100/'
print(image_load(train_image)) | ['1.png', '10.png', '100.png', '11.png', '12.png', '13.png', '14.png', '15.png', '16.png', '17.png', '18.png', '19.png', '2.png', '20.png', '21.png', '22.png', '23.png', '24.png', '25.png', '26.png', '27.png', '28.png', '29.png', '3.png', '30.png', '31.png', '32.png', '33.png', '34.png', '35.png', '36.png', '37.png', '38.png', '39.png', '4.png', '40.png', '41.png', '42.png', '43.png', '44.png', '45.png', '46.png', '47.png', '48.png', '49.png', '5.png', '50.png', '51.png', '52.png', '53.png', '54.png', '55.png', '56.png', '57.png', '58.png', '59.png', '6.png', '60.png', '61.png', '62.png', '63.png', '64.png', '65.png', '66.png', '67.png', '68.png', '69.png', '7.png', '70.png', '71.png', '72.png', '73.png', '74.png', '75.png', '76.png', '77.png', '78.png', '79.png', '8.png', '80.png', '81.png', '82.png', '83.png', '84.png', '85.png', '86.png', '87.png', '88.png', '89.png', '9.png', '90.png', '91.png', '92.png', '93.png', '94.png', '95.png', '96.png', '97.png', '98.png', '99.png']
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
Example 2. From the result above, print only the numbers, without the .png extension. | import os
import re
def image_load(path):
file_list = os.listdir(path)
file_name = []
for i in file_list:
name = re.sub('[^0-9]','',i)
file_name.append(name)
return file_name
train_image = 'd:/tensor/cifar10/train100/'
print(image_load(train_image)) | ['1', '10', '100', '11', '12', '13', '14', '15', '16', '17', '18', '19', '2', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '3', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '4', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '5', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '6', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '7', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '8', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '9', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99']
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
Example 3. Print the result above in sorted order. | import os
import re
def image_load(path):
file_list = os.listdir(path)
file_name = []
for i in file_list:
file_name.append(int(re.sub('[^0-9]', '', i)))
file_name.sort()
return file_name
train_image = 'd:/tensor/cifar10/train100/'
print(image_load(train_image)) | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100]
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
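The same ordering can also be obtained in one pass with a sort key, which may be a cleaner design than sorting integers and rebuilding the names afterwards (a sketch, not part of the original exercise):
```python
import os, re

def image_load_sorted(path):
    # sort '1.png', '2.png', ..., '100.png' numerically rather than lexically
    return sorted(os.listdir(path), key=lambda f: int(re.sub('[^0-9]', '', f)))
```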
Example 4. Re-attach .png to the sorted numbers so the output looks like the following. | import os
import re
def image_load(path):
file_list = os.listdir(path)
file_name = []
for i in file_list:
file_name.append(int(re.sub('[^0-9]', '', i)))
file_name.sort()
file_res = []
for i in file_name:
file_res.append(str(i)+'.png')
return file_res
train_image = 'd:/tensor/cifar10/train100/'
print(image_load(train_image)) | ['1.png', '2.png', '3.png', '4.png', '5.png', '6.png', '7.png', '8.png', '9.png', '10.png', '11.png', '12.png', '13.png', '14.png', '15.png', '16.png', '17.png', '18.png', '19.png', '20.png', '21.png', '22.png', '23.png', '24.png', '25.png', '26.png', '27.png', '28.png', '29.png', '30.png', '31.png', '32.png', '33.png', '34.png', '35.png', '36.png', '37.png', '38.png', '39.png', '40.png', '41.png', '42.png', '43.png', '44.png', '45.png', '46.png', '47.png', '48.png', '49.png', '50.png', '51.png', '52.png', '53.png', '54.png', '55.png', '56.png', '57.png', '58.png', '59.png', '60.png', '61.png', '62.png', '63.png', '64.png', '65.png', '66.png', '67.png', '68.png', '69.png', '70.png', '71.png', '72.png', '73.png', '74.png', '75.png', '76.png', '77.png', '78.png', '79.png', '80.png', '81.png', '82.png', '83.png', '84.png', '85.png', '86.png', '87.png', '88.png', '89.png', '90.png', '91.png', '92.png', '93.png', '94.png', '95.png', '96.png', '97.png', '98.png', '99.png', '100.png']
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
Example 5. Prefix each image name with the absolute path, like this: ['d:/tensor/cifar10/train100/1.png','d:/tensor/cifar10/train100/2.png',...,'d:/tensor/cifar10/train100/100.png'] | import os
import re
def image_load(path):
file_list = os.listdir(path)
file_name = []
for i in file_list:
file_name.append(int(re.sub('[^0-9]', '', i)))
file_name.sort()
file_res = []
for i in file_name:
file_res.append(path+str(i)+'.png')
return file_res
train_image = 'd:/tensor/cifar10/train100/'
print(image_load(train_image)) | ['d:/tensor/cifar10/train100/1.png', 'd:/tensor/cifar10/train100/2.png', 'd:/tensor/cifar10/train100/3.png', 'd:/tensor/cifar10/train100/4.png', 'd:/tensor/cifar10/train100/5.png', 'd:/tensor/cifar10/train100/6.png', 'd:/tensor/cifar10/train100/7.png', 'd:/tensor/cifar10/train100/8.png', 'd:/tensor/cifar10/train100/9.png', 'd:/tensor/cifar10/train100/10.png', 'd:/tensor/cifar10/train100/11.png', 'd:/tensor/cifar10/train100/12.png', 'd:/tensor/cifar10/train100/13.png', 'd:/tensor/cifar10/train100/14.png', 'd:/tensor/cifar10/train100/15.png', 'd:/tensor/cifar10/train100/16.png', 'd:/tensor/cifar10/train100/17.png', 'd:/tensor/cifar10/train100/18.png', 'd:/tensor/cifar10/train100/19.png', 'd:/tensor/cifar10/train100/20.png', 'd:/tensor/cifar10/train100/21.png', 'd:/tensor/cifar10/train100/22.png', 'd:/tensor/cifar10/train100/23.png', 'd:/tensor/cifar10/train100/24.png', 'd:/tensor/cifar10/train100/25.png', 'd:/tensor/cifar10/train100/26.png', 'd:/tensor/cifar10/train100/27.png', 'd:/tensor/cifar10/train100/28.png', 'd:/tensor/cifar10/train100/29.png', 'd:/tensor/cifar10/train100/30.png', 'd:/tensor/cifar10/train100/31.png', 'd:/tensor/cifar10/train100/32.png', 'd:/tensor/cifar10/train100/33.png', 'd:/tensor/cifar10/train100/34.png', 'd:/tensor/cifar10/train100/35.png', 'd:/tensor/cifar10/train100/36.png', 'd:/tensor/cifar10/train100/37.png', 'd:/tensor/cifar10/train100/38.png', 'd:/tensor/cifar10/train100/39.png', 'd:/tensor/cifar10/train100/40.png', 'd:/tensor/cifar10/train100/41.png', 'd:/tensor/cifar10/train100/42.png', 'd:/tensor/cifar10/train100/43.png', 'd:/tensor/cifar10/train100/44.png', 'd:/tensor/cifar10/train100/45.png', 'd:/tensor/cifar10/train100/46.png', 'd:/tensor/cifar10/train100/47.png', 'd:/tensor/cifar10/train100/48.png', 'd:/tensor/cifar10/train100/49.png', 'd:/tensor/cifar10/train100/50.png', 'd:/tensor/cifar10/train100/51.png', 'd:/tensor/cifar10/train100/52.png', 'd:/tensor/cifar10/train100/53.png', 'd:/tensor/cifar10/train100/54.png', 'd:/tensor/cifar10/train100/55.png', 'd:/tensor/cifar10/train100/56.png', 'd:/tensor/cifar10/train100/57.png', 'd:/tensor/cifar10/train100/58.png', 'd:/tensor/cifar10/train100/59.png', 'd:/tensor/cifar10/train100/60.png', 'd:/tensor/cifar10/train100/61.png', 'd:/tensor/cifar10/train100/62.png', 'd:/tensor/cifar10/train100/63.png', 'd:/tensor/cifar10/train100/64.png', 'd:/tensor/cifar10/train100/65.png', 'd:/tensor/cifar10/train100/66.png', 'd:/tensor/cifar10/train100/67.png', 'd:/tensor/cifar10/train100/68.png', 'd:/tensor/cifar10/train100/69.png', 'd:/tensor/cifar10/train100/70.png', 'd:/tensor/cifar10/train100/71.png', 'd:/tensor/cifar10/train100/72.png', 'd:/tensor/cifar10/train100/73.png', 'd:/tensor/cifar10/train100/74.png', 'd:/tensor/cifar10/train100/75.png', 'd:/tensor/cifar10/train100/76.png', 'd:/tensor/cifar10/train100/77.png', 'd:/tensor/cifar10/train100/78.png', 'd:/tensor/cifar10/train100/79.png', 'd:/tensor/cifar10/train100/80.png', 'd:/tensor/cifar10/train100/81.png', 'd:/tensor/cifar10/train100/82.png', 'd:/tensor/cifar10/train100/83.png', 'd:/tensor/cifar10/train100/84.png', 'd:/tensor/cifar10/train100/85.png', 'd:/tensor/cifar10/train100/86.png', 'd:/tensor/cifar10/train100/87.png', 'd:/tensor/cifar10/train100/88.png', 'd:/tensor/cifar10/train100/89.png', 'd:/tensor/cifar10/train100/90.png', 'd:/tensor/cifar10/train100/91.png', 'd:/tensor/cifar10/train100/92.png', 'd:/tensor/cifar10/train100/93.png', 'd:/tensor/cifar10/train100/94.png', 'd:/tensor/cifar10/train100/95.png', 
'd:/tensor/cifar10/train100/96.png', 'd:/tensor/cifar10/train100/97.png', 'd:/tensor/cifar10/train100/98.png', 'd:/tensor/cifar10/train100/99.png', 'd:/tensor/cifar10/train100/100.png']
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
Example 6. Use the cv2.imread function to convert the images into numbers | import os
import re
import numpy as np
import cv2
def image_load(path):
file_list = os.listdir(path)
file_name = []
for i in file_list:
file_name.append(int(re.sub('[^0-9]', '', i)))
file_name.sort()
file_res = []
for j in file_name:
file_res.append(path+str(j)+'.png')
image = []
for k in file_res:
image.append(cv2.imread(k))
return np.array(image)
train_image = 'd:/tensor/cifar10/train100/'
print(image_load(train_image)) | [[[[ 63 62 59]
[ 45 46 43]
[ 43 48 50]
...
[108 132 158]
[102 125 152]
[103 124 148]]
[[ 20 20 16]
[ 0 0 0]
[ 0 8 18]
...
[ 55 88 123]
[ 50 83 119]
[ 57 87 122]]
[[ 21 24 25]
[ 0 7 16]
[ 8 27 49]
...
[ 50 84 118]
[ 50 84 120]
[ 42 73 109]]
...
[[ 96 170 208]
[ 34 153 201]
[ 26 161 198]
...
[ 70 133 160]
[ 7 31 56]
[ 20 34 53]]
[[ 96 139 180]
[ 42 123 173]
[ 30 144 186]
...
[ 94 148 184]
[ 34 62 97]
[ 34 53 83]]
[[116 144 177]
[ 94 129 168]
[ 87 142 179]
...
[140 184 216]
[ 84 118 151]
[ 72 92 123]]]
[[[187 177 154]
[136 137 126]
[ 95 104 105]
...
[ 71 95 91]
[ 71 90 87]
[ 70 81 79]]
[[169 160 140]
[154 153 145]
[118 125 125]
...
[ 78 99 96]
[ 62 80 77]
[ 61 73 71]]
[[164 155 140]
[149 146 139]
[112 115 115]
...
[ 64 82 79]
[ 55 70 68]
[ 55 69 67]]
...
[[166 167 175]
[160 154 156]
[170 160 154]
...
[ 36 34 42]
[ 57 53 61]
[ 91 83 93]]
[[128 154 165]
[130 152 156]
[142 161 159]
...
[ 96 93 103]
[120 114 123]
[131 121 131]]
[[120 148 163]
[122 148 158]
[133 156 163]
...
[139 133 143]
[142 134 143]
[144 133 143]]]
[[[255 255 255]
[253 253 253]
[253 253 253]
...
[253 253 253]
[253 253 253]
[253 253 253]]
[[255 255 255]
[255 255 255]
[255 255 255]
...
[255 255 255]
[255 255 255]
[255 255 255]]
[[255 255 255]
[254 254 254]
[254 254 254]
...
[254 254 254]
[254 254 254]
[254 254 254]]
...
[[112 120 113]
[111 118 111]
[106 112 105]
...
[ 80 81 72]
[ 79 80 72]
[ 79 80 72]]
[[110 118 111]
[104 111 104]
[ 98 106 99]
...
[ 73 75 68]
[ 75 76 70]
[ 82 84 78]]
[[105 113 106]
[ 98 106 99]
[ 94 102 95]
...
[ 83 85 78]
[ 83 85 79]
[ 84 86 80]]]
...
[[[ 27 44 33]
[ 31 44 29]
[ 34 45 32]
...
[221 197 157]
[216 199 162]
[213 194 160]]
[[ 24 40 25]
[ 27 40 24]
[ 29 36 23]
...
[227 209 174]
[217 199 167]
[220 198 165]]
[[ 47 56 55]
[ 46 56 47]
[ 52 61 53]
...
[165 162 129]
[133 137 110]
[154 153 123]]
...
[[ 60 97 106]
[ 58 91 103]
[ 53 100 85]
...
[ 52 91 78]
[ 36 64 54]
[ 31 56 44]]
[[ 59 91 97]
[ 57 97 92]
[ 61 108 88]
...
[ 59 107 96]
[ 47 94 81]
[ 41 88 71]]
[[ 91 119 106]
[115 141 128]
[137 158 142]
...
[ 63 108 100]
[ 47 94 81]
[ 40 90 71]]]
[[[ 59 77 90]
[ 64 81 94]
[ 65 81 87]
...
[ 35 44 46]
[ 38 45 53]
[ 38 46 57]]
[[ 68 92 96]
[ 63 84 101]
[ 66 80 95]
...
[ 44 51 60]
[ 50 60 72]
[ 45 56 71]]
[[ 66 87 85]
[ 75 102 113]
[ 76 101 115]
...
[ 57 84 90]
[ 60 87 96]
[ 55 79 91]]
...
[[ 88 105 102]
[ 47 69 61]
[ 53 74 69]
...
[ 95 142 157]
[ 94 137 152]
[ 95 152 169]]
[[ 69 96 101]
[ 52 66 69]
[ 53 68 64]
...
[100 125 131]
[ 91 117 123]
[ 79 109 115]]
[[ 61 86 91]
[ 58 72 78]
[ 68 86 87]
...
[ 85 126 135]
[ 81 116 120]
[ 80 96 102]]]
[[[ 44 64 62]
[ 26 50 50]
[ 19 44 46]
...
[ 69 172 167]
[ 76 184 183]
[ 72 136 137]]
[[ 37 65 63]
[ 26 53 55]
[ 27 50 52]
...
[ 61 169 163]
[ 75 174 171]
[ 77 146 145]]
[[ 36 62 58]
[ 37 66 64]
[ 37 60 56]
...
[ 62 155 153]
[ 64 154 150]
[ 57 128 123]]
...
[[ 99 135 172]
[ 84 110 143]
[ 42 56 130]
...
[ 56 75 94]
[ 86 108 141]
[ 81 105 139]]
[[117 146 183]
[ 95 118 150]
[ 44 64 80]
...
[ 60 72 81]
[ 98 118 135]
[110 125 143]]
[[144 174 209]
[123 151 182]
[ 83 109 139]
...
[ 47 54 59]
[111 119 130]
[160 156 169]]]]
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
■ The four functions needed to load the data into the neural network: 1. image_load: converts the data into numbers 2. label_load: one-hot encodes the answer labels 3. next_batch: fetches the data in batch-sized units 4. shuffle_batch: shuffles the image data. Example 8. Write a function that prints the numbers in train_label.csv | train_label = 'd:/tensor/cifar10/train_label.csv'
import csv
def label_load(path):
file = open(path)
label_data = csv.reader(file)
label_list = []
for i in label_data:
label_list.append(i)
return label_list
print(label_load(train_label)) | 50000
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
Example 9. Make the result above print as numbers rather than strings | train_label = 'd:/tensor/cifar10/train_label.csv'
import csv
import numpy as np
def label_load(path):
file = open(path)
label_data = csv.reader(file)
label_list = []
for i in label_data:
label_list.append(i)
label = np.array(label_list).astype(int)
return label
print(label_load(train_label)) | [[6]
[9]
[9]
...
[9]
[1]
[1]]
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
Example 10. Produce the output below: [0 0 0 1 0 0 0 0 0 0] | import numpy as np
print(np.eye(10)[4]) | [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
Example 11. Using np.eye above, make the numbers printed in Example 9 come out as one-hot encoded vectors like the following: [0 0 0 1 0 0 0 0 0 0] [0 0 1 0 0 0 0 0 0 0] [1 0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 1 0 0] ... | train_label = 'd:/tensor/cifar10/train_label.csv'
import csv
import numpy as np
def label_load(path):
file = open(path)
label_data = csv.reader(file)
label_list = []
for i in label_data:
label_list.append(i)
label = np.eye(10)[np.array(label_list).astype(int)]
return label
print(label_load(train_label)) | [[[0. 0. 0. ... 0. 0. 0.]]
[[0. 0. 0. ... 0. 0. 1.]]
[[0. 0. 0. ... 0. 0. 1.]]
...
[[0. 0. 0. ... 0. 0. 1.]]
[[0. 1. 0. ... 0. 0. 0.]]
[[0. 1. 0. ... 0. 0. 0.]]]
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
Example 12. The result above is 3-dimensional, but labels must be 2-dimensional to be used in a neural network, so reduce it to 2 dimensions and print it | train_label = 'd:/tensor/cifar10/train_label.csv'
import csv
import numpy as np
def label_load(path):
file = open(path)
label_data = csv.reader(file)
label_list = []
for i in label_data:
label_list.append(i)
label = np.eye(10)[np.array(label_list).astype(int)].reshape(-1,10)
return label
print(label_load(train_label)) | [[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 1.]
[0. 0. 0. ... 0. 0. 1.]
...
[0. 0. 0. ... 0. 0. 1.]
[0. 1. 0. ... 0. 0. 0.]
[0. 1. 0. ... 0. 0. 0.]]
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
Example 13. Save the two functions built so far, image_load and label_load, into a Python module called loader2.py, then import loader2 as below and implement code that loads the entire cifar10 dataset | import loader2
train_image='D:/tensor/cifar10/train/'
train_label = 'D:/tensor/cifar10/train_label.csv'
test_image='D:/tensor/cifar10/test/'
test_label = 'D:/tensor/cifar10/test_label.csv'
trainX = loader2.image_load(train_image)
trainY = loader2.label_load(train_label)
testX = loader2.image_load(test_image)
testY = loader2.label_load(test_label)
print ( trainX.shape)
print ( trainY.shape)
print ( testX.shape)
print ( testY.shape) | (50000, 32, 32, 3)
(50000, 10)
(10000, 32, 32, 3)
(10000, 10)
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
Example 14. Create the next_batch function as below so that the data can be loaded into the neural network 100 images at a time | import loader2
def next_batch(data1, data2, init, final):
return data1[init:final], data2[init:final]
test_image = 'D:/tensor/cifar10/test/'
test_label = 'D:/tensor/cifar10/test_label.csv'
testX = loader2.image_load(test_image)
testY = loader2.label_load(test_label)
print(next_batch(testX, testY, 0 ,100)) | (array([[[[ 49, 112, 158],
[ 47, 111, 159],
[ 51, 116, 165],
...,
[ 36, 95, 137],
[ 36, 91, 126],
[ 33, 85, 116]],
[[ 51, 112, 152],
[ 40, 110, 151],
[ 45, 114, 159],
...,
[ 31, 95, 136],
[ 32, 91, 125],
[ 34, 88, 119]],
[[ 47, 110, 151],
[ 33, 109, 151],
[ 36, 111, 158],
...,
[ 34, 98, 139],
[ 34, 95, 130],
[ 33, 89, 120]],
...,
[[177, 124, 68],
[148, 100, 42],
[137, 88, 31],
...,
[146, 97, 38],
[108, 64, 13],
[127, 85, 40]],
[[168, 116, 61],
[148, 102, 49],
[132, 85, 35],
...,
[130, 82, 26],
[126, 82, 29],
[107, 64, 20]],
[[160, 107, 54],
[149, 105, 56],
[132, 89, 45],
...,
[124, 77, 24],
[129, 84, 34],
[110, 67, 21]]],
[[[235, 235, 235],
[231, 231, 231],
[232, 232, 232],
...,
[233, 233, 233],
[233, 233, 233],
[232, 232, 232]],
[[238, 238, 238],
[235, 235, 235],
[235, 235, 235],
...,
[236, 236, 236],
[236, 236, 236],
[235, 235, 235]],
[[237, 237, 237],
[234, 234, 234],
[234, 234, 234],
...,
[235, 235, 235],
[235, 235, 235],
[234, 234, 234]],
...,
[[ 89, 99, 87],
[ 37, 51, 43],
[ 11, 23, 19],
...,
[179, 184, 169],
[193, 197, 182],
[201, 202, 188]],
[[ 82, 96, 82],
[ 36, 57, 46],
[ 22, 44, 36],
...,
[183, 189, 174],
[196, 200, 185],
[200, 202, 187]],
[[ 83, 101, 85],
[ 48, 75, 62],
[ 38, 67, 58],
...,
[178, 183, 168],
[191, 195, 180],
[199, 200, 186]]],
[[[222, 190, 158],
[218, 187, 158],
[194, 166, 139],
...,
[234, 231, 228],
[243, 239, 237],
[246, 241, 238]],
[[229, 200, 170],
[226, 199, 172],
[201, 176, 151],
...,
[236, 232, 232],
[250, 246, 246],
[251, 247, 246]],
[[225, 201, 174],
[222, 200, 176],
[199, 179, 157],
...,
[232, 229, 230],
[251, 249, 250],
[247, 244, 245]],
...,
[[ 45, 40, 31],
[ 44, 39, 30],
[ 40, 35, 26],
...,
[ 46, 40, 37],
[ 14, 13, 9],
[ 5, 7, 4]],
[[ 39, 34, 23],
[ 43, 38, 27],
[ 41, 36, 25],
...,
[ 24, 20, 19],
[ 3, 6, 4],
[ 3, 7, 5]],
[[ 47, 41, 28],
[ 50, 43, 30],
[ 52, 45, 32],
...,
[ 8, 6, 5],
[ 3, 5, 4],
[ 7, 8, 7]]],
...,
[[[149, 135, 132],
[150, 137, 133],
[151, 139, 135],
...,
[151, 138, 130],
[152, 137, 130],
[152, 137, 130]],
[[152, 140, 138],
[153, 141, 139],
[153, 141, 139],
...,
[153, 140, 133],
[153, 139, 132],
[153, 138, 131]],
[[151, 140, 139],
[151, 140, 139],
[153, 141, 141],
...,
[151, 139, 132],
[151, 138, 131],
[151, 137, 131]],
...,
[[ 17, 38, 23],
[ 10, 33, 19],
[ 18, 38, 25],
...,
[145, 137, 135],
[145, 138, 135],
[145, 137, 135]],
[[ 13, 30, 17],
[ 11, 26, 14],
[ 12, 30, 17],
...,
[146, 138, 137],
[146, 138, 137],
[146, 138, 137]],
[[ 8, 24, 13],
[ 10, 24, 13],
[ 10, 25, 14],
...,
[144, 136, 134],
[143, 136, 134],
[143, 136, 134]]],
[[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]],
[[255, 255, 255],
[254, 254, 254],
[254, 254, 254],
...,
[254, 254, 254],
[254, 254, 254],
[254, 254, 254]],
[[255, 255, 255],
[254, 254, 254],
[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]],
...,
[[255, 255, 255],
[254, 254, 254],
[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]],
[[255, 255, 255],
[254, 254, 254],
[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]],
[[255, 255, 255],
[254, 254, 254],
[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]],
[[[238, 233, 234],
[241, 237, 238],
[242, 238, 239],
...,
[247, 244, 246],
[249, 246, 248],
[251, 248, 249]],
[[233, 228, 229],
[236, 231, 232],
[237, 232, 233],
...,
[245, 242, 244],
[247, 244, 246],
[248, 246, 248]],
[[236, 230, 231],
[238, 232, 233],
[238, 232, 233],
...,
[250, 248, 250],
[251, 249, 251],
[252, 251, 252]],
...,
[[ 55, 75, 115],
[ 55, 72, 107],
[ 56, 71, 106],
...,
[151, 198, 226],
[128, 176, 212],
[131, 181, 217]],
[[ 55, 74, 113],
[ 55, 70, 103],
[ 55, 69, 99],
...,
[138, 191, 215],
[128, 182, 206],
[129, 177, 207]],
[[ 54, 71, 106],
[ 57, 72, 105],
[ 56, 72, 103],
...,
[125, 178, 203],
[137, 193, 213],
[127, 178, 199]]]], dtype=uint8), array([[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.]]))
| MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
Example 15. Create the shuffle_batch function as below and add it to loader2.py | def shuffle_batch(data_list, label):
x = np.arange(len(data_list))
np.random.shuffle(x)
data_list2 = data_list[x]
label2 = label[x]
return data_list2, label2
import loader2
test_image = 'D:/tensor/cifar10/test/'
test_label = 'D:/tensor/cifar10/test_label.csv'
testX = loader2.image_load(test_image)
testY = loader2.label_load(test_label)
print(loader2.shuffle_batch(testX, testY)) | _____no_output_____ | MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
※ Problem 141. (Last problem of the day) Put all four functions into loader2.py so that, to feed the VGG network designed at Oxford tomorrow, the following runs as shown:
```python
import loader2
train_image='D:/tensor/cifar10/train/'
train_label = 'D:/tensor/cifar10/train_label.csv'
test_image='D:/tensor/cifar10/test/'
test_label = 'D:/tensor/cifar10/test_label.csv'
trainX = loader2.image_load(train_image)
trainY = loader2.label_load(train_label)
testX = loader2.image_load(test_image)
testY = loader2.label_load(test_label)
testX, testY = loader2.shuffle_batch(testX, testY)
print(loader2.next_batch(testX, testY, 0, 100))
``` | import loader2
train_image='D:/tensor/cifar10/train/'
train_label = 'D:/tensor/cifar10/train_label.csv'
test_image='D:/tensor/cifar10/test/'
test_label = 'D:/tensor/cifar10/test_label.csv'
trainX = loader2.image_load(train_image)
trainY = loader2.label_load(train_label)
testX = loader2.image_load(test_image)
testY = loader2.label_load(test_label)
testX, testY = loader2.shuffle_batch(testX, testY)
print(loader2.next_batch(testX, testY, 0, 100)) | _____no_output_____ | MIT | .ipynb_checkpoints/(200818) 2.-checkpoint.ipynb | Adrian123K/tensor |
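For reference, a consolidated loader2.py assembled from the four functions developed in the examples above might look like the following. This is a sketch based only on the code shown here, not the author's actual file:

```python
# loader2.py -- sketch assembled from the examples above
import os
import re
import csv
import cv2
import numpy as np

def image_load(path):
    # sort the png files numerically and stack them into one (N, 32, 32, 3) array
    file_list = os.listdir(path)
    file_name = sorted(int(re.sub('[^0-9]', '', i)) for i in file_list)
    return np.array([cv2.imread(path + str(j) + '.png') for j in file_name])

def label_load(path):
    # read the csv labels, one-hot encode them, and flatten to 2 dimensions
    with open(path) as file:
        label_list = list(csv.reader(file))
    return np.eye(10)[np.array(label_list).astype(int)].reshape(-1, 10)

def next_batch(data1, data2, init, final):
    # slice out one batch of images and the matching labels
    return data1[init:final], data2[init:final]

def shuffle_batch(data_list, label):
    # apply the same random permutation to images and labels
    x = np.arange(len(data_list))
    np.random.shuffle(x)
    return data_list[x], label[x]
```

With this module on the import path, the Problem 141 cell above runs exactly as written.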
If you want to average results over multiple seeds, LOG_DIRS must contain subfolders named in the following format: ```-0```, ```-1```, ```-0```, ```-1```, where each name corresponds to an experiment you want to compare and is separated from its random seed by a dash. | LOG_DIRS = 'logs/reacher/'
# Uncomment below to see the effect of the time limits flag
# LOG_DIRS = 'time_limit_logs/reacher'
from baselines.common import plot_util as pu  # assumed import; pu was otherwise undefined in this excerpt
results = pu.load_results(LOG_DIRS)
fig = pu.plot_results(results, average_group=True, split_fn=lambda _: '', shaded_std=False) | _____no_output_____ | MIT | write_and_test/visualize.ipynb | liuandrew/training-rl-algo |
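To make the subfolder convention above concrete, a hypothetical LOG_DIRS layout for two experiments, each run with two seeds, might look like this (the experiment names are made up for illustration):

```
logs/reacher/
├── ppo2-0/    # experiment "ppo2", random seed 0
├── ppo2-1/    # experiment "ppo2", random seed 1
├── a2c-0/     # experiment "a2c", random seed 0
└── a2c-1/     # experiment "a2c", random seed 1
```

With average_group=True, curves that share the name before the dash should then be averaged together.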
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
BUZZ_PIN = 18
GPIO.setup(BUZZ_PIN, GPIO.OUT)
GPIO.output(BUZZ_PIN,False)
def hold(j):
    # crude busy-wait delay: the larger j is, the longer the pause
    for k in range(1,j):
        pass
def fire():
    # sweep the half-period upward, so the tone starts high and falls
    for j in range(1,1100):
        GPIO.output(BUZZ_PIN,True)
        hold(j)
        GPIO.output(BUZZ_PIN,False)
        hold(j)
try:
while True:
print("."),
fire()
except KeyboardInterrupt:
print("Uitvoering onderbroken")
GPIO.output(BUZZ_PIN,False)
import time
def buzz(pitch, duration):
    period = 1.0 / pitch                 # seconds per full square-wave cycle
    delay = period / 2                   # half-period: time spent high, then low
    cycles = int(duration * pitch)       # cycles needed to fill the duration
    for i in range(cycles):
        last_time = time.time()
        GPIO.output(BUZZ_PIN, True)
        while time.time() < last_time + delay:
            pass
        GPIO.output(BUZZ_PIN, False)
        while time.time() < last_time + 2 * delay:
            pass
try:
for pitch in range(500,10000,500):
print("."),
buzz(pitch, duration = 0.5)
except KeyboardInterrupt:
print("Uitvoering onderbroken")
GPIO.output(BUZZ_PIN,False)
GPIO.cleanup() | _____no_output_____ | CC0-1.0 | notebooks/nl-be/Output - Buzzer (Piezo).ipynb | RaspberryJamBe/IPythonNotebooks |
|
Gaussian Distribution (Normal or Bell Curve) Think of a Jupyter Notebook file as a Python script, but with comments given the seriousness they deserve, meaning inserted YouTubes if necessary. We also adopt a more conversational style with the reader, and with Python, pausing frequently to take stock, because we're telling a story. One might ask, what is the benefit of computer programs if we read through them this slowly? Isn't the whole point that they run blazingly fast, and nobody needs to read them except those tasked with maintaining them, the programmer caste? First, let's point out the obvious: even when reading slowly, we're not keeping Python from doing its part as fast as it can, and what it does would have taken a single human ages to do, and would have occupied a team of secretaries for ages. Were you planning to pay them? Python effectively puts a huge staff at your disposal, ready to do your bidding. But that doesn't let you off the hook. They need to be managed, told what to do. Here's what you'll find at the top of your average script. A litany of players, a congress of agents, needs to be assembled and made ready for the job at hand. But don't worry: as you remember to include necessary assets, add them at will as you need them. We rehearse the script over and over while building it. Nobody groans, except maybe you, when the director says "take it from the top" once again. | import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
import math | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
You'll be glad to have np.linspace as a friend, as so often you know exactly what the upper and lower bounds of a domain might be. You'll be computing a range. Do you remember these terms from high school? A domain is like a pile of cannon balls that we feed to our cannon, which then fires them, testing our knowledge of ballistics. It traces a parabola. We plot that in our tables. A lot of mathematics traces to developing tables for battlefield use. Leonardo da Vinci, a great artist, was also an architect of defensive fortifications. Anyway, np.linspace lets you give exactly the number of points you would like of this linear one-dimensional array space, as a closed set, meaning -5 and 5 are included, the minimum and maximum you specify. Ask for a healthy number of points, as points are cheap. All they require is memory. But then it's up to you not to overdo things. Why waste CPU cycles on way too many points? I bring up this niggling detail about points as a way of introducing what they're calling "hyperparameters" in Machine Learning, meaning settings or values that come from outside the data, so also "metadata" in some ways. You'll see in other notebooks how we might pick a few hyperparameters and ask scikit-learn to try all combinations of same. Here's what you'll be saying then: `from sklearn.model_selection import GridSearchCV` (CV = cross-validation). | domain = np.linspace(-5, 5, 100) | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS
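As a taste of that hyperparameter sweeping, here is a minimal GridSearchCV sketch. The estimator and parameter grid are arbitrary stand-ins for illustration; nothing in this notebook actually runs it:

```python
from sklearn.model_selection import GridSearchCV  # CV = cross-validation
from sklearn.svm import SVC

# hypothetical grid: every combination of these settings gets tried
param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}
search = GridSearchCV(SVC(), param_grid, cv=5)
# search.fit(X, y) would fit 3 * 2 = 6 candidates, 5 folds each,
# and report the winner in search.best_params_
```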
I know mu sounds like "mew", the sound a kitten makes, and that's sometimes insisted upon by sticklers, for when we have a continuous function, versus one that's discrete. Statisticians make a big deal about the difference between digital and analog, where the former is seen as a "sampling" of the latter. Complete data may be an impossibility. We're always stuck with something digital trying to approximate something analog, or so it seems. Turn that around in your head sometimes: we smooth it over as an approximation, because a discrete treatment would require too high a level of precision. The sticklers say "mu" for continuous, but "x-bar" (an x with a bar over it) for plain old "average" of discrete sets. I don't see this convention holding water necessarily, for one thing because it's inconvenient to always reach for the most fancy typography. Python does have full access to Unicode, and to LaTeX, but do we have to bother? Let's leave that question for another day and move on to... The Gaussian (Binomial if Discrete) | mu = 0 # might be x-bar if discrete
sigma = 1 # standard deviation, more below | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
What we have here (below) is a typical Python numeric function, although it does get its pi from numpy instead of math. That won't matter. The sigma and mu in this function are globals and set above. Some LaTeX would be in order here, I realize. Let me scavenge the internet for something appropriate... $pdf(x,\mu,\sigma) = \frac{1}{ \sigma \sqrt{2 \pi}} e^{\left(-\frac{{\left(\mu - x\right)}^{2}}{2 \, \sigma^{2}}\right)}$ Use of dollar signs is key. Here's another way, in a code cell instead of a Markdown cell. | from IPython.display import display, Latex
ltx = '$ pdf(x,\\mu,\\sigma) = \\frac{1}{ \\sigma' + \
'\\sqrt{2 \\pi}} e^{\\left(-\\frac{{\\left(\\mu - ' + \
'x\\right)}^{2}}{2 \\, \\sigma^{2}}\\right)} $'
display(Latex(ltx)) | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
I'm really tempted to try out [PrettyPy](https://github.com/charliekawczynski/prettyPy). | def g(x):
return (1/(sigma * math.sqrt(2 * np.pi))) * math.exp(-0.5 * ((mu - x)/sigma)**2) | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
What I do below is semi-mysterious, and something I'd like to get to in numpy in more detail. The whole idea behind numpy is every function, or at least the unary ones, are vectorized, meaning they work element-wise through every cell, with no need for any for loops. My Gaussian formula above won't natively understand how to have relations with a numpy array, unless we store it in vectorized form. I'm not claiming this will make it run any faster than under the control of for loops; we can test that. Even without a speedup, here we have a recipe for shortening our code. As many have proclaimed around numpy: one of its primary benefits is it allows one to "lose the loops". | %timeit vg = np.vectorize(g)
100000 loops, best of 3: 4.1 µs per loop
| MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
At any rate, this way, with a list comprehension, is orders of magnitude slower: | %timeit vg2 = np.array([g(x) for x in domain])
vg = np.vectorize(g)
%matplotlib inline
%timeit plt.plot(domain, vg(domain)) | The slowest run took 89.97 times longer than the fastest. This could mean that an intermediate result is being cached.
1 loop, best of 3: 2.49 ms per loop
| MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
I bravely built my own version of the Gaussian distribution, a continuous function (any real number input is OK, from negative infinity to infinity, but not those extremes themselves; keep it in between). The thing about a Gaussian is you can shrink it and grow it while keeping the curve itself self-similar. Remember "hyperparameters"? They control the shape. We should be sure to play around with those parameters. Of course the stats.norm section of scipy comes pre-equipped with the same PDF (probability density function). You'll see this curve called many things in the literature. | %timeit plt.plot(domain, st.norm.pdf(domain))
mu = 0
sigma = math.sqrt(0.2)
plt.plot(domain, vg(domain), color = 'blue')
sigma = math.sqrt(1)
plt.plot(domain, vg(domain), color = 'red')
sigma = math.sqrt(5)
plt.plot(domain, vg(domain), color = 'orange')
mu = -2
sigma = math.sqrt(.5)
plt.plot(domain, vg(domain), color = 'green')
plt.title("Gaussian Distributions") | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
[see Wikipedia figure](https://en.wikipedia.org/wiki/Gaussian_function#Properties) These are Gaussian PDFs or Probability Density Functions. 68.26% of values fall between -1 and 1 (within one standard deviation of the mean). | from IPython.display import YouTubeVideo
YouTubeVideo("xgQhefFOXrM")
a = st.norm.cdf(-1) # Cumulative distribution function
b = st.norm.cdf(1)
b - a
a = st.norm.cdf(-2)
b = st.norm.cdf(2)
b - a
# 99.73% is more correct than 99.72%
a = st.norm.cdf(-3)
b = st.norm.cdf(3)
b - a
# 95%
a = st.norm.cdf(-1.96)
b = st.norm.cdf(1.96)
b - a
# 99%
a = st.norm.cdf(-2.58)
b = st.norm.cdf(2.58)
b - a
from IPython.display import YouTubeVideo
YouTubeVideo("zZWd56VlN7w") | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
What are the chances a value is less than -1.32? | st.norm.cdf(-1.32) | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
What are the chances a value is between -0.21 and 0.85? | 1 - st.norm.sf(-0.21) # the survival function fills in from the right; 1 - sf(-0.21) gives the area to the left
a = st.norm.cdf(0.85) # filling in from the left
a
b = st.norm.cdf(-0.21) # from the left
b
a-b # getting the difference (per the Youtube) | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
Let's plot the integral of the Bell Curve. This curve somewhat describes the temporal pattern whereby a new technology is adopted: first by early adopters, then comes the bandwagon effect, then come the stragglers. Not that every technology gets adopted in this way. Only some do. | plt.plot(domain, st.norm.cdf(domain)) | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS
[Standard Deviation](https://en.wikipedia.org/wiki/Standard_deviation) Above is the Bell Curve integral. Remember the derivative is obtained from small differences: (f(x+h) - f(x))/h. Given x is our entire domain and operations are vectorized, it's easy enough to plot said derivative. | x = st.norm.cdf(domain)
diff = st.norm.cdf(domain + 0.01)
plt.plot(domain, (diff-x)/0.01)
x = st.norm.pdf(domain)
diff = st.norm.pdf(domain + 0.01)
plt.plot(domain, (diff-x)/0.01)
x = st.norm.pdf(domain)
plt.plot(domain, x, color = "red")
x = st.norm.pdf(domain)
diff = st.norm.pdf(domain + 0.01)
plt.plot(domain, (diff-x)/0.01, color = "blue") | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
Integrating the Gaussian Apparently there's no closed form in elementary functions; however, sympy is able to do the integration (the result involves the error function, as we'll see). | from sympy import var, Lambda, integrate, sqrt, pi, exp, latex
fig = plt.gcf()
fig.set_size_inches(8,5)
var('a b x sigma mu')
pdf = Lambda((x,mu,sigma),
(1/(sigma * sqrt(2*pi)) * exp(-(mu-x)**2 / (2*sigma**2)))
)
cdf = Lambda((a,b,mu,sigma),
integrate(
pdf(x,mu,sigma),(x,a,b)
)
)
display(Latex('$ cdf(a,b,\mu,\sigma) = ' + latex(cdf(a,b,mu,sigma)) + '$')) | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
Let's stop right here and note the pdf and cdf have been defined, using sympy's Lambda and integrate, and the cdf will be fed a lot of data, one hundred points, along with mu and sigma. Then it's simply a matter of plotting. What's amazing is our ability to get something from sympy that works to give a cdf, independently of scipy.stats.norm. | x = np.linspace(50,159,100)
y = np.array([cdf(-1e99,v,100,15) for v in x],dtype='float')
plt.grid(True)
plt.title('Cumulative Distribution Function')
plt.xlabel('IQ')
print(type(plt.xlabel))
plt.ylabel('Y')
plt.text(65,.75,'$\mu = 100$',fontsize=16)
plt.text(65,.65,'$\sigma = 15$',fontsize=16)
plt.plot(x,y,color='gray')
plt.fill_between(x,y,0,color='#c0f0c0')
plt.show() | <class 'function'>
| MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
The above is truly a testament to Python's power, or the Python ecosystem's power. We've brought in sympy, able to do symbolic integration, and talk LaTeX at the same time. That's impressive. Here's [the high IQ source](https://arachnoid.com/IPython/normal_distribution.html) for the original version of the above code. There's no indefinite integral of the Gaussian, but there's a definite one. sympy comes with its own generic sympy.stats.cdf function which produces Lambdas (symbolic expressions) when used to integrate different types of probability spaces, such as Normal (a continuous PDF). It also accepts discrete PMFs as well.

Examples
========

    >>> from sympy.stats import density, Die, Normal, cdf
    >>> from sympy import Symbol

    >>> D = Die('D', 6)
    >>> X = Normal('X', 0, 1)

    >>> density(D).dict
    {1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}
    >>> cdf(D)
    {1: 1/6, 2: 1/3, 3: 1/2, 4: 2/3, 5: 5/6, 6: 1}
    >>> cdf(3*D, D > 2)
    {9: 1/4, 12: 1/2, 15: 3/4, 18: 1}
    >>> cdf(X)
    Lambda(_z, -erfc(sqrt(2)*_z/2)/2 + 1)

LAB: convert the Normal Distribution below to an IQ Curve... That means domain is 0-200, standard deviation 15, mean = 100. | domain = np.linspace(0, 200, 3000)
IQ = st.norm.pdf(domain, 100, 15)
plt.plot(domain, IQ, color = "red")
domain = np.linspace(0, 200, 3000)
mu = 100
sigma = 15
IQ = vg(domain)
plt.plot(domain, IQ, color = "green") | _____no_output_____ | MIT | BellCurve.ipynb | 4dsolutions/ONLC_XPYS |
Db2 Connection Document This notebook contains the connect statement that will be used for connecting to Db2. The typical way of connecting to Db2 within a notebook is to run the db2 notebook (`db2.ipynb`) and then issue the `%sql connect` statement:
```sql
%run db2.ipynb
%sql connect to sample user ...
```
Rather than having to change the connect statement in every notebook, this one file can be changed and all of the other notebooks will use the value in here. Note that if you do reset a connection within a notebook, you will need to issue the `CONNECT` command again or run this notebook to re-connect. The `db2.ipynb` file is still used at the beginning of all notebooks to highlight the fact that we are using special code to allow Db2 commands to be issued from within Jupyter Notebooks. Connect to Db2 This code will connect to Db2 locally. | %sql CONNECT TO SAMPLE USER DB2INST1 USING db2inst1 HOST localhost PORT 50000 | _____no_output_____ | Apache-2.0 | connection.ipynb | Db2-DTE-POC/db2v11
Check that the EMPLOYEE and DEPARTMENT tables exist. A lot of the examples depend on these two tables existing in the database. These tables will be created for you if they don't already exist. Note that they will not overwrite the existing Db2 sample tables. | if sqlcode == 0:
%sql -sampledata | _____no_output_____ | Apache-2.0 | connection.ipynb | Db2-DTE-POC/db2v11 |
Code for Refreshing Slideware and YouTube Videos in a Notebook | %%javascript
window.findCellIndicesByTag = function findCellIndicesByTag(tagName) {
return (Jupyter.notebook.get_cells()
.filter(
({metadata: {tags}}) => tags && tags.includes(tagName)
)
.map((cell) => Jupyter.notebook.find_cell_index(cell))
);
};
window.refresh = function runPlotCells() {
var c = window.findCellIndicesByTag('refresh');
Jupyter.notebook.execute_cells(c);
}; | _____no_output_____ | Apache-2.0 | connection.ipynb | Db2-DTE-POC/db2v11 |
Run through all of the cells and refresh everything that has a **refresh** tag in it. | from IPython.display import Javascript
display(Javascript("window.refresh()")) | _____no_output_____ | Apache-2.0 | connection.ipynb | Db2-DTE-POC/db2v11 |
Performance Testing with Computing the Mandelbrot Set This sample was executed on a DSVM on a Standard_D2_v2 in Azure. The code below also uses a few other cluster config files titled: - "10_core_cluster.json" - "20_core_cluster.json" - "40_core_cluster.json" - "80_core_cluster.json" Each of the cluster config files above is used by the doAzureParallel package. They all define static clusters (minNodes = maxNodes) and use the Standard_F2 VM size. Install package dependencies for doAzureParallel | install.packages(c('httr','rjson','RCurl','digest','foreach','iterators','devtools','curl','jsonlite','mime')) | _____no_output_____ | MIT | samples/mandelbrot/mandelbrot_performance_test.ipynb | zerweck/doAzureParallel
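For context, here is a sketch, written as a Python dict, of the fields such a cluster config JSON might contain, assuming the older doAzureParallel schema that the minNodes = maxNodes wording suggests; check the config generated by your package version for the exact field names:

```python
# hypothetical shape of 10_core_cluster.json (field names may differ by version)
cluster_config = {
    "name": "mandelbrot_pool_10",                 # hypothetical pool name
    "vmSize": "Standard_F2",                      # 2 cores per node
    "poolSize": {"minNodes": 5, "maxNodes": 5},   # 5 nodes x 2 cores = 10 cores, static
}
```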
Install doAzureParallel and rAzureBatch from github | library(devtools)
install_github("Azure/rAzureBatch")
install_github("Azure/doAzureParallel") | _____no_output_____ | MIT | samples/mandelbrot/mandelbrot_performance_test.ipynb | zerweck/doAzureParallel |
Install *microbenchmark* package and other utilities | install.packages("microbenchmark")
library(microbenchmark)
library(reshape2)
library(ggplot2) | _____no_output_____ | MIT | samples/mandelbrot/mandelbrot_performance_test.ipynb | zerweck/doAzureParallel |
Define a function to compute the Mandelbrot set. | vmandelbrot <- function(xvec, y0, lim)
{
mandelbrot <- function(x0,y0,lim)
{
x <- x0; y <- y0
iter <- 0
while (x^2 + y^2 < 4 && iter < lim)
{
xtemp <- x^2 - y^2 + x0
y <- 2 * x * y + y0
x <- xtemp
iter <- iter + 1
}
iter
}
unlist(lapply(xvec, mandelbrot, y0=y0, lim=lim))
} | _____no_output_____ | MIT | samples/mandelbrot/mandelbrot_performance_test.ipynb | zerweck/doAzureParallel |
The local execution is performed on a single Standard_D2_V2 DSVM in Azure. We use the doParallel package and use both cores for this performance test | localExecution <- function() {
print("doParallel")
library(doParallel)
cl<-makeCluster(2)
registerDoParallel(cl)
x.in <- seq(-2, 1.5, length.out=1080)
y.in <- seq(-1.5, 1.5, length.out=1080)
m <- 1000
mset <- foreach(i=y.in, .combine=rbind, .export = "vmandelbrot") %dopar% vmandelbrot(x.in, i, m)
} | _____no_output_____ | MIT | samples/mandelbrot/mandelbrot_performance_test.ipynb | zerweck/doAzureParallel |
The Azure Execution takes in a pool_config JSON file and will use doAzureParallel. | azureExecution <- function(pool_config) {
print("doAzureParallel")
library(doAzureParallel)
pool <- doAzureParallel::makeCluster(pool_config)
registerDoAzureParallel(pool)
x.in <- seq(-2, 1.5, length.out=1080)
y.in <- seq(-1.5, 1.5, length.out=1080)
m <- 1000
mset <- foreach(i=y.in, .combine=rbind, .options.azure = list(chunkSize=10), .export = "vmandelbrot") %dopar% vmandelbrot(x.in, i, m)
} | _____no_output_____ | MIT | samples/mandelbrot/mandelbrot_performance_test.ipynb | zerweck/doAzureParallel |
Using the *microbenchmark* package, we test the difference in performance when running the same code to calculate the Mandelbrot set on a single machine (localExecution), a cluster of 10 cores, a cluster of 20 cores, and finally a cluster of 40 cores. | op <- microbenchmark(
doParLocal=localExecution(),
doParAzure_10cores=azureExecution("10_core_cluster.json"),
doParAzure_20cores=azureExecution("20_core_cluster.json"),
doParAzure_40cores=azureExecution("40_core_cluster.json"),
times=5L)
print(op)
plot(op) | _____no_output_____ | MIT | samples/mandelbrot/mandelbrot_performance_test.ipynb | zerweck/doAzureParallel |
Load the data | # imports assumed from earlier in the lab, added so this excerpt is self-contained
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score
import numpy as np
import pandas as pd

iris = load_iris(as_frame=True)
pd.concat([iris.data, iris.target], axis=1).plot.scatter(
x='petal length (cm)',
y='petal width (cm)',
c='target',
colormap='viridis'
)
iris.data
X = iris.data[['petal length (cm)','petal width (cm)']]
y = iris.target
X_train, X_test, y_train, y_test= train_test_split(X, y, test_size=0.2)
y_train_0 = (y_train == 0).astype(int)
y_test_0 = (y_test == 0).astype(int) | _____no_output_____ | MIT | lab9/lab9.ipynb | YgLK/ML |
for 0 target value | # for 0 target value
per_clf_0 = Perceptron()
per_clf_0.fit(X_train, y_train_0)
y_pred_train_0 = per_clf_0.predict(X_train)
y_pred_test_0 = per_clf_0.predict(X_test)
acc_train_0 = accuracy_score(y_train_0, y_pred_train_0)
acc_test_0 = accuracy_score(y_test_0, y_pred_test_0)
print("acc_train_0", acc_train_0)
print("acc_test_0", acc_test_0) | acc_train_0 1.0
acc_test_0 1.0
| MIT | lab9/lab9.ipynb | YgLK/ML |
for 1 target value | y_train_1 = (y_train == 1).astype(int)
y_test_1 = (y_test == 1).astype(int)
# for 1 target value
per_clf_1 = Perceptron()
per_clf_1.fit(X_train, y_train_1)
y_pred_train_1 = per_clf_1.predict(X_train)
y_pred_test_1 = per_clf_1.predict(X_test)
acc_train_1 = accuracy_score(y_train_1, y_pred_train_1)
acc_test_1 = accuracy_score(y_test_1, y_pred_test_1)
print("acc_train_1", acc_train_1)
print("acc_test_1", acc_test_1) | acc_train_1 0.6666666666666666
acc_test_1 0.6666666666666666
| MIT | lab9/lab9.ipynb | YgLK/ML |
for 2 target value | y_train_2 = (y_train == 2).astype(int)
y_test_2 = (y_test == 2).astype(int)
# for 2 target value
per_clf_2 = Perceptron()
per_clf_2.fit(X_train, y_train_2)
y_pred_train_2 = per_clf_2.predict(X_train)
y_pred_test_2 = per_clf_2.predict(X_test)
acc_train_2 = accuracy_score(y_train_2, y_pred_train_2)
acc_test_2 = accuracy_score(y_test_2, y_pred_test_2)
print("acc_train_2", acc_train_2)
print("acc_test_2", acc_test_2) | acc_train_2 0.825
acc_test_2 0.8666666666666667
| MIT | lab9/lab9.ipynb | YgLK/ML |
weights | print("0: bias weight", per_clf_0.intercept_)
print("Input weights (w1, w2): ", per_clf_0.coef_)
print("1: bias weight", per_clf_1.intercept_)
print("Input weights (w1, w2): ", per_clf_1.coef_)
print("0: bias weight", per_clf_2.intercept_)
print("Input weights (w1, w2): ", per_clf_2.coef_) | 0: bias weight [-39.]
Input weights (w1, w2): [[ 0.8 27.3]]
| MIT | lab9/lab9.ipynb | YgLK/ML |
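The next cell calls save_object_as_pickle, which was presumably defined earlier in the lab but is not shown in this excerpt. A minimal sketch of such a helper:

```python
import pickle

def save_object_as_pickle(obj, filename):
    # serialize obj to disk so later lab steps can reload it
    with open(filename, 'wb') as f:
        pickle.dump(obj, f)
```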
Save accuracy lists and weight tuple in the pickles | # accuracy
per_acc = [(acc_train_0, acc_test_0), (acc_train_1, acc_test_1), (acc_train_2, acc_test_2)]
filename = "per_acc.pkl"
save_object_as_pickle(per_acc, filename)
print("per_acc\n", per_acc)
# weights
per_wght = [(per_clf_0.intercept_[0], per_clf_0.coef_[0][0], per_clf_0.coef_[0][1]), (per_clf_1.intercept_[0], per_clf_1.coef_[0][0], per_clf_1.coef_[0][1]), (per_clf_2.intercept_[0], per_clf_2.coef_[0][0], per_clf_2.coef_[0][1])]
filename = "per_wght.pkl"
save_object_as_pickle(per_wght, filename)
print("per_wght\n", per_wght) | per_wght
[(9.0, -2.0999999999999988, -3.0999999999999996), (-8.0, 4.600000000000016, -22.699999999999974), (-39.0, 0.7999999999999883, 27.30000000000003)]
| MIT | lab9/lab9.ipynb | YgLK/ML |
Perceptron, XOR | X = np.array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
y = np.array([0,
1,
1,
0])
per_clf_xor = Perceptron()
per_clf_xor.fit(X, y)
pred_xor = per_clf_xor.predict(X)
xor_acc = accuracy_score(y, pred_xor)
print("xor_accuracy:", xor_acc)
print("XOR: bias weight", per_clf_xor.intercept_)
print("Input weights (w1, w2): ", per_clf_xor.coef_) | XOR: bias weight [0.]
Input weights (w1, w2): [[0. 0.]]
| MIT | lab9/lab9.ipynb | YgLK/ML |
2nd Perceptron, XOR. A single-layer perceptron cannot fit XOR, since the classes are not linearly separable (note the zero weights above), so we add a hidden layer with Keras. | import tensorflow as tf
from tensorflow import keras
while True:
model = keras.models.Sequential()
model.add(keras.layers.Dense(2, activation="relu", input_dim=2))
model.add(keras.layers.Dense(1, activation="sigmoid"))
model.compile(loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.085),
metrics=["binary_accuracy"])
history = model.fit(X, y, epochs=100, verbose=False)
predict_prob=model.predict(X)
print(predict_prob)
print(history.history['binary_accuracy'][-1])
if predict_prob[0] < 0.1 and predict_prob[1] > 0.9 and predict_prob[2] > 0.9 and predict_prob[3] < 0.1:
weights = model.get_weights()
break | [[0.33351305]
[0.999105 ]
[0.33351305]
[0.33351305]]
0.75
[[0.33357185]
[0.9995415 ]
[0.33357185]
[0.33357185]]
0.75
WARNING:tensorflow:5 out of the last 5 calls to <function Model.make_predict_function.<locals>.predict_function at 0x0000013C7743D430> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[[0.4994214 ]
[0.9989133 ]
[0.4994214 ]
[0.00138384]]
0.75
WARNING:tensorflow:6 out of the last 6 calls to <function Model.make_predict_function.<locals>.predict_function at 0x0000013C7E32BCA0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[[0.6631104 ]
[0.6631104 ]
[0.6631104 ]
[0.01176086]]
0.75
[[0.3341035 ]
[0.3341035 ]
[0.99895394]
[0.3341035 ]]
0.75
[[0.5001977]
[0.9983003]
[0.5001977]
[0.0029591]]
0.75
[[0.6626212 ]
[0.6626212 ]
[0.6626212 ]
[0.00670475]]
0.75
[[0.01517117]
[0.9965296 ]
[0.99731755]
[0.01517117]]
1.0
| MIT | lab9/lab9.ipynb | YgLK/ML |
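Once the loop breaks, the weights of the successful network sit in `weights` (the list returned by model.get_weights()). A short way to inspect them, added here for illustration:

```python
# for this Dense(2) -> Dense(1) model, get_weights() returns [W1, b1, W2, b2]
W1, b1, W2, b2 = weights
print("hidden layer weights:\n", W1)
print("hidden layer biases: ", b1)
print("output layer weights:\n", W2)
print("output layer bias:   ", b2)
```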