pythondev
help
after -i
2019-03-04T15:09:49.504400
Ludie
pythondev_help_Ludie_2019-03-04T15:09:49.504400
1,551,712,189.5044
12,021
pythondev
help
i just tried it, am checking now
2019-03-04T15:09:55.504600
Ludie
pythondev_help_Ludie_2019-03-04T15:09:55.504600
1,551,712,195.5046
12,022
pythondev
help
yes!!
2019-03-04T15:10:11.504800
Ludie
pythondev_help_Ludie_2019-03-04T15:10:11.504800
1,551,712,211.5048
12,023
pythondev
help
I did - did that work without the find command? I added it back in with the latest command
2019-03-04T15:10:14.505000
Clemmie
pythondev_help_Clemmie_2019-03-04T15:10:14.505000
1,551,712,214.505
12,024
pythondev
help
people were unclear on the interwebs
2019-03-04T15:10:19.505200
Clemmie
pythondev_help_Clemmie_2019-03-04T15:10:19.505200
1,551,712,219.5052
12,025
pythondev
help
doesnt work: `sed -i'' 's/"language": "en"/"language": "fr"/g' */*.json`
2019-03-04T15:10:26.505600
Ludie
pythondev_help_Ludie_2019-03-04T15:10:26.505600
1,551,712,226.5056
12,026
pythondev
help
works : `sed -i '' 's/"language": "en"/"language": "fr"/g' */*.json`
2019-03-04T15:10:34.505800
Ludie
pythondev_help_Ludie_2019-03-04T15:10:34.505800
1,551,712,234.5058
12,027
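The difference between the failing and working commands above is a known BSD/GNU `sed` incompatibility: on BSD/macOS sed, `-i` takes its backup suffix as a separate, mandatory argument, so `-i ''` means "in place, no backup", while `-i''` collapses to a bare `-i` and sed swallows the script as the suffix. A hedged sketch that sidesteps `-i` entirely (file names hypothetical):

```shell
# Portable alternative: skip -i and write to a new file instead.
# GNU sed wants the suffix attached (-i or -i.bak); BSD/macOS sed wants
# it as a separate argument (-i '' or -i .bak).
printf '{"language": "en"}\n' > sample.json
sed 's/"language": "en"/"language": "fr"/g' sample.json > sample_fr.json
cat sample_fr.json   # {"language": "fr"}
```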
pythondev
help
that is perfect !!!!
2019-03-04T15:10:50.506000
Ludie
pythondev_help_Ludie_2019-03-04T15:10:50.506000
1,551,712,250.506
12,028
pythondev
help
woof
2019-03-04T15:10:51.506200
Ludie
pythondev_help_Ludie_2019-03-04T15:10:51.506200
1,551,712,251.5062
12,029
pythondev
help
ok, let me send the email
2019-03-04T15:10:56.506400
Ludie
pythondev_help_Ludie_2019-03-04T15:10:56.506400
1,551,712,256.5064
12,030
pythondev
help
Go forth, and continue to have a job :wink:
2019-03-04T15:10:57.506600
Clemmie
pythondev_help_Clemmie_2019-03-04T15:10:57.506600
1,551,712,257.5066
12,031
pythondev
help
sent :smile:
2019-03-04T15:11:18.506800
Ludie
pythondev_help_Ludie_2019-03-04T15:11:18.506800
1,551,712,278.5068
12,032
pythondev
help
can’t thank you enough
2019-03-04T15:11:21.507000
Ludie
pythondev_help_Ludie_2019-03-04T15:11:21.507000
1,551,712,281.507
12,033
pythondev
help
Can I ask another question related to that? It may help very much in the future, as I am going to work more with JSON files
2019-03-04T15:12:05.507200
Ludie
pythondev_help_Ludie_2019-03-04T15:12:05.507200
1,551,712,325.5072
12,034
pythondev
help
for the future, you want to add the unix tools to your toolkit: `sed`, `awk`, `cut`, `uniq` and `sort` in particular can get you through a whole host of text manipulation problems that we tend to reach for python for
2019-03-04T15:12:13.507400
Clemmie
pythondev_help_Clemmie_2019-03-04T15:12:13.507400
1,551,712,333.5074
12,035
pythondev
help
sure
2019-03-04T15:12:17.507600
Clemmie
pythondev_help_Clemmie_2019-03-04T15:12:17.507600
1,551,712,337.5076
12,036
pythondev
help
Say if I wanted to change a full json object, instead of just a value, like this: ```{ "version": "1.0", "identifier": "2854-1269-8-1", "title": "Welcome To The World Of More", { "language": "en", "country": "uk" } }```
2019-03-04T15:13:24.507900
Ludie
pythondev_help_Ludie_2019-03-04T15:13:24.507900
1,551,712,404.5079
12,037
pythondev
help
pardon the indentation
2019-03-04T15:13:33.508100
Ludie
pythondev_help_Ludie_2019-03-04T15:13:33.508100
1,551,712,413.5081
12,038
pythondev
help
If I were to try and change ``` { "language": "en", "country": "uk" }``` only, would i just specify the text without the line breaks?
2019-03-04T15:13:58.508300
Ludie
pythondev_help_Ludie_2019-03-04T15:13:58.508300
1,551,712,438.5083
12,039
pythondev
help
Like this: `{"language": "en","country": "uk"}`
2019-03-04T15:14:27.508500
Ludie
pythondev_help_Ludie_2019-03-04T15:14:27.508500
1,551,712,467.5085
12,040
pythondev
help
you would need to do the line breaks, given what I showed you, if the line breaks are in the text
2019-03-04T15:14:36.508700
Clemmie
pythondev_help_Clemmie_2019-03-04T15:14:36.508700
1,551,712,476.5087
12,041
pythondev
help
but! there are other ways
2019-03-04T15:14:42.508900
Clemmie
pythondev_help_Clemmie_2019-03-04T15:14:42.508900
1,551,712,482.5089
12,042
pythondev
help
i am scared about pasting line breaks into the terminal
2019-03-04T15:14:57.509100
Ludie
pythondev_help_Ludie_2019-03-04T15:14:57.509100
1,551,712,497.5091
12,043
pythondev
help
usually doesnt work out the way i want it to
2019-03-04T15:15:04.509300
Ludie
pythondev_help_Ludie_2019-03-04T15:15:04.509300
1,551,712,504.5093
12,044
pythondev
help
you could use a json processor (there are a bunch, you can take a look for them) to flatten the file, then change out the text, then use the processor to re-save them pretty printed
2019-03-04T15:15:24.509500
Clemmie
pythondev_help_Clemmie_2019-03-04T15:15:24.509500
1,551,712,524.5095
12,045
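The flatten, edit, re-pretty-print workflow described above can be sketched with Python's stdlib `json` module (one processor among many; `jq` is a common CLI alternative). The document and replacement here are hypothetical:

```python
import json

# A small document standing in for one of the real JSON files.
doc = {"version": "1.0", "language": "en", "country": "uk"}

# Flatten: dump with no whitespace so the whole object is one predictable line.
flat = json.dumps(doc, separators=(",", ":"))
# Change out the text with a plain string replacement.
flat = flat.replace('"language":"en"', '"language":"fr"')
# Re-save pretty printed.
pretty = json.dumps(json.loads(flat), indent=2)
print(pretty)
```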
pythondev
help
also the char `\n` is the newline indicator (usually) so you would do `{"language": "en",\n"country": "uk"}`
2019-03-04T15:16:01.509700
Clemmie
pythondev_help_Clemmie_2019-03-04T15:16:01.509700
1,551,712,561.5097
12,046
pythondev
help
and also put the `\n` in the replaced text and you would be ok
2019-03-04T15:16:12.509900
Clemmie
pythondev_help_Clemmie_2019-03-04T15:16:12.509900
1,551,712,572.5099
12,047
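In Python terms, the same newline-embedding idea looks like this with `re.sub`, since `\n` is just another character in both the pattern and the replacement (a hedged sketch, not the actual sed invocation):

```python
import re

# Text containing a line break, as pasted from a pretty-printed file.
text = '{"language": "en",\n"country": "uk"}'

# The \n in the raw-string pattern matches the literal newline, and the
# replacement carries its own \n so the layout is preserved.
new = re.sub(r'\{"language": "en",\n"country": "uk"\}',
             '{"language": "fr",\n"country": "fr"}', text)
```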
pythondev
help
nice!
2019-03-04T15:16:20.510100
Ludie
pythondev_help_Ludie_2019-03-04T15:16:20.510100
1,551,712,580.5101
12,048
pythondev
help
that is if you wanted to deal with the file as is
2019-03-04T15:16:22.510300
Clemmie
pythondev_help_Clemmie_2019-03-04T15:16:22.510300
1,551,712,582.5103
12,049
pythondev
help
just always make sure to make a backup copy before messing with it, and you will be fine
2019-03-04T15:16:45.510500
Clemmie
pythondev_help_Clemmie_2019-03-04T15:16:45.510500
1,551,712,605.5105
12,050
pythondev
help
in `sed` you might need to escape control characters (like `\n`) but you can figure that out quickly enough
2019-03-04T15:17:31.510700
Clemmie
pythondev_help_Clemmie_2019-03-04T15:17:31.510700
1,551,712,651.5107
12,051
pythondev
help
noted
2019-03-04T15:17:32.510900
Ludie
pythondev_help_Ludie_2019-03-04T15:17:32.510900
1,551,712,652.5109
12,052
pythondev
help
yes, i am a little familiar with escaping characters
2019-03-04T15:17:54.511200
Ludie
pythondev_help_Ludie_2019-03-04T15:17:54.511200
1,551,712,674.5112
12,053
pythondev
help
is there a lot of overhead opening a gzip file as text, compared to a normal file?
2019-03-04T15:18:27.512100
Alvina
pythondev_help_Alvina_2019-03-04T15:18:27.512100
1,551,712,707.5121
12,054
pythondev
help
s/ /g is to ground the changes ? and the start and end versions are always separated by a slash?
2019-03-04T15:18:29.512300
Ludie
pythondev_help_Ludie_2019-03-04T15:18:29.512300
1,551,712,709.5123
12,055
pythondev
help
I am pretty much immediately writing chunks of it back to gzip to send elsewhere
2019-03-04T15:18:42.512700
Alvina
pythondev_help_Alvina_2019-03-04T15:18:42.512700
1,551,712,722.5127
12,056
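For the gzip question: `gzip.open` in text mode behaves like `open()` with a compression layer on top, so the extra cost is CPU-bound (de)compression, which is usually small next to network transfer time for a file this size. A minimal round-trip sketch (file name hypothetical):

```python
import gzip

# Some stand-in CSV content.
data = "col1,col2\n1,2\n" * 1000

# "wt"/"rt" open the gzip stream in text mode, just like regular open().
with gzip.open("sample.csv.gz", "wt", encoding="utf-8") as f:
    f.write(data)

with gzip.open("sample.csv.gz", "rt", encoding="utf-8") as f:
    round_tripped = f.read()
```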
pythondev
help
so `s` = substitute, `/a/b/` = b for a, `g`=globally
2019-03-04T15:19:30.512900
Clemmie
pythondev_help_Clemmie_2019-03-04T15:19:30.512900
1,551,712,770.5129
12,057
pythondev
help
if you leave out the g it will only replace the first occurrence on each line
2019-03-04T15:19:48.513100
Clemmie
pythondev_help_Clemmie_2019-03-04T15:19:48.513100
1,551,712,788.5131
12,058
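A quick demonstration of the `s` command pieces just described, including the changeable delimiter that comes up a few messages later (plain pipes, so nothing is modified in place):

```shell
# s = substitute, /a/b/ = replace a with b, trailing g = every occurrence
# per line; without g, only the first match on each line is replaced.
echo "aaa" | sed 's/a/b/'      # first match only: baa
echo "aaa" | sed 's/a/b/g'     # all matches: bbb
# Any character after s can serve as the delimiter, which helps when the
# pattern itself contains slashes:
echo "/usr/bin" | sed 's|/usr|/opt|'   # /opt/bin
```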
pythondev
help
Ok
2019-03-04T15:19:50.513300
Ludie
pythondev_help_Ludie_2019-03-04T15:19:50.513300
1,551,712,790.5133
12,059
pythondev
help
i will read more about this command, very powerful!!
2019-03-04T15:20:02.513500
Ludie
pythondev_help_Ludie_2019-03-04T15:20:02.513500
1,551,712,802.5135
12,060
pythondev
help
thanks again
2019-03-04T15:20:04.513700
Ludie
pythondev_help_Ludie_2019-03-04T15:20:04.513700
1,551,712,804.5137
12,061
pythondev
help
you can change the slash delimiter, but I don’t recall the syntax
2019-03-04T15:20:11.513900
Clemmie
pythondev_help_Clemmie_2019-03-04T15:20:11.513900
1,551,712,811.5139
12,062
pythondev
help
no problem. This is fun stuff to me
2019-03-04T15:20:31.514100
Clemmie
pythondev_help_Clemmie_2019-03-04T15:20:31.514100
1,551,712,831.5141
12,063
pythondev
help
have a great day
2019-03-04T15:21:01.514300
Ludie
pythondev_help_Ludie_2019-03-04T15:21:01.514300
1,551,712,861.5143
12,064
pythondev
help
you too
2019-03-04T15:21:07.514500
Clemmie
pythondev_help_Clemmie_2019-03-04T15:21:07.514500
1,551,712,867.5145
12,065
pythondev
help
Likely not enough to matter.
2019-03-04T15:22:50.515000
Lillia
pythondev_help_Lillia_2019-03-04T15:22:50.515000
1,551,712,970.515
12,066
pythondev
help
You said 4.6GB
2019-03-04T15:23:20.515600
Lillia
pythondev_help_Lillia_2019-03-04T15:23:20.515600
1,551,713,000.5156
12,067
pythondev
help
Is that zipped or unzipped?
2019-03-04T15:23:26.515800
Lillia
pythondev_help_Lillia_2019-03-04T15:23:26.515800
1,551,713,006.5158
12,068
pythondev
help
unzipped.
2019-03-04T15:23:42.516100
Alvina
pythondev_help_Alvina_2019-03-04T15:23:42.516100
1,551,713,022.5161
12,069
pythondev
help
so I was copying to gzip instead to be kind to the network
2019-03-04T15:23:56.516600
Alvina
pythondev_help_Alvina_2019-03-04T15:23:56.516600
1,551,713,036.5166
12,070
pythondev
help
and then opening the gzip file
2019-03-04T15:24:02.516800
Alvina
pythondev_help_Alvina_2019-03-04T15:24:02.516800
1,551,713,042.5168
12,071
pythondev
help
Mar 5, 2019 Scheduled update 4:16 am SGT 5m 6s 3,959,765 Data updated successfully Mar 4, 2019 Scheduled update 6:45 pm SGT 3m 16s 3,916,909 Data updated successfully
2019-03-04T15:24:26.517000
Alvina
pythondev_help_Alvina_2019-03-04T15:24:26.517000
1,551,713,066.517
12,072
pythondev
help
yeah I saw about a 2 minute difference almost where the gzip version was slower - but it could be a fluke
2019-03-04T15:24:43.517400
Alvina
pythondev_help_Alvina_2019-03-04T15:24:43.517400
1,551,713,083.5174
12,073
pythondev
help
will have to keep an eye on it
2019-03-04T15:24:48.517600
Alvina
pythondev_help_Alvina_2019-03-04T15:24:48.517600
1,551,713,088.5176
12,074
pythondev
help
Hmm interesting
2019-03-04T15:24:57.517900
Lillia
pythondev_help_Lillia_2019-03-04T15:24:57.517900
1,551,713,097.5179
12,075
pythondev
help
the file is 260ish MB compressed
2019-03-04T15:25:06.518400
Alvina
pythondev_help_Alvina_2019-03-04T15:25:06.518400
1,551,713,106.5184
12,076
pythondev
help
I would guess that the network would be the bottleneck
2019-03-04T15:25:11.518600
Lillia
pythondev_help_Lillia_2019-03-04T15:25:11.518600
1,551,713,111.5186
12,077
pythondev
help
so the overall time should be way lower (copy from remote server to local server)
2019-03-04T15:25:20.519200
Alvina
pythondev_help_Alvina_2019-03-04T15:25:20.519200
1,551,713,120.5192
12,078
pythondev
help
Without any actual knowledge of the situation :smile:
2019-03-04T15:25:23.519400
Lillia
pythondev_help_Lillia_2019-03-04T15:25:23.519400
1,551,713,123.5194
12,079
pythondev
help
just the process with opening and uploading data might be slower.. will have to check
2019-03-04T15:25:35.519800
Alvina
pythondev_help_Alvina_2019-03-04T15:25:35.519800
1,551,713,135.5198
12,080
pythondev
help
You could write a simpler tester.
2019-03-04T15:25:57.520300
Lillia
pythondev_help_Lillia_2019-03-04T15:25:57.520300
1,551,713,157.5203
12,081
pythondev
help
I'd think the 20x smaller network call would be more than worth the cost of unzipping.
2019-03-04T15:26:24.520700
Lillia
pythondev_help_Lillia_2019-03-04T15:26:24.520700
1,551,713,184.5207
12,082
pythondev
help
None
2019-03-04T15:27:42.520800
Alvina
pythondev_help_Alvina_2019-03-04T15:27:42.520800
1,551,713,262.5208
12,083
pythondev
help
is this a table stored as csv?
2019-03-04T15:28:05.521700
Bethany
pythondev_help_Bethany_2019-03-04T15:28:05.521700
1,551,713,285.5217
12,084
pythondev
help
yes
2019-03-04T15:28:08.522000
Alvina
pythondev_help_Alvina_2019-03-04T15:28:08.522000
1,551,713,288.522
12,085
pythondev
help
like a pandas dataframe type of thing?
2019-03-04T15:28:13.522200
Bethany
pythondev_help_Bethany_2019-03-04T15:28:13.522200
1,551,713,293.5222
12,086
pythondev
help
no, just a plain csv file
2019-03-04T15:28:32.522600
Alvina
pythondev_help_Alvina_2019-03-04T15:28:32.522600
1,551,713,312.5226
12,087
pythondev
help
have you tried parquet format?
2019-03-04T15:28:42.523000
Bethany
pythondev_help_Bethany_2019-03-04T15:28:42.523000
1,551,713,322.523
12,088
pythondev
help
the results of a copy query from postgres
2019-03-04T15:28:43.523100
Alvina
pythondev_help_Alvina_2019-03-04T15:28:43.523100
1,551,713,323.5231
12,089
pythondev
help
either remote or local
2019-03-04T15:28:47.523300
Alvina
pythondev_help_Alvina_2019-03-04T15:28:47.523300
1,551,713,327.5233
12,090
pythondev
help
the API I am using only accepts csv:
2019-03-04T15:29:08.523600
Alvina
pythondev_help_Alvina_2019-03-04T15:29:08.523600
1,551,713,348.5236
12,091
pythondev
help
To upload data in CSV format, the Domo specification used for representing data grids in CSV format closely follows the RFC standard for CSV (RFC-4180). For more details on correct CSV formatting, click here.
2019-03-04T15:29:12.523800
Alvina
pythondev_help_Alvina_2019-03-04T15:29:12.523800
1,551,713,352.5238
12,092
pythondev
help
:disappointed:
2019-03-04T15:29:28.524000
Bethany
pythondev_help_Bethany_2019-03-04T15:29:28.524000
1,551,713,368.524
12,093
pythondev
help
yeah
2019-03-04T15:29:34.524200
Alvina
pythondev_help_Alvina_2019-03-04T15:29:34.524200
1,551,713,374.5242
12,094
pythondev
help
I can send chunks of data to the API in parallel though, so I read from a huge csv and yield a chunk (50MBish seems to be fastest)
2019-03-04T15:30:18.525000
Alvina
pythondev_help_Alvina_2019-03-04T15:30:18.525000
1,551,713,418.525
12,095
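The chunked read-and-yield approach can be sketched like this; the 50 MB figure comes from the message above, while the function and file names are hypothetical. Each chunk repeats the CSV header so it stands alone as a valid upload:

```python
def iter_chunks(path, chunk_bytes=50 * 1024 * 1024):
    """Yield roughly chunk_bytes-sized pieces of a CSV, each with the header."""
    with open(path, "r", encoding="utf-8") as f:
        header = f.readline()
        buf, size = [header], len(header)
        for line in f:
            buf.append(line)
            size += len(line)
            if size >= chunk_bytes:
                yield "".join(buf)
                buf, size = [header], len(header)
        if len(buf) > 1:          # leftover rows after the last full chunk
            yield "".join(buf)
```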
pythondev
help
and then upload it through the REST API
2019-03-04T15:30:23.525200
Alvina
pythondev_help_Alvina_2019-03-04T15:30:23.525200
1,551,713,423.5252
12,096
pythondev
help
I could potentially save the file in parquet format, read it, write it in memory as csv before uploading it
2019-03-04T15:31:04.525800
Alvina
pythondev_help_Alvina_2019-03-04T15:31:04.525800
1,551,713,464.5258
12,097
pythondev
help
not sure if it would be worth it
2019-03-04T15:31:08.526000
Alvina
pythondev_help_Alvina_2019-03-04T15:31:08.526000
1,551,713,468.526
12,098
pythondev
help
i have been considering moving over to a file format (like parquet) instead of using postgres just because my actual database operations are pretty limited - I don't really normalize most tables or use foreign keys. I do need to maintain unique constraints and update data though
2019-03-04T15:32:27.527300
Alvina
pythondev_help_Alvina_2019-03-04T15:32:27.527300
1,551,713,547.5273
12,099
pythondev
help
probably need a DB for storage, but I love parquet for transferring tabular data around. Compression is great and the io speed is good too
2019-03-04T15:38:55.528000
Bethany
pythondev_help_Bethany_2019-03-04T15:38:55.528000
1,551,713,935.528
12,100
pythondev
help
Do you control the REST API?
2019-03-04T16:14:35.529200
Carmen
pythondev_help_Carmen_2019-03-04T16:14:35.529200
1,551,716,075.5292
12,101
pythondev
help
If so, you could use a resumable API to parallelize the file upload. Retain the benefits of compression for network bandwidth, while still breaking it into chunks for the fastest possible parallel upload speed.
2019-03-04T16:15:44.530400
Carmen
pythondev_help_Carmen_2019-03-04T16:15:44.530400
1,551,716,144.5304
12,102
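A hedged sketch of the parallel chunk upload idea with `concurrent.futures`; `upload_chunk` here is a stand-in for the real REST call (a PUT per part of a resumable upload, for instance):

```python
from concurrent.futures import ThreadPoolExecutor

def upload_chunk(part):
    """Stand-in for the real upload: a real version would POST/PUT payload."""
    index, payload = part
    return index, len(payload)

def parallel_upload(chunks, workers=4):
    # Threads are fine here: the work is network-bound, not CPU-bound.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(upload_chunk, enumerate(chunks)))
```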
pythondev
help
hey all AWS question. I have a lambda function that serves a ML model. It's meant to run daily. I want to ping my team's MS teams chat room with the status of the run (success, fail, traceback). Is there a preferred way to do this? (monitor logs, push to SNS or SQS, directly transmit from lambda to ms teams). Keep in mind the function is also returning the results to a caller. (should the caller be the one to do this?)
2019-03-04T17:15:46.532700
Bethany
pythondev_help_Bethany_2019-03-04T17:15:46.532700
1,551,719,746.5327
12,103
pythondev
help
related note, is there like a cloud-centric architecture patterns book or something?
2019-03-04T17:17:02.533100
Bethany
pythondev_help_Bethany_2019-03-04T17:17:02.533100
1,551,719,822.5331
12,104
pythondev
help
Can you use an incoming webhook and just make a POST from the Lambda?
2019-03-04T17:19:04.533200
Lillia
pythondev_help_Lillia_2019-03-04T17:19:04.533200
1,551,719,944.5332
12,105
pythondev
help
Does MS Teams have a webhook thing like Slack? (missed that part)
2019-03-04T17:19:26.533500
Lillia
pythondev_help_Lillia_2019-03-04T17:19:26.533500
1,551,719,966.5335
12,106
pythondev
help
yea
2019-03-04T17:20:00.533700
Bethany
pythondev_help_Bethany_2019-03-04T17:20:00.533700
1,551,720,000.5337
12,107
pythondev
help
so i have a lambda that can take a payload and forward it to ms teams
2019-03-04T17:20:17.533900
Bethany
pythondev_help_Bethany_2019-03-04T17:20:17.533900
1,551,720,017.5339
12,108
pythondev
help
but I don't want problems with publishing to ms teams breaking my inference pipeline
2019-03-04T17:20:46.534100
Bethany
pythondev_help_Bethany_2019-03-04T17:20:46.534100
1,551,720,046.5341
12,109
pythondev
help
You can ignore a failed request.
2019-03-04T17:21:01.534300
Lillia
pythondev_help_Lillia_2019-03-04T17:21:01.534300
1,551,720,061.5343
12,110
pythondev
help
oh simple enough then
2019-03-04T17:21:10.534500
Bethany
pythondev_help_Bethany_2019-03-04T17:21:10.534500
1,551,720,070.5345
12,111
pythondev
help
Just wrap that POST in a try-catch
2019-03-04T17:21:15.534700
Lillia
pythondev_help_Lillia_2019-03-04T17:21:15.534700
1,551,720,075.5347
12,112
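The wrap-the-POST-in-a-try-catch advice might look like this inside a Lambda, using only the stdlib; the webhook URL and payload shape are assumptions (Teams/Slack-style incoming webhooks accept a JSON body):

```python
import json
import urllib.request

def notify(webhook_url, status):
    """POST the run status to an incoming webhook; never raise."""
    try:
        req = urllib.request.Request(
            webhook_url,
            data=json.dumps({"text": status}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=5)
        return True
    except Exception:
        # Ignore a failed request so a notification problem never
        # breaks the inference pipeline itself.
        return False
```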
pythondev
help
<@Bethany> There are some, but I wouldn't count on them being too relevant for too long. Things are moving fast. I personally try to find case studies (sometimes directly within AWS learning center/docs) and go over those. They describe real-world scenarios and their solutions. As for your Lambda question, I'd imagine (without knowing what MS chat room is), that there is an endpoint (much like on slack) that you can just send the notification to. Question is -- can you do it (assuming the Lambda is in Python) asynchronously, without blocking the rest of it? Is it even important (since the notification is possibly the last thing the Lambda does)? If the answer is yes, I'd find MS chat room endpoint you can send your notification to (much like Slack's webhooks) and not bother with an intermediary like SNS.
2019-03-04T21:00:02.542000
Kara
pythondev_help_Kara_2019-03-04T21:00:02.542000
1,551,733,202.542
12,113
pythondev
help
*Intro* Coming back to Python after a long hiatus, I have a couple of questions in the realm of application dependency management. It seems that it is still a mess, what with the venv, wrapper, poetry and pipenv controversy and so on... *Questions* 1. (more general one) why, in every single folder structure recommendation, there is a redundancy in folder names? like, if I have a `my-module` module, the "main" code folder would be `my-module/my-module` as opposed to `my-module/lib` or `my-module/src` or some other similar, more abstract naming convention? 2. is there any sort of consensus, aside from `pip freeze > requirements.txt` on how to separate dev. dependencies from production ones and the proper way to manage them *without going into poetry/pipenv and such*?
2019-03-04T21:04:34.545600
Kara
pythondev_help_Kara_2019-03-04T21:04:34.545600
1,551,733,474.5456
12,114
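One pip-only convention for question 2, without poetry/pipenv: layered requirements files, where the dev file pulls in the production one via `-r` and adds tooling on top. The package pins below are hypothetical:

```
# requirements.txt        (production dependencies only)
requests==2.21.0

# requirements-dev.txt    (development: everything above, plus tooling)
-r requirements.txt
pytest==4.3.0
flake8==3.7.7
```

Then `pip install -r requirements.txt` in production and `pip install -r requirements-dev.txt` locally.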
pythondev
help
I need to find dbg/debuginfo (both deb and rpm) packages in an application repository depending on the app version and OS (centos, ubuntu, debian, etc). Does anyone have an example of a good python crawler that I can use for this purpose? I will take good framework suggestions.
2019-03-04T21:28:31.546200
Jennifer
pythondev_help_Jennifer_2019-03-04T21:28:31.546200
1,551,734,911.5462
12,115
pythondev
help
Getting a `SystemError: Parent module not loaded, cannot perform relative import` on a Fedora server. The directory structure looks like this: ```root/ | +-- task_scripts/ | | | +-- script_to_run_that_imports_from_src.py | +-- src/ | +-- imported_into_task_script_and_also_used_in_main_project.py | +-- other_src_files...py ``` Works just fine on my local Windows development box, so I assume there's an OS specific issue I am missing? I guess I could get rid of the task script and find another way to do what it does, which it currently needs to import a class from the main project to do, and I was trying to not spend a ton more time on it, but maybe I'll have to.
2019-03-04T21:52:57.550400
Pilar
pythondev_help_Pilar_2019-03-04T21:52:57.550400
1,551,736,377.5504
12,116
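That error typically appears when a module using relative imports is executed directly as a script, on any OS; the Windows run may simply have been invoked differently (working directory or path setup). Two common fixes, sketched under that assumption for the layout in the message above:

```python
# Fix 1: run the task script as a module from the project root, so the
# package context exists (requires __init__.py in task_scripts/ and src/):
#     python -m task_scripts.script_to_run_that_imports_from_src
#
# Fix 2: put the project root on sys.path and use an absolute import.
# Assumes this file lives in task_scripts/.
import os
import sys

ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, ROOT)
# from src import imported_into_task_script_and_also_used_in_main_project
```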
pythondev
help
what’s the most pythonic way to filter a list of dictionaries to just ones that have a “name” key that equals a certain value ?
2019-03-05T01:39:25.553100
Jeanie
pythondev_help_Jeanie_2019-03-05T01:39:25.553100
1,551,749,965.5531
12,117
pythondev
help
How about `results = [d for d in my_dicts if d['name'] == target]`? If it's not guaranteed that the `'name'` key exists, you can also use `d.get('name', None)` to avoid a `KeyError`.
2019-03-05T01:43:47.554600
Sasha
pythondev_help_Sasha_2019-03-05T01:43:47.554600
1,551,750,227.5546
12,118
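A runnable demo of the comprehension suggested above, with `.get()` guarding against dicts that lack a `"name"` key (the sample data is hypothetical):

```python
my_dicts = [{"name": "a", "v": 1}, {"name": "b"}, {"v": 3}]
target = "a"

# d.get("name") returns None for the keyless dict instead of raising KeyError.
results = [d for d in my_dicts if d.get("name") == target]
```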
pythondev
help
Hi guys, I was going over a lil bit of computation, and wanted to do a one liner so I came up with something like this. ``` foo = True spam = 23 100 + (spam if foo else 0) ``` Then when I tried looking for a more elegant solution, I happened to come across this kind of syntax: ``` foo = True spam = 23 100 + [0,spam][foo] ``` Can anybody tell me where I can find this in the docs? I would like to do more reading on this. And if there are any pros/cons for the 2nd snippet, please do tell. :slightly_smiling_face:
2019-03-05T01:46:33.556800
Philip
pythondev_help_Philip_2019-03-05T01:46:33.556800
1,551,750,393.5568
12,119
pythondev
help
I'm not sure of a doc reference, but it's casting the boolean value into an integer array index, with `False` -> `0` and `True` -> `1`. So then it picks either the 0th or the 1st element of the list `[0, spam]`.
2019-03-05T01:48:34.558300
Sasha
pythondev_help_Sasha_2019-03-05T01:48:34.558300
1,551,750,514.5583
12,120
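Both snippets from the question, side by side. `[0, spam][foo]` works because `bool` is a subclass of `int` (`False == 0`, `True == 1`), which is covered under Python's numeric types in the standard type docs. The conditional expression is the idiomatic form, and unlike the list trick it evaluates only the branch it needs, whereas `[0, spam]` always builds both elements:

```python
foo = True
spam = 23

a = 100 + (spam if foo else 0)   # conditional expression (idiomatic)
b = 100 + [0, spam][foo]         # indexing a list with a bool
```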