workspace | channel | sentences | ts | user | sentence_id | timestamp | __index_level_0__ |
---|---|---|---|---|---|---|---|
pythondev | help | and if I print soup I get &rid= just before the string that I want to get | 2019-04-05T08:46:19.553700 | Many | pythondev_help_Many_2019-04-05T08:46:19.553700 | 1,554,453,979.5537 | 17,121 |
pythondev | help | so it's something like | 2019-04-05T08:47:09.554000 | Many | pythondev_help_Many_2019-04-05T08:47:09.554000 | 1,554,454,029.554 | 17,122 |
pythondev | help | `<a class="class" href="page.php?next=x&amp;i=104&amp;b=6&amp;c=3&amp;d=352263245324312167&amp;rid=14072904921&amp;dd=231">` | 2019-04-05T08:48:41.555200 | Many | pythondev_help_Many_2019-04-05T08:48:41.555200 | 1,554,454,121.5552 | 17,123 |
pythondev | help | Thanks for the idea | 2019-04-05T08:49:40.555400 | Deirdre | pythondev_help_Deirdre_2019-04-05T08:49:40.555400 | 1,554,454,180.5554 | 17,124 |
pythondev | help | :smile: | 2019-04-05T08:49:43.555600 | Deirdre | pythondev_help_Deirdre_2019-04-05T08:49:43.555600 | 1,554,454,183.5556 | 17,125 |
pythondev | help | Beautiful Soup will HTML-escape the output when printing, but it's still just an `&` underneath, for example: | 2019-04-05T08:50:55.556200 | Wilber | pythondev_help_Wilber_2019-04-05T08:50:55.556200 | 1,554,454,255.5562 | 17,126 |
pythondev | help | None | 2019-04-05T08:51:14.556300 | Wilber | pythondev_help_Wilber_2019-04-05T08:51:14.556300 | 1,554,454,274.5563 | 17,127 |
pythondev | help | will check | 2019-04-05T08:51:25.556800 | Many | pythondev_help_Many_2019-04-05T08:51:25.556800 | 1,554,454,285.5568 | 17,128 |
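The snippet Wilber posted was removed, but the point can be illustrated with a minimal sketch (the HTML is taken from the example above; `html.parser` and the `rid` extraction are assumptions):
```
from bs4 import BeautifulSoup
from urllib.parse import urlparse, parse_qs

html = '<a class="class" href="page.php?next=x&amp;rid=14072904921&amp;dd=231">link</a>'
soup = BeautifulSoup(html, "html.parser")

# Printing re-escapes "&" as "&amp;", but the parsed attribute is a plain "&".
href = soup.a["href"]                   # 'page.php?next=x&rid=14072904921&dd=231'
params = parse_qs(urlparse(href).query)
print(params["rid"][0])                 # '14072904921'
```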
pythondev | help | Aha. Okay, onto the next round of questions: Do you have specific apps that you need to support SSO for? If they have specific requirements for protocols, that makes the determination for you as well. If not, if everyone is going to be integrating after the fact with your internal solution, I'd pick the easiest and cheapest one to setup. You can always transition later to a different piece of software as your identity provider (though that can admittedly be a difficult task). | 2019-04-05T09:03:24.556900 | Carmen | pythondev_help_Carmen_2019-04-05T09:03:24.556900 | 1,554,455,004.5569 | 17,129 |
pythondev | help | <@Sasha> <@Chuck> I know I'm chiming in rather late on this, but a few thoughts from my experience:
• Removing branches is not so common in other SCMs like SVN, but I see it fairly commonly in Git repos. Branch early, branch often, and throw them away when you don't need them anymore.
• `git tag` should be your go-to for release versioning. | 2019-04-05T09:08:50.559100 | Carmen | pythondev_help_Carmen_2019-04-05T09:08:50.559100 | 1,554,455,330.5591 | 17,130 |
pythondev | help | we do prune a lot at work | 2019-04-05T09:11:43.559400 | Hiroko | pythondev_help_Hiroko_2019-04-05T09:11:43.559400 | 1,554,455,503.5594 | 17,131 |
pythondev | help | but it depends on the team | 2019-04-05T09:11:49.559700 | Hiroko | pythondev_help_Hiroko_2019-04-05T09:11:49.559700 | 1,554,455,509.5597 | 17,132 |
pythondev | help | some projects have 2000 branches, others have 5 | 2019-04-05T09:12:00.560000 | Hiroko | pythondev_help_Hiroko_2019-04-05T09:12:00.560000 | 1,554,455,520.56 | 17,133 |
pythondev | help | github makes it easy to clear out branches when making PRs | 2019-04-05T09:12:20.560300 | Hiroko | pythondev_help_Hiroko_2019-04-05T09:12:20.560300 | 1,554,455,540.5603 | 17,134 |
pythondev | help | Does anyone have any articles on how to optimize a Folium map? I created over 100 points by Long, Lat and when I open the HTML it's very slow to load. It's at 79.3 MB | 2019-04-05T09:18:23.561800 | Nola | pythondev_help_Nola_2019-04-05T09:18:23.561800 | 1,554,455,903.5618 | 17,135 |
pythondev | help | <@Kesha> had you come across anything on this? I know you were doing some similar work lately. | 2019-04-05T09:22:57.562200 | Karoline | pythondev_help_Karoline_2019-04-05T09:22:57.562200 | 1,554,456,177.5622 | 17,136 |
pythondev | help | <@Karoline> <@Nola>
I have no idea on optimisation unfortunately. The first couple of maps that I built I had a similar issue. <@Nola> if possible it's probably worth sharing the code to fully understand the problem. You shouldn't be having any problems with 100 points | 2019-04-05T09:25:45.563900 | Kesha | pythondev_help_Kesha_2019-04-05T09:25:45.563900 | 1,554,456,345.5639 | 17,137 |
pythondev | help | Yes will share | 2019-04-05T09:29:04.564100 | Nola | pythondev_help_Nola_2019-04-05T09:29:04.564100 | 1,554,456,544.5641 | 17,138 |
pythondev | help | here is the excel and notebook | 2019-04-05T09:30:28.564300 | Nola | pythondev_help_Nola_2019-04-05T09:30:28.564300 | 1,554,456,628.5643 | 17,139 |
pythondev | help | Share as a snippet please? :pray: | 2019-04-05T09:31:18.565000 | Kesha | pythondev_help_Kesha_2019-04-05T09:31:18.565000 | 1,554,456,678.565 | 17,140 |
pythondev | help | got it sorry about that | 2019-04-05T09:31:28.565200 | Nola | pythondev_help_Nola_2019-04-05T09:31:28.565200 | 1,554,456,688.5652 | 17,141 |
pythondev | help | None | 2019-04-05T09:31:55.565600 | Nola | pythondev_help_Nola_2019-04-05T09:31:55.565600 | 1,554,456,715.5656 | 17,142 |
pythondev | help | <@Kesha> i think i will still run into the problem when i plot the other 2 columns in red and green, i removed duplicates to help but still very slow | 2019-04-05T10:02:32.567600 | Nola | pythondev_help_Nola_2019-04-05T10:02:32.567600 | 1,554,458,552.5676 | 17,143 |
pythondev | help | how many rows are you dealing with here? | 2019-04-05T10:03:10.568000 | Hiroko | pythondev_help_Hiroko_2019-04-05T10:03:10.568000 | 1,554,458,590.568 | 17,144 |
pythondev | help | <@Hiroko> shape = (29870, 4) | 2019-04-05T10:04:05.568400 | Nola | pythondev_help_Nola_2019-04-05T10:04:05.568400 | 1,554,458,645.5684 | 17,145 |
pythondev | help | thats when I remove duplicates | 2019-04-05T10:04:20.568700 | Nola | pythondev_help_Nola_2019-04-05T10:04:20.568700 | 1,554,458,660.5687 | 17,146 |
pythondev | help | you’re dumping out 30k rows in html? | 2019-04-05T10:05:54.569000 | Hiroko | pythondev_help_Hiroko_2019-04-05T10:05:54.569000 | 1,554,458,754.569 | 17,147 |
pythondev | help | :grimacing: | 2019-04-05T10:06:05.569200 | Hiroko | pythondev_help_Hiroko_2019-04-05T10:06:05.569200 | 1,554,458,765.5692 | 17,148 |
pythondev | help | that’s a pretty hefty data dump | 2019-04-05T10:06:17.569500 | Hiroko | pythondev_help_Hiroko_2019-04-05T10:06:17.569500 | 1,554,458,777.5695 | 17,149 |
pythondev | help | lmao help me out here, i have no clue about html. | 2019-04-05T10:06:24.569700 | Nola | pythondev_help_Nola_2019-04-05T10:06:24.569700 | 1,554,458,784.5697 | 17,150 |
pythondev | help | I think I'm going to tell my boss the business problem won't work. He basically wants a map of 3 categories, I provided using Basemap then he wanted to know more information and if we could zoom (pulls hair out) | 2019-04-05T10:07:33.571100 | Nola | pythondev_help_Nola_2019-04-05T10:07:33.571100 | 1,554,458,853.5711 | 17,151 |
pythondev | help | do you need _all_ the info you’re dumping out? | 2019-04-05T10:08:09.571400 | Hiroko | pythondev_help_Hiroko_2019-04-05T10:08:09.571400 | 1,554,458,889.5714 | 17,152 |
pythondev | help | why not do clustering? | 2019-04-05T10:08:23.571800 | Hiroko | pythondev_help_Hiroko_2019-04-05T10:08:23.571800 | 1,554,458,903.5718 | 17,153 |
pythondev | help | Thats a good idea, I might try that. cluster, find the max clustering and make a radius for the plot? this (picture below) was where it started and he asked for more information. So then i asked if we could identify states to look into and I do a basemap on those states, his question was "is there a way we can use this and zoom" Cmonnn <@Hiroko> | 2019-04-05T10:10:18.571900 | Nola | pythondev_help_Nola_2019-04-05T10:10:18.571900 | 1,554,459,018.5719 | 17,154 |
pythondev | help | yeah, clustering | 2019-04-05T10:19:01.572600 | Hiroko | pythondev_help_Hiroko_2019-04-05T10:19:01.572600 | 1,554,459,541.5726 | 17,155 |
pythondev | help | because there’s no way you can easily dump 30k POI on a map in a browser and expect it to be zippy | 2019-04-05T10:19:44.573300 | Hiroko | pythondev_help_Hiroko_2019-04-05T10:19:44.573300 | 1,554,459,584.5733 | 17,156 |
pythondev | help | Realistically, resolution of your map at any particular zoom level is going to cause individual points to overlap anyway, so clustering makes perfect sense. | 2019-04-05T10:32:51.574600 | Carmen | pythondev_help_Carmen_2019-04-05T10:32:51.574600 | 1,554,460,371.5746 | 17,157 |
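A hedged sketch of the clustering suggestion using Folium's MarkerCluster plugin (the file name and the `Lat`/`Long` column names are assumptions):
```
import folium
import pandas as pd
from folium.plugins import MarkerCluster

df = pd.read_excel("points.xlsx")                       # hypothetical input file
m = folium.Map(location=[df["Lat"].mean(), df["Long"].mean()], zoom_start=5)

cluster = MarkerCluster().add_to(m)                     # points are grouped per zoom level
for lat, lon in zip(df["Lat"], df["Long"]):
    folium.Marker(location=[lat, lon]).add_to(cluster)

m.save("map.html")                                      # far lighter than 30k raw markers
```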
pythondev | help | haha <@Hiroko> thank you so much for helping on this, I've been pulling my hair out. | 2019-04-05T10:33:43.574900 | Nola | pythondev_help_Nola_2019-04-05T10:33:43.574900 | 1,554,460,423.5749 | 17,158 |
pythondev | help | are you doing a lot of GIS/mapping work? | 2019-04-05T10:39:26.575500 | Hiroko | pythondev_help_Hiroko_2019-04-05T10:39:26.575500 | 1,554,460,766.5755 | 17,159 |
pythondev | help | oh, and welcome. glad to help! | 2019-04-05T10:39:47.575800 | Hiroko | pythondev_help_Hiroko_2019-04-05T10:39:47.575800 | 1,554,460,787.5758 | 17,160 |
pythondev | help | sorry, what's GIS? I'm only doing this for this specific project, but it's not a normal task, if that's what you mean | 2019-04-05T10:48:28.575900 | Nola | pythondev_help_Nola_2019-04-05T10:48:28.575900 | 1,554,461,308.5759 | 17,161 |
pythondev | help | <https://en.wikipedia.org/wiki/Geographic_information_system> | 2019-04-05T10:52:14.576100 | Hiroko | pythondev_help_Hiroko_2019-04-05T10:52:14.576100 | 1,554,461,534.5761 | 17,162 |
pythondev | help | Hey folks, I am having trouble learning kwargs. Here’s the script I have saved as “test.py”
```from sys import argv
def print_values(**kwargs):
    for key, value in kwargs.items():
        print("The value of {} is {}".format(key, value))```
I run this command in my terminal, but I don’t get any output:
```python3 test.py {'USD': 12.124493, 'CAD': 1.497565}```
why is the dictionary empty, shouldn’t that pass kwargs to my function? | 2019-04-05T10:56:58.576800 | Bethann | pythondev_help_Bethann_2019-04-05T10:56:58.576800 | 1,554,461,818.5768 | 17,163 |
pythondev | help | input from a command line and input from another function are very different things | 2019-04-05T10:58:34.577500 | Hiroko | pythondev_help_Hiroko_2019-04-05T10:58:34.577500 | 1,554,461,914.5775 | 17,164 |
pythondev | help | if you have something like
```if __name__ == '__main__':
    print_values(usd=12.124493, cad=1.497565)
```
in your script, then you’d see the output | 2019-04-05T11:00:15.579000 | Hiroko | pythondev_help_Hiroko_2019-04-05T11:00:15.579000 | 1,554,462,015.579 | 17,165 |
pythondev | help | ok I will try this. Do you think this would be helpful for me to look into <https://docs.python.org/3/howto/argparse.html> | 2019-04-05T11:01:41.579500 | Bethann | pythondev_help_Bethann_2019-04-05T11:01:41.579500 | 1,554,462,101.5795 | 17,166 |
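A minimal sketch of one way to bridge the two, assuming a `key=value` convention on the command line (argparse would work too; this just stays close to the original script):
```
import sys

def print_values(**kwargs):
    for key, value in kwargs.items():
        print("The value of {} is {}".format(key, value))

if __name__ == "__main__":
    # e.g.  python3 test.py USD=12.124493 CAD=1.497565
    pairs = dict(arg.split("=", 1) for arg in sys.argv[1:])
    print_values(**pairs)
```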
pythondev | help | hey guys, I'm curious what would be the best module for mapping out zip code coordinates? something like this <https://www.google.com/search?q=zip+code+population+map&safe=off&client=opera&hs=YHo&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiz3aCFn7nhAhVJSK0KHWW5Bm4Q_AUIDigB&biw=1040&bih=1819#imgrc=LmFAOPe_pK48SM>: | 2019-04-05T11:14:53.580500 | Nenita | pythondev_help_Nenita_2019-04-05T11:14:53.580500 | 1,554,462,893.5805 | 17,167 |
pythondev | help | <@Nenita> you mean an API for getting those coordinates? or a data structure to hold them? | 2019-04-05T11:26:54.581100 | Ashley | pythondev_help_Ashley_2019-04-05T11:26:54.581100 | 1,554,463,614.5811 | 17,168 |
pythondev | help | I just need to map them. I have all the data and structure (or at least can structure it as needed) and I was looking for a simple way to map them onto a single state. For now, I found a pretty good example using plotly @ <https://plot.ly/python/county-choropleth/> for a single state | 2019-04-05T11:27:59.582200 | Nenita | pythondev_help_Nenita_2019-04-05T11:27:59.582200 | 1,554,463,679.5822 | 17,169 |
pythondev | help | if I could narrow it down to a city, that would be even better, but this may work for now | 2019-04-05T11:28:52.582600 | Nenita | pythondev_help_Nenita_2019-04-05T11:28:52.582600 | 1,554,463,732.5826 | 17,170 |
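A hedged sketch of a zip-code-level choropleth with plotly.express; the GeoJSON file, its `featureidkey`, and the DataFrame columns are placeholders you'd swap for your own data:
```
import json
import pandas as pd
import plotly.express as px

df = pd.DataFrame({"zip": ["78701", "78702"], "population": [12000, 23000]})
with open("zip_codes.geojson") as f:               # hypothetical zip-code boundaries
    zips = json.load(f)

fig = px.choropleth(df, geojson=zips, locations="zip",
                    featureidkey="properties.ZCTA5CE10",   # assumed property name
                    color="population", scope="usa")
fig.show()
```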
pythondev | help | geez, I'm getting so many errors trying to install the packages needed for that map example. I need to install 3 modules and all 3 are erroring out. pip install geopandas==0.3.0
pip install pyshp==1.2.10
pip install shapely==1.6.3 | 2019-04-05T11:44:54.583400 | Nenita | pythondev_help_Nenita_2019-04-05T11:44:54.583400 | 1,554,464,694.5834 | 17,171 |
pythondev | help | None | 2019-04-05T11:49:42.583500 | Jolanda | pythondev_help_Jolanda_2019-04-05T11:49:42.583500 | 1,554,464,982.5835 | 17,172 |
pythondev | help | this doesn't work either: `b= """splash:runjs("$('{div} > {tag}').filter(function () {return $(this).text() == {content};}).css('color', 'red');;")""".format(**atribs) ` | 2019-04-05T11:50:35.584000 | Jolanda | pythondev_help_Jolanda_2019-04-05T11:50:35.584000 | 1,554,465,035.584 | 17,173 |
pythondev | help | the above returns `Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: unexpected '{' in field name
unexpected '{' in field name
` | 2019-04-05T11:51:23.584400 | Jolanda | pythondev_help_Jolanda_2019-04-05T11:51:23.584400 | 1,554,465,083.5844 | 17,174 |
pythondev | help | ???? | 2019-04-05T11:54:36.584700 | Hiroko | pythondev_help_Hiroko_2019-04-05T11:54:36.584700 | 1,554,465,276.5847 | 17,175 |
pythondev | help | what are you trying to do? | 2019-04-05T11:54:41.584900 | Hiroko | pythondev_help_Hiroko_2019-04-05T11:54:41.584900 | 1,554,465,281.5849 | 17,176 |
pythondev | help | Replace {div}, {tag}, {content} with strings | 2019-04-05T11:56:32.585700 | Jolanda | pythondev_help_Jolanda_2019-04-05T11:56:32.585700 | 1,554,465,392.5857 | 17,177 |
pythondev | help | to become this: `splash:runjs("$('div > textarea').filter(function () {return $(this).text() == "teste";}).css('color', 'red');;")` | 2019-04-05T11:57:57.586500 | Jolanda | pythondev_help_Jolanda_2019-04-05T11:57:57.586500 | 1,554,465,477.5865 | 17,178 |
pythondev | help | actually I got this error `KeyError: 'return $(this)'` | 2019-04-05T11:59:58.586900 | Jolanda | pythondev_help_Jolanda_2019-04-05T11:59:58.586900 | 1,554,465,598.5869 | 17,179 |
pythondev | help | Because of the return {} | 2019-04-05T12:00:06.587200 | Jolanda | pythondev_help_Jolanda_2019-04-05T12:00:06.587200 | 1,554,465,606.5872 | 17,180 |
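The underlying issue is that `str.format` treats every `{` and `}` as a placeholder, so the literal braces in the JavaScript have to be doubled. A minimal sketch, with values taken from the example above:
```
atribs = {"div": "div", "tag": "textarea", "content": '"teste"'}

template = '''splash:runjs("$('{div} > {tag}').filter(function () {{return $(this).text() == {content};}}).css('color', 'red');;")'''
b = template.format(**atribs)
print(b)
# splash:runjs("$('div > textarea').filter(function () {return $(this).text() == "teste";}).css('color', 'red');;")
```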
pythondev | help | None | 2019-04-05T12:16:32.587600 | Jolanda | pythondev_help_Jolanda_2019-04-05T12:16:32.587600 | 1,554,466,592.5876 | 17,181 |
pythondev | help | funny you were just talking about GIS/mapping, I've been prototyping something in that area for the last couple days now, boss just showed the MVP to the clients and they're sold. it's quite fun actually | 2019-04-05T12:27:04.589200 | Carlo | pythondev_help_Carlo_2019-04-05T12:27:04.589200 | 1,554,467,224.5892 | 17,182 |
pythondev | help | Another problem you have is that your shell is probably not interpreting that as a single string argument like you intend. You should wrap that JSON inside double-quotes. | 2019-04-05T12:28:51.589500 | Carmen | pythondev_help_Carmen_2019-04-05T12:28:51.589500 | 1,554,467,331.5895 | 17,183 |
pythondev | help | any recommended db driver to use for python3? the one I'm use to isn't ported to python3 | 2019-04-05T13:38:01.589800 | Holly | pythondev_help_Holly_2019-04-05T13:38:01.589800 | 1,554,471,481.5898 | 17,184 |
pythondev | help | Honestly, no. I tend to use the official MySQL Python database module, but I don't use Python3 with MySQL enough to know if it's supported or not. | 2019-04-05T13:58:39.590100 | Carmen | pythondev_help_Carmen_2019-04-05T13:58:39.590100 | 1,554,472,719.5901 | 17,185 |
pythondev | help | yeah, that's the one that's not in Python 3 :confused:; it looks like pymysql is quite similar, ty | 2019-04-05T14:14:45.590400 | Holly | pythondev_help_Holly_2019-04-05T14:14:45.590400 | 1,554,473,685.5904 | 17,186 |
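A hedged sketch of PyMySQL usage, since it follows the usual DB-API shape (connection parameters are placeholders):
```
import pymysql

conn = pymysql.connect(host="localhost", user="user", password="secret", database="mydb")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        print(cur.fetchone())
finally:
    conn.close()
```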
pythondev | help | Hi Everyone, I was wondering if anyone has any suggestions for getting message digests for large files (tb)? | 2019-04-05T16:11:47.592200 | Aura | pythondev_help_Aura_2019-04-05T16:11:47.592200 | 1,554,480,707.5922 | 17,187 |
pythondev | help | message digests? | 2019-04-05T16:22:59.592500 | Hiroko | pythondev_help_Hiroko_2019-04-05T16:22:59.592500 | 1,554,481,379.5925 | 17,188 |
pythondev | help | do you mean summaries, diffs, what? <@Aura> | 2019-04-05T16:23:13.592900 | Hiroko | pythondev_help_Hiroko_2019-04-05T16:23:13.592900 | 1,554,481,393.5929 | 17,189 |
pythondev | help | Hi <@Hiroko> , Trying to use the hashlib library, trying to make it as efficient as possible | 2019-04-05T16:24:11.593500 | Aura | pythondev_help_Aura_2019-04-05T16:24:11.593500 | 1,554,481,451.5935 | 17,190 |
pythondev | help | I don’t know if that’ll work for anything other than just telling the files are different | 2019-04-05T16:25:42.594000 | Hiroko | pythondev_help_Hiroko_2019-04-05T16:25:42.594000 | 1,554,481,542.594 | 17,191 |
pythondev | help | if you want to know what has changed, there are a few options, but they're highly dependent on the format of the data | 2019-04-05T16:26:10.594600 | Hiroko | pythondev_help_Hiroko_2019-04-05T16:26:10.594600 | 1,554,481,570.5946 | 17,192 |
pythondev | help | is it binary, json, plain text, csv, etc? | 2019-04-05T16:26:22.595000 | Hiroko | pythondev_help_Hiroko_2019-04-05T16:26:22.595000 | 1,554,481,582.595 | 17,193 |
pythondev | help | large json | 2019-04-05T16:28:30.595400 | Aura | pythondev_help_Aura_2019-04-05T16:28:30.595400 | 1,554,481,710.5954 | 17,194 |
pythondev | help | There are hash-tree techniques, for instance, where you can parallelize the processing and then combine the pieces. But I expect you'll be I/O limited regardless. | 2019-04-05T16:29:03.596100 | Sasha | pythondev_help_Sasha_2019-04-05T16:29:03.596100 | 1,554,481,743.5961 | 17,195 |
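A minimal sketch of the sequential baseline with hashlib: read the file in chunks so nothing close to a terabyte ever sits in memory (the chunk size is an assumption to tune):
```
import hashlib

def file_digest(path, algorithm="sha256", chunk_size=8 * 1024 * 1024):
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```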
pythondev | help | (Quietly sobbing at the thought of multi-terabyte JSON files...) | 2019-04-05T16:29:33.596600 | Sasha | pythondev_help_Sasha_2019-04-05T16:29:33.596600 | 1,554,481,773.5966 | 17,196 |
pythondev | help | yeah | 2019-04-05T16:29:48.597000 | Hiroko | pythondev_help_Hiroko_2019-04-05T16:29:48.597000 | 1,554,481,788.597 | 17,197 |
pythondev | help | that’s a really inefficient format for that size | 2019-04-05T16:29:55.597400 | Hiroko | pythondev_help_Hiroko_2019-04-05T16:29:55.597400 | 1,554,481,795.5974 | 17,198 |
pythondev | help | I totally agree! Thank you for the suggestion, it was kind of a hypothetical question to see how I can speed up creating digests in batches | 2019-04-05T16:31:33.598400 | Aura | pythondev_help_Aura_2019-04-05T16:31:33.598400 | 1,554,481,893.5984 | 17,199 |
pythondev | help | or a large file. Still learning hashing techniques, so I'll give the splitting / running in parallel a shot. Thanks a lot for your guidance | 2019-04-05T16:32:17.599400 | Aura | pythondev_help_Aura_2019-04-05T16:32:17.599400 | 1,554,481,937.5994 | 17,200 |
pythondev | help | Is there any other approach I can take without reading the file that you can think of? | 2019-04-05T16:33:34.600000 | Aura | pythondev_help_Aura_2019-04-05T16:33:34.600000 | 1,554,482,014.6 | 17,201 |
pythondev | help | It depends what your goal is. If you just want to detect changes, you could look at just the size and modification timestamp, for instance. Or you could do a stochastic sample of a subset of the file bytes and hash those. But in the general case, if you don't read every byte of the file, you'll miss any changes that involve the bytes you didn't read. | 2019-04-05T16:35:15.601500 | Sasha | pythondev_help_Sasha_2019-04-05T16:35:15.601500 | 1,554,482,115.6015 | 17,202 |
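A hedged sketch of the byte-sampling idea: hash the file size plus a fixed pseudo-random set of blocks, accepting that changes outside the sampled bytes go undetected (all parameters are assumptions):
```
import hashlib
import os
import random

def sampled_digest(path, samples=1024, block=4096, seed=0):
    size = os.path.getsize(path)
    rng = random.Random(seed)                 # fixed seed -> same offsets every run
    h = hashlib.sha256()
    h.update(str(size).encode())              # a size change alone changes the digest
    with open(path, "rb") as f:
        for _ in range(samples):
            f.seek(rng.randrange(max(size - block, 1)))
            h.update(f.read(block))
    return h.hexdigest()
```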
pythondev | help | sampling seems like a good way to do it. I'll make the accuracy an acceptance-rate parameter. Thanks again. Just joined the community and hoping to contribute | 2019-04-05T16:39:33.603400 | Aura | pythondev_help_Aura_2019-04-05T16:39:33.603400 | 1,554,482,373.6034 | 17,203 |
pythondev | help | does anyone know why the following script writes the header row to my output file twice? | 2019-04-05T16:40:13.603700 | Jorge | pythondev_help_Jorge_2019-04-05T16:40:13.603700 | 1,554,482,413.6037 | 17,204 |
pythondev | help | `writer.writeheader()` | 2019-04-05T16:40:47.604600 | Hiroko | pythondev_help_Hiroko_2019-04-05T16:40:47.604600 | 1,554,482,447.6046 | 17,205 |
pythondev | help | I have a dictionary with different number of angles as a key, each angle has a hexadecimal character as a value, how can I go through the dictionary with an angle that I receive through an input and return its value, only its value, not its key | 2019-04-05T16:40:56.604900 | Melia | pythondev_help_Melia_2019-04-05T16:40:56.604900 | 1,554,482,456.6049 | 17,206 |
pythondev | help | so... the header is normally in the rows, as well as being written by `writeheader`? | 2019-04-05T16:41:24.605400 | Jorge | pythondev_help_Jorge_2019-04-05T16:41:24.605400 | 1,554,482,484.6054 | 17,207 |
pythondev | help | why would you ever `writeheader` then, if it's in the rows by default? | 2019-04-05T16:41:45.606100 | Jorge | pythondev_help_Jorge_2019-04-05T16:41:45.606100 | 1,554,482,505.6061 | 17,208 |
pythondev | help | <@Melia> can you give an example dict entry? | 2019-04-05T16:41:55.606300 | Sasha | pythondev_help_Sasha_2019-04-05T16:41:55.606300 | 1,554,482,515.6063 | 17,209 |
pythondev | help | It sounds like you just want `mydict[angle]`, but I suspect it is not that simple. | 2019-04-05T16:42:39.606700 | Sasha | pythondev_help_Sasha_2019-04-05T16:42:39.606700 | 1,554,482,559.6067 | 17,210 |
pythondev | help | None | 2019-04-05T16:43:18.606800 | Melia | pythondev_help_Melia_2019-04-05T16:43:18.606800 | 1,554,482,598.6068 | 17,211 |
pythondev | help | Yeah, so why doesn't `angles[angle]` do what you want? Returns the value associated with a given key. | 2019-04-05T16:44:17.607600 | Sasha | pythondev_help_Sasha_2019-04-05T16:44:17.607600 | 1,554,482,657.6076 | 17,212 |
pythondev | help | It is a string versus numeric problem? | 2019-04-05T16:44:37.608000 | Sasha | pythondev_help_Sasha_2019-04-05T16:44:37.608000 | 1,554,482,677.608 | 17,213 |
pythondev | help | each angle is assumed to map to a hexadecimal character; I then concatenate 2 hexadecimal characters and translate them to a letter. I don't know if that makes sense?? | 2019-04-05T16:44:37.608100 | Melia | pythondev_help_Melia_2019-04-05T16:44:37.608100 | 1,554,482,677.6081 | 17,214 |
pythondev | help | ah.... weird
didn't realize you're not supposed to use the `fieldnames` argument in *csv.DictReader* if your file already has the header row in it | 2019-04-05T16:53:56.608900 | Jorge | pythondev_help_Jorge_2019-04-05T16:53:56.608900 | 1,554,483,236.6089 | 17,215 |
pythondev | help | it will treat the first row as data instead of the header | 2019-04-05T16:54:17.609300 | Jorge | pythondev_help_Jorge_2019-04-05T16:54:17.609300 | 1,554,483,257.6093 | 17,216 |
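A minimal sketch of the pattern Jorge landed on: let `csv.DictReader` pick the field names up from the file's first row, and call `writeheader()` exactly once on the writer (file names are placeholders):
```
import csv

with open("in.csv", newline="") as src, open("out.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)              # no fieldnames= -> first row becomes the header
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()                      # header written once
    for row in reader:
        writer.writerow(row)
```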
pythondev | help | how can I convert from hex to ASCII? | 2019-04-05T17:02:08.609900 | Melia | pythondev_help_Melia_2019-04-05T17:02:08.609900 | 1,554,483,728.6099 | 17,217 |
pythondev | help | Convert to integer, then use `chr()`. | 2019-04-05T17:09:41.610800 | Sasha | pythondev_help_Sasha_2019-04-05T17:09:41.610800 | 1,554,484,181.6108 | 17,218 |
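A minimal sketch tying the two questions together (the `angles` mapping is hypothetical):
```
angles = {45: "4", 120: "8"}           # angle -> hexadecimal character
pair = angles[45] + angles[120]        # "48"
print(chr(int(pair, 16)))              # 'H'  (hex 48 == decimal 72)
```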
pythondev | help | My custom-built algorithm for a Bag of Words. Would you rather do it this way or use scikit-learn?
<https://github.com/paulgureghian/Bag_of_Words> | 2019-04-05T17:38:20.612000 | Clayton | pythondev_help_Clayton_2019-04-05T17:38:20.612000 | 1,554,485,900.612 | 17,219 |
pythondev | help | Or NLTK ? | 2019-04-05T17:38:53.612300 | Clayton | pythondev_help_Clayton_2019-04-05T17:38:53.612300 | 1,554,485,933.6123 | 17,220 |
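For comparison, a minimal scikit-learn bag-of-words sketch with CountVectorizer (the example documents are made up):
```
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat", "the cat sat on the mat"]
vec = CountVectorizer()
X = vec.fit_transform(docs)            # sparse document-term matrix
print(sorted(vec.vocabulary_))         # learned vocabulary
print(X.toarray())
```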