Dataset columns (name, dtype, observed range or string length):

    Q_Id                                int64     337 to 49.3M
    CreationDate                        string    length 23
    Users Score                         int64     -42 to 1.15k
    Other                               int64     0 to 1
    Python Basics and Environment       int64     0 to 1
    System Administration and DevOps    int64     0 to 1
    Tags                                string    length 6 to 105
    A_Id                                int64     518 to 72.5M
    AnswerCount                         int64     1 to 64
    is_accepted                         bool      2 classes
    Web Development                     int64     0 to 1
    GUI and Desktop Applications        int64     0 to 1
    Answer                              string    length 6 to 11.6k
    Available Count                     int64     1 to 31
    Q_Score                             int64     0 to 6.79k
    Data Science and Machine Learning   int64     0 to 1
    Question                            string    length 15 to 29k
    Title                               string    length 11 to 150
    Score                               float64   -1 to 1.2
    Database and SQL                    int64     0 to 1
    Networking and APIs                 int64     0 to 1
    ViewCount                           int64     8 to 6.81M

The rows below list each record's values one per line, in the column order above.
37,379,793
2016-05-22T21:21:00.000
2
0
0
1
python,sockets,tcp,packet,datagram
37,380,443
1
true
0
0
Since TCP is only an octet stream this is not possible without glue, either around your data (i.e. framing) or inside your data (structure with clear end). The way this is typically done is either by having a delimiter (like \r\n\r\n between HTTP header and body) or just prefix your message with the size. In the latter case just read the size (fixed number of bytes) and then read this number of bytes for the actual message.
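Below is a minimal Python 3 sketch of the length-prefix framing this answer describes; the 4-byte big-endian header and the helper names (send_msg, recv_exact, recv_msg) are illustrative choices, not part of the original answer.

    import socket
    import struct

    def send_msg(sock, payload):
        # Prefix each message with a 4-byte big-endian length field.
        sock.sendall(struct.pack('>I', len(payload)) + payload)

    def recv_exact(sock, n):
        # Keep calling recv() until exactly n bytes have arrived.
        buf = b''
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError('socket closed mid-message')
            buf += chunk
        return buf

    def recv_msg(sock):
        # Read the length header first, then read exactly that many bytes.
        (length,) = struct.unpack('>I', recv_exact(sock, 4))
        return recv_exact(sock, length)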
1
0
0
I need to transmit whole packets in my custom format via TCP. But if I understand correctly, TCP is a streaming protocol, so when I call send on the sender side there is no guarantee it will be received in the same-sized chunk on the receiver side when recv is called (messages can be merged by Nagle's algorithm and then split when they do not fit into a frame or a buffer). UDP delivers whole datagrams, so it has no such issue. So the question is: what is the best and correct way to recv packets exactly as they were sent, with the same size and no glue? I am developing in Python. I think I could use something like HDLC, but I am not sure that iterating over each byte is the best choice. Are there small open-source examples for this situation, or is it described in books?
Correct architectural way to send "datagrams" via TCP
1.2
0
1
54
37,381,061
2016-05-23T00:28:00.000
0
0
0
0
python-3.x,pygame
37,381,094
1
true
0
1
In my experience, most uses for sprites can work with rectangular collision boxes. Only in a case such as a ball, where you need more precise collision, would you have to use an actual sprite. In the long run I would recommend learning sprites, and if you need help with them, the pygame documentation has loads of information!
1
2
0
Ok so I am very bad at sprites and I just don't get how to use sprites with blitted images. So I was wondering, if I make the image and just make a rectangle around that object that follows that image around, would that be a good replacement for sprites, the rectangle would be the one that's colliding for instances... Or should I try learning sprites. If I should is there any tutorial that could help me with using blitted images to make sprite characters that use python 3.4? Thank you!
Replacing sprites idea
1.2
0
0
25
37,383,059
2016-05-23T05:16:00.000
5
0
1
0
python,python-3.x
37,383,171
4
false
0
0
One simple option is to insert the items starting with the highest position, then continue with the 2nd highest, and so on. This way you can use the original method without any "old/new position" problem.
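A short sketch of the approach in the answer above, using the example lists from the question; the variable names are illustrative.

    values = ['a', 'b', 'c']
    target = [0, 1, 2, 3, 4]
    positions = [0, 2, 4]          # positions in the original list

    # Insert starting from the highest position so the lower ones stay valid.
    for pos, val in sorted(zip(positions, values), reverse=True):
        target.insert(pos, val)

    print(target)  # ['a', 0, 1, 'b', 2, 3, 'c', 4]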
1
9
0
I was wondering if there was a way to insert multiple variables into a list at one time using the same index. So for example, let's say we have a list of [a, b, c] and [0,1,2,3,4] and I wanted to insert the first list such that the end result is, [a, 0, 1, b, 2, 3, c, 4] But if I was going to do it individually using list.insert(pos, value)and used the positions of [0, 2, 4] then the subsequent positions used becomes invalid since it was relating to the old list of 5 elements instead of 6 now. Any suggestions?
How to insert multiple values by index into list at one time
0.244919
0
0
9,505
37,383,545
2016-05-23T06:01:00.000
1
0
0
0
python,mongodb,database-connection,pymongo,mongoengine
37,915,282
1
false
0
0
So after a lot of struggle and a lot of load testing, we solved the problem by upgrading PyMongo to 2.8.1. PyMongo 2.7.2 is the first version to support MongoDB 2.6 and it sure does have some problems handling connections. Upgrading to PyMongo 2.8.1 helped us resolve the issue. With the same load, the connections do not exceed 250-300.
1
0
0
I was running Mongo 2.4 with a replicaset which contains 1 primary, 1 secondary and an arbiter. We were using pymongo 2.6 and mongoengine 0.8.2. Recently, we performed an upgrade to Mongo 2.6, and also upgraded pymongo to 2.7.2 and mongoengine to 0.8.7. This setup worked fine for almost 12 hours, after which we started getting the below error : [initandlisten] connection refused because too many open connections: 819 The ulimit is 1024 which worked perfectly with Mongo 2.4 and hence we have not increased it. Increasing it might solve the problem temporarily but we will hit it again in the future. Somehow the root cause seems to be the inability of Pymongo to close the connections. Any pointers why this happens? P.S : When using Mongo 2.4 with pymongo 2.7.2 and mongoengine 0.8.7, everything works well and the max connections are about 250. Only with Mongo 2.6, pymongo 2.7.2 and mongoengine 0.8.7, the number of connections shoot up to 819.
Too many open connections with mongo 2.6 + pymongo 2.7.2
0.197375
1
0
586
37,386,595
2016-05-23T08:52:00.000
4
0
0
0
python,r,text-mining,lda,mallet
37,416,493
1
true
0
0
Thank you for this thorough summary! As an alternative to topicmodels try the package mallet in R. It runs Mallet in a JVM directly from R and allows you to pull out results as R tables. I expect to release a new version soon, and compatibility with tm constructs is something others have requested. To clarify, it's a good idea for documents to be at most around 1000 tokens long (not vocabulary). Any more and you start to lose useful information. The assumption of the model is that the position of a token within a given document doesn't tell you anything about that token's topic. That's rarely true for longer documents, so it helps to break them up. Another point I would add is that documents that are too short can also be a problem. Tweets, for example, don't seem to provide enough contextual information about word co-occurrence, so the model often devolves into a one-topic-per-doc clustering algorithm. Combining multiple related short documents can make a big difference. Vocabulary curation is in practice the most challenging part of a topic modeling workflow. Replacing selected multi-word terms with single tokens (for example by swapping spaces for underscores) before tokenizing is a very good idea. Stemming is almost never useful, at least for English. Automated methods can help vocabulary curation, but this step has a profound impact on results (much more than the number of topics) and I am reluctant to encourage people to fully trust any system. Parameters: I do not believe that there is a right number of topics. I recommend using a number of topics that provides the granularity that suits your application. Likelihood can often detect when you have too few topics, but after a threshold it doesn't provide much useful information. Using hyperparameter optimization makes models much less sensitive to this setting as well, which might reduce the number of parameters that you need to search over. Topic drift: This is not a well understood problem. More examples of real-world corpus change would be useful. Looking for changes in vocabulary (e.g. proportion of out-of-vocabulary words) is a quick proxy for how well a model will fit.
1
4
1
Introduction I'd like to know what other topic modellers consider to be an optimal topic-modelling workflow all the way from pre-processing to maintenance. While this question consists of a number of sub-questions (which I will specify below), I believe this thread would be useful for myself and others who are interested to learn about best practices of end-to-end process. Proposed Solution Specifications I'd like the proposed solution to preferably rely on R for text processing (but Python is fine also) and topic-modelling itself to be done in MALLET (although if you believe other solutions work better, please let us know). I tend to use the topicmodels package in R, however I would like to switch to MALLET as it offers many benefits over topicmodels. It can handle a lot of data, it does not rely on specific text pre-processing tools and it appears to be widely used for this purpose. However some of the issues outline below are also relevant for topicmodels too. I'd like to know how others approach topic modelling and which of the below steps could be improved. Any useful piece of advice is welcome. Outline Here is how it's going to work: I'm going to go through the workflow which in my opinion works reasonably well, and I'm going to outline problems at each step. Proposed Workflow 1. Clean text This involves removing punctuation marks, digits, stop words, stemming words and other text-processing tasks. Many of these can be done either as part of term-document matrix decomposition through functions such as for example TermDocumentMatrix from R's package tm. Problem: This however may need to be performed on the text strings directly, using functions such as gsub in order for MALLET to consume these strings. Performing in on the strings directly is not as efficient as it involves repetition (e.g. the same word would have to be stemmed several times) 2. Construct features In this step we construct a term-document matrix (TDM), followed by the filtering of terms based on frequency, and TF-IDF values. It is preferable to limit your bag of features to about 1000 or so. Next go through the terms and identify what requires to be (1) dropped (some stop words will make it through), (2) renamed or (3) merged with existing entries. While I'm familiar with the concept of stem-completion, I find that it rarely works well. Problem: (1) Unfortunately MALLET does not work with TDM constructs and to make use of your TDM, you would need to find the difference between the original TDM -- with no features removed -- and the TDM that you are happy with. This difference would become stop words for MALLET. (2) On that note I'd also like to point out that feature selection does require a substantial amount of manual work and if anyone has ideas on how to minimise it, please share your thoughts. Side note: If you decide to stick with R alone, then I can recommend the quanteda package which has a function dfm that accepts a thesaurus as one of the parameters. This thesaurus allows to to capture patterns (usually regex) as opposed to words themselves, so for example you could have a pattern \\bsign\\w*.?ups? that would match sign-up, signed up and so on. 3. Find optimal parameters This is a hard one. I tend to break data into test-train sets and run cross-validation fitting a model of k topics and testing the fit using held-out data. Log likelihood is recorded and compared for different resolutions of topics. 
Problem: Log likelihood does help to understand how good is the fit, but (1) it often tends to suggest that I need more topics than it is practically sensible and (2) given how long it generally takes to fit a model, it is virtually impossible to find or test a grid of optimal values such as iterations, alpha, burn-in and so on. Side note: When selecting the optimal number of topics, I generally select a range of topics incrementing by 5 or so as incrementing a range by 1 generally takes too long to compute. 4. Maintenance It is easy to classify new data into a set existing topics. However if you are running it over time, you would naturally expect that some of your topics may cease to be relevant, while new topics may appear. Furthermore, it might be of interest to study the lifecycle of topics. This is difficult to account for as you are dealing with a problem that requires an unsupervised solution and yet for it to be tracked over time, you need to approach it in a supervised way. Problem: To overcome the above issue, you would need to (1) fit new data into an old set of topics, (2) construct a new topic model based on new data (3) monitor log likelihood values over time and devise a threshold when to switch from old to new; and (4) merge old and new solutions somehow so that the evolution of topics would be revealed to a lay observer. Recap of Problems String cleaning for MALLET to consume the data is inefficient. Feature selection requires manual work. Optimal number of topics selection based on LL does not account for what is practically sensible Computational complexity does not give the opportunity to find an optimal grid of parameters (other than the number of topics) Maintenance of topics over time poses challenging issues as you have to retain history but also reflect what is currently relevant. If you've read that far, I'd like to thank you, this is a rather long post. If you are interested in the suggest, feel free to either add more questions in the comments that you think are relevant or offer your thoughts on how to overcome some of these problems. Cheers
What is the optimal topic-modelling workflow with MALLET?
1.2
0
0
1,478
37,394,199
2016-05-23T14:51:00.000
2
0
0
0
python-3.x,tkinter,optionmenu
37,395,425
1
true
0
1
It's not designed for that. If you want that sort of behavior you can create your own widget. An OptionMenu is just a Menubutton and a Menu, plus a tiny bit of code.
1
2
0
OptionMenu is working perfectly in my application to select one option among several possible. However, I need to allow the user to select more than one option. Is it possible?
How to allow more than one selection in tkinter OptionMenu?
1.2
0
0
230
37,394,794
2016-05-23T15:21:00.000
1
0
1
0
python-3.x,pycharm,jupyter-notebook
37,395,677
1
false
0
0
I think your "state" refers, more or less, to a stack frame. PyCharm and probably other IDEs have a similar tool. If you are debugging your code and reach a breakpoint, you can open an interactive IPython console in which you can inspect all current variables in the stack frame, though you can't alter them. I personally prefer debugging in PyCharm and use Jupyter Notebooks more for interactively exploring data or some new piece of code. It is also kind of hard to debug loops, iterators and a few other things in Notebooks.
1
2
0
Context: Jupyter notebook has this nice ability to store variable values even when a block of code executed and exited. It is a more powerful version of "break point" in other IDEs. So far, I have noticed some benefits from this jupyter notebook ability: You can run the lengthy part of your code only once and use its results over and over again without rerunning them. you can inspect your current variables more flexibility, print them on console or into a file or into a table. you can try tweaking variables as well, before feed it to the next code block. There can be draw back to this, after tweaking my variables, they are forever changed, so I have to include lines of boilerplate code in my subsequent notebook script after the lengthy script: lengthy_variable_new = lengthy_variable_original.copy() # do things with lengthy_variable_new Now, let's imagine a a new ability: "saveState" a script, in which you can save all variables created by the user and all hidden ones, everything that is currently on heap or stack. So you can continue coding after the saveState point but you can restore the saveState point anytime you want. I understand that python language itself can only pickle things you want. It is handy for some cases but not all. Question: Is there a proper term describing my "saveState" concept? Do we have it in pycharm or any other IDEs? I personally believe there is either a better way, or what I want is very hard to implement so that is almost easier just to re-run the code compared to restoring everything back to memory.
Jupyter notebook's ability to retain variables on script exit, Any other IDE can do it?
0.197375
0
0
1,139
37,402,110
2016-05-23T23:29:00.000
0
0
1
0
windows,python-3.x,polyglot
61,607,500
4
false
0
0
This will work by typing it in the Anaconda Prompt: pip install polyglot==14.11
1
0
0
Hi, I want to install polyglot on Python 3.5. It requires numpy, which I already have, and also libicu-dev. My OS is Windows x86. I went to cmd and ran "pip install libicu-dev", but it gives me an error that it could not find a version to satisfy the requirement. How can I install libicu-dev?
Getting error on installing polyglot for python 3.5?
0
0
0
2,805
37,402,998
2016-05-24T01:25:00.000
1
0
0
0
python,openerp
37,405,131
1
true
1
0
If you are creating a normal Odoo module, inherit from models.Model. If you are creating an Odoo module that will handle POST or GET requests from a web service, use a controller. If you are creating a wizard for another module, use a transient model, and so on. You can also write a plain class and use it inside your module, but from your question alone I can't tell you more.
1
1
0
I'm using odoo9 for my project. In my project, there is a common excel handling class fill in the excel template and output result, and here is my question: which base class should it inherit? models.Model or http.Controller, or nothing?
Which base class to inherit if it's a common class in odoo9
1.2
0
0
57
37,406,227
2016-05-24T06:46:00.000
0
1
1
0
python,unit-testing,mocking
37,407,120
2
false
0
0
The problem with "mockism" is that tests bind your code to a particular implementation. Once you have decided to test for a particular function call, you have to call (or not call, as in your example) that function in your production code. As you have already noticed, there are plenty of ways to remove a directory (even by running rm -rf as an external process). I think the way you are doing it already is the best: you check for the actual side effect you are interested in, no matter how it was produced. If you are worried about performance, you could make that test optional and run it less frequently than the rest of your test suite.
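As a hedged illustration of testing the side effect rather than the call, here is one way such a test could look; mymodule and ensure_folder are hypothetical stand-ins for the asker's module and function.

    import os
    import tempfile
    import unittest

    from mymodule import ensure_folder  # hypothetical function under test

    class EnsureFolderTest(unittest.TestCase):
        def test_existing_folder_is_not_removed(self):
            with tempfile.TemporaryDirectory() as tmp:
                existing = os.path.join(tmp, 'data')
                os.mkdir(existing)
                marker = os.path.join(existing, 'keep.txt')
                open(marker, 'w').close()

                ensure_folder(existing)

                # The observable behaviour we care about: the folder and its
                # contents survive, whichever deletion API the code might use.
                self.assertTrue(os.path.isdir(existing))
                self.assertTrue(os.path.exists(marker))

    if __name__ == '__main__':
        unittest.main()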
1
0
0
I practice TDD but I have not used mocking before. Suppose I want to build a function that should create a folder, but only if that folder does not already exist. As part of my TDD cycle I first want to create a test to see that my function won’t delete an already existing folder. As my function will probably use os.rm, I gather I could use mocking to see whether os.rm has been called or not. But this isn’t very satisfactory as there are many ways to delete folders. What if I change my function later on to use shutil.rmtree? os.rm would not have been called, but perhaps the function now incorrectly removes the folder. Is it possible to use mocking in a way which is insensitive to the method? (without actually creating files on my machine and seeing whether they are deleted or not - what I have been doing until now)
Python mocking delete
0
0
0
695
37,407,054
2016-05-24T07:29:00.000
1
0
1
0
python,machine-learning,nlp,nltk,crf
37,445,707
1
true
0
0
The way this problem has traditionally been addressed is to use an "intent" classifier to determine the intent of a query. This classifier is trained to route queries to the appropriate sequence model. Then what you can do is send the query to the top 3 models as predicted by the intent classifier and see which of those give reasonable results.
1
2
0
I'm creating a sequence labeling program using pycrfsuite (BIO tagging) and nltk. The program should be able to process queries with different contexts. I've trained a separate model for each context and saved them separately: one model to process flight booking queries, one to process queries to send SMS, and so on. I have an interface where the user can enter queries from any context. Can anyone suggest the best way to find and use the respective model for a specific query, other than iterating over each model? Or am I completely wrong about using different models?
How to use Sequence labeling for queries with different context?
1.2
0
0
119
37,413,302
2016-05-24T12:15:00.000
-1
0
0
0
python,scikit-learn,cross-validation
53,760,310
4
false
0
0
For the individual score of each class, use this: f1 = f1_score(y_test, y_pred, average=None), then print("f1 list non intent: ", f1).
1
5
1
I'm using cross_val_score from scikit-learn (package sklearn.cross_validation) to evaluate my classifiers. If I use f1 for the scoring parameter, the function will return the f1-score for one class. To get the average I can use f1_weighted but I can't find out how to get the f1-score of the other class. (precision and recall analogous) The functions in sklearn.metrics have a labels parameter which does this, but I can't find anything like this in the documentation. Is there a way to get the f1-score for all classes at once or at least specify the class which should be considered with cross_val_score?
f1 score of all classes from scikits cross_val_score
-0.049958
0
0
13,883
37,413,919
2016-05-24T12:41:00.000
0
0
0
0
python,sqlite
37,414,407
1
false
0
0
You need to switch databases. I would use the following: PostgreSQL as the database and psycopg2 as the driver. The syntax is fairly similar to SQLite, and the migration shouldn't be too hard for you.
1
0
0
I have three programs running, one of which iterates over a table in my database non-stop (over and over again in a loop), just reading from it using a SELECT statement. The other programs have a line where they insert a row into the table and a line where they delete it. The problem is that I often get the error sqlite3.OperationalError: database is locked. I'm trying to find a solution but I don't understand the exact source of the problem (is reading and writing at the same time what makes this error occur? or the writing and deleting? maybe both aren't supposed to work). Either way, I'm looking for a solution. If it were a single program, I could coordinate the database I/O with mutexes and other multithreading tools, but it's not. How can I wait until the database is unlocked for reading/writing/deleting without using too much CPU?
Lock and unlock database access - database is locked
0
1
0
369
37,414,515
2016-05-24T13:04:00.000
0
0
1
0
python,list,operators
37,414,636
3
true
0
0
Why not create one list for the ints and one for the operators, and then append from each list step by step? Edit: you can first convert your ints to strings, then build a single string with string = ''.join(list); after that you can just eval(string). Edit 2: you can also take a look at the SymPy module, which allows you to use symbolic math in Python.
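A small sketch of both suggestions (join-and-eval, plus the operator module for the strict left-to-right evaluation the question asks for), assuming Python 3 division semantics; the token list is taken from the question.

    tokens = [1, '+', 2, '-', 2, '/', 3, '*', 8]

    expr = ''.join(str(t) for t in tokens)   # '1+2-2/3*8'
    print(eval(expr))                        # evaluated with normal precedence

    # For strict left-to-right evaluation (no BODMAS), fold the list manually.
    import operator
    ops = {'+': operator.add, '-': operator.sub,
           '*': operator.mul, '/': operator.truediv}

    result = tokens[0]
    for op, value in zip(tokens[1::2], tokens[2::2]):
        result = ops[op](result, value)
    print(result)   # ((1 + 2 - 2) / 3) * 8 = 8/3, about 2.667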
2
0
0
EDIT: When I say function in the title, I mean mathematical function not programming function. Sorry for any confusion caused. I'm trying to create a function from randomly generated integers and operators. The approach I am currently taking is as follows: STEP 1: Generate random list of operators and integers as a list. STEP 2: Apply a set of rules to the list so I always end up with an integer, operator, integer, operator... etc list. STEP 3: Use the modified list to create a single answer once the operations have been applied to the integers. For example: STEP 1 RESULT: [1,2,+,-,2,/,3,8,*] STEP 2 RESULT: [1,+,2,-,2,/,3,*,8] - Note that I am using the operator command to generate the operators within the list. STEP 3 RESULT: The output is intended to be a left to right read function rather than applying BODMAS, so in this case I'd expect the output to be 8/3 (the output doesn't have to be an integer). So my question is: What function (and within what module) is available to help me combine the list as defined above. OR should I be combining the list in a different way to allow me to use a particular function? I am considering changing the way I generate the list in the first place so that I do the sort on the fly, but I think I'll end up in the same situation that I wouldn't know how to combine the integers and operators after going through the sort process. I feel like there is a simple solution here and I am tying myself up in knots unnecessarily! Any help is greatly appreciated, Dom
creating a simple function using lists of operators and integers
1.2
0
0
60
37,414,515
2016-05-24T13:04:00.000
0
0
1
0
python,list,operators
37,438,697
3
false
0
0
So, for those that are interested. I achieved what I was after by using the eval() function. Although not the most robust, within the particular loop I have written the inputs are closely controlled so I am happy with this approach for now.
2
0
0
EDIT: When I say function in the title, I mean mathematical function not programming function. Sorry for any confusion caused. I'm trying to create a function from randomly generated integers and operators. The approach I am currently taking is as follows: STEP 1: Generate random list of operators and integers as a list. STEP 2: Apply a set of rules to the list so I always end up with an integer, operator, integer, operator... etc list. STEP 3: Use the modified list to create a single answer once the operations have been applied to the integers. For example: STEP 1 RESULT: [1,2,+,-,2,/,3,8,*] STEP 2 RESULT: [1,+,2,-,2,/,3,*,8] - Note that I am using the operator command to generate the operators within the list. STEP 3 RESULT: The output is intended to be a left to right read function rather than applying BODMAS, so in this case I'd expect the output to be 8/3 (the output doesn't have to be an integer). So my question is: What function (and within what module) is available to help me combine the list as defined above. OR should I be combining the list in a different way to allow me to use a particular function? I am considering changing the way I generate the list in the first place so that I do the sort on the fly, but I think I'll end up in the same situation that I wouldn't know how to combine the integers and operators after going through the sort process. I feel like there is a simple solution here and I am tying myself up in knots unnecessarily! Any help is greatly appreciated, Dom
creating a simple function using lists of operators and integers
0
0
0
60
37,415,823
2016-05-24T13:58:00.000
1
0
1
0
python,python-2.7
37,415,947
2
false
0
0
You might have to install the module geographiclib by running pip install geographiclib or easy_install geographiclib
2
2
0
When I try to run the .py script file, I get this error. In the script file, after importing the packages, I have written "from geographiclib.geodesic import Geodesic".
ImportError: No module named geographiclib.geodesic
0.099668
0
0
2,275
37,415,823
2016-05-24T13:58:00.000
1
0
1
0
python,python-2.7
37,832,763
2
false
0
0
I resolved the issue; below are the steps I followed. 1. Install pip using the command "python get-pip.py" (you can download the get-pip.py file from the internet). 2. Then run "pip install geographiclib". 3. Finished.
2
2
0
When I try to run the .py script file, I get this error. In the script file, after importing the packages, I have written "from geographiclib.geodesic import Geodesic".
ImportError: No module named geographiclib.geodesic
0.099668
0
0
2,275
37,419,778
2016-05-24T17:03:00.000
1
0
0
0
python,screen,freeze,display
37,421,436
1
true
0
1
If by "the screen" you're talking about the terminal then I highly recommend checking out the curses library. It comes with the standard version of Python. It gives control of many different aspects of the terminal window including the functionality you described.
1
0
0
I searched the web and SO but did not find an answer. Using Python, I would like to know how (if possible) I can stop the screen from showing its changes to the user. In other words, I would like to build a function in Python that, when called, would freeze the whole screen, preventing the user from viewing its changes, and, when called again, would set the screen back to normal. Something like the Application.ScreenUpdating property of Excel VBA, but applied directly to the whole screen. Something like: FreezeScreen(On) FreezeScreen(Off) Is it possible? Thanks for the help!
Using Python, how to stop the screen from updating its content?
1.2
0
0
1,178
37,420,756
2016-05-24T18:01:00.000
0
0
0
0
python
37,421,286
3
false
1
0
You can use Selenium with PhantomJS to do this without the browser opening. You have to use the Keys portion of Selenium to send data to the form to be submitted. It is also worth noting that this method will not work if there are captchas on the form.
1
0
0
I want to build a Python script to submit a form on an internet website, such as a form to automatically publish an item on a site like eBay. Is it possible to do it with BeautifulSoup, or is that only for parsing websites? Is it possible to do it with Selenium, but quickly, without really opening the browser? Are there any other ways to do it?
Submit form on internet website
0
0
1
55
37,421,035
2016-05-24T18:17:00.000
1
0
0
0
python,file,vectorization,hdf5,h5py
37,845,330
1
false
0
0
An elegant way for sampling without replacement is computing a random permutation of the numbers 1..N (numpy.random.permutation) and then using chunks of size M from it. Storing data in an h5py file is kind of arbitrary. You could use a single higher dimensional data set or a group containing the N two dimensional data sets. It's up to you. I would actually prefer to have the two dimensional data sets separately (gives you more flexibility) and to iterate over it using Group.iteritems.
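A rough sketch of the permutation-based sampling described above, assuming the images are stored one dataset per image with names like img_0 ... img_99999 in a file called images.h5 (both are assumptions for illustration, not details from the answer).

    import numpy as np
    import h5py

    N, M = 100000, 50

    # One pass over all N images in random order, M at a time
    # (sampling without replacement).
    with h5py.File('images.h5', 'r') as f:
        order = np.random.permutation(N)
        for start in range(0, N, M):
            batch_ids = order[start:start + M]
            batch = [f['img_%d' % i][()] for i in batch_ids]
            # ... process the M images in `batch` ...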
1
1
1
I have familiarized myself with the basics of H5 in python. What I would like to do now is two things: Write images (numpy arrays) into an H5 file. Once that is done, be able to pick out $M$ randomly. What is meant here is the following: I would like to write a total of $N=100000$ numpy arrays (images), into one H5 file. Once that is done, then I would like to randomly select say, $M=50$ images from the H5 file at random, and read them. Then, I would like to pick another $M=50$ at random, and read them in as well, etc etc, until I have gone through all $N$ images. (Basically, sample without replacement). Is there an elegant way to do this? I am currently experimenting with having each image be stored as a separate key-value pair, but I am not sure if this is the most elegant. Another solution is to store the entire volume of $N$ images, and then randomly select from there, but I am not sure that is elegant either, as it requires me to read in the entire block.
H5 file with images in Python: Want to randomly select without replacement
0.197375
0
0
274
37,423,208
2016-05-24T20:25:00.000
1
0
0
0
python,python-3.x,pandas,series,ordereddictionary
37,522,101
2
false
0
0
OrderedDict is implemented as part of the Python collections library. These collections are very fast containers for specific use cases. If you were looking only for dictionary-related functionality (like ordering, in this case), I would go for that. But you say you are going to do deeper analysis in an area pandas is really made for (e.g. plotting, filling missing values), so I would recommend going with pandas.Series.
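For illustration, a tiny example of holding the question's weekly sales in a pandas Series; the plotting line assumes matplotlib is installed.

    import pandas as pd

    sales = {'Week 1': 400, 'Week 2': 550, 'Week 3': None}
    s = pd.Series(sales)

    print(s['Week 2'])    # label-based lookup, just like a dict
    print(s.fillna(0))    # analysis helpers a plain OrderedDict lacks
    s.plot()              # quick plotting straight from the Series (needs matplotlib)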
1
5
1
Still new to this, sorry if I ask something really stupid. What are the differences between a Python ordered dictionary and a pandas series? The only difference I could think of is that an orderedDict can have nested dictionaries within the data. Is that all? Is that even true? Would there be a performance difference between using one vs the other? My project is a sales forecast, most of the data will be something like: {Week 1 : 400 units, Week 2 : 550 units}... Perhaps an ordered dictionary would be redundant since input order is irrelevant compared to Week#? Again I apologize if my question is stupid, I am just trying to be thorough as I learn. Thank you! -Stephen
orderedDict vs pandas series
0.099668
0
0
1,177
37,425,230
2016-05-24T22:57:00.000
1
0
0
0
python,sql
37,892,058
1
true
0
0
Do a binary search. Break the files (or the scripts, or whatever) in half, and process both halves. One will (should) fail, and the other shouldn't. If they both have errors, it doesn't matter; just pick one. Continue splitting the broken files until you've narrowed it down to something more manageable that you can probe into, even down to the actual line that's failing.
1
0
0
I am getting a UnicodeDecodeError in my Python script and I know that the unique character is not in Latin (or English), and I know what row it is in (there are thousands of columns). How do I go through my SQL code to find this unique character/these unique characters?
How do I find unique, non-English characters in a SQL script that has a lot of tables, scripts, etc. related to it?
1.2
1
0
30
37,426,196
2016-05-25T00:55:00.000
0
0
1
0
python-2.7,importerror,quandl
38,984,971
15
false
0
0
I am following a YouTube tutorial where they use 'Quandl'. It should be quandl. Change it and it won't throw the error.
7
20
1
I am am trying to run the Quandl module on a virtualenv which I have uninstalled packages only pandas and then Quandl, I am running Python 2.7.10 - I have uninstalled all other python versions, but its still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
import error; no module named Quandl
0
0
0
49,345
37,426,196
2016-05-25T00:55:00.000
12
0
1
0
python-2.7,importerror,quandl
38,992,511
15
false
0
0
Use the syntax below, all in lower case: import quandl
7
20
1
I am am trying to run the Quandl module on a virtualenv which I have uninstalled packages only pandas and then Quandl, I am running Python 2.7.10 - I have uninstalled all other python versions, but its still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
import error; no module named Quandl
1
0
0
49,345
37,426,196
2016-05-25T00:55:00.000
1
0
1
0
python-2.7,importerror,quandl
60,087,731
15
false
0
0
Check whether it exists among the installed modules by typing pip list in the command prompt; if there is no module with the name quandl, then type pip install quandl in the command prompt. Worked for me in Jupyter.
7
20
1
I am am trying to run the Quandl module on a virtualenv which I have uninstalled packages only pandas and then Quandl, I am running Python 2.7.10 - I have uninstalled all other python versions, but its still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
import error; no module named Quandl
0.013333
0
0
49,345
37,426,196
2016-05-25T00:55:00.000
0
0
1
0
python-2.7,importerror,quandl
59,360,553
15
false
0
0
quandl has now changed; you require an API key. Go to the site and register your email, then: import quandl; quandl.ApiConfig.api_key = 'your_api_key_here'; df = quandl.get('WIKI/GOOGL')
7
20
1
I am am trying to run the Quandl module on a virtualenv which I have uninstalled packages only pandas and then Quandl, I am running Python 2.7.10 - I have uninstalled all other python versions, but its still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
import error; no module named Quandl
0
0
0
49,345
37,426,196
2016-05-25T00:55:00.000
0
0
1
0
python-2.7,importerror,quandl
52,893,601
15
false
0
0
With an Anaconda/Jupyter notebook, go to the install directory (C:\Users\<USER_NAME>\AppData\Local\Continuum\anaconda3), where <USER_NAME> is your logged-in username. Then execute in the command prompt: python -m pip install Quandl. After that, import quandl.
7
20
1
I am am trying to run the Quandl module on a virtualenv which I have uninstalled packages only pandas and then Quandl, I am running Python 2.7.10 - I have uninstalled all other python versions, but its still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
import error; no module named Quandl
0
0
0
49,345
37,426,196
2016-05-25T00:55:00.000
0
0
1
0
python-2.7,importerror,quandl
43,598,162
15
false
0
0
Install quandl version 3.1.0. Check the package path where you installed it and make sure its name is quandl, not Quandl (mine was previously named Quandl, so when I used import quandl it always said "no module named quandl"). If your package's name is Quandl, delete it and reinstall it. (I use Anaconda to install my packages; it's convenient!)
7
20
1
I am am trying to run the Quandl module on a virtualenv which I have uninstalled packages only pandas and then Quandl, I am running Python 2.7.10 - I have uninstalled all other python versions, but its still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
import error; no module named Quandl
0
0
0
49,345
37,426,196
2016-05-25T00:55:00.000
-1
0
1
0
python-2.7,importerror,quandl
45,563,219
15
false
0
0
Sometimes the quandl module is present as "Quandl" in the following location: C:\Program Files (x86)\Anaconda\lib\site-packages\Quandl. But the scripts from Quandl refer to quandl in import statements. So renaming the folder Quandl to quandl worked for me. New path: "C:\Program Files (x86)\Anaconda\lib\site-packages\quandl".
7
20
1
I am am trying to run the Quandl module on a virtualenv which I have uninstalled packages only pandas and then Quandl, I am running Python 2.7.10 - I have uninstalled all other python versions, but its still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks
import error; no module named Quandl
-0.013333
0
0
49,345
37,434,949
2016-05-25T10:52:00.000
3
0
0
0
python,database,sqlite,multiprocessing
39,020,351
1
false
0
0
On Linux you can just use /dev/shm as the file location of your SQLite database. This is a memory-backed (tmpfs) mount, suitable for exactly that.
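A minimal sketch of what that looks like in practice; the file name shared.db and the table are illustrative.

    import sqlite3

    # Writer process: creates the database file on the RAM-backed tmpfs mount.
    conn = sqlite3.connect('/dev/shm/shared.db')
    conn.execute('CREATE TABLE IF NOT EXISTS readings (id INTEGER, value REAL)')
    conn.commit()

    # Reader processes simply open the same path:
    reader = sqlite3.connect('/dev/shm/shared.db')
    rows = reader.execute('SELECT * FROM readings').fetchall()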
1
3
0
Is it possible [in any way, even a poorly hacked solution] to share an in-memory database between many processes? My application has one process that opens the in-memory database, and the others run only SELECT queries on the database. NOTE: I need a solution only for Python 2.7, and, if it matters, the module I use for making new processes is multiprocessing.
Share in-memory database between processes sqlite
0.53705
1
0
850
37,438,867
2016-05-25T13:34:00.000
1
0
0
1
python,django,celery
37,637,827
1
false
1
0
The periodic task scheduler in Celery is not designed to handle thousands of scheduled tasks, so from a performance perspective a much better solution is to have one task running at the smallest interval (e.g. if you allow users to schedule daily, weekly or monthly, running the task daily is enough). Such an approach is also more stable: every time the schedule changes, all of the schedule records are reloaded. Plus it is more secure, because you do not expose or use any internal task-execution mechanisms.
1
2
0
I'm working on project which main future will be running periodically one type of async task for each user. Every user will be able to configure task (running daily, weekly etc. at specified time). Also task will use some data stored by user. Now I'm wondering which approach should be better: allow users to create own PeriodicTask (by using some restricted endpoint of course) or create single PeriodicTask (for example running every 5 minutes) which will iterate over all users and determine if task should be queued or not for current user? I think I will use AMPQ as broker.
Celery PeriodicTask per user
0.197375
0
0
341
37,442,494
2016-05-25T16:08:00.000
5
0
1
0
python,python-3.x,python-2.7,anaconda,virtualenv
46,508,952
6
false
0
0
I have python 2.7.13 and 3.6.2 both installed. Install Anaconda for python 3 first and then you can use conda syntax to get 2.7. My install used: conda create -n py27 python=2.7.13 anaconda
3
105
0
I am currently using Anaconda with Python 2.7, but I will need to use Python 3.5. Is it OK to have them both installed at the same time? Should I expect any problems? I am on 64-bit Windows 8.
Is it ok having both Anaconda 2.7 and 3.5 installed at the same time?
0.16514
0
0
139,690
37,442,494
2016-05-25T16:08:00.000
4
0
1
0
python,python-3.x,python-2.7,anaconda,virtualenv
37,442,783
6
false
0
0
Yes, It should be alright to have both versions installed. It's actually pretty much expected nowadays. A lot of stuff is written in 2.7, but 3.5 is becoming the norm. I would recommend updating all your python to 3.5 ASAP, though.
3
105
0
I am currently using Anaconda with Python 2.7, but I will need to use Python 3.5. Is it OK to have them both installed at the same time? Should I expect any problems? I am on 64-bit Windows 8.
Is it ok having both Anaconda 2.7 and 3.5 installed at the same time?
0.132549
0
0
139,690
37,442,494
2016-05-25T16:08:00.000
0
0
1
0
python,python-3.x,python-2.7,anaconda,virtualenv
58,625,610
6
false
0
0
Anaconda is made for the purpose you are asking. It is also an environment manager. It separates out environments. It was made because stable and legacy packages were not supported with newer/unstable versions of host languages; therefore a software was required that could separate and manage these versions on the same machine without the need to reinstall or uninstall individual host programming languages/environments. You can find creation/deletion of environments in the Anaconda documentation. Hope this helped.
3
105
0
I am currently using Anaconda with Python 2.7, but I will need to use Python 3.5. Is it OK to have them both installed at the same time? Should I expect any problems? I am on 64-bit Windows 8.
Is it ok having both Anaconda 2.7 and 3.5 installed at the same time?
0
0
0
139,690
37,442,986
2016-05-25T16:32:00.000
0
0
0
1
python,linux
37,557,915
1
false
0
0
The second command will not hang because the issue is not with a large amount of data on standard output, but a large amount of data on standard error. In the former case, standard error is being redirected to standard output, which is being piped to your program. Hence, a large amount of data being produced on standard error gives an equivalent result to a large amount of data being produced on standard output. In the latter case, the subprocess's standard error is redirected to the calling process's standard error, and hence can't get stuck in the pipe.
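A short sketch of the communicate() pattern the documentation recommends instead of wait(); the command list is illustrative.

    import subprocess

    proc = subprocess.Popen(
        ['some_command', '--verbose'],     # illustrative command
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
    # communicate() drains the pipe while waiting, so the child can never
    # block on a full OS pipe buffer the way it can with wait().
    out, _ = proc.communicate()
    print(proc.returncode, len(out))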
1
0
0
From the documentation of Popen.wait(), I see Warning This will deadlock when using stdout=PIPE and/or stderr=PIPE and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that. I am having a bit of trouble understanding the behavior below as the command run below can generate a fairly large amount of standard out. However, what I notice is that subproc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) will hang. While subproc = subprocess.Popen(command, stdout=subprocess.PIPE) will not hang. If the command is generating a large amount of standard out, why does the second statement not hang as we are still using stdout=subprocess.PIPE?
Subprocess Hanging on Wait
0
0
0
678
37,444,362
2016-05-25T17:49:00.000
0
0
1
1
python,linux,command-line
37,445,004
2
false
0
0
Let's say you run pwd and it returns /home/myName. If you then run /home/myName/code/myProgram.py, the working directory of your program is not /home/myName/code; it's /home/myName. The working directory of a process in inherited from the parent process, not set based on where the script is located.
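One common fix, sketched below, is to build the path from the script's own location rather than from the working directory; the file name is taken from the question.

    import os

    # Build the path relative to the script's own location instead of the
    # process's current working directory.
    script_dir = os.path.dirname(os.path.abspath(__file__))
    with open(os.path.join(script_dir, 'myFile'), 'rb') as f:
        data = f.read()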
1
1
0
I'm using with open('myFile', 'rb') as file: to read a file. When running the program with python myProgram.py everything works fine. But as soon as I try to run it without cd-ing into the directory of myProgram.py and use an absolute path instead (like python /home/myName/myCode/myProgram.py), I always get this error message: FileNotFoundError: [Errno 2] No such file or directory. So why does open() behave differently depending on how the Python program is started? And is there a way to make things work even when starting with an absolute path? I've already tried open('/home/myName/myCode/myfile', 'rb') but without success...
Python FileNotFoundError when using open()
0
0
0
1,226
37,446,282
2016-05-25T19:43:00.000
0
0
0
0
python,django,eclipse
37,478,375
1
false
1
0
I usually use the context menu (right click on project in project explorer) and choose django/start new app.
1
0
0
I have created a new app inside a django project using the command line: python manage.py startapp name and I'm using PyDev as my IDE for the first time. My problem is that I can't add this app to the project so I can begin to code. I have been reading some questions and answers but so far I couldn't find a answer. Can you help me, please? I'm using eclipse mars and I installed PyDev using the market place.
PyDev - how to include new apps in the project
0
0
0
220
37,449,333
2016-05-25T23:41:00.000
0
0
0
0
python,windows,drag-and-drop,xlsxwriter
37,449,798
1
false
0
0
Thanks to eryksun this issue was solved! eryksun's solution worked perfectly, and I found another reason it wasn't working: when I dragged and dropped a file onto the Python script, os.getcwd() returned C:\WINDOWS\system32 no matter where the file was. To counteract this, wherever I had os.getcwd() I changed it to os.path.abspath(os.path.dirname(__file__)) and then it worked!
1
0
0
I have a Python script which users drag and drop KML files onto for easy use. It takes the dropped file as sys.argv[1]. When entered on the command line as myScript.py Location.kml everything works fine. But when I drag and drop the file in, an error is thrown saying no module named xlsxwriter. xlsxwriter is in the same folder as my Python script, in another folder named Packages. Why does it work from the command line but not when dragged and dropped? Is there some trick I am missing?
Module not found when drag and drop Python file
0
1
0
290
37,450,288
2016-05-26T01:50:00.000
4
0
0
1
python,linux
37,451,543
1
true
0
0
Trap SIGINT and SIGTERM and make sure to clean up anything like sockets, files, locks, etc. Trap other signals based on what you are doing; for instance, if you have open pipes you might trap SIGPIPE. Just remember that signal handling opens you up to race conditions. You probably want to use sigprocmask to block signals while handling them.
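A minimal Python sketch of trapping SIGINT/SIGTERM as described; the cleanup() body is a placeholder for whatever resources the program actually holds.

    import signal
    import sys

    def cleanup():
        # Placeholder: close sockets/files, release locks, tell children to exit.
        pass

    def shutdown(signum, frame):
        cleanup()
        sys.exit(0)

    signal.signal(signal.SIGINT, shutdown)          # Ctrl-C / kill -INT
    signal.signal(signal.SIGTERM, shutdown)         # normal kill, system shutdown
    signal.signal(signal.SIGPIPE, signal.SIG_IGN)   # optional: ignore broken pipes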
1
2
0
I'm creating a program in python that auto-runs for an embedded system and want to handle program interruptions gracefully. That is if possible close resources and signal child processes to also exit gracefully before actually exiting. At which point my watchdog should notice this and respawn everything. What signals can/should I expect to receive in a non-interactive program from linux? I'm using try/except blocks to handle i/o errors. Is the system shutting down an event that is signaled? In addition to my watchdog I will also have an independent process monitoring a hardware line that gets set when my power supply detects a brownout. I have a supercap to provide some run-time to allow a proper shutdown.
What linux signals should I trap to make a good application
1.2
0
0
102
37,451,236
2016-05-26T03:49:00.000
10
0
1
0
python,jar,ide,settings,pycharm
50,773,143
4
false
0
0
The given link is outdated. However, I solved my problem by clicking "Run" in PyCharm's toolbar, then "Edit Configurations...", and changing my interpreter to an existing one. Just changing it in the settings does not help, but this operation does ;)
2
10
0
When I go to run scripts in the terminal, I get an error in a red box that says Pycharm cannot run program C:\Anaconda\python.exe (in directory E:\etc... CreateProcessError=2 The System cannot find the file specified. I have uninstalled Anaconda and reinstalled it. I have also reinstalled PyCharm. Thing is though, I am using Anaconda2 and have my interpreter as C:\\Users\\my_name\\Anaconda2 and this works when I apply it in settings. I am not sure where this path C:\\Anaconda\\python.exe is coming from in the error, as I have uninstalled Anaconda and reinstalled it to C:\\users\\my_name\\Anaconda2 If it is worth noting, I did import a PyCharm settings jar file earlier today, but then decided not to use it and to go back to my original settings. This was before uninstalling PyCharm and Anaconda out of frustration, so any effects of that I think should be moot anyway. Any help would be greatly appreciated as I am stuck using the console until I can figure this out. Thanks.
PyCharm Cannot Run Program C:\\Anaconda\\python.exe
1
0
0
25,842
37,451,236
2016-05-26T03:49:00.000
4
0
1
0
python,jar,ide,settings,pycharm
54,026,802
4
false
0
0
Deleting the .idea folder solved the problem for me. Pycharm will create a new .idea folder with the correct path.
2
10
0
When I go to run scripts in the terminal, I get an error in a red box that says Pycharm cannot run program C:\Anaconda\python.exe (in directory E:\etc... CreateProcessError=2 The System cannot find the file specified. I have uninstalled Anaconda and reinstalled it. I have also reinstalled PyCharm. Thing is though, I am using Anaconda2 and have my interpreter as C:\\Users\\my_name\\Anaconda2 and this works when I apply it in settings. I am not sure where this path C:\\Anaconda\\python.exe is coming from in the error, as I have uninstalled Anaconda and reinstalled it to C:\\users\\my_name\\Anaconda2 If it is worth noting, I did import a PyCharm settings jar file earlier today, but then decided not to use it and to go back to my original settings. This was before uninstalling PyCharm and Anaconda out of frustration, so any effects of that I think should be moot anyway. Any help would be greatly appreciated as I am stuck using the console until I can figure this out. Thanks.
PyCharm Cannot Run Program C:\\Anaconda\\python.exe
0.197375
0
0
25,842
37,455,466
2016-05-26T08:28:00.000
0
0
0
0
python,excel,formatting,xlwt,xlutils
50,905,858
2
false
0
0
While Destrif is correct that xlutils uses xlwt, which doesn't support the .xlsx file format, you will also find that xlsxwriter is unable to write xlrd-formatted objects. Similarly, the python-excel-cookbook he recommends only works if you are running Windows and have Excel installed. A better alternative would be xlwings, as it works on Windows and Mac with Excel installed. If you are looking for something more platform-agnostic, you could try writing your xlsx file using openpyxl.
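A hedged sketch of the openpyxl route; the file names, the sheet name ('Plate') and the 8x6 results placeholder are assumptions for illustration, not details from the thread.

    from openpyxl import load_workbook

    results = [[0.0] * 6 for _ in range(8)]      # placeholder 8x6 raw data

    # Open the existing workbook, write raw results into the plate area, and
    # save as .xlsx so the template's formulas and formatting are kept.
    wb = load_workbook('template.xlsx')          # illustrative file name
    ws = wb['Plate']                             # illustrative sheet name
    for row_idx, row in enumerate(results, start=2):      # plate assumed to start at B2
        for col_idx, value in enumerate(row, start=2):
            ws.cell(row=row_idx, column=col_idx, value=value)
    wb.save('results.xlsx')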
1
2
0
I have a results analysing spreadsheet where i need to enter my raw data into a 8x6 cell 'plate' which has formatted cells to produce the output based on a graph created. This .xlsx file is heavily formatted with formulas for the analysis, and it is a commercial spreadsheet so I cannot replicate these formulas. I am using python 2.7 to obtain the raw results into a list and I have tried using xlwt and xlutils to copy the spreadsheet to enter the results. When I do this it loses all formatting when I save the file. I am wondering whether there is a different way in which I can make a copy of the spreadsheet to enter my results. Also when I have used xlutils.copy I can only save the file as a .xls file, not an xlsx, is this the reason why it loses formatting?
How to enter values to an .xlsx file and keep formatting of cells
0
1
0
2,214
37,463,782
2016-05-26T14:29:00.000
0
0
0
0
python,virtual-machine,virtualization,proxmox
37,565,566
2
false
0
0
The description parameter is only a message to show in proxmox UI, and it's not related to any function
1
1
0
I am using proxmoxer to manipulate machines on ProxMox (create, delete etc). Every time I am creating a machine, I provide a description which is being written in ProxMox UI in section "Notes". I am wondering how can I retrieve that information? Best would be if it can be done with ProxMox, but if there is not a way to do it with that Python module, I will also be satisfied to do it with plain ProxMox API call.
How can I retrieve Proxmox node notes?
0
0
1
256
37,464,886
2016-05-26T15:16:00.000
2
0
0
0
python,django,apache,mod-wsgi,django-filer
37,495,881
1
false
1
0
That would indicate that you have LimitRequestBody directive set to a very small value in the Apache configuration. This directive wouldn't normally be set at all if using standard Apache. Even if you happen to be using mod_wsgi-express the default there is 10MB. So it must have been overridden to a smaller value.
1
1
0
After much gnashing of teeth, I found out my uploads were working; I was just getting a "request entity too large" 413 error I wasn't seeing. The Django app is running under Apache mod_wsgi, and I am no Apache guru, so I am not sure what I need to set to handle larger files. I tried to Google it, but it was unclear to me whether it was a timeout issue or a file size restriction issue. It is also not clear whether it is a setting in my settings.py file. I currently cannot upload anything over about 1MB (which doesn't even sound right given all the defaults I read about). Has anyone done this before who can give some insight?
Fixing request entity too large error under django/apache/mod_wsgi for file uploads
0.379949
0
0
1,167
37,471,681
2016-05-26T21:56:00.000
1
0
0
0
python,oauth2
44,904,547
2
false
0
0
Do you want to store it on the service side or locally? Since your tool interfaces RESTful API, which is stateless, meaning that no information is stored between different requests to API, you actually need to provide access token every time your client accesses any of the REST endpoints. I am maybe missing some of the details in your design, but access tokens should be used only for authorization, since your user is already authenticated if he has a token. This is why tokens are valid only for a certain amount of time, usually 1 hour. So you need to provide a state either by using cookie (web interface) or storing the token locally (Which is what you meant). However, you should trigger the entire oauth flow every time a user logs in to your client (authenticating user and providing a new auth token) otherwise you are not utilizing the benefits of oauth.
1
3
0
I am building a command line tool using Python that interfaces with a RESTful API. The API uses OAuth2 for authentication. Rather than asking for the access_token every time the user runs the tool, can I store the access_token in some way so that I can use it for its whole lifespan? If so, how safe is that?
Is it possible to store authentication token in a separate file?
0.099668
0
1
4,615
37,472,688
2016-05-26T23:39:00.000
1
1
0
1
java,python,c++,pipelining
37,495,749
2
false
0
0
We had the same issue, where we had to share sensor data between one Java app and multiple other apps, including Java, Python and R. First we tried socket connections, but socket communication was not fault tolerant: a restart or failure in one app affected the others. Then we tried RMI calls between them, but again we were unhappy, due to scalability. We wanted the system to be reliable, scalable, distributed and fault tolerant, so we finally started using RabbitMQ, where we created one producer and multiple consumers. It worked well for 2 years. You may also consider Apache Kafka. You have options like socket pipes, RMI calls, RabbitMQ, Kafka and Redis, based on your system requirements now and in the near future.
1
3
0
I'm working on a project, which I am not at liberty to discuss the core, but I have reached a stumbling block. I need data to be transferred from C++ to some other language, preferably Java or Python, in realtime (~10ms latency). We have a sensor that HAS to be parsed in C++. We are planning on doing a data read/output through bluetooth, most likely Java or C# (I don't quite know C#, but it seems similar to Java). C++ will not fit the bill, since I do not feel advanced enough to use it for what we need. The sensor parsing is already finished. The data transferring will be happening on the same machine. Here are the methods I've pondered: We tried using MatLab with whatever the Mex stuff is (I don't do MatLab) to access functions from our C++ program, to retrieve the data as an array. Matlab will be too slow (we read somewhere that the TX/RX will be limited to 1-20 Hz.) Writing the data to a text, or other equivalent raw data, file constantly, and opening it with the other language as necessary. I attempted to look this up, but nothing of use showed in the results.
Pipelining or Otherwise Transferring Data Between Languages in Realtime
0.099668
0
0
150
37,475,338
2016-05-27T05:15:00.000
0
0
0
0
mysql,linux,mysql-python,pymysql
37,517,765
1
false
0
0
It only works by coincidence. MySQL is a request-response protocol. When two processes send queries, they aren't interleaved unless a query is large. The MySQL server (1) receives one query, (2) sends the response to (1), (3) receives the next query, (4) sends the response to (3). When the first response is sent from the MySQL server, one of the two processes receives it; since the response is small enough, it is received atomically, and the next response is received by the other process. Try sending "SELECT 1+2" from one process and "SELECT 1+3" from another process: "1+2" may come back as 4 by chance and "1+3" as 3 by chance.
1
0
0
abort MySQLdb. I know many process not use same connect, Because this will be a problem . BUT, run the under code , mysql request is block , then many process start to query sql at the same time , the sql is "select sleep(10)" , They are one by one. I not found code abort lock/mutux in MySQLdb/mysql.c , Why there is no problem ? I think there will be problems with same connection fd in general . But pass my test, only block io, problems not arise . Where is the lock ? import time import multiprocessing import MySQLdb db = MySQLdb.connect("127.0.0.1","root","123123","rui" ) def func(*args): while 1: cursor = db.cursor() cursor.execute("select sleep(10)") data = cursor.fetchall() print len(data) cursor.close() print time.time() time.sleep(0.1) if __name__ == "__main__": task = [] for i in range(20): p = multiprocessing.Process(target = func, args = (i,)) p.start() task.append(p) for i in task: i.join() result log, we found each request interval is ten seconds. 1 1464325514.82 1 1464325524.83 1 1464325534.83 1 1464325544.83 1 1464325554.83 1 1464325564.83 1 1464325574.83 1 1464325584.83 1 1464325594.83 1 1464325604.83 1 1464325614.83 1 1464325624.83 tcpdump log: we found each request interval is ten seconds two. 13:07:04.827304 IP localhost.44281 > localhost.mysql: Flags [.], ack 525510411, win 513, options [nop,nop,TS val 2590846552 ecr 2590846552], length 0 0x0000: 4508 0034 23ad 4000 4006 190d 7f00 0001 E..4#.@.@....... 0x0010: 7f00 0001 acf9 0cea fc09 7cf9 1f52 a70b ..........|..R.. 0x0020: 8010 0201 ebe9 0000 0101 080a 9a6d 2e58 .............m.X 0x0030: 9a6d 2e58 .m.X 13:07:04.928106 IP localhost.44281 > localhost.mysql: Flags [P.], seq 0:21, ack 1, win 513, options [nop,nop,TS val 2590846653 ecr 2590846552], length 21 0x0000: 4508 0049 23ae 4000 4006 18f7 7f00 0001 E..I#.@.@....... 0x0010: 7f00 0001 acf9 0cea fc09 7cf9 1f52 a70b ..........|..R.. 0x0020: 8018 0201 fe3d 0000 0101 080a 9a6d 2ebd .....=.......m.. 0x0030: 9a6d 2e58 1100 0000 0373 656c 6563 7420 .m.X.....select. 0x0040: 736c 6565 7028 3130 29 sleep(10) 13:07:14.827526 IP localhost.44281 > localhost.mysql: Flags [.], ack 65, win 513, options [nop,nop,TS val 2590856553 ecr 2590856552], length 0 0x0000: 4508 0034 23af 4000 4006 190b 7f00 0001 E..4#.@.@....... 0x0010: 7f00 0001 acf9 0cea fc09 7d0e 1f52 a74b ..........}..R.K 0x0020: 8010 0201 9d73 0000 0101 080a 9a6d 5569 .....s.......mUi 0x0030: 9a6d 5568 .mUh 13:07:14.927960 IP localhost.44281 > localhost.mysql: Flags [P.], seq 21:42, ack 65, win 513, options [nop,nop,TS val 2590856653 ecr 2590856552], length 21 0x0000: 4508 0049 23b0 4000 4006 18f5 7f00 0001 E..I#.@.@....... 0x0010: 7f00 0001 acf9 0cea fc09 7d0e 1f52 a74b ..........}..R.K 0x0020: 8018 0201 fe3d 0000 0101 080a 9a6d 55cd .....=.......mU. 0x0030: 9a6d 5568 1100 0000 0373 656c 6563 7420 .mUh.....select. 0x0040: 736c 6565 7028 3130 29 sleep(10) ``` ``` end.
use multiprocessing to query in same mysqldb connect , block?
0
1
0
571
37,476,505
2016-05-27T06:41:00.000
3
0
0
0
python,kivy
37,493,470
1
true
0
1
The problem was caused by Kivy using Pygame. I had to brew install the SDL2 dependencies and reinstall Kivy. This fixed the problem.
1
1
0
I'm still relatively new to Python and Kivy (much more so for Kivy). I'm running Python 3.5 and Kivy 1.9.1 on OS X. I installed Kivy using pip install, and I was never really able to get Kivy to work using the download and putting in my Applications folder like the instructions were asking. Everything seems to work fine, and I've made a few apps; however, I seem to encounter a problem whenever I try to resize the window. The window goes white while I'm resizing it, and then it turns blank/black afterwards. The terminal just says that it reloaded the graphics and the amount of time it took. I was hoping that after I made my apps using Pyinstaller that this would go away, but the problem still persists. Has anyone else encountered this problem? I haven't been able to test this on another computer yet to see if the problem persists. Any help would be appreciated. Thanks!(:
Kivy window goes blank after being resized
1.2
0
0
556
37,477,938
2016-05-27T07:57:00.000
0
0
1
0
python,python-2.7,python-idle
37,478,143
1
true
0
0
I recommend you use Sublime Text with Package Control. Then you can use lots of plugins and, of course, auto-indentation.
1
0
0
IDLE is just a bare-bones text editor with no support for auto-indentation. Is there any plugin for that?
Are there any plugins available for python's IDLE?
1.2
0
0
754
37,480,048
2016-05-27T09:38:00.000
4
0
0
0
django,pythonanywhere
37,480,695
1
true
1
0
I figured it out, thanks for the hint, Mr. Raja Simon. In my PythonAnywhere dashboard, on the Web tab, I set something like this: URL /media/, Directory /home//media_cdn (media_cdn is where my images are located).
1
2
0
I am using Django and PythonAnywhere, and I want to make the DEBUG to False. But when I set it to False and make ALLOWED_HOSTS = ['*'], it works fine. But the problem is the media (or the images) is not displaying. Anyone encounter this and know how to resolve it?
Django + Pythonanywhere: How to disable Debug Mode
1.2
0
0
1,288
37,480,728
2016-05-27T10:10:00.000
0
0
0
0
python,r,csv
37,480,887
1
false
0
0
you can use list.data[[1]]$name1
1
0
1
Suppose that i have multiple .csv files with columns of same kind . If i wanted to access data of a particular column from a specified .csv file , how is it possible? All .csv files have been stored in list.data for ex: Suppose that here , list.data[1] gives me the first .csv file. How will i access a column of this file? I have tried list.data[1]$nameofthecolumn. But this is giving me null values. I am not much familiar with R. list.data[1]$name1 NULL list.data[1]$name1 NULL list.data[1] $ NULL
how to access a particular column of a particular csv file from many imported csv files
0
0
0
51
37,484,932
2016-05-27T13:36:00.000
1
0
0
0
python-2.7,pandas,matplotlib,anaconda,importerror
37,485,073
1
false
0
0
The csv_test.py file you have "added by hand" tries to import scipy; as the error message says, don't do that. You can probably place your test code in a private location, without messing with the scipy installation directory. I suggest uninstalling and reinstalling at least pandas and scipy, possibly everything, to eliminate intentional and unintentional alterations.
1
0
1
import pandas     Traceback (most recent call last):     File "", line 1, in     File "/Users/.../anaconda/lib/python2.7/site-packages/pandas/init.py", line 37 , in import pandas.core.config_init      File "/Users/.../anaconda/lib/python2.7/site-packages/pandas/core/config_init.py", line 18, in     from pandas.formats.format import detect_console_encoding     File "/Users/.../anaconda/lib/python2.7/site-packages/pandas/formats/format.py", l ine 16, in     from pandas.io.common import _get_handle, UnicodeWriter, _expand_user     File "/Users/.../anaconda/lib/python2.7/site-packages/pandas/io/common.py", line 5 , in      import csv      File "csv.py", line 10, in      field_size_limit, \      File "csv_test.py", line 4, in      from scipy import stats      File "scipy/init.py", line 103, in     raise ImportError(msg)     ImportError: Error importing scipy: you cannot import scipy while     being in scipy source directory; please exit the scipy source      tree first, and relaunch your python intepreter. I tried to import pandas by invoking python from bash. Before doing so,I used 'which python' command to make sure which python I'm using. It interacted to me saying "/Users/...(Usersname)/anaconda/bin/python". But when I imported pandas from IPython of anaconda's distribution, successfully imported. for the record,I added csv_test.py module by hand. guess this module is doing something wrong? it just imports scipy.stats and matplotlib.pyplot. So I hope it is not affecting any other module. seems something seriously wrong is going on here,judging from Error sentence..
Import error: Error importing scipy
0.197375
0
0
466
37,486,551
2016-05-27T14:53:00.000
0
0
1
0
python,windows,python-3.x
37,487,571
2
false
0
0
@Pierre-Louis Sauvage, @geo: Anaconda is useful, yet it is also a fairly heavy application, needing about 300 MB. You may therefore want to consider Miniconda first. Since you're learning Python, that will most likely be sufficient. (I write this in answer format instead of a comment, since I don't have enough reputation points to write a comment.)
1
0
0
It's my first post here; however, I have used this site as a guest through Google searches. To the point now: I am learning Python and I want to install scipy and pybrain. I have Python 3.5.1 installed in the following path: C:\Program Files\Python35-32. When I use pip install (in cmd), I get an error with red letters, something along the lines of "failed with error code 1". I've searched all around the web through Google (mostly) and of course here, but I can't find any solution. I should say I have Windows Vista 32-bit. Thanks in advance for your help.
problems installing scipy and pybrain in python 3.5.1
0
0
0
157
37,487,072
2016-05-27T15:19:00.000
0
0
0
1
python,postgresql,plpython
42,190,637
1
true
0
0
Yes, it will; the package is independent of the standard Python installation.
1
0
0
I was writing a PL/Python function for PostgreSQl, with Python 2.7 and Python 3.5 already installed on Linux. When I was trying to create extension plpythonu, I got an error, then I fixed executing in terminal the command $ sudo apt-get install postgresql-contrib-9.3 postgresql-plpython-9.3. I understand that this is some other package. If I will not have Python 2.7/3.5 installed, and I will install the plpython package, will the user defined function still work? Is some how the PL/Python depending on Python?
PL/Python in PostgreSQL
1.2
1
0
278
37,488,036
2016-05-27T16:09:00.000
1
0
1
0
python,keyboard-shortcuts,pycharm
37,488,161
2
false
0
0
Possibly you have to set the path to the Python environment, so it knows where your packages are and can look through them to give you tips. Check this in the PyCharm interpreter options.
2
0
0
In the PyCharm IDE for Python, the Ctrl+Space shortcut brings up available properties, methods, etc. (auto-completion as well), but the list has no explanations. In other words, I see available options in the list, but when I hover the mouse over them no explanation comes up, so I have no idea what these options stand for. Is there any way to see the explanations? Or is there any setting that I must configure?
Python/Pycharm Ctrl+Space Suggestion List Does not show explanations
0.099668
0
0
464
37,488,036
2016-05-27T16:09:00.000
2
0
1
0
python,keyboard-shortcuts,pycharm
37,488,336
2
true
0
0
After pressing Control+Space, try Control+Shift+I to see definitions, and Ctrl+Q to see documentation.
2
0
0
In the PyCharm IDE for Python, the Ctrl+Space shortcut brings up available properties, methods, etc. (auto-completion as well), but the list has no explanations. In other words, I see available options in the list, but when I hover the mouse over them no explanation comes up, so I have no idea what these options stand for. Is there any way to see the explanations? Or is there any setting that I must configure?
Python/Pycharm Ctrl+Space Suggestion List Does not show explanations
1.2
0
0
464
37,492,173
2016-05-27T20:59:00.000
0
0
0
0
python,excel,pandas,dataframe,precision
37,596,921
3
false
0
0
Excel might be truncating your values, not pandas. If you export to .csv from Excel and are careful about how you do it, you should then be able to read with pandas.read_csv and maintain all of your data. pandas.read_csv also has an undocumented float_precision kwarg, which might or might not be useful.
1
4
1
I tried to use pandas to read an Excel sheet into a dataframe, but for floating-point columns the data is read incorrectly. I use the function read_excel() to do the task. In Excel, the value is 225789.479905466, while in the dataframe the value is 225789.47990546614, which creates a discrepancy when I import data from Excel to a database. Does anyone face the same issue with pandas.read_excel()? I have no issue reading a csv into a dataframe. Jeremy
loss of precision when using pandas to read excel
0
1
0
4,533
37,493,341
2016-05-27T22:49:00.000
4
0
1
1
python,linux,raspberry-pi
37,494,585
2
false
0
0
I finally figured it out, but wanted to post the solution so others can find it in the future. Subprogram = Popen(['lxterminal', '-e', 'python ./Foo.py'], stdout=PIPE) Here lxterminal is the Raspberry Pi's terminal emulator, -e is required, python ./Foo.py launches the Python file, and stdout=PIPE lets the parent capture the output. Running the above launches Foo.py in a new terminal window and allows the user to terminate the Foo.py process if desired.
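A self-contained sketch of this approach; the child script name Foo.py is the hypothetical example carried over from the answer, and lxterminal is assumed to be on the PATH.

from subprocess import Popen, PIPE

# launch the child script in its own lxterminal window
subprogram = Popen(['lxterminal', '-e', 'python ./Foo.py'], stdout=PIPE)

# ... later, when the bootstrap decides to stop this sub-program:
subprogram.terminate()   # ask the terminal process to exit
subprogram.wait()        # reap the process so it does not linger as a zombie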
1
3
0
I am writing a bootstrap program that runs several individual programs simultaneously. Thus, I require each sub-program to have its own terminal window, in a manner that gives me the ability to start/stop each sub-program individually within the bootstrap. I was able to do this on Windows using Popen and CREATE_NEW_CONSOLE (each sub-program has it's own .py file), however I am having trouble achieving this with Linux. I am using a Raspberry Pi and Python 2.7.9. I have tried: Subprogram = Popen([executable, 'Foo.py'], shell=True) However this does not seem to create a new window.. and os.system("python ./Foo.py") Does not seem to create a new window nor allow me to terminate the process. Other research has thus far proved unfruitful.. How can I do this? Many thanks in advance.
Python: Opening a program in a new terminal [Linux]
0.379949
0
0
4,772
37,497,795
2016-05-28T10:19:00.000
0
0
0
0
python,r,classification,data-mining,text-classification
37,499,607
1
true
0
0
There is nothing wrong with using this coding strategy for text and support vector machines. For your actual objective, support vector regression (SVR) may be more appropriate. Also, beware of the journal impact factor: it is very crude, you need to take temporal aspects into account, and much very good work is not published in journals at all.
1
0
1
I am struggling with the best choice for a classification/prediction problem. Let me explain the task - I have a database of keywords from abstracts for different research papers, also I have a list of journals with specified impact factors. I want to build a model for article classification based on their keywords, the result is the possible impact factor (taken just as a number without any further journal description) with a given keywords. I removed the unique keyword tags as they do not have much statistical significance so I have only keywords that are repeated 2 and more times in my abstract list (6000 keyword total). I think about dummy coding - for each article I will create a binary feature vector 6000 attributes in length - each attribute refers to presence of the keyword in the abstract and classify the whole set by SVM. I am pretty sure that this solution is not very elegant and probably also not correct, do you have any suggestions for a better deal?
Classification of sparse data
1.2
0
0
539
37,499,130
2016-05-28T12:42:00.000
1
0
0
1
python,geany
37,499,554
1
false
0
0
Execute: C:\Python35\python ¨%f¨ This string contains the diaeresis character (¨) (U+00A8) instead of the double quote (") (U+0022).
1
1
0
I'm a new user of Python. I installed Python 3.5 for Windows. Hello.py runs fine in a terminal. When trying to run the same in Geany, the path is not found. These are the settings in Geany: Compile: C:\Python35\python -m py_compile "%f" Execute: C:\Python35\python ¨%f¨ What am I doing wrong?
Geany not running Python
0.197375
0
0
863
37,500,810
2016-05-28T15:27:00.000
1
0
0
1
python,file-io,go,attributes,metadata
37,501,333
2
false
0
0
If you are dealing with binary files like docx and pdf, you're best off storing the metadata in separate files or in an SQLite file. Metadata is usually stored separately from files, in data structures called inodes (at least on Unix systems; Windows probably has something similar), but you probably don't want to get that deep into the rabbit hole. If your goal is to query the system based on metadata, then it would be easier and more efficient to use something like SQLite. Having the metadata in the file would mean that you would need to open the file, read it into memory from disk, and then check the metadata, i.e. slower queries. If you don't need to query based on metadata, then storing metadata in the file might make sense. It would reduce the dependencies in your application, but in order to access the contents of the file through Word or Adobe Reader, you'd need to strip the metadata before handing it off to the application. Not worth the hassle, usually.
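A minimal sketch of the SQLite approach using the standard-library sqlite3 module; the database path, table layout and flag column are assumptions for illustration.

import sqlite3

conn = sqlite3.connect("file_metadata.db")  # hypothetical metadata store
conn.execute("CREATE TABLE IF NOT EXISTS meta (path TEXT PRIMARY KEY, flag TEXT)")

def set_flag(path, flag):
    # record (or update) the marker for a file without touching the file itself
    conn.execute("INSERT OR REPLACE INTO meta (path, flag) VALUES (?, ?)", (path, flag))
    conn.commit()

def get_flag(path):
    # return the stored marker for a file, or None if it has never been flagged
    row = conn.execute("SELECT flag FROM meta WHERE path = ?", (path,)).fetchone()
    return row[0] if row else None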
1
0
0
I am in the process of writing a program and need some guidance. Essentially, I am trying to determine if a file has some marker or flag attached to it. Sort of like the attributes for a HTTP Header. If such a marker exists, that file will be manipulated in some way (moved to another directory). My question is: Where exactly should I be storing this flag/marker? Do files have a system similar to HTTP Headers? I don't want to access or manipulate the contents of the file, just some kind of property of the file that can be edited without corrupting the actual file--and it must be rather universal among file types as my potential domain of file types is unbound. I have some experience with Web APIs so I am familiar with HTTP Headers and json. Does any similar system exist for local files in windows? I am especially interested in anyone who has professional/industry knowledge of common techniques that programmers use when trying to store 'meta data' in files in order to access them later. Or if anyone knows of where to point me, as I am unsure to what I should be researching. For the record, I am going to write a program for Windows probably using Golang or Python. And the files I am going to manipulate will be potentially all common ones (.docx, .txt, .pdf, etc.)
Attribute system similar to HTTP Headers for local files
0.099668
0
0
62
37,502,578
2016-05-28T18:37:00.000
0
0
1
0
python,multithreading,sockets,asynchronous,unity3d
37,503,738
1
false
0
0
Use TCP. UDP is mostly used for things like video/audio streaming, and that's not the case here. As for async vs. threads in Unity, I would suggest you go with a thread. It is easier, and you are guaranteed that your network code will not be slowing Unity down in any way. On "Unity3D game and plot out these data in a GUI using Python GTK": if you are receiving data from the eye tracker, you should be able to plot the data with Unity. If your requirement is to use Python, then it is still possible to make a server/client connection between Unity and your Python app.
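As an illustration of the threaded TCP route on the Python side, a minimal sketch using the standard-library socketserver module (Python 3 names); the port and the echo behaviour are assumptions, not part of the original answer.

import socketserver

class GazeHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # each client connection is served in its own thread; read lines of gaze data
        for line in self.rfile:
            sample = line.decode().strip()
            # plot or log the sample here; for the sketch we just acknowledge it
            self.wfile.write(("ack " + sample + "\n").encode())

server = socketserver.ThreadingTCPServer(("0.0.0.0", 9999), GazeHandler)
server.serve_forever()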
1
0
0
I'm new to Python, socket, multithreading and asynchronous programming. I'm wondering what is the best approach to implement a server that receives and sends data between a client such as a Unity3D game and plot out these data in a GUI using Python GTK while writing these data to a text file in real time. To make things a bit more complicated, the data comes from an Eyetracker which receives gaze data at the speed of 300 ms. Given these requirements, is it better to use threads or async? Moreover, should I be creating a UDP or TCP server to exchange the data? If more information is needed, please let me know. Thanks.
Multithreading or Asynchronous for Python GTK/Socket Server/Data Writing
0
0
0
316
37,502,942
2016-05-28T19:17:00.000
3
0
0
0
python,python-2.7,user-interface,tkinter
37,503,535
1
true
0
1
Every tkinter program needs exactly one instance of Tk. Tkinter is a wrapper around an embedded tcl interpreter. Each instance of Tk gets its own copy of the interpreter, so two Tk instances have two different namespaces. If you need multiple windows, create one instance of Tk and then additional windows should be instances of Toplevel. While you can create, destroy, and recreate a root window, there's really no point. Instead, create the root window for the login screen, and then just delete the login screen widgets and replace them with your second window. This becomes trivial if you make each of your "windows" a separate class that inherits from tk.Frame. Because tkinter will destroy all child widgets when a frame is destroyed, it's easy to switch from one "window" to another. Create an instance of LoginFrame and pack it in the root window. When they've input a correct password, destroy that instance, create an instance of MainWindow and pack that.
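A bare-bones sketch of that structure in Python 3 syntax; the widget contents and the trivial "log in" trigger are placeholders, not part of the original answer.

import tkinter as tk

class LoginFrame(tk.Frame):
    def __init__(self, master, on_success):
        super().__init__(master)
        self.entry = tk.Entry(self, show="*")
        self.entry.pack()
        tk.Button(self, text="Log in", command=on_success).pack()

class MainWindow(tk.Frame):
    def __init__(self, master):
        super().__init__(master)
        tk.Label(self, text="Main application").pack()

root = tk.Tk()

def show_main():
    login.destroy()              # destroying the frame destroys its children too
    MainWindow(root).pack()

login = LoginFrame(root, show_main)
login.pack()
root.mainloop()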
1
0
0
I am starting to learn Tkinter and have been creating new windows with a new instance of Tk every time. I just read that that isn't good practice. If so, why? And how could this be done better? I have seen others create windows with Toplevel and Frame instances. What are the benefits/drawbacks of using these instead? In case this makes a difference: the application that I am writing code for starts off with a login window and then proceeds to a second window if the entered password is correct.
Tkinter Creating Multiple Windows - Use new Tk instance or Toplevel or Frame?
1.2
0
0
1,757
37,504,035
2016-05-28T21:51:00.000
0
1
0
0
python,optimization,scipy
37,506,002
2
false
0
0
Depends on your function, but it might be possible to solve symbolically using SymPy. That would give all roots. It can find eigenvalues symbolically if necessary. Finding all extrema is the same as finding all roots of the derivative of your function, so it won't be any easier than finding all roots (as WarrenWeckesser mentioned). Finding all roots numerically will require using knowledge about the function. As a simple example, say you knew some minimum spacing between roots. You could try to recursively partition the interval and find roots in each. Stop after finding the maximum number of roots. But if the spacing is small, this could require many function evaluations (e.g. in the worst case when there are zero roots). The more constraints you can impose, the more you can cut down on function evaluations.
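A sketch of the partitioning idea with scipy; the sample function, interval and number of subintervals are assumptions, and it still misses roots where the function touches zero without changing sign.

import numpy as np
from scipy.optimize import brentq

def find_roots(f, a, b, n=200):
    # split [a, b] into n subintervals and run brentq wherever the sign changes
    xs = np.linspace(a, b, n + 1)
    ys = [f(x) for x in xs]
    roots = []
    for x0, x1, y0, y1 in zip(xs[:-1], xs[1:], ys[:-1], ys[1:]):
        if y0 == 0.0:
            roots.append(x0)
        elif y0 * y1 < 0:
            roots.append(brentq(f, x0, x1))
    return roots

print(find_roots(np.cos, 0.0, 10.0))   # roots of cos(x) on [0, 10]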
1
0
1
I'm looking for an efficient method to find all the roots of a function f on an interval [a,b]. The problem I have is that all the nice methods from scipy.optimize require either that f(a) and f(b) have different signs, or that I provide an initial guess x0, but I know nothing about my roots before running the code. Note: The function f is smooth (at least C1), and doesn't have a pathological behaviour [nothing like sin(1/x)]. However, it requires building a matrix A(x) and finding its eigenvalues, and is therefore time-consuming. It is expected to have between 0 and 10 roots on [a,b], whose position is completely arbitrary. I can't afford missing any of them (e.g. I can't take 100 initial guesses x0 and just hope that i'll catch all the roots). I was thinking about implementing something like this: Find all the extrema {m_1, m_2.., m_k} of f with scipy.optimize [maybe fmin, but I don't know which method is the most efficient]: Search for a minimum m_1 starting from point a [initial guess for gradient algorithm] Search for a maximum m_2 starting from point m_1 + dx [forcing the gradient algorithm to go forward] Search for a minimum m_3... If two consecutive extrema m_i and m_(i+1) have opposite signs, apply brentq on interval [m_i, m_(i+1)] to find a root. Is there a better way of solving this problem? If not, are fmin and brentq the best choices among the scipy.optimize library in order to minimize the number of calls to my function f?
Finding multiple roots on an interval with Python
0
0
0
1,874
37,504,566
2016-05-28T23:09:00.000
3
1
0
1
python,python-3.x,tornado,pytest
37,504,714
1
true
1
0
No, it is not currently possible to test this in Tornado via any public interface (as of Tornado version 4.3). It's straightforward to avoid spinning up a server, although it requires a nontrivial amount of code: the interface between HTTPServer and Application is well-defined and documented. The trickier part is the other side: there is no supported way to determine which handler will be invoked before that handler is invoked. I generally recommend testing routing via end-to-end tests for this reason. You could also store your URL route list before passing it into Tornado, and do your tests against that - the internal logic of "take the first regex match" is pretty easy to replicate.
1
1
0
I'm new to Tornado, and working on a project that involves some rather complex routing. In most of the other frameworks I've used I've been able to isolate routing for testing, without spinning up a server or doing anything terribly complex. I'd prefer to use pytest as my testing framework, but I'm not sure it matters. Is there a way to, say, create my project's instance of tornado.web.Application, and pass it arbitrary paths and assert which RequestHandler will be invoked based on that path?
Route testing with Tornado
1.2
0
0
129
37,505,970
2016-05-29T03:49:00.000
0
0
1
0
python,python-2.7,ipython,graphlab
37,531,473
2
false
0
0
Please remember that console applications and GUI applications do not share the same environment on OS X. This also means that if you install a Python package from the console, it will probably not be visible to PyCharm. Usually you need to install packages using PyCharm in order to be able to use them.
1
0
1
I have got a strange issue. I am now using graphlab/numpy to develop a project via Pycharm 5. OS is Mac OS 10.11.5. I created a p2.7 virtual environment for the project. Programme runs well. But after I install ipython, I can no longer import graphlab and numpy correctly. Error message: AttributeError: 'module' object has no attribute 'core' System keeps telling that attribute ‘core' and 'connect' are missing. But I am pretty sure that they can be found in graphlab/numpy folders, and no duplicates. Project runs all right in terminal. Now I have to uninstall ipython. Then everything is ok again. Please kindly help.
Error importing numpy & graphlab after installing ipython
0
0
0
398
37,507,458
2016-05-29T07:46:00.000
0
0
0
0
python,django,pytest
37,513,426
3
false
1
0
Yeah, you can override the settings in setUp, point the tests at the real database and load your database fixtures. But I think it's not good practice, since you want to run your tests without modifying your "real" app environment. You should try pytest-django: with this library you can reuse, create and drop your test databases.
1
1
0
py.test sets up a test database. I'd like to use the real database configured in the settings.py file (since I'm on a test machine with test data already). Would that be possible?
Django py.test run on real database?
0
1
0
1,804
37,507,758
2016-05-29T08:26:00.000
1
0
0
0
python,gpu,gpgpu,theano
37,516,498
1
true
0
0
libgpuarray was made to support OpenCL, but we didn't have time to finish it. Many things work, but we don't have the time to make sure it works everywhere. In any case, you must find an OpenCL version that supports that GPU, install it, and reinstall libgpuarray so that it uses it. Also, I'm not sure that GPU will give you any speedup, so don't spend too much time trying to make it work.
1
1
1
I'm looking for a way to use Intel GPU as a GPGPU with Theano. I've already installed Intel OpenCL and libgpuarray, but a test code 'python -c "import pygpu;pygpu.test()"' crashed the process. And I found out devname method caused it. It seems there would be a lot more errors. Is it easy to fixed them to work well? I understand Intel OpenCL project can use GPGPU but it might not be supported by libgpuarray. Environment Windows 7 x64 Intel(R) HD Graphics 4600 Python 2.7 x86 or x64
Is there any way to use libgpuarray with Intel GPU?
1.2
0
0
920
37,511,165
2016-05-29T14:37:00.000
2
0
1
0
python,executable,anaconda,pyinstaller
37,513,422
2
true
0
1
Search for mkl_avx2.dll & mkl_def.dll files and paste them in your .exe folder.
1
1
0
I have made a GUI application using wxpython and some other packages (matplotlib, pandas, numpy). I tried to compile this into a standalone executable. However, when I run the 'my_script.exe' I get the following error in my command prompt: Intel MKL FATAL ERROR: Cannot load mkl_avx2.dll or mkl_def.dll. The versions I am using are: Anaconda 2.0.0 (Python 2.7) 64 bit Setuptools 19.2 (downgraded from 20.3 because of import error) Thanks in advance for helping me!
Pyinstaller Intel MKL fatal error when running .exe file Python 2.7
1.2
0
0
2,841
37,512,182
2016-05-29T16:17:00.000
0
0
1
0
python,python-3.x,tornado,python-3.5,python-asyncio
71,522,460
9
false
0
0
For multiple types of scheduling I'd recommend APScheduler, which has asyncio support. I use it for a simple Python process that I can fire up using Docker and that just runs like a cron job, executing something weekly, until I kill the Docker container/process.
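For the plain-asyncio equivalent of a PeriodicCallback, a loop plus sleep avoids any recursion; a minimal sketch, where the interval and the callback are placeholders.

import asyncio

async def periodic(interval, func, *args):
    # call func every `interval` seconds without growing the call stack
    while True:
        func(*args)
        await asyncio.sleep(interval)

def tick():
    print("tick")

loop = asyncio.get_event_loop()
loop.create_task(periodic(1.0, tick))
loop.run_forever()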
1
87
0
I'm migrating from tornado to asyncio, and I can't find the asyncio equivalent of tornado's PeriodicCallback. (A PeriodicCallback takes two arguments: the function to run and the number of milliseconds between calls.) Is there such an equivalent in asyncio? If not, what would be the cleanest way to implement this without running the risk of getting a RecursionError after a while?
How can I periodically execute a function with asyncio?
0
0
0
87,956
37,513,279
2016-05-29T18:12:00.000
1
0
1
0
python,pip,setuptools
37,531,324
3
false
0
0
Python packaging guidelines state that installing a package should never execute Python code in order to install it. This means that you may not be able to download things during the installation process. If you want to download some additional data, do it after you install the package: for example, when your package is imported you could download the data and cache it somewhere, so it isn't downloaded again on every new import.
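A sketch of the download-and-cache-on-first-import idea; the URL is the example from the question, and the cache location and file name are hypothetical.

import os
import urllib.request

DATA_URL = "http://mydata.com/data.tar.gz"           # example URL from the question
CACHE_DIR = os.path.expanduser("~/.my_dataset")      # hypothetical cache directory
CACHE_PATH = os.path.join(CACHE_DIR, "data.tar.gz")

def fetch_data():
    # download once, then reuse the cached copy on every later import/call
    if not os.path.exists(CACHE_PATH):
        os.makedirs(CACHE_DIR, exist_ok=True)
        urllib.request.urlretrieve(DATA_URL, CACHE_PATH)
    return CACHE_PATH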
1
8
0
I'd like to create some ridiculously-easy-to-use pip packages for loading common machine-learning datasets in Python. (Yes, some stuff already exists, but I want it to be even simpler.) What I'd like to achieve is this: User runs pip install dataset pip downloads the dataset, say via wget http://mydata.com/data.tar.gz. Note that the data does not reside in the python package itself, but is downloaded from somewhere else. pip extracts the data from this file and puts it in the directory that the package is installed in. (This isn't ideal, but the datasets are pretty small, so let's assume storing the data here isn't a big deal.) Later, when the user imports my module, the module automatically loads the data from the specific location. This question is about bullets 2 and 3. Is there a way to do this with setuptools?
Using setuptools, how can I download external data upon installation?
0.066568
0
0
2,343
37,514,795
2016-05-29T20:44:00.000
-1
0
0
0
python,tkinter
37,514,847
1
true
0
1
This is because when you bind a function to a button you are not calling the function, only binding it so that tkinter knows what to do when the button is pressed. You only need to use func() when you are actually calling the function, as you are doing in the event handler.
1
1
0
I have a Python function that takes no parameters. In my code, I call this function twice. Once as a command behind a Tkinter Button and once as a function to an event that I bind to a window. For the command, I call the function as func and that works fine. For the event, it will only work if I call it as func() and change the definition of the function to be: func(self). Why? How can I make both these calls compatible?
What is the difference between "func" and "func()" in TkInter?
1.2
0
0
116
37,516,730
2016-05-30T02:03:00.000
0
0
0
0
python,pandas,back-testing
43,774,149
1
false
0
0
The problem with optimising the loop is this: say you have 3 years of data and only 3 events that you are interested in. Then you can use event-based backtesting with only 3 iterations. The catch is that you have to precompute the events, which needs the data anyway, and most of the time you will also need statistics about your backtest that need the data too, like maximum drawdown. So most backtesting frameworks will use the loop anyway, or vectorisation if you are on R/MATLAB/NumPy. If you really need to optimize it, you probably need to precompute and store all the information and then just do look-ups.
1
0
1
In a hypothetical scenario where I have a large amount of data that is received, and is generally in chronological order upon receipt, is there any way to "play" the data forward or backward, thereby recreating the flow of new information on demand? I know that in a simplistic sense, I can always have a script (with whatever output being unimportant) that starts with a for loop that takes in whatever number of events or observations and does something, then takes in more observations, updates what was previously output with a new result, and so on. Is there a way of doing that is more scaleable than a simple for loop? Basically, any time I look into this subject I quickly find myself navigating to the subject area of High Frequency Trading, specifically algorithm efficacy by way of backtesting against historical data. While my question is about doing this in a more broad sense where our observation do not need to be stock/option/future price pips, the same principles must apply. Does anyone have such experience on how such a platform is built on a more scaleable level than just a for loop with logic below it? Another example would be health data / claims where one can see backward and forward what happened as more claims come in over time.
Backtesting with Data
0
0
0
392
37,522,943
2016-05-30T10:06:00.000
2
0
1
0
python,64-bit,32-bit
61,237,002
2
false
0
0
Make a backup of your "Lib" folder in your Python installation folder. Now, when you install your 64-bit Python, paste this "Lib" folder there. After this, you don't need to install many modules again.
2
15
0
I've been working on a single program for a few months now, which now requires some additional functionality. Originally, a 32 bit install was just fine, but since I'm now working with massive matrices in scipy, I simply do not have the required RAM in 32bit. The other problem I have is that my little project has to be very easily transposeable to new systems belonging to people who have no idea what they're doing and just want to click "run", so I did the whole thing with a portable python install. Would it be possible to "upgrade" my little 2.7 python to 64 bit, or am I condemned to reinstalling every single module in a fresh install?
Is it possible to upgrade a portable Python 32 bit install to a 64 bit install?
0.197375
0
0
43,480
37,522,943
2016-05-30T10:06:00.000
42
0
1
0
python,64-bit,32-bit
37,530,517
2
true
0
0
No, it is not possible to upgrade a 32-bit Python installation to a 64-bit one. Still, there is something you can do to speed up the installation of a new 64-bit version. Run pip freeze > packages.txt on the old installation to generate a list of all installed packages and their versions. After you install the new Python version, run pip install -r packages.txt to install the same versions of the packages that you had on the old installation. As I see you use scipy, you may want to have a look at Anaconda; it could save you a lot of time.
2
15
0
I've been working on a single program for a few months now, which now requires some additional functionality. Originally, a 32 bit install was just fine, but since I'm now working with massive matrices in scipy, I simply do not have the required RAM in 32bit. The other problem I have is that my little project has to be very easily transposeable to new systems belonging to people who have no idea what they're doing and just want to click "run", so I did the whole thing with a portable python install. Would it be possible to "upgrade" my little 2.7 python to 64 bit, or am I condemned to reinstalling every single module in a fresh install?
Is it possible to upgrade a portable Python 32 bit install to a 64 bit install?
1.2
0
0
43,480
37,527,124
2016-05-30T13:38:00.000
1
0
0
0
python,html,qt,groupbox
37,535,825
3
true
1
1
QGroupBox's title property does not support HTML. The only customization you can do through the title string (besides the text itself) is the addition of an ampersand (&) for keyboard accelerators. In short, unlike QLabel, you can't use HTML with QGroupBox.
1
1
0
I want to set my QGroupBox's title with HTML expressions in python program, e.g. : ABC. (subscript) Does anybody have an idea how to do this?
How can I set QGroupBox's title with HTML expressions? (Python)
1.2
0
0
372
37,527,706
2016-05-30T14:07:00.000
0
0
0
0
python,html,flask
49,279,779
3
false
1
0
Rushy already provided a correct answer, but when you want to use Flask and Bokeh (I am in the same position right now), responsive design does not help in all cases (I am using a Bokeh gridplot and don't want that when the page is accessed from a mobile device). I will append "/mobile" to the links from my home page to the plot page on mobile devices (by checking the screen width with CSS) and then make the plot layout depend on whether "/mobile" was appended to the link. Obviously this has some drawbacks (if someone on desktop sends a link to someone on mobile or the other way around), but as my page is login-protected, they will be redirected to the login page in most cases anyway (if they are not already logged in) and need to go to the plot page manually. Just in case this helps someone.
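A rough sketch of that routing trick in Flask; the route names and the layout switch are assumptions about the answerer's app, not part of the original post.

from flask import Flask, request

app = Flask(__name__)

@app.route("/plot")
@app.route("/plot/mobile")
def plot():
    # the same view serves both URLs; the "/mobile" suffix picks the layout
    mobile = request.path.endswith("/mobile")
    return "mobile layout" if mobile else "desktop layout"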
1
4
0
A short question: is there a way to get my browser's viewport size in Flask? I'm designing a page for many devices, for example a normal PC, iPhone etc. Thank you, FFodWindow
Flask get viewport size
0
0
0
4,664
37,529,837
2016-05-30T16:04:00.000
1
0
1
0
python,regex
37,529,955
3
false
0
0
This happens because inside your first parentheses you have put a +, which means one or more occurrences, and since your second group has a ?, it is optional. Hence it is omitted, as your first group can match your whole string and the second one doesn't need to match anything. You can overcome this by removing the * from within the [], so the * can't be matched by your first group. Your regex then becomes ([a-zA-Z0-9\_]+)(\*\d+)?. Hope this helps.
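To illustrate, a quick check of the corrected pattern; defaulting the count to 1 when no *N suffix is present follows the question's requirement.

import re

pattern = re.compile(r'([a-zA-Z0-9_]+)(\*\d+)?')

for text in ('scratch_alpha', 'scratch_alpha1*12'):
    m = pattern.match(text)
    name = m.group(1)
    count = int(m.group(2)[1:]) if m.group(2) else 1   # strip the leading '*'
    print(name, count)   # -> scratch_alpha 1, then scratch_alpha1 12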
1
1
0
I'm trying to write a python regex matching two patterns: the first one is scratch_alpha and the second one is scratch_alpha1*12 (where 12 can be any decimal number) and I'd like to put the value after * inside a variable and if scratch_alpha is detected with * after, just write 1 in a variable I wrote this regex: ([a-zA-Z0-9\_*]+)(\*\d+)? I expected to get two groups after, the first one which would be the name "scratch_alpha" and the second which would be the number after * or None (and if None, I initialize the variable to 1). But with my regex, it seems that the first group contains everything (scratch_alpha*12) and not scratch_alpha in first group and value in second group.
Python regex to match variable pattern
0.066568
0
0
253
37,533,613
2016-05-30T20:59:00.000
2
0
1
1
python,pip,python-3.5
37,534,734
1
true
0
0
The space in the name of your disk ("HD 2") is tripping things up. The path to the Python interpreter (which is going to be /Volumes/HD 2/Projects/PythonProjects/BV/bin/python) is getting split on the space, and the system is trying to execute /Volumes/HD. You'd think that in 2016 your operating system ought to be able to deal with this. But it can't, so you need to work around it: Rename "HD 2" to something that doesn't contain a space. Re-create the virtualenv.
1
2
0
After I activated my Virtualenv i received this message: Francos-MBP:BV francoe$source bin/activate (BV) Francos-MBP:BV francoe$ pip freeze -bash: /Volumes/HD 2/Projects/PythonProjects/BV/bin/pip: "/Volumes/HD: bad interpreter: No such file or directory (BV) Francos-MBP:BV francoe$ pip install --upgrade pip -bash: /Volumes/HD 2/Projects/PythonProjects/BV/bin/pip: "/Volumes/HD: bad interpreter: No such file or directory At the moment I am not able set up any virtualenv .. [p.s. I have multiple versions of python (3.5 and systems version 2.7)] Can anyone help me? Thank you
pip: "/Volumes/HD: bad interpreter: No such file or directory
1.2
0
0
444
37,534,440
2016-05-30T22:39:00.000
1
0
1
0
python,jupyter-notebook,command-line-arguments,papermill
60,984,594
11
false
0
0
A workaround is to make the jupyter notebook read the arguments from a file. From the command line, modify the file and run the notebook.
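A small sketch of that workaround; the file name and the argument keys are hypothetical.

# before launching the notebook, write the "arguments" to a small JSON file, e.g.:
#   echo '{"flag": false}' > notebook_args.json
import json

with open("notebook_args.json") as f:
    args = json.load(f)

flag = args["flag"]   # use this inside the notebook instead of sys.argv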
1
67
0
I'm wondering if it's possible to populate sys.argv (or some other structure) with command line arguments in a jupyter/ipython notebook, similar to how it's done through a python script. For instance, if I were to run a python script as follows: python test.py False Then sys.argv would contain the argument False. But if I run a jupyter notebook in a similar manner: jupyter notebook test.ipynb False Then the command line argument gets lost. Is there any way to access this argument from within the notebook itself?
Passing command line arguments to argv in jupyter/ipython notebook
0.01818
0
0
117,203
37,535,998
2016-05-31T02:38:00.000
1
0
1
0
python,python-2.7,pygame
42,210,427
1
false
0
1
Are you using images to show your walls? Because if you are, you can use a program called paint.net to create a transparent sprite. Just create a file with the desired dimensions and use the eraser tool to make the whole thing transparent (click on the little squiggly arrow up at the top to make it easier). Save it as a png in your pygame folder and you have a transparent square.
1
0
0
To elaborate, I'm currently creating a maze layout using wall sprites. However, I want the maze to be invisible when I am actually playing the game, while still having collision detection. Is this possible in any way? Thanks.
How to make an invisible sprite in pygame? (python)
0.197375
0
0
318
37,538,068
2016-05-31T06:11:00.000
0
0
0
0
python,svm
37,538,260
1
false
0
0
You can add one more feature to your training data: if the title of the book contains one of your predefined words, set it to one, otherwise zero.
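One way to wire that into sklearn is to stack the extra binary column next to the n-gram features; the toy titles, keyword list and the weight factor below are assumptions, not part of the original answer.

import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

titles = ["alice in wonderland", "car engine repair"]      # toy data
labels = ["children", "auto"]
keywords = {"fairy", "alice"}

vec = CountVectorizer(ngram_range=(1, 2))
X_text = vec.fit_transform(titles)

# extra binary feature, optionally scaled up to give it more weight
flag = np.array([[10.0 if any(k in t.lower() for k in keywords) else 0.0]
                 for t in titles])
X = hstack([X_text, csr_matrix(flag)])

clf = LinearSVC().fit(X, labels)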
1
1
1
I'm using SVM to predict the label from the title of a book. However, I want to give more weight to some predefined features. For example, if the title of the book contains words like fairy or Alice, I want to label it as a children's book. I'm using a word n-gram SVM. Please suggest how to achieve this using sklearn.
giving more weight to a feature using sklearn svm
0
0
0
156
37,540,755
2016-05-31T08:36:00.000
1
0
1
0
python,powershell
37,540,899
1
false
0
0
Launching the "Python" application for the terminal interface seems to use the system's default command prompt, which, for Windows, is cmd.exe. If you run it this way, using quit() or exit() will exit the interpreter and then close the window entirely. If you start Python from an actual cmd window by running cmd and then running py or python, using quit() or exit() will exit the interpreter and leave you at the system's command line. Starting the interpreter from PowerShell with the py or python command means that you can use PowerShell's interface, which includes conveniences such as highlighting and right-clicking to copy and paste. In terms of running Python code, there is no difference.
1
1
0
I am new to Python, and I would like to ask what the difference is if I run Python in Windows PowerShell compared to when I run it directly and launch the app's command line. Is there an impact if I use the latter method?
python in powershell vs python CLI
0.197375
0
0
751
37,543,581
2016-05-31T10:48:00.000
5
0
1
0
python,ide,spyder
71,112,331
2
true
0
0
Nowadays (or at least in Spyder v5), Spyder includes autopep8 directly in the GUI and has an option of automatic formatting (as requested in the original question). Simply go to Tools > Preferences then choose Completion and linting > Code style and formatting. There, toggle on Enable code style linting and Autoformat files on save. Now when you save your file, Spyder will beautify your code, when it can infer what to do. This should include adding missing whitespaces after commas or around operators, removing whitespaces in empty lines or missing new lines to separate functions, just to name a few.
1
8
0
How can I select a piece of code and apply auto-format, aka beautification to it? I especially mean correctly indenting lists broken into several lines, parentheses and square brackets. Auto-indent does nothing.
How do I beautify (auto-format) a piece of code in Spyder IDE
1.2
0
0
25,989
37,543,647
2016-05-31T10:50:00.000
0
0
0
0
python,pandas,dataframe
37,543,931
10
false
0
0
Use: df.fillna(0) to fill NaN with 0.
2
32
1
I have a dataframe with 71 columns and 30597 rows. I want to replace all non-nan entries with 1 and the nan values with 0. Initially I tried for-loop on each value of the dataframe which was taking too much time. Then I used data_new=data.subtract(data) which was meant to subtract all the values of the dataframe to itself so that I can make all the non-null values 0. But an error occurred as the dataframe had multiple string entries.
How to replace all non-NaN entries of a dataframe with 1 and all NaN with 0
0
0
0
42,508
37,543,647
2016-05-31T10:50:00.000
0
0
0
0
python,pandas,dataframe
65,940,835
10
false
0
0
Generally there are two steps: substitute all non-NaN values, and then substitute all NaN values. dataframe.where(~dataframe.notna(), 1) will replace all non-NaN values with 1, and dataframe.fillna(0) will replace all NaNs with 0. Side note: if you take a look at the pandas documentation, .where replaces all values where the condition is False; this is the important point. That is why we invert to create the mask ~dataframe.notna(), through which .where() replaces the values.
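For reference, an equivalent shorter form; on older pandas versions use notnull() instead of notna(), and the sample frame below is made up for illustration.

import pandas as pd
import numpy as np

df = pd.DataFrame({"a": [1.0, np.nan, "x"], "b": [np.nan, 2.0, 3.0]})

# True/False mask of non-missing cells, cast to 1/0
binary = df.notnull().astype(int)
print(binary)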
2
32
1
I have a dataframe with 71 columns and 30597 rows. I want to replace all non-nan entries with 1 and the nan values with 0. Initially I tried for-loop on each value of the dataframe which was taking too much time. Then I used data_new=data.subtract(data) which was meant to subtract all the values of the dataframe to itself so that I can make all the non-null values 0. But an error occurred as the dataframe had multiple string entries.
How to replace all non-NaN entries of a dataframe with 1 and all NaN with 0
0
0
0
42,508
37,546,726
2016-05-31T13:11:00.000
0
0
1
1
python,import,python-import,dawg
37,661,187
2
true
0
0
Since dawg was installed with pip, the best way to learn what version I have is with pip list, which shows the installed packages and their versions. (Credit to larsks' comment.)
1
0
0
How can I find what version of dwag I have installed in python? Usually packagename.version does the trick, but dawg seems to lack the relevant methods.
How to find what version of dawg I have?
1.2
0
0
49
37,546,770
2016-05-31T13:13:00.000
9
0
1
1
python,logging
37,547,070
4
true
0
0
Simply give a different filename, like filename=r"C:\User\Matias\Desktop\myLogFile.log".
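Put together with the handler mentioned in the question, a minimal sketch; the target directory, rotation schedule and logger name are assumptions.

import logging
import os
from logging.handlers import TimedRotatingFileHandler

log_dir = os.path.join(os.path.expanduser("~"), "Desktop")   # any folder you like
handler = TimedRotatingFileHandler(os.path.join(log_dir, "myLogFile.log"),
                                   when="midnight", backupCount=7)

logger = logging.getLogger("mytool")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("logging into %s", log_dir)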
1
15
0
I am using the Python logging library and want to choose the folder where the log files will be written. For the moment, I made an instance of TimedRotatingFileHandler with the entry parameter filename="myLogFile.log" . This way myLogFile.log is created on the same folder than my python script. I want to create it into another folder. How could I create myLogFile.log into , let's say, the Desktop folder? Thanks, Matias
Python, choose logging files' directory
1.2
0
0
34,734
37,551,217
2016-05-31T16:37:00.000
0
0
1
0
python-2.7,dependencies,openerp
37,555,901
1
false
1
0
Maybe this helps you: the auto_install flag means that the module will be automatically installed as soon as all its dependencies are satisfied. It can be set in the __openerp__.py file; its default value is False. Add this flag to both modules, then create a dummy module C and add it as a dependency of A and B. All the remaining dependencies of A and B should be in C. Then, when you install C, both modules are installed at the same time.
1
0
0
I'd like to know if there is any way to make module A depend on module B if module B depends on module A, like having them installed at the same time. When I try, neither is selectable in the module list. If not, is it possible to merge both modules easily? I have to add something which implies this inter-dependency. Will I have to rewrite every line of both modules in a single one? I would be surprised, since using different modules makes development easier and more structured, and Odoo seems to be aware of it, which is why I am asking this question even though I found nothing about it.
Odoo module A depends of module B which depends of module A
0
0
0
806
37,552,035
2016-05-31T17:24:00.000
0
0
0
0
python,numpy,scipy,bspline
37,553,823
1
false
0
0
Short answer: no. Spline construction is a global process, so if you add a data point, you really need to recompute the whole spline, which involves solving an N-by-N linear system, etc. If you're adding many knots sequentially, you can probably construct a process where you use a factorization of the collocation matrix at step n to compute things at step n+1. You'll need to check the stability of this process carefully, and splrep and friends do not give you any help here, so you'll need to write this yourself. (If you do, you might find it helpful to check the sources of interpolate.CubicSpline.) But before you start on that, consider using a local interpolator instead. If all you want is to add a knot given the data, then there's scipy.interpolate.insert.
1
0
1
I have a bspline created with scipy.interpolate.splrep with points (x_0,y_0) to (x_n,y_n). Usual story. But I would like to add a data point (x_n+1,y_n+1) and appropriate knot without recomputing the entire spline. Can anyone think of a way of doing this elegantly? I could always take the knot list returned by splrep, and add on the last knot of a smaller spline created with (x_n-2, y_n-2) to (x_n+1,y_n+1) but that seems less efficient than it could be.
B-splines with Scipy : can I add a datapoint without full recompute?
0
0
0
56
37,553,343
2016-05-31T18:43:00.000
2
0
1
0
python,exception
37,578,023
2
false
0
0
If the file it wants to delete is not in the directory, I want that to be okay and the program to continue processing the next command instead of aborting the process. In short I want to check to see if the file is there and if it isn't skip the rm code and process the rest of the script.
2
2
0
I am a relative newbie to Python, and I have searched, and while there are many posts regarding the error, none appear to address my requirement. Briefly, I am writing code to produce hourly reports. I intend to have 4 days of reports archived in the folder. At the beginning of my code I have one line that deletes all 24 files for reports generated 5 days earlier. The first run is fine, as the program finds files to delete, so it continues to a successful completion. However, the next 23 runs fail, as the program hits a "No such file or directory" error. My work-around is to write code where it only executes the "delete" function on the first hour's run, but I consider that just a band-aid solution. I would prefer to handle the exception so that the remaining code is processed even though the first step got that error.
No such File or Directory Exception
0.197375
0
0
1,100
37,553,343
2016-05-31T18:43:00.000
1
0
1
0
python,exception
37,554,002
2
false
0
0
Figures that I would wrestle with this for a couple of days and then figure it out 30 minutes after posting the question. Here's the solution: check the directory first with os.listdir("insert the work path here"). If the listing is non-empty, run the delete commands; else, run whatever code you want to execute when the directory is empty. Code you want to execute every time the program runs, whether the directory is empty or not, goes after the if/else.
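An alternative sketch that handles the missing-files case per file rather than per directory; the report path pattern is hypothetical.

import glob
import os

# delete the 5-day-old reports if they exist; quietly skip anything already gone
for path in glob.glob("/path/to/reports/report_day5_hour*.txt"):
    try:
        os.remove(path)
    except OSError:
        pass   # file vanished between the glob and the remove; keep going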
2
2
0
I am a relative newbie to Python, and I have searched, and while there are many posts regarding the error, none appear to address my requirement. Briefly, I am writing code to produce hourly reports. I intend to have 4 days of reports archived in the folder. At the beginning of my code I have one line that deletes all 24 files for reports generated 5 days earlier. The first run is fine, as the program finds files to delete, so it continues to a successful completion. However, the next 23 runs fail, as the program hits a "No such file or directory" error. My work-around is to write code where it only executes the "delete" function on the first hour's run, but I consider that just a band-aid solution. I would prefer to handle the exception so that the remaining code is processed even though the first step got that error.
No such File or Directory Exception
0.099668
0
0
1,100
37,554,692
2016-05-31T20:03:00.000
0
0
0
0
python,heroku,amazon-s3,timeout,thumbor
37,598,908
1
true
1
0
This issue is in the tc_aws library we are using: it wasn't executing callback functions when a 404 is returned from S3. We were at version 2.0.10; after upgrading the library, the problem is fixed.
1
0
0
We are hosting our thumbor image resizing service on Heroku, with image store in aws s3. The Thumbor service access s3 via the aws plugin. Recently we observe a behavior on our thumbor service which I don't understand. Use case: our client application was sending a resize request to resize an image that does not exist in aws S3. expected behavior: the thumbor service fetch image in s3. s3 returns 404 not found thumbor return 404 to heroku, the router returns 404 to client app. what we observe: under certain situation(I cannot reproduce this consistently). s3 returns 404 but thumbor does not let the heroku router know. As a result the heroku wait for 30s and return the request as 503 time out. The flowing are the logs 2016-05-31T19:38:15.094468+00:00 app[web.1]: 2016-05-31 19:38:15 thumbor:WARNING ERROR retrieving image from S3 [bucket]/3C6A3A84-F249-458B-9EFA-BF9BC863874B: {'ResponseMetadata': {'HTTPStatusCode': 404, 'RequestId': '3D61A8CBB187D846', 'HostId': 'C1qYC9Au42J0Salt1SVlCkcvcrKcQv4dltwOCdwGNF1TUFScWpkHb1qC++ZBJ0JzVqQlXW0xONU='}, 'Error': {'Key': '[bucket]/3C6A3A84-F249-458B-9EFA-BF9BC863874B', 'Code': 'NoSuchKey', 'Message': 'The specified key does not exist.'}} 2016-05-31T19:38:14.777549+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=GET path="/bALh_vgGXd7e_J7kZ0GhyE_lhZ0=/150x150/[bucket]/3C6A3A84-F249-458B-9EFA-BF9BC863874B" host=heroku.app.com request_id=67d87ea3-8010-4fbe-8c29-b2b7298f1cbc fwd="54.162.233.176,54.240.144.43" dyno=web.1 connect=5ms service=30000ms status=503 bytes=0 I am wondering if anyone can help to understand why thumbor hangs? Many thanks in advance!
Thumbor On Heroku causes heroku time out when fetching URL returns 404
1.2
0
0
178
37,555,890
2016-05-31T21:24:00.000
6
0
0
0
python,opencv
43,214,847
3
false
0
0
No need to change the Python version; you can just use pip. Open cmd (in admin mode) and type pip install opencv-python.
1
4
1
I am having some difficulties installing opencv with python 3.5. I have linked the cv files, but upon import cv2 I get an error saying ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv2.so, 2): Symbol not found: _PyCObject_Type or more specifically: /Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5 /Users/Jamie/Desktop/tester/test.py Traceback (most recent call last): File "/Users/Jamie/Desktop/tester/test.py", line 2, in import cv File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv.py", line 1, in from cv2.cv import * ImportError:dlopen(/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv2.so, 2): Symbol not found: _PyCObject_Type Referenced from: /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv2.so Expected in: flat namespace in /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv2.so I have linked cv.py and cv2.so from location /usr/local/Cellar/opencv/2.4.12_2/lib/python2.7/site-packages correctly into /Library/Frameworks/Python.framework/Versions/3.5/bin Would anybody be able to help please? Thanks very much
Python OpenCV import error with python 3.5
1
0
0
12,712
37,556,334
2016-05-31T22:00:00.000
0
0
0
0
python,csv,pandas,dataframe,yelp
37,556,551
2
false
0
0
Typically yes: load everything, then filter your dataset. But if you really want to pre-filter, and you're on a Unix-like system, you can pre-filter using grep before even starting Python. A compromise between the two is to write the pre-filter using Python and pandas: this way you download the data, pre-filter it (writing the pre-filtered data to another csv), and then play with the pre-filtered data. The way to go depends on the number of times you need to load the whole dataset. If you want to read it once and discard it, there's no need to pre-filter; but while working on the code, if you want to test it many times, pre-filtering may save you some seconds. Here again, there's another possibility: use an IPython notebook, so you'll be able to load your dataset, filter it, then execute the block of code on which you're currently working any number of times on this already-loaded dataset; that's even faster than loading a pre-filtered dataset. So no real answer here, it really depends on your usage and personal taste.
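If the file is too large to load at once, a chunked pre-filter in pandas is one option; the file name and the column/value used for filtering are assumptions about the Yelp dump.

import pandas as pd

# read the big csv in pieces and keep only the restaurant rows
chunks = pd.read_csv("yelp_reviews.csv", chunksize=100000)
restaurants = pd.concat(
    chunk[chunk["categories"].str.contains("Restaurants", na=False)]
    for chunk in chunks
)
restaurants.to_csv("restaurant_reviews.csv", index=False)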
1
0
1
I'm working with the Yelp dataset, and of course it is millions of entries long, so I was wondering whether there is any way to download just what you need, or do you have to pick away at it manually? For example, Yelp has reviews on everything from auto repair to beauty salons, but I only want the reviews on restaurants. So do I have to read the entire thing and then drop the rows I don't need?
Is there a way to prefilter data before using read_csv to download data into pandas dataframe?
0
0
0
412
37,556,808
2016-05-31T22:40:00.000
0
0
0
0
user-interface,python-3.x,tkinter,textbox,messagebox
37,559,356
1
true
0
1
simpledialog.askstring() was exactly what I needed.
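For reference, a minimal sketch of how that dialog can be used (my illustration, not the original poster's code):
import tkinter as tk
from tkinter import simpledialog
root = tk.Tk()
root.withdraw()  # hide the empty root window so only the dialog is shown
# Opens a small modal dialog with a prompt, a text entry field and OK/Cancel buttons.
answer = simpledialog.askstring("Input", "Enter some text:", parent=root)
print(answer)  # None if the user pressed Cancel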
1
1
0
I know how to make message boxes, and I know how to create textboxes. However, I don't know how to make a message box WITH a textbox in it. I've looked through the tkinter documentation and it seems message boxes are kind of limited. Is there a way to make a message box that contains a text entry field, or would I have to create a new pop up window from scratch?
Python: Create a message box with a text entry field using tkinter
1.2
0
0
392
37,557,174
2016-05-31T23:24:00.000
-1
0
1
0
python,python-3.x,pandas,packages
37,557,227
2
false
0
0
Anaconda includes its own version of Python. You have to put Anaconda's directories on your system environment path instead of the former installation's to avoid conflicts. Also, if you want to make the whole process easy, it is recommended to use PyCharm, which will ask you to choose the Python interpreter you want.
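A quick illustrative check of which interpreter a script is actually running under (not part of the original answer):
import sys
print(sys.executable)  # should point inside your Anaconda installation
import pandas as pd    # only succeeds if this interpreter can see Anaconda's packages
print(pd.__version__)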
1
0
1
I installed Python 3.5.1 from www.python.org. Everything works great, except that you can't install pandas using pip (it needs Visual Studio to compile, which I don't have). So I installed Anaconda (www.continuum.io/downloads). Now I can see pandas in the list of installed modules, but when I run Python programs I still get: ImportError: No module named 'pandas' How do I set up my environment to use the modules from Anaconda? Note: I have Anaconda's home directory and Library/bin on my path, as well as Python's home directory. I do not have PYTHONPATH or PYTHONHOME set, and I know I have the correct privileges to see everything.
How do I get python to import pandas?
-0.099668
0
0
1,823
37,557,376
2016-05-31T23:51:00.000
3
0
1
0
python-3.x
37,557,395
1
false
0
0
You can't tell the difference in any reasonable way; you could do terrible things with the gc module to count references, but that's not a reasonable way to do things. There is no difference between an anonymous object and a named variable (aside from the reference count), because it will be named no matter what when received by the function; "variables" aren't really a thing, Python has "names" which reference objects, and the object is utterly unconcerned with whether it has named or unnamed references. Make a consistent API. If you need it to operate both ways, either have it do both things (mutate in place and also return the mutated object for completeness), or make two distinct APIs, one of which can be written in terms of the other: the mutating version can implement the "return new" version by making a local copy of the argument, passing it to the mutating version, then returning the mutated local copy.
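To illustrate the two-API suggestion (a sketch in terms of sorting, not code from the original answer):
def sort_in_place(items):
    # Mutates the caller's list and returns nothing, like list.sort().
    items.sort()
def sorted_copy(items):
    # Returns a new sorted list and leaves the argument untouched, like sorted().
    result = list(items)   # local copy of whatever was passed in
    sort_in_place(result)
    return result
l = [1, 2, 4, 3]
sort_in_place(l)               # l is now [1, 2, 3, 4]
print(sorted_copy([3, 1, 2]))  # works identically for an anonymous literal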
1
2
0
How can you test whether your function is getting [1,2,4,3] or l? That might be useful for deciding whether you want to, for example, return an ordered list or sort it in place. For example, if it gets [1,2,4,3] it should return [1,2,3,4]. If it gets l, it should link the ordered list to l and not return anything.
In Python, how to know if a function is getting a variable or an object?
0.53705
0
0
25
37,559,502
2016-06-01T04:34:00.000
0
0
0
0
python
37,658,540
1
false
0
1
I found an alternative solution to this problem: in a Python while loop, push a single file to the SD card and use "adb mv" to rename the file on the SD card after pushing it; continue this until the device memory is full.
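A minimal sketch of that loop (my illustration, not the original poster's code; the file name, target path and stop condition are assumptions, and the rename step here uses adb shell mv):
import subprocess
i = 1
while True:
    # Push the same local file, then rename it on the device so the copies accumulate.
    push = subprocess.run(["adb", "push", "file.txt", "/sdcard/file.txt"])
    if push.returncode != 0:   # assume the push fails once the storage is full
        break
    subprocess.run(["adb", "shell", "mv", "/sdcard/file.txt", "/sdcard/file%d.txt" % i])
    i += 1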
1
1
0
I know "adb push" command could be used to push files to android device. But in Python how do i push single file for example file.txt(size of about 50 KB) into android sdcard with different names(for ex. file1.txt, file2.txt etc) until the device storage is full. Any idea much appreciated.
Python push single file into android device with different names until the device runs out of memory
0
0
0
227
37,568,811
2016-06-01T12:36:00.000
1
0
0
0
python,django,url,sharing
37,651,587
2
true
1
0
As Seluck suggested, I decided to go with base64 encoding and decoding. In the model, my "link" property is now built from the standard URL plus base64.urlsafe_b64encode(str(media_id)) The URL pattern I use to match the base64 token: base64_pattern = r'(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$' And finally in the view we decode the id to load the proper data: media_id = base64.urlsafe_b64decode(str(media_id)) media = Media.objects.get(pk=media_id)
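A consolidated sketch of that encode/decode step (my illustration, assuming Python 3, where base64 works on bytes rather than str, and a Django Media model as in the answer):
import base64
def encode_media_id(media_id):
    # e.g. 42 -> 'NDI=' (URL-safe and reversible)
    return base64.urlsafe_b64encode(str(media_id).encode()).decode()
def decode_media_id(token):
    # e.g. 'NDI=' -> 42
    return int(base64.urlsafe_b64decode(token.encode()).decode())
Note that base64 only obscures the id rather than securing it: anyone who recognizes the encoding can decode it and guess neighbouring ids.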
1
0
0
I'm having a bit of a problem figuring out how to generate user-friendly links to products for sharing. I'm currently using /product/{uuid4_of_said_product}, which works quite fine, but it's a bit user-unfriendly: it's kind of long and ugly. And I do not wish to use an id, as it would allow users to "guess" products. Not that that is too much of an issue, but I would like to avoid it. Do you have any hints on how to generate unique, user-friendly, short sharing URLs based on the unique item id or uuid?
Sharing URL generation from uuid4?
1.2
0
1
491