Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
16,905,703 | 2013-06-03T20:57:00.000 | 0 | 0 | 1 | 0 | python,function,dynamic,input,message | 16,905,780 | 2 | false | 0 | 0 | isn't that pretty trivial,
subscribe to message received event from your queue
extract the command
use switch cases
Did I understand the question correctly ? | 1 | 1 | 0 | I'm constantly receiving serial input and am storing the messages I receive in a queue.
I want to parse the messages in this queue and do different things with them.
For example, if I receive the message "KEY0" I want to call my function Key0().
If I receive the message "LOGXrandom message" I want to write 'random message' to a file logx.txt, or logy.txt if the message is "LOGYrandom message".
What is the best way to create a system that would do something like this? | Message parsing system in Python | 0 | 0 | 0 | 65 |
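A minimal sketch of the dispatch approach described in the answer above, assuming messages arrive as plain strings on a queue.Queue; the handler names (key0, write_log) are hypothetical stand-ins for the asker's own functions, and Python 3 is assumed.

```python
import queue

def key0():
    print("KEY0 received")

def write_log(filename, text):
    # Append the message body to the named log file.
    with open(filename, "a") as f:
        f.write(text + "\n")

def dispatch(message):
    # Route each message by its prefix, as the answer suggests.
    if message == "KEY0":
        key0()
    elif message.startswith("LOGX"):
        write_log("logx.txt", message[4:])
    elif message.startswith("LOGY"):
        write_log("logy.txt", message[4:])

incoming = queue.Queue()
incoming.put("LOGXrandom message")
while not incoming.empty():
    dispatch(incoming.get())
```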
16,905,803 | 2013-06-03T21:03:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-3.x | 16,906,095 | 2 | false | 0 | 0 | You can use an IPC framework wrapping all your calls transparently. You could try using Pyro4 for that, but I'm not sure if that is going to work. | 2 | 1 | 0 | I have code written in python 2.6 that is using (depending on) other third-party libraries that are written in python 2.6 also. These third-party libraries are old and won't be translated to python 3.x in the near future or in some cases never. My question is whether it is possible to write 3.x code that can call code (functions) from the 2.6 code.
ex: write 3.x python code that can call 2.6 code that in turn calls the third-party libraries and returns the result if there is any back to the 3.x code.
If you have any examples or know if this is possible or not or can point me in the right direction it would be great.
Thank you for your time | Is it possible to call 2.x code from 3.x code in Python | 0 | 0 | 0 | 87 |
16,905,803 | 2013-06-03T21:03:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-3.x | 16,905,882 | 2 | true | 0 | 0 | You have a couple of options:
Port the libraries yourself - this isn't as hard as it might seem, although not ideal.
Use some kind of IPC to interface between python 3.x and 2.x installations.
Just stick with python 2.x
I'd recommend option 3, for the reasons you identify - it looks like a lot of libraries are only going to be available for 2.x. Frankly, there aren't any compelling reasons to switch to 3.x.
Update: you have a fourth option, which is to use something like PyPi to create a python executable which can cope with running the two languages at once. If the languages seriously end up with a split between sets of essential libraries, then someone will probably do this. | 2 | 1 | 0 | I have code written in python 2.6 that is using (depending on) other third-party libraries that are written in python 2.6 also. These third-party libraries are old and won't be translated to python 3.x in the near future or in some cases never. My question is whether it is possible to write 3.x code that can call code (functions) from the 2.6 code.
ex: write 3.x python code that can call 2.6 code that in turn calls the third-party libraries and returns the result if there is any back to the 3.x code.
If you have any examples or know if this is possible or not or can point me in the right direction it would be great.
Thank you for your time | Is it possible to call 2.x code from 3.x code in Python | 1.2 | 0 | 0 | 87 |
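A minimal sketch of the IPC option mentioned in both answers, assuming the Python 2.6 side is exposed through a small wrapper script (legacy_wrapper.py, a hypothetical name) that reads a JSON request from stdin and writes a JSON result to stdout, and that a python2.6 executable is on PATH; this is one possible transport, not the Pyro4 approach itself.

```python
import json
import subprocess

def call_legacy(func_name, *args):
    """Run the Python 2.6 wrapper in a separate process and exchange JSON over pipes."""
    payload = json.dumps({"func": func_name, "args": list(args)})
    proc = subprocess.run(
        ["python2.6", "legacy_wrapper.py"],   # hypothetical wrapper around the old libraries
        input=payload, capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

# Example (hypothetical function name):
# result = call_legacy("do_something", 1, 2)
```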
16,908,840 | 2013-06-04T02:20:00.000 | 1 | 0 | 1 | 0 | python,dictionary | 16,909,122 | 3 | false | 0 | 0 | No, the order of the dict will not change because you change the values. The order depends on the keys only (or their hash value, to be more specific at least in CPython). However, it may change between versions and implementations of Python, and in Python 3.3, it will change every time you start Python. | 1 | 4 | 0 | I have a python dictionary (say dict) in which I keep modifying values (the keys remain unaltered). Will the order of keys in the list given by dict.keys() change when I modify the values corresponding to the keys? | Does the order of keys in dictionary.keys() in a python dictionary change if the values are changed? | 0.066568 | 0 | 0 | 607 |
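A quick demonstration of the point above: mutating values leaves the key order untouched. This is shown for CPython; as the answer notes, the ordering itself can differ between interpreter versions and runs.

```python
d = {"a": 1, "b": 2, "c": 3}
before = list(d.keys())
for k in d:
    d[k] *= 100          # change only the values
after = list(d.keys())
print(before == after)   # True: updating values never reorders keys
```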
16,910,114 | 2013-06-04T05:00:00.000 | 2 | 0 | 1 | 0 | python,pandas | 69,644,920 | 4 | false | 0 | 0 | it is so simple, you need to use the filter function and lambda exp:
df_filterd=df.groupby('name').filter(lambda x:(x.name == 'cond1' or...(other condtions )))
you need to take care that if you want to use more than condtion to put it in brackets()..
and you will get back a DataFrame not GroupObject. | 2 | 22 | 1 | Is it possible to delete a group (by group name) from a groupby object in pandas? That is, after performing a groupby, delete a resulting group based on its name. | Delete a group after pandas groupby | 0.099668 | 0 | 0 | 27,093 |
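A small runnable sketch of the filter-based approach: to delete a group you keep every group whose name is not the one you want to drop. The column name and group labels here are made up for illustration.

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "a", "b", "c"], "value": [1, 2, 3, 4]})

# Keep all groups except 'b'; the result is a plain DataFrame, not a GroupBy object.
df_without_b = df.groupby("name").filter(lambda g: g.name != "b")
print(df_without_b)
```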
16,910,114 | 2013-06-04T05:00:00.000 | 0 | 0 | 1 | 0 | python,pandas | 58,868,271 | 4 | false | 0 | 0 | Should be easy:
df.drop(index='group_name',inplace=True) | 2 | 22 | 1 | Is it possible to delete a group (by group name) from a groupby object in pandas? That is, after performing a groupby, delete a resulting group based on its name. | Delete a group after pandas groupby | 0 | 0 | 0 | 27,093 |
16,913,178 | 2013-06-04T08:34:00.000 | 1 | 0 | 0 | 0 | python,django | 16,914,965 | 2 | false | 1 | 0 | Often it's ok to import models between apps. This just creates a dependency, something many apps have. Of course it's more flexible to have your app be independently pluggable, but the important thing is that you document any dependencies for anyone else trying to use your app(s).
If you really want your app to be pluggable, consider reorganizing your app. Simplicity is good, but going overboard and insisting on strict, literal adherence to principles can get in the way of functionality.
(Without specific details of your app, this is just speculation, but since all the apps you describe revolve around Contacts, it seems like they could simply be repackaged into the same app with unsubscribe as boolean field in contacts and a view to set the attribute. And depending on what exactly you want to do with Email, something similar) | 1 | 3 | 0 | I have the following scenario.
I have an existing application called ‘Contacts’; in its model I have number and name.
I want to create a new application called ‘unsubscribe’ and I want to make it reusable.
This is my issue:
In the new app called unsubscribe, its model will need a foreign key relating to the contact number. This means that it is now tied to ‘contacts’ and I cannot use it for, say, my email app. How does Django deal with this from a reusable point of view? | Django reusable application with linked FK? | 0.099668 | 0 | 0 | 115 |
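A minimal sketch of the single-app layout suggested in the answer (unsubscribe as a boolean field on the contact model plus a view to set it); the model and field names are illustrative, not the asker's actual schema, and a configured Django project is assumed.

```python
from django.db import models
from django.http import HttpResponse

class Contact(models.Model):
    number = models.CharField(max_length=32)
    name = models.CharField(max_length=100)
    unsubscribed = models.BooleanField(default=False)

def unsubscribe(request, contact_id):
    # A view that flips the flag instead of living in a separate app.
    contact = Contact.objects.get(pk=contact_id)
    contact.unsubscribed = True
    contact.save()
    return HttpResponse("unsubscribed")
```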
16,915,118 | 2013-06-04T10:09:00.000 | 0 | 1 | 1 | 0 | python,scripting | 16,915,630 | 3 | false | 0 | 0 | Others have already mentioned documentation and unit-testing as being the main tools here. I want to add a third: the Python shell. One of the huge advantages of a non-compiled language like Python is that you can easily fire up the shell, import your module, and run the code there to see what it does and what it returns.
Linked to this is the Python debugger: just put import pdb;pdb.set_trace() at any point in your code, and when you run it you will be dropped into the interactive debugger where you can inspect the current values of the variables. In fact, the pdb shell is an actual Python shell as well, so you can even change things there. | 2 | 0 | 0 | Python is a relatively new language for me and I already see some of the trouble areas of maintaining a scripting language based project. I am just wondering how the larger community , with a scenario when one has to maintain a fairly large code base written by people who are not around anymore, deals with the following situations:
Return type of a function/method. Assuming past developers didn't document the code very well, this is turning out to be really annoying as I am basically reading code line by line to figure out what a method/function is supposed to return.
Code refactoring: I figured a lot of code need to be moved around, edited/deleted and etc. But lot of times simple errors, which would otherwise be compile time error in other compiled languages e.g. - wrong number of arguments, wrong type of arguments, method not present and etc, only show up when you run the code and the code reaches the problematic area. Therefore, whether a re-factored code will work at all or not can only be known once you run the code thoroughly. I am using PyLint with PyDev but still I find it very lacking in this respect. | checking/verifying python code | 0 | 0 | 0 | 51 |
16,915,118 | 2013-06-04T10:09:00.000 | 0 | 1 | 1 | 0 | python,scripting | 16,915,300 | 3 | false | 0 | 0 | You are right, that's an issue with dynamically typed interpreted languages.
There are two important things that can help:
Good documentation
Extensive unit-testing.
They apply to other languages as well of course, but here they are especially important. | 2 | 0 | 0 | Python is a relatively new language for me and I already see some of the trouble areas of maintaining a scripting language based project. I am just wondering how the larger community , with a scenario when one has to maintain a fairly large code base written by people who are not around anymore, deals with the following situations:
Return type of a function/method. Assuming past developers didn't document the code very well, this is turning out to be really annoying as I am basically reading code line by line to figure out what a method/function is supposed to return.
Code refactoring: I figured a lot of code need to be moved around, edited/deleted and etc. But lot of times simple errors, which would otherwise be compile time error in other compiled languages e.g. - wrong number of arguments, wrong type of arguments, method not present and etc, only show up when you run the code and the code reaches the problematic area. Therefore, whether a re-factored code will work at all or not can only be known once you run the code thoroughly. I am using PyLint with PyDev but still I find it very lacking in this respect. | checking/verifying python code | 0 | 0 | 0 | 51 |
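A minimal example of the unit-testing suggestion: pinning down a poorly documented function's return type and value with a test, so refactoring mistakes surface when the suite runs rather than deep inside production code. legacy_function is a hypothetical stand-in for the undocumented code.

```python
import unittest

def legacy_function(x):        # hypothetical stand-in for undocumented legacy code
    return [x, x * 2]

class TestLegacyFunction(unittest.TestCase):
    def test_returns_list_of_two_ints(self):
        result = legacy_function(3)
        self.assertIsInstance(result, list)
        self.assertEqual(result, [3, 6])

if __name__ == "__main__":
    unittest.main()
```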
16,917,613 | 2013-06-04T08:34:00.000 | 1 | 0 | 1 | 0 | python,function,dynamic,input,message | 16,918,902 | 1 | true | 0 | 0 | My suggestion would be to have two threads, a speaking thread and a console thread. Make a queue shared between the two and when new data needs to be spoken, shove it on the queue. The speaking thread idles if the queue is empty; if not, it pops a value and speaks. | 1 | 1 | 0 | Question
I need to be able to call a function in the background without it freezing the console. I have experience with multithreading, but I would prefer if it completed tasks in order. What's the best way to do this? Example code is greatly appreciated as English isn't my first language.
Background information (Specific to my question)
I'm using a heavily modified version of pyttsx, thus when a specific function is called it performs a SAPI call which freezes up the console. I would like to be able to call speak.main(decrypt(data)) and still be able to continue inputting data whilst my computer is speaking. | Calling a python function in the background | 1.2 | 0 | 0 | 152 |
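A minimal sketch of the two-thread design from the accepted answer: the console side pushes text onto a queue and a worker thread consumes it, so the console stays responsive while speech happens. speak() is a placeholder for the blocking pyttsx/SAPI call, and None is used as a shutdown sentinel.

```python
import queue
import threading

def speak(text):
    print("speaking:", text)      # placeholder for the blocking SAPI/pyttsx call

def speaker(q):
    while True:
        text = q.get()            # idles while the queue is empty
        if text is None:
            break                 # sentinel value shuts the worker down
        speak(text)

q = queue.Queue()
worker = threading.Thread(target=speaker, args=(q,))
worker.start()
q.put("hello")                    # input handling can continue while this is spoken
q.put(None)
worker.join()
```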
16,920,835 | 2013-06-04T14:46:00.000 | 5 | 0 | 1 | 0 | python,python-2.7 | 16,920,879 | 3 | false | 0 | 0 | They could be a problem for the programmer. Keep the function names reasonably short, and use docstrings to document them. | 2 | 18 | 0 | I'm writing a set of python functions that perform some sort of conformance checking on a source code project. I'd like to specify quite verbose names for these functions, e.g.: check_5_theVersionOfAllVPropsMatchesTheVersionOfTheAutolinkHeader()
Could such excessively long names be a problem for python? Is there a maximum length for attribute names? | What is the maximum length for an attribute name in python? | 0.321513 | 0 | 0 | 26,534 |
16,920,835 | 2013-06-04T14:46:00.000 | 3 | 0 | 1 | 0 | python,python-2.7 | 16,920,953 | 3 | false | 0 | 0 | Since attribute names just get hashed and turned in to keys on inst.__dict__ for 99% of classes you'll ever encounter, there's no real limit on length. As long as it is hashable, it'll work as an attribute name. For the other 1% of classes that fiddle with __setattr__\ __getattr__\ __getattribute__ in ways that break the guarantee that anything hashable is a valid attribute name though, the previous does not apply.
Of course, as others have pointed out, you will have code style and quality concerns with longer named attributes. If you are finding yourself needing such long names, it's likely indicative of a design flaw in your program, and you should probably look at giving your data more hierarchical structure and better abstracting and dividing responsibility in your functions and methods. | 2 | 18 | 0 | I'm writing a set of python functions that perform some sort of conformance checking on a source code project. I'd like to specify quite verbose names for these functions, e.g.: check_5_theVersionOfAllVPropsMatchesTheVersionOfTheAutolinkHeader()
Could such excessively long names be a problem for python? Is there a maximum length for attribute names? | What is the maximum length for an attribute name in python? | 0.197375 | 0 | 0 | 26,534 |
16,921,450 | 2013-06-04T15:14:00.000 | 11 | 0 | 1 | 0 | python,sympy | 16,945,012 | 1 | true | 0 | 0 | The only way to do it is to do Add(x, y, b, c, x, y, evaluate=False), which unfortunately isn't very easy to work with. | 1 | 7 | 0 | I'm using Sympy to substitute a set of expressions for another using the Subs function, and I would like for the program not to rearrange or simplify the equations.
i.e. if I were substituting x+y for a in
a+b+c+a to return x+y+b+c+x+y
Does anyone know of a way to perform this?
Many thanks | prevent Sympy from simplifying expression python after a substitution | 1.2 | 0 | 0 | 3,072 |
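A small sketch of the accepted answer's workaround, building the substituted sum with evaluate=False so SymPy does not collect the repeated terms; as the answer warns, the resulting unevaluated object is awkward to work with further.

```python
from sympy import Add, symbols

a, b, c, x, y = symbols("a b c x y")

# a + b + c + a with a -> x + y, kept unevaluated so the two (x + y) terms are not merged.
expr = Add(x, y, b, c, x, y, evaluate=False)
print(expr)          # stays a six-term sum instead of collapsing to b + c + 2*x + 2*y
```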
16,921,871 | 2013-06-04T15:34:00.000 | 0 | 0 | 1 | 0 | python,delimiter | 16,922,059 | 2 | false | 0 | 0 | The python str class (strings) includes a method called split. What you will want to do is call s.split(','). You can replace the comma with your delimiter of choice. This returns a list of strings. The delimiter will be removed from each of the strings in your list. | 1 | 0 | 0 | I could do the following in a loop, but was looking for a cleaner way to do this, or better way.
I have a string that may be over 100,000 characters.
example:
somestring,otherstring,mystring,blahstring,etc....
I need to break up the string to multiple strings or a list, each section containing less than 30,000 characters, while only slicing at a delimiter, comma in this example.
Like I said before I wrote up a for loop where I manage it in several lines, but it's messy, and i'm wanting to learn more about python, so thought I would see better ways to handle this here. Thank you for any direction. | Using python to split up a large file into smaller strings with sequence | 0 | 0 | 0 | 306 |
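A sketch going one step beyond the split() suggestion: split on the delimiter, then regroup the pieces so each chunk stays under the 30,000-character limit while breaking only at commas (the limit and delimiter are taken from the question; the tiny limit below is just for the demo).

```python
def chunk_string(big, limit=30000, sep=","):
    chunks, current, size = [], [], 0
    for piece in big.split(sep):
        extra = len(piece) + (1 if current else 0)   # +1 for the re-inserted comma
        if current and size + extra > limit:
            chunks.append(sep.join(current))          # flush the current chunk
            current, size = [], 0
            extra = len(piece)
        current.append(piece)
        size += extra
    if current:
        chunks.append(sep.join(current))
    return chunks

print(chunk_string("somestring,otherstring,mystring,blahstring", limit=20))
# ['somestring', 'otherstring,mystring', 'blahstring']
```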
16,922,506 | 2013-06-04T16:04:00.000 | 8 | 0 | 1 | 0 | python,list,python-2.7 | 16,922,668 | 2 | true | 0 | 0 | It seems you have a function in your code that is shadowing Python's built-in function named list. | 2 | 4 | 0 | I have list = [] and I am adding an element to it using self.list.append('test') and I get this error - AttributeError: 'function' object has no attribute 'append'
The other list that I have defined append just fine, any ideas? | python list append gives a error: AttributeError: 'function' object has no attribute 'append' | 1.2 | 0 | 0 | 15,766 |
16,922,506 | 2013-06-04T16:04:00.000 | 0 | 0 | 1 | 0 | python,list,python-2.7 | 71,697,586 | 2 | false | 0 | 0 | First of all, you shouldn't use the name of the built-in list as a variable name; and second, a function object has no append attribute, so you cannot call append on it | 2 | 4 | 0 | I have list = [] and I am adding an element to it using self.list.append('test') and I get this error - AttributeError: 'function' object has no attribute 'append'
The other list that I have defined append just fine, any ideas? | python list append gives a error: AttributeError: 'function' object has no attribute 'append' | 0 | 0 | 0 | 15,766 |
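A tiny reproduction of the diagnosis in the accepted answer: when a method (or other function) named list shadows the attribute you meant to use, self.list is that callable, so .append fails. The class and attribute names are illustrative.

```python
class Demo:
    def __init__(self):
        self.items = []          # a real list: append works

    def list(self):              # this method shadows any attribute named 'list'
        return self.items

d = Demo()
d.items.append("test")           # fine
# d.list.append("test")          # AttributeError: the method object has no attribute 'append'
```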
16,922,589 | 2013-06-04T16:08:00.000 | 1 | 0 | 1 | 0 | python | 16,922,969 | 3 | false | 0 | 0 | you can do several things to limit powers (see my comment on OP)
but really my two cents ... just give each user their own VM (maybe AWS?) ... that way they can't screw up much... and you can always just restart it ...
a more complicated approach (but arguably cooler) would be to use lex/yacc (in python its PLY) and define your own language (which could be a limited subset of python) | 1 | 6 | 0 | I am writing an app and I want users to be able to input python files for corner cases. In my head the best way I can think of doing this is to save their file to disk and save the location to a DB then dynamically import it using __import__() and then execute it. The first part of my question is: is this the best way to do this?
Also, this brings up some pretty big security concerns. Is there a way to run their module under restriction? To not let it see the file system or anything?
Edit:
The execution of the python would be to retrieve data from a backend service that is outside the scope of "normal", So it would not be a full application. It could just be a definition of a custom protocol. | Dynamically reading in a python file and executing it safely | 0.066568 | 0 | 0 | 727 |
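A sketch of the dynamic-loading half of the question using importlib instead of __import__; note that this does nothing to sandbox the code, which is the hard part the answers warn about. The file path and function name are hypothetical.

```python
import importlib.util

def load_user_module(path, module_name="user_plugin"):
    spec = importlib.util.spec_from_file_location(module_name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)   # runs the user's top-level code: only do this for trusted input
    return module

# Example (hypothetical path and entry point):
# plugin = load_user_module("/data/plugins/corner_case.py")
# plugin.run()
```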
16,925,038 | 2013-06-04T18:33:00.000 | 0 | 0 | 1 | 0 | python,windows,path | 16,925,096 | 2 | false | 0 | 0 | Move C:\Python27\python.exe to C:\Python27\python27.exe
Add C:\Python27 to your PATH. | 1 | 0 | 0 | I use python 3.3.2 on windows everyday, so i added C:\Python33 to my PATH in order to be able to call "python foo.py" from console and get python 3.3.2 to execute it.
But sometimes, i also need to use python 2.7. How could i add a "python27" entry to my path, in order to call "python27 bar.py" and get python 2.7 to execute it ? | Add python27 to windows path while using python 3.3 | 0 | 0 | 0 | 648 |
16,925,802 | 2013-06-04T19:19:00.000 | 0 | 0 | 0 | 0 | python,sorting,hadoop,bigdata,hadoop-streaming | 16,927,877 | 2 | false | 0 | 0 | I'll assume that you are looking for a total sort order without a secondary sort for all your rows. I should also mention that 'better' is never a good question since there is typically a trade-off between time and space and in Hadoop we tend to think in terms of space rather than time unless you use products that are optimized for time (TeraData has the capability of putting Databases in memory for Hadoop use)
Out of the two possible approaches you mention, I think only one would work within the Hadoop infrastructure. Num 2, Since Hadoop leverages many nodes to do one job, sorting becomes a little trickier to implement and we typically want the 'shuffle and sort' phase of MR to take care of the sorting since distributed sorting is at the heart of the programming model.
At the point when the 59th variable is generated, you would want to sample the distribution of that variable so that you can send it through the framework then merge like you mentioned. Consider the case when the variable distribution of x contain 80% of your values. What this might do is send 80% of your data to one reducer who would do most of the work. This assumes of course that some keys will be grouped in the sort and shuffle phase which would be the case unless you programmed them unique. It's up to the programmer to set up partitioners to evenly distribute the load by sampling the key distribution.
If on the other hand we were to sort in memory then we could accomplish the same thing during reduce but there are inherent scalability issues since the sort is only as good as the amount of memory available in the node currently running the sort and dies off quickly when it starts to use HDFS to look for the rest of the data that did not fit into memory. And if you ignored the sampling issue you will likely run out of memory unless all your key values pairs are evenly distributed and you understand the memory capacity within your data. | 1 | 0 | 1 | I have a large dataset with 500 million rows and 58 variables. I need to sort the dataset using one of the 59th variable which is calculated using the other 58 variables. The variable happens to be a floating point number with four places after decimal.
There are two possible approaches:
The normal merge sort
While calculating the 59th variables, i start sending variables in particular ranges to to particular nodes. Sort the ranges in those nodes and then combine them in the reducer once i have perfectly sorted data and now I also know where to merge what set of data; It basically becomes appending.
Which is a better approach and why? | Sorting using Map-Reduce - Possible approach | 0 | 0 | 0 | 889 |
16,926,408 | 2013-06-04T19:57:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 16,926,447 | 2 | false | 0 | 0 | Yes it is really O(1) for any key. | 1 | 1 | 0 | Does anybody know what O(?) for python's dictionary 'get(key)' method?
I've tested it with cProfile module and get same results of time for 100, 1000, 10000, 100000, 1000000, 100000000 records in the dictionary.
Does it means that python's dictionary provides O(1) access time for any key? | BigO for dictionary method 'get(key)' | 0.099668 | 0 | 0 | 1,882 |
16,929,149 | 2013-06-04T23:31:00.000 | -2 | 0 | 0 | 0 | python,regex,html-parsing,web-crawler,lxml | 16,933,336 | 2 | false | 0 | 0 | You can use pyquery, a library for python that bring you functions from jquery. | 1 | 1 | 0 | I am trying to build a fast web crawler, and as a result, I need an efficient way to locate all the links on a page. What is the performance comparison between a fast XML/HTML parser like lxml and using regex matching? | Finding links fast: regex vs. lxml | -0.197375 | 0 | 1 | 1,263 |
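For reference, a short sketch of the parser-based side of the comparison, using lxml to pull all anchor hrefs out of a page; the HTML string here is a stand-in for a downloaded page.

```python
from lxml import html

page = '<html><body><a href="http://example.com/a">A</a><a href="/b">B</a></body></html>'
tree = html.fromstring(page)
links = tree.xpath("//a/@href")   # every href attribute on an <a> element
print(links)                      # ['http://example.com/a', '/b']
```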
16,930,978 | 2013-06-05T03:18:00.000 | 5 | 0 | 1 | 0 | c#,ironpython | 16,982,644 | 4 | false | 0 | 0 | The simple idea would be to start it as a separate process;
after you kill the process, every thread in it will also be killed. | 1 | 4 | 0 | If I start an IronPython engine in a C# thread, the python script will start a number of threads. However, when I kill the C# thread, all of the threads in the python script are still running. How can I kill all of the threads in the Python script? | Stop all threads running in IronPython | 0.244919 | 0 | 0 | 1,002 |
16,931,563 | 2013-06-05T04:27:00.000 | 0 | 0 | 0 | 0 | python,eclipse,debugging,web.py | 16,931,564 | 1 | true | 1 | 0 | It's actually very easily, and applies to most projects.
1) Go to your "Debug Configurations" window (under "Debug As").
2) Under "Python Run", add a new configuration.
3) Enter the name of the project.
4) Under "Main Module", click on "Browse" and select the script that you've been starting from the command-line.
5) Save.
Dustin | 1 | 0 | 0 | I have my web.py project open in Eclipse, but how can I:
1) Start my project from within Eclipse (and not the console)?
2) Debug my project from within Eclipse (breakpoints, etc..)?
There's no readily-accessible information about this. | How do I debug my web.py project in Eclipse? | 1.2 | 0 | 0 | 337 |
16,931,757 | 2013-06-05T04:47:00.000 | 1 | 0 | 0 | 0 | python,django,django-south | 16,932,331 | 1 | true | 1 | 0 | In a scale of one to madness, I think it's a terrible idea. Your version control system would commit suicide as soon as you tried to update the code further since your VCS would only have the old values while your migration would change the existing files.
I think it's reasonable to have the migration rename uploaded files, but not source files.
Why treat this any different from any other change to your source? Why put this particular source change in a migration? This is madness :) | 1 | 0 | 0 | I recently found out that, during my coding, I inadvertently named one of the Models of a Django app as a subtly mispelled version of an English word. This was not too long ago, but now there are exactly 300 occurrences of the same mispelled word across models, views, tests and my old grandmother's last will and testament.
I'll surely use South to handle the changes in the models, but what about filenames and other changes in the code? Should I have the forward() migration change everything, including finding-replacing all instances of the word and renaming a couple of files?
In a scale of one to madness, how bad is this idea? | Should South be used to rename files and/or instances of a keyword in code? | 1.2 | 0 | 0 | 70 |
16,941,273 | 2013-06-05T13:30:00.000 | 0 | 0 | 1 | 0 | python,multithreading,logging | 16,941,309 | 3 | false | 0 | 0 | Seems that your only option is requiring your users to override some other method instead of run. This way you'll have run in your CustomThread that invokes that other method and reports when done.
This has an extra benefit: since the start function is non-trivial, you'll be able to report a successful start at the beginning of run instead of carefully dealing with an overridden start. | 2 | 0 | 0 | I am currently subclassing python's threading.Thread class in order to add additional logging features to it. At the moment I am trying to get it to report both when the thread is started, and when it has finished. Reporting the thread has started is easy enough since I can just extend the start() function. However, reporting exit has been more difficult. I tried to extend the _bootstrap and _bootstrap_inner functions to add logging after they were complete, however that seems to have no effect. I cannot modify those functions at all.
Does anyone know of a way to add the ability for a thread to report that it has finished? | Adding the ability for a python Thread to report when finished | 0 | 0 | 0 | 103 |
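A minimal sketch of the override-another-method idea from the answer above: run() stays under the subclass's control, so it can log start and finish around a task() hook that users override instead. Class and method names here are illustrative.

```python
import logging
import threading

logging.basicConfig(level=logging.INFO)

class LoggingThread(threading.Thread):
    def run(self):
        logging.info("%s started", self.name)
        try:
            self.task()              # subclasses override task(), not run()
        finally:
            logging.info("%s finished", self.name)

    def task(self):
        pass

class Worker(LoggingThread):
    def task(self):
        logging.info("doing the actual work")

t = Worker()
t.start()
t.join()
```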
16,941,273 | 2013-06-05T13:30:00.000 | 0 | 0 | 1 | 0 | python,multithreading,logging | 16,943,284 | 3 | false | 0 | 0 | If you have a lot of asynchronous threads maybe you should consider a message queue for inter communication instead? have the thread post messages to an exchange and then exit. Then let the calling thread decide when to poll for messages. Kinda depends on your workload though.
This has the advantage that you can go multi process rather than multi thread if you want later.
I understand this suggestion may not be what you wanted. | 2 | 0 | 0 | I am currently subclassing python's threading.Thread class in order to add additional logging features to it. At the moment I am trying to get it to report both when the thread is started, and when it has finished. Reporting the thread has started is easy enough since I can just extend the start() function. However, reporting exit has been more difficult. I tried to extend the _bootstrap and _bootstrap_inner functions to add logging after they were complete, however that seems to have no effect. I cannot modify those functions at all.
Does anyone know of a way to add the ability for a thread to report that it has finished? | Adding the ability for a python Thread to report when finished | 0 | 0 | 0 | 103 |
16,942,317 | 2013-06-05T14:15:00.000 | 3 | 0 | 0 | 0 | python,django,postgresql,heroku,django-south | 16,942,831 | 3 | true | 1 | 0 | From the command line you should be able to do heroku pg:psql to connect directly via PSQL to your database and from in there \dt will show you your tables and \d <tablename> will show you your table schema. | 1 | 2 | 0 | So I have my Django app running and I just added South. I performed some migrations which worked fine locally, but I am seeing some database errors on my Heroku version. I'd like to view the current schema for my database both locally and on Heroku so I can compare and see exactly what is different. Is there an easy way to do this from the command line, or a better way to debug this? | How to View My Postgres DB Schema from Command Line | 1.2 | 1 | 0 | 3,348 |
16,946,659 | 2013-06-05T17:53:00.000 | 0 | 0 | 0 | 0 | python,django,dropzone.js | 17,276,082 | 2 | false | 1 | 0 | This sounds as if you didn't include the CSS files that come along with Dropzone.
Or you didn't add the dropzone or dropzone-previews class to your form. | 1 | 1 | 0 | I'm using latest dropzone.js, version 3.2.0. I downloaded the folder and have all files needed. Using latest Chrome.
When I drop a file, dropzone sends it to the server, and I successfully save it, but nothing visual happens on the front end.
I guess I'm missing something trivial. How do I make dropzone show the upload progress animation?
Another issue I have is that dropzone doesn't hide the div.fallback that contains the fallback form.
I thought those features were supposed to work automatically. | Dropzonejs - Doesn't show progress bar / complete status and doesnt hide fallback form | 0 | 0 | 1 | 1,443
16,948,154 | 2013-06-05T19:24:00.000 | 0 | 0 | 0 | 1 | ipython,enthought | 28,218,590 | 1 | false | 0 | 0 | If you want to launch web interactive then the command
ipython notebook in windows shell or in canopy shell works. | 1 | 2 | 0 | I have been using EPD for some time and recently started using Canopy. So now I have both EPD and Canopy installed on my machine, which runs Windows 7 Pro x64. But I just realized I cannot launch Canopy's IPython interactive session (located in the directory C:\Users\User\AppData\Local\Enthought\Canopy\User\Scripts) in a Windows command prompt. I already added this directory to my Path before the EPD's python directory.
I checked out those files in the directory .../Canopy/User/Scripts/, I believe that problem is not with the file "ipython-script.py" there, but with the file "ipython.exe", which is what will be run when I simply type "ipython" in a Windows command shell (I set the path already).
In a Windows command shell, if I changed to the directory .../Canopy/User/Scripts/ and type up "python ipython-script.py", then I can correctly start the IPython session in the command shell. So, it looks like that "ipython.exe" does not run the script "ipython-script.py"...
Has anyone run into this same problem? Is there an easy fix?
P.S. I already had the latest Canopy (version 1.0.1.1160) installed.
Thanks for any help. | Cannot start Canopy's IPython from Windows command shell | 0 | 0 | 0 | 1,361 |
16,948,194 | 2013-06-05T19:27:00.000 | 0 | 0 | 1 | 0 | python | 16,948,293 | 2 | false | 0 | 0 | I suppose you could write a script (it doesn't have to be Python) that uses pip or the like to install all those libraries. If you do that, be careful with dependencies and the order in which you install those libraries.
You could also download all of the libraries and then call all the setup.py files instead of using pip. | 1 | 0 | 0 | I distribute a software package that is dependent on a variety of binary python extension modules. Sometimes at conferences we have participants install these packages so we can demo how to script our software tool. It's a pain to have folks click through 12 different installers. Is it possible to take existing python binary extension modules and build them into a single installer that will install all 12 at once?
If relevant, here are the python modules I'd like to wrap up:
Cython
Numpy Superpack
GDAL
PIL
py2exe
PyQt
Scipy Superpack
Setuptools
Distribute
Nose
Shapely
PyAMG
Virtualenv
Poster | Is it possible to wrap up python binary installers into a single package? | 0 | 0 | 0 | 171 |
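A sketch of the script-based suggestion above: one Python script that installs the whole list in order by shelling out to pip. It assumes pip is available and that a plain pip install is acceptable for these packages (several of the binary ones historically needed platform-specific installers instead); the package list shown is an illustrative subset.

```python
import subprocess
import sys

PACKAGES = ["setuptools", "numpy", "scipy", "Cython", "Pillow", "nose", "Shapely"]

for name in PACKAGES:
    print("Installing", name)
    # Install in order so dependencies are present before dependents.
    subprocess.check_call([sys.executable, "-m", "pip", "install", name])
```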
16,949,369 | 2013-06-05T20:35:00.000 | 1 | 1 | 0 | 0 | python,serial-port,arduino,pyserial | 16,951,886 | 3 | false | 0 | 0 | Edit:
I forgot about RS-485, which 'jdr5ca' was smart enough to recommend. My explanation below is restricted to RS-232, the more "garden variety" serial port. As 'jdr5ca' points out, RS-485 is a much better alternative for the described problem.
Original:
To expand on zmo's answer a bit, it is possible to share serial at the hardware level, and it has been done before, but it is rarely done in practice.
Likewise, at the software driver level, it is again theoretically possible to share, but you run into similar problems as the hardware level, i.e. how to "share" the link to prevent collisions, etc.
A "typical" setup would be two serial (hardware) devices attached to each other 1:1. Each would run a single software process that would manage sending/receiving data on the link.
If it is desired to share the serial link amongst multiple processes (on either side), the software process that manages the link would also need to manage passing the received data to each reading process (keeping track of which data each process had read) and also arbitrate which sending process gets access to the link during "writes".
If there are multiple read/write processes on each end of the link, the handshaking/coordination of all this gets deep as some sort of meta-signaling arrangement may be needed to coordinate the comms between the process on each end.
Either a real mess or a fun challenge, depending on your needs and how you view such things. | 1 | 1 | 0 | If this is a stupid question, please don't mind me. But I spent some time trying to find the answer but I couldn't get anything solid. Maybe this is a hardware question, but I figured I'd try here first.
Does Serial Communication only work one to one? The reason this came up is because I had an arduino board listening for communication on its serial port. I had a python script feed bytes to the port as well. However, whenever I opened up the arduino's serial monitor, the connection with the python script failed. The serial monitor also connects to the serial port for communication for its little text input field.
So what's the deal? Does serial communication only work between a single client and a single server? Is there a way to get multiple clients writing to the server? I appreciate your suggestions. | Serial Communication one to one | 0.066568 | 0 | 1 | 1,057 |
16,949,415 | 2013-06-05T20:37:00.000 | 1 | 0 | 1 | 0 | python,regex | 16,949,480 | 4 | false | 0 | 0 | You should brush up on the basics of regular expressions. The error is due to the ? at the beginning: it's a quantifier and there is nothing before it. Your use of * and + also doesn't make much sense. Without knowing your exact requirements it's hard to propose a better solution, because there are too many problems with your regex. | 1 | 1 | 0 | I'm trying to capture the dollar amount in a line:
example:
blah blah blah (blah $23.32 blah) blah blac (blah)
I want to capture "$23.32"
This is what I'm using: r'?([\$][.*]+)'
I'm telling it to find one occurrence of (...) with ?
Then I tell it to find something which starts off with a "$" and any character which may come after (so I can get the decimal point also).
However, I get an error of error: nothing to repeat | Capture $ in regex Python | 0.049958 | 0 | 0 | 132 |
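A hedged example of a regex that does capture the dollar amount from the sample line; it assumes amounts always look like $ followed by digits, a dot, and two digits.

```python
import re

line = "blah blah blah (blah $23.32 blah) blah blac (blah)"
match = re.search(r"\$\d+\.\d{2}", line)   # literal $, digits, dot, two decimals
if match:
    print(match.group(0))                  # $23.32
```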
16,951,484 | 2013-06-05T23:28:00.000 | 1 | 0 | 0 | 0 | java,php,javascript,python,ruby | 16,951,499 | 2 | false | 1 | 0 | No. Bytecode caches are available for PHP (e.g. Zend Accelerator); Java is compiled to bytecode. Can't speak for Python. | 2 | 0 | 0 | JavaScript on the server can be interpreted to mashine code using Google's V8 Javascript Engine. But PHP and Ruby and Python and Java all have to run through an interpreter every time they're accessed and it interpretation will be less fast.
Is that true? I read this in an article about Google's V8 Javascript Engine. | Interpretation of Java or Ruby neccessary with every access? | 0.099668 | 0 | 0 | 38 |
16,951,484 | 2013-06-05T23:28:00.000 | 1 | 0 | 0 | 0 | java,php,javascript,python,ruby | 16,951,649 | 2 | true | 1 | 0 | Java is compiled to bytecode, and then (usually) compiled to machine code using a Just-In-Time (JIT) compiler. Java servers don't launch a new process for every request (most just launch a new thread), so the cost of the JIT compile is amortized across the entire lifetime of your server. In practice, this means that Java servers can handle requests at speeds comparable to C or C++ (modulo the different performance profile of automatic memory management).
Python is compiled to bytecode, but the bytecode is interpreted each time it is executed, much like PHP with a bytecode cache. There has been some work on JIT compilers for Python (Psyco was one, and PyPy has done a lot of work with JITs) but they aren't generally considered production-ready. (YMMV, of course.) | 2 | 0 | 0 | JavaScript on the server can be interpreted to mashine code using Google's V8 Javascript Engine. But PHP and Ruby and Python and Java all have to run through an interpreter every time they're accessed and it interpretation will be less fast.
Is that true? I read this in an article about Google's V8 Javascript Engine. | Interpretation of Java or Ruby neccessary with every access? | 1.2 | 0 | 0 | 38 |
16,951,821 | 2013-06-06T00:09:00.000 | 1 | 0 | 1 | 0 | python,gdb | 16,952,566 | 1 | true | 0 | 0 | I think you can use Type.fields to iterate over the fields.
Then, you can look at the field offset and you can compute a pointer to the anonymous field, along the lines of (type *) (((char *) obj) + offset).
This isn't ideal. There is a bug open to implement something better. | 1 | 1 | 0 | GDB 7.2 python doesn't have gdb.Type.iteritems method. Anyway I can access the members of the anonymous structure (which is within another structure of course) from gdb 7.2 ? The assumption is that I dont know know the name of the members of the anonymous structure or else I could have done gdb.parse_and_eval on them. | GDB 7.2 + python: how to get members of anonymous structure? | 1.2 | 0 | 0 | 267 |
16,952,059 | 2013-06-06T00:47:00.000 | 1 | 0 | 0 | 0 | python,pygame | 16,952,066 | 3 | false | 0 | 1 | On each handle of the input, check if the object's target x position plus its width exceeds the width of the canvas or if it is less than 0. Deny the movement if so.
Repeat for the y coordinate and the height. | 1 | 0 | 0 | I have made an object controlled with arrow keys. When I move it to the edge of the pygame screen, the object moves off the screen. I was wondering how to keep the object on the screen. Any suggestions? | In pygame how to make an object controlled with arrow keys not move of the edge of the screen | 0.066568 | 0 | 0 | 364 |
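A small sketch of the clamping check the answer describes; rect is assumed to be the moving object's pygame.Rect and the screen size constants are placeholders.

```python
SCREEN_W, SCREEN_H = 640, 480

def move_clamped(rect, dx, dy):
    # Apply the move, but never let the rect leave the screen on any side.
    rect.x = max(0, min(SCREEN_W - rect.width, rect.x + dx))
    rect.y = max(0, min(SCREEN_H - rect.height, rect.y + dy))
```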
16,952,350 | 2013-06-06T01:28:00.000 | 0 | 0 | 0 | 0 | c++,python,python-sip | 16,976,040 | 2 | true | 0 | 1 | It turns out that I do not need to do anything complicated...
In my case, there is no difference between calling functions in the DLL from C++ and calling them from Python code embedded in C++.
I was totally overthinking it. | 1 | 0 | 0 | I have a C++ GUI; it loads a DLL when running. I use SIP to import the DLL in python. I need to embed the python part in the GUI, and some data need to be exchanged between python and C++.
For example, in the C++ GUI, I can enter command from a panel, such as "drawSomething()", it will call corresponding function in python, and the result will be shown in the GUI.
Can I use SIP to
extract a C++ object from python object (just like the way boost.python does), or is there a better way to share data between python and c++ seamlessly?
thanks. | python-sip: How to access a DLL from both Python and C++ | 1.2 | 0 | 0 | 447 |
16,955,325 | 2013-06-06T06:40:00.000 | 0 | 0 | 0 | 0 | python | 16,955,453 | 2 | false | 0 | 0 | 64 results is the limit? Sounds weird to me!
Even with a browser, I can navigate to the 100th page with no problem.
I'm very curious about how you reached this limit.
Anyway: classical possible solutions are:
proxying (e.g. Tor)
delaying requests
randomly switching the user agent | 2 | 2 | 0 | Is there a way to scrape more than 64 results from google with python without getting my IP address instantly blocked? | Scraping Google Results with Python | 0 | 0 | 1 | 241 |
16,955,325 | 2013-06-06T06:40:00.000 | 1 | 0 | 0 | 0 | python | 16,955,450 | 2 | false | 0 | 0 | I use tsocks and ssh-tunnels to machines with other ip addresses to achieve this. | 2 | 2 | 0 | Is there a way to scrape more than 64 results from google with python without getting my IP address instantly blocked? | Scraping Google Results with Python | 0.099668 | 0 | 1 | 241 |
16,957,276 | 2013-06-06T08:32:00.000 | 0 | 0 | 0 | 0 | python,multithreading,multiprocessing,scrapy,web-crawler | 16,961,651 | 4 | false | 1 | 0 | Scrapy is still an option.
Speed/performance/efficiency
Scrapy is written with Twisted, a popular event-driven networking
framework for Python. Thus, it’s implemented using a non-blocking (aka
asynchronous) code for concurrency.
Database pipelining
You mentioned that you want your data to be pipelined into the database - as you may know Scrapy has Item Pipelines feature:
After an item has been scraped by a spider, it is sent to the Item
Pipeline which process it through several components that are executed
sequentially.
So, each page can be written to the database immediately after it has been downloaded.
Code organization
Scrapy offers you a nice and clear project structure, there you have settings, spiders, items, pipelines etc separated logically. Even that makes your code clearer and easier to support and understand.
Time to code
Scrapy does a lot of work for you behind the scenes. This makes you focus on the actual code and logic itself and not to think about the "metal" part: creating processes, threads etc.
But, at the same time, Scrapy might be an overhead. Remember that Scrapy was designed (and great at) to crawl, scrape the data from the web page. If you want just to download a bunch of pages without looking into them - then yes, grequests is a good alternative. | 1 | 5 | 0 | I have a >100,000 urls (different domains) in a list that I want to download and save in a database for further processing and tinkering.
Would it be wise to use scrapy instead of python's multiprocessing / multithreading? If yes, how do I write a standalone script to do the same?
Also, feel free to suggest other awesome approaches that come to your mind. | What is the best way to download number of pages from a list of urls? | 0 | 0 | 1 | 1,273 |
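For the plain "just download and store" case the last paragraph describes, a sketch using only the standard library: a thread pool fetches each URL and the pages are written straight into SQLite. Error handling is deliberately minimal, and the URL list is a stand-in for the 100,000-entry one.

```python
import sqlite3
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return url, resp.read()
    except Exception:
        return url, None          # skip pages that fail to download

urls = ["http://example.com", "http://example.org"]

db = sqlite3.connect("pages.db")
db.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, body BLOB)")

with ThreadPoolExecutor(max_workers=20) as pool:
    for url, body in pool.map(fetch, urls):
        if body is not None:
            db.execute("INSERT OR REPLACE INTO pages VALUES (?, ?)", (url, body))
db.commit()
```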
16,958,427 | 2013-06-06T09:27:00.000 | 0 | 0 | 1 | 0 | python,list,inspect | 16,958,933 | 3 | false | 0 | 0 | Try doing ", ".join(my_long_list). | 1 | 0 | 0 | I have an object with a huge list somewhere down in the bowels. The object's dump (e.g. using print(o)) doesn't fit on one screen.
But if I could manage to have members that are 1-D lists printed comma-separated on one line, the object would be easier to inspect.
How can this be achieved?
EDIT:
Found out that the object I was trying to display had a __repr__ that explicitly showed it's array content in a vertical manner... So this question may be closed. | display long lists in member of member of... on one line in python | 0 | 0 | 0 | 196 |
16,960,199 | 2013-06-06T10:50:00.000 | 3 | 0 | 1 | 0 | python,virtualenv,pip,fabric | 57,332,482 | 8 | false | 0 | 0 | Although your problem was specifically due to a typo, to help other users:
pip freeze doesn't show the dependencies that pip depends on. If you want to obtain all packages you can use pip freeze --all or pip list. | 5 | 18 | 0 | I am using a virtualenv. I have fabric installed, with pip. But a pip freeze does not give any hint about that. The package is there, in my virtualenv, but pip is silent about it. Why could that be? Any way to debug this? | pip freeze does not show all installed packages | 0.07486 | 0 | 0 | 21,286 |
16,960,199 | 2013-06-06T10:50:00.000 | 1 | 0 | 1 | 0 | python,virtualenv,pip,fabric | 60,008,571 | 8 | false | 0 | 0 | Adding my fix in addition to the fixes above:
I was also facing the same issue on Windows: even after activating the virtualenv, pip freeze was not giving me the full list of installed packages. So I upgraded my pip with the python -m pip install --upgrade pip command and then used pip freeze.
This time it worked and gave me the full list of installed packages. | 5 | 18 | 0 | I am using a virtualenv. I have fabric installed, with pip. But a pip freeze does not give any hint about that. The package is there, in my virtualenv, but pip is silent about it. Why could that be? Any way to debug this? | pip freeze does not show all installed packages | 0.024995 | 0 | 0 | 21,286
16,960,199 | 2013-06-06T10:50:00.000 | -1 | 0 | 1 | 0 | python,virtualenv,pip,fabric | 65,576,721 | 8 | false | 0 | 0 | This might be stupid but I have got the same problem. I solved it by refreshing vs code file directory (inside vscode there is a reload button). :) | 5 | 18 | 0 | I am using a virtualenv. I have fabric installed, with pip. But a pip freeze does not give any hint about that. The package is there, in my virtualenv, but pip is silent about it. Why could that be? Any way to debug this? | pip freeze does not show all installed packages | -0.024995 | 0 | 0 | 21,286 |
16,960,199 | 2013-06-06T10:50:00.000 | 0 | 0 | 1 | 0 | python,virtualenv,pip,fabric | 70,033,153 | 8 | false | 0 | 0 | If none of the above answers are working for you.
As with me you might have problem in you venv and pip configuration.
Go inside your venv/bin and open pip and see the 2nd line as:
'''exec' "path/to/yourvenv/bin/python3" "$0" "$@"
See if this line is correctly pointing inside your venv or not
For example in my case.
I initially named my virtual environment as venv1
and later just renamed it to venv2.
In doing so my pip file 2nd line had: '''exec' "venv1/bin/python3" "$0" "$@"
which, to work properly, should have: '''exec' "venv2/bin/python3" "$0" "$@" (notice "venv2", not "venv1", since venv1 is now renamed to venv2).
Due to this python was looking inside pip of venv2 and throwing error or not working as desired. | 5 | 18 | 0 | I am using a virtualenv. I have fabric installed, with pip. But a pip freeze does not give any hint about that. The package is there, in my virtualenv, but pip is silent about it. Why could that be? Any way to debug this? | pip freeze does not show all installed packages | 0 | 0 | 0 | 21,286 |
16,960,199 | 2013-06-06T10:50:00.000 | 0 | 0 | 1 | 0 | python,virtualenv,pip,fabric | 71,878,415 | 8 | false | 0 | 0 | For those who added Python modules via PyCharm IDE, after generating a virtual environment from the command prompt, good luck! You will need to rebuild the requirements.txt file manually with the ones missing by first running pip3 freeze and adding what is missing from PyCharm.
I highly suggest switching to Visual Studio Code. | 5 | 18 | 0 | I am using a virtualenv. I have fabric installed, with pip. But a pip freeze does not give any hint about that. The package is there, in my virtualenv, but pip is silent about it. Why could that be? Any way to debug this? | pip freeze does not show all installed packages | 0 | 0 | 0 | 21,286 |
16,961,438 | 2013-06-06T11:53:00.000 | 2 | 0 | 0 | 0 | python,mysql-python | 16,961,869 | 1 | true | 0 | 0 | You can get the number of affected rows by using cursor.rowcount. The information which rows are affected is not available since the mysql api does not support this. | 1 | 1 | 0 | I am executing an update query using MySQLdb and python 2.7. Is it possible to know which rows affected by retrieving all their ids? | Python, mySQLdb: Is it possible to retrieve updated keys, after update? | 1.2 | 1 | 0 | 67 |
16,962,282 | 2013-06-06T12:37:00.000 | 2 | 0 | 1 | 0 | python | 16,962,547 | 4 | true | 0 | 0 | When invoking str on a string object, the underlying function __str__(self) will be executed. Whether to return the original object (the python string case) or a copy depends on the implementation of the function.
Generally speaking, the language will do little or nothing to handle redundant calls, I think. The program itself decides the behavior (i.e. whether a named function is defined on the object; if not, an error is raised).
Let's think about it another way. If you have some knowledge of C++: C++ has the notion of a copy constructor, and it similarly faces the deep copy vs. shallow copy problem, depending on your implementation. | 1 | 1 | 0 | Very simple question, but it is a curiosity for me...
Say we have a list of items which are strings. If we call the built-in function str on each element in the list, that would seem to be redundant since the items are already strings. What would happen under the hood, specifically for Python but interested in other languages as well. Would the interpreter already see that the item is a string and not call the str function? Or would it do it anyway and return a string, and what would a string of a string mean? | What happens in redundant function calls? | 1.2 | 0 | 0 | 244 |
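A two-line check of the accepted answer's point, for CPython specifically: calling str on something that is already a str just hands the same object back.

```python
s = "already a string"
print(str(s) is s)   # True in CPython: no copy is made for exact str instances
```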
16,962,528 | 2013-06-06T12:47:00.000 | 0 | 1 | 1 | 0 | python,file,exists | 16,962,634 | 3 | false | 0 | 0 | Afaik isfile() will be faster while open(path) is more secure, in the sense that if open() is able to actually open the file, you can be sure it's there. | 1 | 2 | 0 | Which one should I use to maximize performance? os.path.isfile(path) or open(path)? | checking if file exists: performance of isfile Vs open(path) | 0 | 0 | 0 | 3,684 |
16,963,865 | 2013-06-06T13:50:00.000 | 1 | 0 | 0 | 1 | python,gcc,scons | 16,966,985 | 1 | true | 0 | 0 | You might want to give it something to compile. Maybe by redirecting input from null: (not sure if that's correct for windows). Though if so, that looks like a moderately strange compiler. | 1 | 0 | 0 | Actually I'm trying to read out the version of my cc1plus executable in windows. This is a rather simple job:
cc1plus -version
I need this for a scons script (Tool), to integrate an ARM cross compiler. Because of that I directly call cc1plus instead of using some compiler driver. There is no useful compiler driver available.
Back to my problem: When I'm calling "cc1plus -version" on cmd I get a version string back, but cc1plus isn't terminated. Instead it is continuously executed. I have to kill cc1plus with CRTL+D. For my script this is a problem.
In the following a snippet of my cmd:
C:\DevTools\CrossWorks_for_ARM_2.3\bin>cc1plus -version
GNU C++ (GCC) version 4.7.3 20121207 (release) [ARM/embedded-4_7-branch revision 194305] (arm-unknown-eabi)
compiled by GNU C version 3.4.4 (mingw special), GMP version 4.3.2, MPFR version 2.4.2, MPC version 0.8.1
GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
^C
C:\DevTools\CrossWorks_for_ARM_2.3\bin>
Is there any trick to terminate cc1plus after retrieving the version? For me it is rather incomprehensible why cc1plus isn't terminating. | Reading out version of cc1plus (SCons script-based) | 1.2 | 0 | 0 | 229 |
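A hedged sketch of the stdin-redirection idea for the SCons/Python side: with no input file, cc1plus waits for source on stdin, so handing it an empty stdin (DEVNULL) should let it print the version text and exit; whether it writes to stdout or stderr may vary, so both are checked.

```python
import subprocess

result = subprocess.run(
    ["cc1plus", "-version"],
    stdin=subprocess.DEVNULL,        # give it nothing to compile so it can terminate
    capture_output=True, text=True,
)
version_text = result.stdout or result.stderr
print(version_text.splitlines()[0] if version_text else "no output")
```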
16,964,137 | 2013-06-06T14:02:00.000 | 1 | 0 | 0 | 0 | python-3.x,pyramid,jinja2 | 16,975,273 | 2 | false | 1 | 0 | There appears to be issues with the dev version of jinja2 that you are installing as they reimplement the python 3 port using a single codebase. I'd suggest going back to a previous release that is using 2to3. | 2 | 1 | 0 | I can't make Jinja2 2.8 work with Pyramid 1.4.2 and Python 3.3.2. I got this error:
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/Jinja2-2.8_devdev_20130604-py3.3.egg/jinja2/environment.py", line 765, in _load_template
template = self.loader.load(self, name, globals)
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/Jinja2-2.8_devdev_20130604-py3.3.egg/jinja2/loaders.py", line 119, in load
bucket = bcc.get_bucket(environment, name, filename, source)
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/Jinja2-2.8_devdev_20130604-py3.3.egg/jinja2/bccache.py", line 176, in get_bucket
key = self.get_cache_key(name, filename)
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/Jinja2-2.8_devdev_20130604-py3.3.egg/jinja2/bccache.py", line 163, in get_cache_key
if isinstance(filename, unicode):
NameError: global name 'unicode' is not defined
I have WebOb 1.2.3 and distribute 0.6.45. Thanks!!! | Pyramid with Jinja2 running Python 3.3 | 0.099668 | 0 | 0 | 347 |
16,964,137 | 2013-06-06T14:02:00.000 | 1 | 0 | 0 | 0 | python-3.x,pyramid,jinja2 | 16,996,264 | 2 | true | 1 | 0 | I had the same issue with Jinja2 2.7.
pip install jinja2==2.6 solved the problem for me. | 2 | 1 | 0 | I can't make Jinja2 2.8 work with Pyramid 1.4.2 and Python 3.3.2. I got this error:
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/Jinja2-2.8_devdev_20130604-py3.3.egg/jinja2/environment.py", line 765, in _load_template
template = self.loader.load(self, name, globals)
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/Jinja2-2.8_devdev_20130604-py3.3.egg/jinja2/loaders.py", line 119, in load
bucket = bcc.get_bucket(environment, name, filename, source)
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/Jinja2-2.8_devdev_20130604-py3.3.egg/jinja2/bccache.py", line 176, in get_bucket
key = self.get_cache_key(name, filename)
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/Jinja2-2.8_devdev_20130604-py3.3.egg/jinja2/bccache.py", line 163, in get_cache_key
if isinstance(filename, unicode):
NameError: global name 'unicode' is not defined
I have WebOb 1.2.3 and distribute 0.6.45. Thanks!!! | Pyramid with Jinja2 running Python 3.3 | 1.2 | 0 | 0 | 347 |
16,966,095 | 2013-06-06T15:26:00.000 | 0 | 1 | 1 | 1 | python,setuptools,distutils,setup.py,distribute | 16,966,255 | 2 | false | 0 | 0 | I would use subprocess. I believe setup.py command line arguments should be your interface.
Check setup.py clean --all | 2 | 0 | 0 | I have a directory containing N subdirectories each of which contains setup.py file. I want to write a python script that iterates through all subdirectories, issues python setup.py bdist_egg --dist-dir=somedir, and finally removes build and *.egg-info from each subdirectory and I have two questions:
Can I invoke bdist_egg without using os.system? Some python interface would be nicer.
Can I tell bdist_egg not to generate build and *.egg-info or is there any complementary command for setup.py that cleans this for me? | automated build of python eggs | 0 | 0 | 0 | 97 |
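A sketch of the subprocess-based answer: walk the subdirectories and invoke each setup.py with the bdist_egg arguments from the question, then remove the build and *.egg-info directories afterwards. The root and output paths are illustrative.

```python
import os
import shutil
import subprocess
import sys

root = "projects"                       # directory holding the N subdirectories
dist_dir = os.path.abspath("all_eggs")  # where the built eggs should land

for name in os.listdir(root):
    pkg = os.path.join(root, name)
    if not os.path.isfile(os.path.join(pkg, "setup.py")):
        continue
    subprocess.check_call(
        [sys.executable, "setup.py", "bdist_egg", "--dist-dir=" + dist_dir],
        cwd=pkg,
    )
    # Clean up the byproducts the question mentions.
    leftovers = ["build"] + [d for d in os.listdir(pkg) if d.endswith(".egg-info")]
    for leftover in leftovers:
        shutil.rmtree(os.path.join(pkg, leftover), ignore_errors=True)
```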
16,966,095 | 2013-06-06T15:26:00.000 | 0 | 1 | 1 | 1 | python,setuptools,distutils,setup.py,distribute | 17,346,341 | 2 | true | 0 | 0 | It turned out that Fabric is the right way! | 2 | 0 | 0 | I have a directory containing N subdirectories each of which contains setup.py file. I want to write a python script that iterates through all subdirectories, issues python setup.py bdist_egg --dist-dir=somedir, and finally removes build and *.egg-info from each subdirectory and I have two questions:
Can I invoke bdist_egg without using os.system? Some python interface would be nicer.
Can I tell bdist_egg not to generate build and *.egg-info or is there any complementary command for setup.py that cleans this for me? | automated build of python eggs | 1.2 | 0 | 0 | 97 |
16,966,280 | 2013-06-06T15:34:00.000 | 0 | 0 | 1 | 0 | python,ipython,ipython-notebook | 16,966,960 | 11 | false | 0 | 0 | So @MikeMuller's good idea will work for a local notebook, but not a remote one (right?). I don't think there is a way for you to remotely invoke individual cell blocks or functions of ipynb code on a remote server and be able to get results back into your calling routine programmatically, unless that code does something fairly extraordinary to communicate results.
I was in the process of writing when @Matt submitted the same idea about
ipython <URI_to_Notebook> --script
The *.pynb is a JSON container and not an actual python script. You can get ipython to export a *.py with
If the target *.ipynb is on a remote machine you don't control, you'll probably need to pull the file so that you can write the output to a local path. (Haven't looked into whether you can invoke this on a remote resource to create a local output.) Once this is created you should be able to import and run the *.py or individual functions within it.
A question for @Matt on that neat example of running another *.ipynb file wholesale with io.open(nbfile) is whether the nbfile can be remote? Seems like a long shot, but would be great... | 1 | 51 | 0 | I am using IPython and want to run functions from one notebook from another (without cutting and pasting them between different notebooks). Is this possible and reasonably easy to do? | Reusing code from different IPython notebooks | 0 | 0 | 0 | 27,903 |
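One way to make a notebook's functions reusable, in the spirit of the answers above: treat the .ipynb as the JSON container it is, pull out the code cells, and execute them into a namespace. This assumes a notebook format with a top-level "cells" list (older formats nest cells under "worksheets"), and the file name and function name are hypothetical.

```python
import io
import json

def load_notebook_functions(path):
    with io.open(path, encoding="utf-8") as f:
        nb = json.load(f)
    namespace = {}
    for cell in nb.get("cells", []):              # nbformat 4 layout assumed here
        if cell.get("cell_type") == "code":
            exec("".join(cell["source"]), namespace)
    return namespace

# funcs = load_notebook_functions("helpers.ipynb")
# funcs["my_function"]()
```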
16,967,509 | 2013-06-06T16:35:00.000 | 0 | 0 | 0 | 0 | python,google-drive-api | 20,088,100 | 3 | false | 0 | 0 | You can batch your multiple deletes into a single HTTP request. | 1 | 0 | 0 | Is this now possible using Google Drive API or should I just send a multiple requests to accomplish this task?
By the way I'm using Python 2.7 | How to do a multiple folder removal in Google Drive API? | 0 | 0 | 1 | 259 |
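A hedged sketch of the batching idea; it assumes an authorized Drive `service` built with google-api-python-client (the import path was apiclient.http in older releases, googleapiclient.http in newer ones):

```python
from apiclient.http import BatchHttpRequest

def delete_many(service, file_ids):
    """Queue one delete per folder/file id and send them as a single
    batched HTTP request."""
    def on_done(request_id, response, exception):
        if exception is not None:
            print('Failed to delete %s: %s' % (request_id, exception))

    batch = BatchHttpRequest(callback=on_done)
    for file_id in file_ids:
        batch.add(service.files().delete(fileId=file_id), request_id=file_id)
    batch.execute()
```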
16,968,889 | 2013-06-06T17:53:00.000 | 1 | 0 | 0 | 0 | python,user-interface,graphics,gtk,wxwidgets | 16,970,638 | 1 | false | 0 | 1 | wxWidgets probably won't help you a lot here. I.e. you should be able to do what you want with it but you will need to implement most of your bullet points yourself. E.g. drawing would almost certainly be done using OpenGL but using OpenGL in a wxWidgets application is exactly the same as using it anywhere else. And you will have to implement panning/zooming/hit-testing.
wxWidgets does provide decent multi-threading support for the typical background-worker-threads-one-main-GUI-thread scenario that you would be almost certainly using here too. And simple communications between the threads (although I'm speaking about C++ here, not sure how is it done on Python side). But then any other decent framework should provide this too... | 1 | 0 | 0 | I am planning to implement, as an exercise and for personal use, in a very relaxed pace, a GPS/Mapping/Cycling desktop application, with Python as the primary language. It must be cross-platform (windows and linux), and the graphical front-end should allow the following:
Relatively fast rendering of map tiles à la Google Maps, Bing Maps, etc. with panning, zooming, overlay, image-blending, etc.
Good support for retained-mode 2D graphics (routes, points) with direct manipulation, hit-testing, highlighting, selection, etc;
Good integration with multithreaded architecture (no UI freeze while performing calculations);
Preferably good support for events-based communication between application code and GUI code;
Preferably some support for 3D rendering with OpenGL or similar.
I have some experience with GTK, but I feel it too low-level, so I'm wondering if wxWidgets could be a good alternative IN THIS SCENARIO (rich graphics as a main requirement of the UI).
Any | Cross platform, python based, rich-graphics application for mapping and gps: GTK or wxWidgets? | 0.197375 | 0 | 0 | 283 |
16,969,190 | 2013-06-06T18:12:00.000 | 1 | 0 | 1 | 0 | python,matrix | 16,969,259 | 2 | false | 0 | 0 | Read the input till Ctrl+d, split by newline symbols first and then split the results by spaces. | 2 | 0 | 1 | Entering arbitrary sized matrices to manipulate them using different operations. | I am working on a project where you have to input a matrix that has an arbitrary size. I am using python. Suggestions? | 0.099668 | 0 | 0 | 51 |
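A small sketch of the read-until-EOF approach from the first answer; float() is an arbitrary choice for the element type:

```python
import sys

def read_matrix():
    """Read whitespace-separated rows from stdin until EOF (Ctrl+D)
    and return them as a list of lists of numbers."""
    matrix = []
    for line in sys.stdin.read().splitlines():
        if line.strip():
            matrix.append([float(x) for x in line.split()])
    return matrix
```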
16,969,190 | 2013-06-06T18:12:00.000 | 1 | 0 | 1 | 0 | python,matrix | 16,971,642 | 2 | false | 0 | 0 | Think about who is using this programme, and how, then develop an interface which meets those needs. | 2 | 0 | 1 | Entering arbitrary sized matrices to manipulate them using different operations. | I am working on a project where you have to input a matrix that has an arbitrary size. I am using python. Suggestions? | 0.099668 | 0 | 0 | 51 |
16,969,812 | 2013-06-06T18:45:00.000 | 0 | 0 | 1 | 0 | python,object,properties,vmware,suds | 17,188,061 | 1 | false | 0 | 0 | If you are using PropertyCollector's RetrieveProperties, CheckForUupdates etc then in PropertyFilterSpec set PropertySpec.all=True. This will fetch all properties of the MOR. But this will be a huge performance hit. Instead I would suggest list out the properties you need in the PropertySpec.pathSet. | 1 | 0 | 0 | How does one get all the property values from an object. For example a method returned me an object, but when I print it out there's only type and value displayed. For example I've got an ManagedObjectReference of Task named obj. If I write print obj.info an error occurs:
AttributeError: returnval instance has no attribute 'info' | Python: VMware Object data | 0 | 0 | 0 | 134 |
16,972,556 | 2013-06-06T21:32:00.000 | 0 | 0 | 1 | 0 | python | 16,972,628 | 3 | false | 0 | 0 | Use integer division (//):
123456789 // 10000 will return 12345. | 1 | 2 | 0 | If I have an integer like this:
123456789
I would like to return only 12345 - so trimming the number to a length of 5 digits.
Is this possible without first converting to a string? I can't see any built-in function to do this. | python trim number without converting to string first | 0 | 0 | 0 | 4,322
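A small sketch generalizing the integer-division trick to any requested number of digits; keep_digits is a made-up helper name and the log10 trick assumes a positive integer:

```python
import math

def keep_digits(n, digits):
    """Keep only the first `digits` digits of a positive integer,
    using integer division instead of a string round-trip."""
    length = int(math.log10(n)) + 1          # total number of digits in n
    return n // 10 ** max(length - digits, 0)

print(keep_digits(123456789, 5))  # 12345
```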
16,978,169 | 2013-06-07T07:03:00.000 | 2 | 0 | 0 | 0 | python-2.7,dropbox,dropbox-api | 16,988,885 | 1 | false | 0 | 0 | I think the right thing to do is to generate a media link each time you need it.
Is there a reason you don't like that solution? | 1 | 0 | 0 | I am using Dropbox for one of my applications, where a user can connect their Dropbox folders.
Usage is such that a user can create links among the files of a folder, and more. But the problem is that the moment I store the file information in my application, the file media information is stored with an expires key. So obviously I won't be able to use the link next time once the expiry time has passed.
One way is to generate the media information every time the user selects a thumbnail from my application, as I already have the metadata of the file.
But is there any other way (i.e. by using the Python client or API) that I can make a folder public when a user selects it to connect with my application?
Any help would be really appreciated.
Thanks in advance for your precious time. | Can I make a dropbox folder public by using python client? | 0.379949 | 0 | 1 | 130 |
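A hedged sketch of the accepted suggestion (regenerate the media link on demand); the method names are from the old v1 Dropbox Python SDK as I recall them, so verify them against the SDK you actually have installed:

```python
import dropbox

ACCESS_TOKEN = 'your-oauth-access-token'  # hypothetical placeholder
client = dropbox.client.DropboxClient(ACCESS_TOKEN)

def fresh_media_url(path):
    """Ask Dropbox for a new streamable link instead of storing one;
    the returned dict also carries its own 'expires' timestamp."""
    info = client.media(path)
    return info['url']
```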
16,981,708 | 2013-06-07T10:15:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,scikit-learn,enthought,epd-python | 16,982,178 | 1 | true | 0 | 0 | Enthought Canopy 1.0.1 does not register the user's Python installation as the main one for the system. This has been fixed and will work in the upcoming release. | 1 | 0 | 1 | I have installed Enthought Canopy 32 - bit which comes with python 2.7 32 bit . And I downloaded windows installer scikit-learn-0.13.1.win32-py2.7 .. My machine is 64 bit. I could'nt find 64 bit scikit learn installer for intel processor, only AMD is available.
Python 2.7 required which was not found in the registry is the error message I get when I try to run the installer. How do I solve this? | Scikit-Learn windows Installation error : python 2.7 required which was not found in the registry | 1.2 | 0 | 0 | 494 |
16,982,300 | 2013-06-07T10:49:00.000 | 1 | 0 | 1 | 0 | python-2.7,tkinter | 16,991,920 | 2 | true | 0 | 1 | Try self.configure(background='black') or self['bg'] = 'black'. Most Tkinter widgets can be configured with similar properties. | 1 | 0 | 0 | I wrote a code to pop up a window on my screen and to print a data on the window.Here according to my requirement i want the entire screen to be BLACK in color. SO that i can print some text on that black window. The window title bar tkinter should be removed and the entire window screen should be in black. | python:window with no title bar, and change window default color(grey) to black | 1.2 | 0 | 0 | 875 |
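A short sketch combining the answer's background option with overrideredirect(), which is the usual Tkinter way to drop the title bar; the window size is an arbitrary choice:

```python
import Tkinter as tk  # "tkinter" on Python 3

root = tk.Tk()
root.overrideredirect(True)       # removes the title bar
root.geometry('800x600+0+0')      # size/position, since there is no bar to drag
root.configure(background='black')

message = tk.Label(root, text='Hello', fg='white', bg='black')
message.pack(expand=True)

root.mainloop()
```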
16,983,673 | 2013-06-07T12:01:00.000 | 2 | 0 | 1 | 0 | python-3.x | 23,407,612 | 11 | false | 0 | 0 | In my case restarting (close / open new) the console or the Command Prompt window works | 10 | 29 | 0 | I attempt to run a python program and the following pops up in command prompt:
"Unable to initialize device PRN"
I should also mention that the program runs fine. | Unable to initialize device PRN in Python | 0.036348 | 0 | 0 | 107,528 |
16,983,673 | 2013-06-07T12:01:00.000 | 0 | 0 | 1 | 0 | python-3.x | 61,242,766 | 11 | false | 0 | 0 | I am using VS Code and I just noticed this issue today. I googled a little and commented out all the print calls, but it still wasn't working. I exited VS Code and launched it again, and it worked perfectly. | 10 | 29 | 0 | I attempt to run a python program and the following pops up in command prompt:
"Unable to initialize device PRN"
I should also mention that the program runs fine. | Unable to initialize device PRN in Python | 0 | 0 | 0 | 107,528 |
16,983,673 | 2013-06-07T12:01:00.000 | 1 | 0 | 1 | 0 | python-3.x | 26,442,675 | 11 | false | 0 | 0 | Try changing the name of the program; that worked for me. Don't forget to use a fresh cmd window when you start executing. | 10 | 29 | 0 | I attempt to run a python program and the following pops up in command prompt:
"Unable to initialize device PRN"
I should also mention that the program runs fine. | Unable to initialize device PRN in Python | 0.01818 | 0 | 0 | 107,528 |
16,983,673 | 2013-06-07T12:01:00.000 | 4 | 0 | 1 | 0 | python-3.x | 52,900,948 | 11 | false | 0 | 0 | I know this is an old question, but I experienced this issue in VS Code after installing the latest version of Python and the Python extension.
To fix it, I just needed to add the Python installation path to my PATH environment variable.
On Windows 10, go to Settings and search for Environment
Click on Edit environment variables for your account
Select Path and click the Edit button
If the path to your Python.exe is not listed, click the New button
Enter the path to your Python.exe application and click the OK button
The path for me was %LocalAppData%\Programs\Python\Python37-32\
Restart VS Code and try to run your python script again | 10 | 29 | 0 | I attempt to run a python program and the following pops up in command prompt:
"Unable to initialize device PRN"
I should also mention that the program runs fine. | Unable to initialize device PRN in Python | 0.072599 | 0 | 0 | 107,528 |
16,983,673 | 2013-06-07T12:01:00.000 | 0 | 0 | 1 | 0 | python-3.x | 60,726,731 | 11 | false | 0 | 0 | To fix this, add python.exe to the system PATH variable in Windows:
On Windows 10, go to Settings and search for Environment
Click on Edit environment variables for your account
Select Path and click the Edit button
If the path to your Python.exe is not listed, click the New button
Enter the path to your Python.exe application and click the OK button
and restart the application | 10 | 29 | 0 | I attempt to run a python program and the following pops up in command prompt:
"Unable to initialize device PRN"
I should also mention that the program runs fine. | Unable to initialize device PRN in Python | 0 | 0 | 0 | 107,528 |
16,983,673 | 2013-06-07T12:01:00.000 | 63 | 0 | 1 | 0 | python-3.x | 26,721,404 | 11 | false | 0 | 0 | I had this same problem when I accidentally typed "print program.py" instead of "python program.py". The error message comes from the Windows command-line program named print. Those who suggested restarting the command prompt probably committed the same typo without noticing, and corrected it in their new command prompt. | 10 | 29 | 0 | I attempt to run a python program and the following pops up in command prompt:
"Unable to initialize device PRN"
I should also mention that the program runs fine. | Unable to initialize device PRN in Python | 1 | 0 | 0 | 107,528 |
16,983,673 | 2013-06-07T12:01:00.000 | 2 | 0 | 1 | 0 | python-3.x | 66,608,644 | 11 | false | 0 | 0 | I encountered this in VS Code. I had run some lines of code (using Shift + Enter), which starts a Python interpreter in a terminal, and then I called quit() at the terminal, to exit Python. If you try to run some more lines of code, it doesn't start another Python interpreter, it just runs those lines in the terminal, which of course doesn't work and causes error messages like the one in the question.
Two easy solutions are either to start a new Python interpreter in the terminal (by entering python), or killing the active terminal instance (the dustbin icon), before running more lines of code. | 10 | 29 | 0 | I attempt to run a python program and the following pops up in command prompt:
"Unable to initialize device PRN"
I should also mention that the program runs fine. | Unable to initialize device PRN in Python | 0.036348 | 0 | 0 | 107,528 |
16,983,673 | 2013-06-07T12:01:00.000 | 0 | 0 | 1 | 0 | python-3.x | 67,950,461 | 11 | false | 0 | 0 | Please try entering python in the terminal and then executing the same command.
In the terminal, type python, then run the command print("Hi"). It will work. | 10 | 29 | 0 | I attempt to run a python program and the following pops up in command prompt:
"Unable to initialize device PRN"
I should also mention that the program runs fine. | Unable to initialize device PRN in Python | 0 | 0 | 0 | 107,528 |
16,983,673 | 2013-06-07T12:01:00.000 | 0 | 0 | 1 | 0 | python-3.x | 72,175,477 | 11 | false | 0 | 0 | I had the same problem when using netstat print route in order to view the routing table. It gave the error 'Unable to initialize device PRN'. The error disappeared when I used the equivalent netstat -r (and it did print out the routing table). | 10 | 29 | 0 | I attempt to run a python program and the following pops up in command prompt:
"Unable to initialize device PRN"
I should also mention that the program runs fine. | Unable to initialize device PRN in Python | 0 | 0 | 0 | 107,528 |
16,983,673 | 2013-06-07T12:01:00.000 | 1 | 0 | 1 | 0 | python-3.x | 63,211,777 | 11 | false | 0 | 0 | I had the same problem after accidentally sending the word "print" to the command line from my program, which DOS interprets as the DOS command to print to the printer. | 10 | 29 | 0 | I attempt to run a python program and the following pops up in command prompt:
"Unable to initialize device PRN"
I should also mention that the program runs fine. | Unable to initialize device PRN in Python | 0.01818 | 0 | 0 | 107,528 |
16,985,604 | 2013-06-07T13:41:00.000 | 2 | 0 | 0 | 1 | python,mysql,macos | 16,985,650 | 1 | true | 0 | 0 | You probably need Xcode's Command Line Tools.
Download the latest version of Xcode, then go to "Preferences", select the "Download" tab, then install the Command Line Tools. | 1 | 1 | 0 | MySQL is installed at /usr/local/mysql
In site.cfg the path for mysql_config is /usr/local/mysql/bin/mysql_config
but when I try to build in the terminal I'm getting this error:
hammads-imac-2:MySQL-python-1.2.4b4 syedhammad$ sudo python setup.py build
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.8-intel-2.7/MySQLdb
running build_ext
building '_mysql' extension
clang -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -pipe -Dversion_info=(1,2,4,'beta',4) -D_version_=1.2.4b4 -I/usr/local/mysql/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.8-intel-2.7/_mysql.o -Wno-null-conversion -Os -g -fno-strict-aliasing -arch x86_64
unable to execute clang: No such file or directory
error: command 'clang' failed with exit status 1
Help Please | Configuring MySQL with python on OS X lion | 1.2 | 1 | 0 | 141 |
16,991,456 | 2013-06-07T19:19:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,python-3.x | 16,991,548 | 2 | false | 0 | 0 | Python uses reference counting to keep track of variables. When all references to a variable are removed, that variable is garbage collected. However, that garbage collection is done by Python at its own whim, not right away.
It could be that your code is going faster than python garbage collects, or that you have something wrong with your code. Since you didn't give any of your code there's no real way to know. | 1 | 5 | 0 | I am writing some python code to process huge amounts of data (almost 6 million pieces!).
In the code, I'm using a huge for loop to process each set. In that loop, I'm using the same variables every loop and overwriting them. When I ran the program, I noticed that the longer I ran it, the slower it got. Furthermore, upon further experimenting, I discovered that the speed if you ran it for values 10,000 - 10,100 was the same as from 0 to 100. Thus I concluded that since I was not creating more variables and merely processing existing ones, every time I overwrote a variable, it must be being saved somewhere by python.
So:
Am I right? Must it be Python saving my overwritten variables somewhere?
Or am I wrong? Is something else happening? | What happens to overwritten variables in python? | 0.099668 | 0 | 0 | 1,715 |
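A tiny illustration of the reference-counting point in the answer; the exact counts printed depend on the interpreter:

```python
import sys

value = object()
print(sys.getrefcount(value))   # e.g. 2: the name plus getrefcount's own argument

other = value                   # rebinding elsewhere adds a reference
print(sys.getrefcount(value))   # one higher

other = None                    # dropping the reference lets the count fall again
print(sys.getrefcount(value))
```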
16,991,901 | 2013-06-07T19:48:00.000 | 9 | 1 | 1 | 0 | python,unit-testing,testing,python-unittest | 24,315,867 | 4 | false | 0 | 0 | In setUp(), self._testMethodName contains the name of the test that will be executed. It's likely better to put the test into a different class or something, of course, but it's in there. | 1 | 28 | 0 | When I create a unittest.TestCase, I can define a setUp() function that will run before every test in that test case. Is it possible to skip the setUp() for a single specific test?
It's possible that wanting to skip setUp() for a given test is not a good practice. I'm fairly new to unit testing and any suggestion regarding the subject is welcome. | Is it possible to skip setUp() for a specific test in python's unittest? | 1 | 0 | 0 | 10,928 |
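A short sketch of the self._testMethodName trick from the answer; the test and attribute names are made up for illustration:

```python
import unittest

class MyTests(unittest.TestCase):
    def setUp(self):
        # self._testMethodName holds the name of the test about to run,
        # so the expensive fixture can be skipped for one specific test.
        if self._testMethodName == 'test_without_fixture':
            return
        self.fixture = list(range(1000))

    def test_with_fixture(self):
        self.assertEqual(len(self.fixture), 1000)

    def test_without_fixture(self):
        self.assertFalse(hasattr(self, 'fixture'))

if __name__ == '__main__':
    unittest.main()
```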
16,994,232 | 2013-06-07T23:11:00.000 | 23 | 0 | 1 | 0 | python,pyqt4,homebrew | 17,066,007 | 5 | false | 0 | 1 | After brew install pyqt, you can brew test pyqt, which will use the python you have in your PATH in order to do the test (show a Qt window).
For non-brewed Python, you'll have to set your PYTHONPATH as brew info pyqt will tell.
Sometimes it is necessary to open a new shell or tab in order to use the freshly brewed binaries.
I frequently check these issues by printing the sys.path from inside of python:
python -c "import sys; print(sys.path)"
The $(brew --prefix)/lib/pythonX.Y/site-packages have to be in the sys.path in order to be able to import stuff. As said, for brewed python, this is default but for any other python, you will have to set the PYTHONPATH. | 4 | 27 | 0 | I installed pyqt4 by using Homebrew. But when I import PyQt4 in python interpreter, It said that "No module named PyQt4". Can somebody help me with that? | ImportError: No module named PyQt4 | 1 | 0 | 0 | 122,359 |
16,994,232 | 2013-06-07T23:11:00.000 | 5 | 0 | 1 | 0 | python,pyqt4,homebrew | 39,129,422 | 5 | false | 0 | 1 | I solved the same problem for my own program by installing python3-pyqt4.
I'm not using Python 3 but it still helped. | 4 | 27 | 0 | I installed pyqt4 by using Homebrew. But when I import PyQt4 in python interpreter, It said that "No module named PyQt4". Can somebody help me with that? | ImportError: No module named PyQt4 | 0.197375 | 0 | 0 | 122,359 |
16,994,232 | 2013-06-07T23:11:00.000 | 14 | 0 | 1 | 0 | python,pyqt4,homebrew | 37,955,316 | 5 | false | 0 | 1 | You have to check which Python you are using. I had the same problem because the Python I was using was not the same one that brew was using. In your command line:
which python
output: /usr/bin/python
which brew
output: /usr/local/bin/brew //so they are different
cd /usr/local/lib/python2.7/site-packages
ls //you can see PyQt4 and sip are here
Now you need to add /usr/local/lib/python2.7/site-packages to your Python path.
open ~/.bash_profile //you will open your bash_profile file in your editor
Add 'export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH' to your bash file and save it
Close your terminal and restart it to reload the shell
python
import PyQt4 // it is ok now | 4 | 27 | 0 | I installed pyqt4 by using Homebrew. But when I import PyQt4 in python interpreter, It said that "No module named PyQt4". Can somebody help me with that? | ImportError: No module named PyQt4 | 1 | 0 | 0 | 122,359 |
16,994,232 | 2013-06-07T23:11:00.000 | 4 | 0 | 1 | 0 | python,pyqt4,homebrew | 16,994,978 | 5 | false | 0 | 1 | It is likely that you are running the python executable from /usr/bin (Apple version) instead of /usr/local/bin (Brew version).
You can either
a) check your PATH variable
or
b) run brew doctor
or
c) run which python
to check if it is the case. | 4 | 27 | 0 | I installed pyqt4 by using Homebrew. But when I import PyQt4 in python interpreter, It said that "No module named PyQt4". Can somebody help me with that? | ImportError: No module named PyQt4 | 0.158649 | 0 | 0 | 122,359 |
16,994,243 | 2013-06-07T23:13:00.000 | 5 | 0 | 1 | 0 | python,pygame,draw | 17,000,916 | 1 | true | 0 | 1 | The command is pygame.draw.circle(Surface, color, pos, radius, width=0). The last argument, width, can be changed so the circle is not filled in. If it is zero, the circle is solid; if it is anything else, only the outline is drawn, with an edge thickness equal to the parameter. | 1 | 1 | 0 | If I want to draw a circle, I use the regular draw.circle, but the problem is that I don't want it filled, so I draw a smaller circle inside of it, which makes a circle with thickness.
Everything is good, right? No. The background image inside that circle disappeared because the circles are filled.
Now my question is: is there a function that draws a circle (and other shapes?) with a given thickness, without filling the whole inside of the shape?
EDIT: If you can give me the signature so I know how to use it. | pygame not filled shapes (Python) | 1.2 | 0 | 0 | 9,645 |
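A minimal sketch of the accepted answer's width argument; colors, sizes and the grey fill standing in for a background image are arbitrary choices:

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 300))
screen.fill((30, 30, 30))                    # stand-in for the background image

# width=0 (the default) fills the circle; any other value draws only the
# outline, so whatever is behind the circle stays visible.
pygame.draw.circle(screen, (255, 0, 0), (200, 150), 60, 3)
pygame.display.flip()

pygame.time.wait(2000)                       # keep the window open briefly
pygame.quit()
```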
16,994,514 | 2013-06-07T23:48:00.000 | 0 | 0 | 0 | 0 | python,heroku,redis | 20,607,006 | 2 | false | 0 | 0 | I think you should keep your Redis instance in the global scope and let all requests share the same instance; this should not cause too many connections anymore. The Redis instance will have its own connection pool, and you can limit the number of connections by setting the max_connections parameter on redis.ConnectionPool. If max_connections is set, the pool raises redis.ConnectionError when its limit is reached. | 1 | 2 | 0 | I'd like to avoid running into "Max number of clients reached" errors when interfacing with a 3rd-party Redis host from my Heroku app, by limiting the number of connections held in the pool to an arbitrary amount of my choosing.
Is that possible? | limit number of connections to redis in py-redis | 0 | 0 | 1 | 1,957 |
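A short sketch of the approach in the answer (one shared pool with max_connections); the host, password and key name are placeholders:

```python
import redis

# Module-level, so every request shares the same pool.
pool = redis.ConnectionPool(host='myhost.example.com', port=6379,
                            password='secret', max_connections=10)
r = redis.Redis(connection_pool=pool)

def handler():
    # Raises redis.ConnectionError once 10 connections are already in use.
    return r.incr('hits')
```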
16,995,007 | 2013-06-08T01:09:00.000 | -2 | 0 | 1 | 0 | python,counter,urlopen | 16,995,022 | 3 | false | 0 | 0 | I think you can't do it that way.
Delete the duplicates in the list. | 1 | 0 | 0 | I'm running Python code that reads a list of URLs and opens each one of them individually with urlopen. Some URLs are repeated in the list. An example of the list would be something like:
www.example.com/page1
www.example.com/page1
www.example.com/page2
www.example.com/page2
www.example.com/page2
www.example.com/page3
www.example.com/page4
www.example.com/page4
[...]
I would like to know if there's a way to implement a counter that would tell me how many times a unique URL was opened previously by the code. I want to get a counter that would return what is shown in bold for each of the URLs in the list.
www.example.com/page1 : 0
www.example.com/page1 : 1
www.example.com/page2 : 0
www.example.com/page2 : 1
www.example.com/page2 : 2
www.example.com/page3 : 0
www.example.com/page4 : 0
www.example.com/page4 : 1
Thanks! | How to count the number of time a unique URL is open in python? | -0.132549 | 0 | 0 | 1,310 |
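A small sketch of one way to keep such a counter, using the URLs from the question; the urlopen call is left as a comment since the rest of the processing isn't shown:

```python
from collections import defaultdict

urls = [
    'www.example.com/page1', 'www.example.com/page1',
    'www.example.com/page2', 'www.example.com/page2', 'www.example.com/page2',
    'www.example.com/page3',
    'www.example.com/page4', 'www.example.com/page4',
]

times_seen = defaultdict(int)
for url in urls:
    print('%s : %d' % (url, times_seen[url]))   # how often it was opened before
    times_seen[url] += 1
    # urlopen(url) and the rest of the processing would go here
```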
16,999,676 | 2013-06-08T12:48:00.000 | 0 | 0 | 0 | 0 | python,sql,design-patterns | 17,017,714 | 1 | false | 0 | 0 | So after playing around with it some more I now have a solution that is halfway decent. I split the class in question up into three separate classes:
A class that provides access to the required data;
A context manager that supports the temporary table stuff;
And the old class with all the logic (sans the database stuff);
When I instantiate my logic class I supply it with an instance of the aforementioned classes. It works ok, abstraction is slightly leaky (especially the context manager), but I can at least unit test the logic properly now. | 1 | 3 | 0 | We are currently developing an application that makes heavy use of PostgreSQL. For the most part we access the database using SQLAlchemy, and this works very well. For testing the relevant objects can be either mocked, or used without database access. But there are some parts of the system that run non-standard queries. These subsystems have to create temporary tables insert a huge number of rows and then merge data back into the main table.
Currently there are some SQL statements in these subsystems, but this makes the relevant classes tightly coupled with the database, which in turn makes things harder to unit-test.
Basically my question is, is there any design pattern for solving this problem? The only thing that I could come up with is to put these SQL statements into a separate class and just pass an instance to the other class. This way I can mock the query-class for unit-tests, but it still feels a bit clumsy. Is there a better way to do this? | Design Pattern for complicated queries | 0 | 1 | 0 | 161 |
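A bare-bones sketch of the split described in the answer (a query object injected into the logic class); table and method names are invented for illustration, and the third piece, the temp-table context manager, is omitted:

```python
class ReportQueries(object):
    """Owns the non-standard SQL (temp tables, bulk loads, merges)."""
    def __init__(self, connection):
        self.connection = connection

    def load_rows(self, batch_id):
        cursor = self.connection.cursor()
        cursor.execute("SELECT value FROM staging WHERE batch_id = %s", (batch_id,))
        return cursor.fetchall()


class ReportBuilder(object):
    """Holds the pure logic; takes the query object, so tests can hand in a stub."""
    def __init__(self, queries):
        self.queries = queries

    def row_count(self, batch_id):
        return len(self.queries.load_rows(batch_id))
```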
17,000,163 | 2013-06-08T13:50:00.000 | 2 | 0 | 1 | 0 | python,list,function,standards | 17,000,218 | 2 | false | 0 | 0 | This is something I'm thinking about a lot, too. Second approach is probably cleaner and better because clients don't need to assume anything about your function and it's not confusing. Me personally, I would prefer that. Whenever I call a function, I'd like for it to return a new value with the old unchanged. But, that is just me. | 1 | 4 | 0 | I am making a function to perform the midpoint displacement algorithm, as well as some other realistic looking terrain generation functions, on a 2d list (of format [[n11,n12,...],[n11,n12,...],...])
My question is, in python, is it standard to change the input list in such a case (with no return value) or is it better to make a deep copy of the list and return it?
I know copy and return is less efficient, however I don't want use of the function to be confusing to others. | In python should a function change a list or make a copy and return that? | 0.197375 | 0 | 0 | 285 |
17,001,402 | 2013-06-08T16:10:00.000 | 0 | 0 | 0 | 0 | python,collections,plone,zope | 17,032,461 | 1 | false | 0 | 0 | There is no random sort criteria. Any randomness will need to be done in custom application code. | 1 | 0 | 1 | is there any best practice for adding a "random" sort criteria to the old style collection in Plone?
My versions:
Plone 4.3 (4305)
CMF 2.2.7
Zope 2.13.19 | New sort criteria "random" for Plone 4 old style collections | 0 | 0 | 0 | 109 |
17,006,134 | 2013-06-09T03:30:00.000 | 4 | 1 | 1 | 0 | javascript,python,garbage-collection,v8,pyv8 | 17,597,335 | 1 | true | 1 | 0 | It's possible to disable garbage collection for good by changing the source code of V8.
In V8's source, edit src/heap.cc, and put a return statement in the beginning of Heap::CollectGarbage.
Other than that, it's not possible (AFAICT): V8 will always invoke garbage collection when it's about to run out of memory. There is no (configurable) way to not have it do that. | 1 | 4 | 0 | I'm having an issue which seems to be related with the way Python & PyV8's garbage collection interact. I've temporarily solved the issue by disabling python's garbage collection, and calling gc.collect and PyV8.JSEngine.collect together every few seconds when no JavaScript is being run. However, this seems like a pretty hackish fix... in particular, I'm worried PyV8 might decide to collect at an inopportune time and cause problems, anyway. Is there any way to disable PyV8's automatic garbage collection for good, at least until I have a few days to spend figuring out exactly what is going on and thus to actually fix the issue? | PyV8 disable automatic garbage collection | 1.2 | 0 | 0 | 482 |
17,010,636 | 2013-06-09T14:28:00.000 | 3 | 0 | 0 | 0 | python,django,session,cookies,login | 17,010,697 | 1 | true | 1 | 0 | I would do this in middleware. Have an attribute in the profile, or the session, which records the date the user was last seen, and in the middleware check if it is < today: if so, award the points and update the field. | 1 | 1 | 0 | In our project we give the users points for every day they visit the site.
The issue is that the user doesn't always log in in an explicit way (e.g. submitting login form), but often when he comes back he's logged in thanks to the cookie session id set by Django and we can't recognize his login in any way.
How can I check if the user has logged in this way? | How to recognize user login when log in comes from cookie? | 1.2 | 0 | 0 | 124 |
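A hedged sketch of the middleware idea, in the old-style Django middleware form current at the time; it assumes a profile model with nullable last_seen and integer points fields, which are my own names:

```python
import datetime

class DailyVisitPointsMiddleware(object):
    """Award a daily bonus on the first request of each calendar day,
    regardless of how the session was authenticated (form or cookie)."""

    def process_request(self, request):
        if not request.user.is_authenticated():
            return None
        profile = request.user.get_profile()   # pre-1.7 style profile access
        today = datetime.date.today()
        if profile.last_seen is None or profile.last_seen < today:
            profile.points += 1
            profile.last_seen = today
            profile.save()
        return None
```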
17,011,695 | 2013-06-09T16:29:00.000 | 0 | 0 | 1 | 0 | android,python,tablet | 17,013,723 | 2 | true | 0 | 0 | I think I'll go for @levon's suggestion of the web-based learnpython.org. Thanks. | 1 | 0 | 0 | Is there a simple environment like IDLE for someone to learn their first programming language - Python - on an Android tablet?
I am not after the complexity of creating apps. They just need to learn basic Python.
Thanks. | Simple Python environment for learning the language on Android tablet? | 1.2 | 0 | 0 | 280 |
17,012,349 | 2013-06-09T17:36:00.000 | 4 | 0 | 0 | 0 | python,psycopg2 | 17,012,369 | 2 | true | 1 | 0 | Have you looked in to SQLAlchemy at all? It takes care of a lot of the dirty details - it maintains a pool of connections, and reuses/closes them as necessary. | 1 | 4 | 0 | I've been writing a Python web app (in Flask) for a while now, and I don't believe I fully grasp how database access should work across multiple request/response cycles. Prior to Python my web programming experience was in PHP (several years worth) and I'm afraid that my PHP experience is misleading some of my Python work.
In PHP, each new request creates a brand new DB connection, because nothing is shared across requests. The more requests you have, the more connections you need to support. However, in a Python web app, where there is shared state across requests, DB connections can persist.
So I need to manage those connections, and ensure that I close them. Also, I need to have some kind of connection pool, because if I have just one connection shared across all requests, then requests could block waiting on DB access, if I don't have enough connections available.
Is this a correct understanding? Or have I identified the differences well? In a Python web app, do I need to have a DB connection pool that shares its connections across many requests? And the number of connections in the pool will depend on my application's request load?
I'm using Psycopg2. | Database access strategy for a Python web app | 1.2 | 1 | 0 | 351 |
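A short sketch of letting SQLAlchemy own the pool, as the answer suggests; the DSN, pool sizes and table name are placeholders, and the plain-string execute() reflects older SQLAlchemy versions (newer ones want text()):

```python
from sqlalchemy import create_engine

# One engine per process; it owns the connection pool and hands
# connections out per request, returning them to the pool afterwards.
engine = create_engine(
    'postgresql+psycopg2://user:password@localhost/mydb',
    pool_size=5,        # connections kept open
    max_overflow=10,    # extra connections allowed under load
)

def fetch_titles():
    with engine.connect() as conn:   # borrowed from the pool, returned on exit
        return [row[0] for row in conn.execute('SELECT title FROM articles')]
```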
17,013,738 | 2013-06-09T20:13:00.000 | 1 | 0 | 1 | 0 | python,python-3.x | 17,013,917 | 2 | false | 0 | 0 | You don't necessarily have to create a class in python, but the use of classes would help you wrap up functions and variables into an object. | 1 | 2 | 0 | I'm creating my own python module that has some functions I want to use. I wanna know how to do these things:
How do I proceed with my python module called module.py, so I import only the functions I want, when calling 'import module.function'? (so I don't have to import the entire module)
Do I always have to create a class for my functions, even if I would never use more than ONE object of that class? (If not, how do I create a function that keeps all of its variables internal, so they don't mess with global variables from the rest of the code? But without using def fun(self, var1, var2...), because I don't want to call fun("", var1, var2...).)
Is it better to 'install' my module or use it like a external file? | Creating my own python module - questions | 0.099668 | 0 | 0 | 382 |
17,021,318 | 2013-06-10T10:03:00.000 | 3 | 0 | 0 | 0 | python,django | 17,021,455 | 2 | true | 1 | 0 | You can do everything web-related with Django, just like with any other web framework/library.
Probably the easiest way would be to have a user profile, and as soon as the payment has gone through you add this video to the user's "allowed" list. This makes it quite easy to show the user's available videos.
The redirection thing after payment really depends on your provider, paypal and others allow you to embed their payment process into your application, and have powerful APIs to check for "incoming" payments. | 2 | 0 | 0 | Is it possible to implement this in django: In a video site, for every video a user want to watch he/she must pay a fee before watching the video. If it's possible, what's the best way to implement this. And after every successful payment, how can the user be redirected back to the particular video he paid for? | Pay Per Request in Django | 1.2 | 0 | 0 | 228 |
17,021,318 | 2013-06-10T10:03:00.000 | 0 | 0 | 0 | 0 | python,django | 17,021,565 | 2 | false | 1 | 0 | I believe this is possible.
What you can do is have a check on your video page for a certain receipt that you can add as an entry in the UserProfile model that you'll have for your django website.
Now this receipt will only be generated when your user goes through the complete payment path which you can handle by scripts.
And as for redirection, you can have the payment processing service redirect to your receipt token generating django view that handles the addition of the user to your whitelist for that video. | 2 | 0 | 0 | Is it possible to implement this in django: In a video site, for every video a user want to watch he/she must pay a fee before watching the video. If it's possible, what's the best way to implement this. And after every successful payment, how can the user be redirected back to the particular video he paid for? | Pay Per Request in Django | 0 | 0 | 0 | 228 |
17,022,237 | 2013-06-10T10:55:00.000 | 2 | 0 | 0 | 1 | python,logging,twisted | 17,023,178 | 1 | true | 0 | 0 | You can definitely do this with Twisted's logging system. You're on the right track by looking at DailyLogFile.
However, consider that the best solution might involve idiomatically integrating with the target deployment platform. If the convention on the platform is for applications to manage their own log files, then I'd say you're on the right track.
If, instead, the convention is for applications to run under a manager like launchd, then you may want to consider that approach instead. If all deployed software follows the same local conventions, then the system admin has an easier time managing everything correctly. | 1 | 1 | 0 | what should be the better way to use Python twistedmatrix log file, and customize it so that it can be :
- rotating on a weekly basis (sunday)
- with a custom naming convention (replace the current date _underscore glued that can be seen in the DailyLogFile with something like myfile.yyyymmdd.log or so)
Should it be done by writing my own subclass, in the same way as class DailyLogFile(BaseLogFile)?
I have seen that some people suggest the Linux logrotate command, but I wanted to go with a Python/Twisted solution (but maybe there is some trouble with that that I haven't guessed?).
best regards | Twistedmatrix, rotate log on a weekly basis and customizing name of the log | 1.2 | 0 | 0 | 113 |
17,027,970 | 2013-06-10T15:54:00.000 | 0 | 0 | 0 | 1 | python,sockets,permissions,udp,traceroute | 17,045,881 | 1 | true | 0 | 0 | The conclusion I've come to is that I'm restricted to parsing the output of the traceroute using subprocess. traceroute is able to overcome the root-requirement by using setuid for portions of the code effectively allowing that portion of the code to run as root. Since I cannot establish those rights without root privileges I'm forced to rely on the existence of traceroute since that is the more probable of the two situations. | 1 | 2 | 0 | I'm trying to implement a UDP traceroute solution in Python 2.6, but I'm having trouble understanding why I need root privileges to perform the same-ish action as the traceroute utility that comes with my operating system.
The environment that this code will run in will very doubtfully have root privileges, so is it more likely that I will have to forego a python implementation and write something to parse the output of the OS traceroute in UDP mode? Or is there something I'm missing about opening a socket configured like self.rx = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_UDP). It seems that socket.SOCK_RAW is inaccessible without root privileges which is effectively preventing me from consuming the data I need to implement this in python. | Implementing UDP traceroute in Python without root | 1.2 | 0 | 1 | 1,380 |
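A hedged sketch of the parse-the-system-traceroute fallback described in the answer; the output parsing assumes the common Linux traceroute layout with -n (numeric) output:

```python
import subprocess

def traceroute_hops(host):
    """Run the system traceroute (which handles the privilege problem itself,
    e.g. via setuid) and pull the hop addresses out of its output."""
    proc = subprocess.Popen(['traceroute', '-n', host],
                            stdout=subprocess.PIPE,
                            universal_newlines=True)
    output, _ = proc.communicate()
    hops = []
    for line in output.splitlines()[1:]:   # skip the header line
        fields = line.split()
        if len(fields) > 1:
            hops.append(fields[1])         # '*' when a hop did not answer
    return hops
```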
17,028,568 | 2013-06-10T16:25:00.000 | 0 | 0 | 1 | 0 | python,pyc | 17,028,692 | 2 | false | 0 | 0 | As far as I know, the compile-to-bytecode process works the same as normal make in how it handles recompilation; it checks if the compiled files are older than the source files, and if they are it recompiles while if they aren't it leaves them be. My best suggestion would be to just clear the PYC files whenever sudden power failure is suspected and do an import to get the new ones (you could probably automate this, too, to make it simpler).
Note that I have not experimented to see if Python waits until a bytecode file is complete before writing to disk; if it does, that would reduce the possibility of the sort of corruption you talk about, as the PYC file would simply not be written to disk to begin with if the power failure occurred during actual compilation. However, corruption would still be an issue if the power loss occurs during the write, as I believe most OSes update a file's modification time when the file is opened with write access rather than when the file handle is closed. | 1 | 2 | 0 | Wondering if python is able to institute checks within their compile-phase of a given py to pyc to guard against a corrupt pyc due to sudden power-down of the system(and disk).
When the system comes back up will the pyc that may exist be checked for integrity
and if it's considered suspect, regenerated? | Python pyc possible when power cycling? | 0 | 0 | 0 | 369
17,029,608 | 2013-06-10T17:31:00.000 | 1 | 0 | 0 | 0 | wxpython,listctrl | 17,091,808 | 2 | false | 0 | 1 | I don't believe there is a built-in method to accomplish this. You would have to save the data, clear the control and then insert the new row or rows followed by the original rows. Personally, I would switch to using the ObjectListView widget where you can use lists of objects. Then you could just insert an object into the list and reset the control. | 1 | 0 | 0 | I have a ListCtrl and i wish to add the new rows to the top of the list (prior rows to be pushed down)
Can you help me with that?
Thanks! | WxPython, ListCtrl add rows to the top | 0.099668 | 0 | 0 | 981 |
17,030,327 | 2013-06-10T18:21:00.000 | 0 | 1 | 1 | 0 | python,path,installation,setup.py | 17,296,790 | 2 | false | 0 | 0 | This will probably not answer your question, but if you need to access the source code of a package you have installed, or any other file within this package, the best way to do it is to install this package in develop mode (by downloading the sources, putting it wherever you want and then running python setup.py develop in the base directory of the package sources). This way you know where the package is found. | 1 | 10 | 0 | After installation, I would like to make soft-links to some of the configuration & data files created by installation.
How can I determine the location of a new package's files installed from within the package's setup.py?
I initially hard-coded the path "/usr/local/lib/python2.7/dist-packages", but that broke when I tried using a virtual environment. (Created by virtualenv.)
I tried distutils.sysconfig.get_python_lib(), and that works inside the virtualenv. When installed on the real system, however, it returns "/usr/lib/python2.7/dist-packages" (Note the "local" directory isn't present.)
I've also tried site.getsitepackages():
Running a Python shell from the base environment:
import site
site.getusersitepackages()
'/home/sarah/.local/lib/python2.7/site-packages'
site.getsitepackages()
['/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages']
Running a Python shell from a virtual environment "testenv":
import site
site.getsitepackages()
Traceback (most recent call last):
File "", line 1, in
AttributeError: 'module' object has no attribute 'getsitepackages'
I'm running "Python 2.7.3 (default, Aug 1 2012, 05:14:39)" with "[GCC 4.6.3] on linux2" on Ubuntu. I can probably cobble something together with try-except blocks, but it seems like there should be some variable set / returned by distutils / setuptools. (I'm agnostic about which branch to use, as long as it works.)
Thanks. | Detect python package installation path from within setup.py | 0 | 0 | 0 | 3,205 |
17,030,589 | 2013-06-10T18:41:00.000 | 1 | 0 | 1 | 1 | ipython-notebook | 17,628,850 | 1 | false | 0 | 0 | For those who end up on this page, here's the solution. This is happening because your OS package manager (in my case 12.04) is lagging pypi in python packages - but not in core libraries (like zeromq).
To solve this, my recommended solution is to install python-pandas using your package manager, but also install a system-wide "pip", and then run "sudo pip install --upgrade ipython pandas".
This should get everything back in sync. | 1 | 1 | 0 | I am using python 2.6, ipython 0.12.1, tornado 3.02, pyzmq 13.1, and I am getting this error when I start ipython notebook.
"Websocket connection cannot be made"
In the ipython console window I get a tornado.application error, at line 183 in create_shell_stream
shell_stream = self.create_connected_stream(ip.....,zmq.XREQ)
error is "module" object has no attribute 'XREQ'
Do you know what's wrong? and how can I fix this error?
I installed ipython, tornado and pyzmq separately, not from easy_install or pip. | Ipython - Notebook error: Tornado.application : "Module" object has no attribute 'XREQ' | 0.197375 | 0 | 0 | 494