Q_Id (int64, 337 to 49.3M) | CreationDate (stringlengths 23 to 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (stringlengths 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (stringlengths 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (stringlengths 15 to 29k) | Title (stringlengths 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
28,741,590 | 2015-02-26T11:44:00.000 | 2 | 1 | 0 | 0 | python-2.7,sentiment-analysis,textblob | 30,297,074 | 1 | false | 0 | 0 | TextBlob uses a classifier to predict the polarity of a sentence. By default it uses the Movie-Review data set. You can also train it using your own training data. | 1 | 2 | 0 | How does textblob calculate polarity in sentiment analysis? What logic does it follow and can we change the logic?
Thank you. | Logic behind the polarity score calculated by TEXTBLOB? | 0.379949 | 0 | 0 | 650 |
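A minimal sketch of reading the polarity score from TextBlob and swapping in a different analyzer; the class names below come from the textblob package, but treat the exact setup as an illustrative assumption rather than part of the original answer:
from textblob import TextBlob
from textblob.sentiments import NaiveBayesAnalyzer

# default analyzer: polarity is a float in [-1.0, 1.0]
print(TextBlob("This movie was great").sentiment.polarity)

# swap in the Naive Bayes analyzer, which is trained on a movie-review corpus
blob = TextBlob("This movie was great", analyzer=NaiveBayesAnalyzer())
print(blob.sentiment)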
28,741,731 | 2015-02-26T11:51:00.000 | 0 | 1 | 1 | 0 | python,events,rabbitmq,amqp | 28,864,448 | 1 | false | 0 | 0 | The right answer to your question depends on your architecture. If you're adopting a microservices-style architecture, it might make sense to use AMQP for all your event communication. If you're adopting more of a monolithic architecture, you may want to use PyDispatcher + AMQP.
Note that AMQP and RabbitMQ shouldn't be used interchangeably. AMQP is a protocol. RabbitMQ implements a specific version of AMQP (0.9 and 0.9.1). ActiveMQ (broker), Apache Qpid (broker), Apache Qpid Proton (client), and Microsoft Azure (cloud service) support AMQP 1.0. | 1 | 1 | 0 | I am developing a trading/position management system in Python. It is therefore going to interact with external software, database events, and also on timed events as well.
Is it efficient enough to use an AMQP broker like RabbitMQ to handle everything? Or should I use PyDispatcher for local events and AMQP for external events?
Are both these solutions going to be slower than just say, a normal function call from one python script to another imported python script?
Many thanks! | Pydispatcher or AMQP/RabbitMQ for local event driven processing written in python? | 0 | 0 | 0 | 285 |
28,749,243 | 2015-02-26T17:45:00.000 | 3 | 0 | 0 | 0 | python,qt,squish | 30,211,284 | 2 | false | 0 | 1 | ... but I have noticed when I copy and paste tests from a suite
I have and move them to another suite or computer with Squish the file
will run, but when it runs anytime it sees an object name it does not
recognize it and throws an exception ...
For your test script, Squish creates this suite folder you mention.
Inside this folder, besides the test.py file, you also have an object.map file (in which Squish stores all the objects that you use inside the test).
Also, besides those 2 files you have a suite.conf file.
Instead of copying/pasting your test script file, you could move the suite folder to the other computer and open it there in the Squish IDE. Or, along with the test script, also copy the object.map file. | 1 | 0 | 0 | I'm using a combination of Python and Squish for Qt to write tests on a Qt GUI, but I have noticed that when I copy and paste tests from a suite I have and move them to another suite or computer with Squish, the file will run, but whenever it sees an object name it does not recognize it and throws an exception. Most of the time I use the picker tool to go get the object name and place it where the old object name was and it works (I might add that the object names haven't changed; I'm literally copying and pasting a string over THE SAME EXACT string). I must be doing something wrong. Has anyone seen this, or is there a way of resolving the issue so I don't have to re-record? | Has anyone seen Squish that you must re-record after moving? | 0.291313 | 0 | 0 | 246
28,751,764 | 2015-02-26T20:06:00.000 | 1 | 1 | 1 | 1 | python,installation,anaconda,packages,pydicom | 28,770,249 | 1 | true | 0 | 0 | You shouldn't copy the source to site-packages directly. Rather, use python setup.py install in the source directory, or use pip install .. Make sure your Python is indeed the one in /usr/local/anaconda, especially if you use sudo (which in general is not necessary and not recommended with Anaconda). | 1 | 2 | 0 | I'm installing some additional packages to anaconda and I can't get them to work. One such package is pydicom which I downloaded, unziped, and moved to /usr/local/anaconda/lib/python2.7/site-package/pydicom. In the pydicom folder the is a subfolder called source which contains both ez_setup.py and setup.py. I ran sudo python setup.py install which didn't spit out any errors and then ran sudo python ez_setup.py install when I still couldn't get the module to open in ipython. Now I can successfully import dicom but ONLY when my current directory is /usr/local/anaconda/lib/python2.7/site-package/pydicom/source. How do I get it so I import it from any directory? I'm running CentOS and I put
export PATH=/usr/local/anaconda/bin:$PATH
export PATH=/usr/local/anaconda/lib/python2.7/:$PATH
in my .bashrc file. | How do I get python to recognize a module from any directory? | 1.2 | 0 | 0 | 1,420 |
28,752,126 | 2015-02-26T20:28:00.000 | 7 | 0 | 0 | 0 | python,numpy,fft,fftw | 28,753,451 | 3 | true | 0 | 0 | If you are implementing the DFFT entirely within Python, your code will run orders of magnitude slower than either package you mentioned. Not just because those libraries are written in much lower-level languages, but also (FFTW in particular) they are written so heavily optimized, taking advantage of cache locality, vector units, and basically every trick in the book, that it would not surprise me if they ran at 10,000x the speed of a naive Python implementation. Even if you are using numpy in your implementation, it will still pale in comparison.
So yes; use numpy's fftpack. If that is not fast enough, you can try the python bindings for FFTW (PyFFTW), but the speedup from fftpack to fftw will not be nearly as dramatic. I really doubt there's a need to drop into C++ just for FFTs - they're sort of the ideal case for Python bindings. | 1 | 0 | 1 | I currently need to run an FFT on a 1024-sample-point signal. So far I have implemented my own DFT algorithm in python, but it is very slow. If I use the NUMPY fftpack, or even move to C++ and use FFTW, do you guys think it would be better? | Numpy fft.pack vs FFTW vs Implement DFT on your own | 1.2 | 0 | 0 | 4,655
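As a quick illustration of the fftpack route recommended above, a minimal numpy sketch on a 1024-point signal (the random input is only a placeholder assumption):
import numpy as np

signal = np.random.rand(1024)        # placeholder 1024-sample signal
spectrum = np.fft.fft(signal)        # complex spectrum of the same length
freqs = np.fft.fftfreq(signal.size)  # normalized frequency bins
print(spectrum.shape, freqs.shape)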
28,752,446 | 2015-02-26T20:46:00.000 | 2 | 0 | 0 | 0 | python,tkinter | 28,753,245 | 2 | true | 0 | 1 | If you plan on literally destroying everything in the root window, my recommendation is to make a single frame be a child of the root window, and then put everything inside that frame. Then, when you need to clear the root window you only have to delete this one widget, and it will take care of deleting all of the other widgets.
If instead of destroying the widgets you want to merely hide them, the best solution is to use grid, so that you can use grid_forget to remove them from view, and grid to make them visible again. You can use pack and pack_forget, but pack doesn't remember where the widget was, making it more difficult to restore them without a lot of work.
If your program is made of logical groups of widgets, use frames to create the groups, and then you can destroy or call grid_forget on the entire frame at once, rather than having to do that for each individual widget. | 1 | 1 | 0 | I am new to tkinter and I was wondering if there is a way to clear the whole root (window). I tried root.destroy(), but I want to have a chance to undo (to call back some widgets). I also tried .pack_forget() and .grid_forget(), but it takes a lot of time and later may cause many bugs in the program.
Thank you for help. | Clear whole root | 1.2 | 0 | 0 | 4,266 |
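A minimal sketch of the single-container approach suggested in the answer above; the widget names are illustrative assumptions:
import tkinter as tk  # on Python 2 the module is named Tkinter

root = tk.Tk()
container = tk.Frame(root)
container.pack(fill="both", expand=True)

# build everything as children of `container`, not of `root`
tk.Label(container, text="some content").pack()
tk.Entry(container).pack()

def clear_all():
    # destroying the one container removes all of its children at once
    container.destroy()

tk.Button(root, text="Clear", command=clear_all).pack()
root.mainloop()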
28,753,061 | 2015-02-26T21:26:00.000 | 1 | 0 | 0 | 1 | google-app-engine,google-cloud-storage,google-app-engine-python | 28,754,390 | 1 | false | 0 | 0 | Unfortunately you only have two good options here:
Have a service which authenticates the individual app according to whatever scheme you like (some installation license, a random GUID assigned at creation time, whatever) and vends GCS signed URLs, which the end user could then use for a single operation, like uploading an object or listing a bucket's content. The downside here is that all requests must involve your service. All resources would belong entirely to your application.
Abandon the "without asking the user to login" requirement and require a single Google login at install time. | 1 | 1 | 0 | I'd like to give to each of my customers access to their own bucket under my GCS enabled app.
I also need to make sure that a user's bucket is safe from other users' actions.
Last but not least, the customer will be a client application, so the whole process needs to be done transparently without asking the user to login.
If I apply an ACL on each bucket, granting access only to the user I want, can I create an API key only for that bucket and hand that API key to the client app to perform GCS API calls? | Can I have GCS private isolated buckets with a unique api key per bucket? | 0.197375 | 0 | 1 | 56 |
28,756,204 | 2015-02-27T01:50:00.000 | 0 | 0 | 1 | 0 | python,multiprocessing | 61,904,891 | 1 | false | 0 | 0 | It seems like Beep function couldn't play several notes at the same time. You can record the sound of 'Beep' through sounddevice library, save the sound in wav files. Then, use subprocess.Popen to achieve multiprocess. The subprocess would play the wav files. | 1 | 0 | 0 | I am writing a basic piano style program that uses functions with winsound.Beep to play different note. I am new to multi-processing, and was wondering how I would be able to play two notes at once. If that is not possible, perhaps there is a way to combine frequencies that I do not know. Thanks for reading
~Jimnebob | How do I use the multiprocess library with winsound in python? | 0 | 0 | 0 | 142 |
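For reference, a minimal sketch of driving winsound.Beep from two processes with the standard multiprocessing module; it is Windows-only, and whether the two tones actually mix depends on the sound driver, so treat it as an experiment rather than a guaranteed chord:
from multiprocessing import Process
import winsound  # Windows-only standard library module

def play(freq_hz, duration_ms):
    winsound.Beep(freq_hz, duration_ms)  # blocking call

if __name__ == "__main__":
    notes = [Process(target=play, args=(440, 1000)),
             Process(target=play, args=(660, 1000))]
    for p in notes:
        p.start()
    for p in notes:
        p.join()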
28,756,362 | 2015-02-27T02:09:00.000 | -2 | 0 | 0 | 1 | python,process | 28,756,912 | 6 | false | 0 | 0 | popen works great because you can run things through grep, cut, etc. So you can tailor the info to exactly what you want. | 1 | 3 | 0 | I know it has something to do with /proc but I'm not really familiar with it. | How do I show a list of processes for the current user using python? | -0.066568 | 0 | 0 | 4,574 |
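A small sketch of that approach, using the subprocess module to call ps for the current user; the output columns chosen here are an assumption:
import getpass
import subprocess

user = getpass.getuser()
# ask ps for the current user's processes, selecting just the columns we care about
output = subprocess.check_output(["ps", "-u", user, "-o", "pid,comm"])
print(output.decode())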
28,757,736 | 2015-02-27T04:49:00.000 | 1 | 0 | 0 | 0 | python,django | 28,760,572 | 1 | true | 1 | 0 | I would recommend using Celery processes in the background and perhaps use the awesome Twisted library for handling your network requirements. | 1 | 1 | 0 | I have a django framework that receives user form. Thi input is used to create a TCP connection with a backhand process and send and receive json on it.
How do I handle several TCP connections using django framework? | how to handle tcp connections on django | 1.2 | 0 | 0 | 143 |
28,765,243 | 2015-02-27T12:33:00.000 | 2 | 1 | 0 | 0 | python,api,automation,web-api-testing | 38,866,398 | 1 | false | 0 | 0 | I have done API Automation framework using JAVA - TestNG - HTTP Client.
It's a hybrid framework consisting of:
Data-driven model: reads data from JSON/XML files.
Method-driven: I have written POJOs for reading and writing JSON objects and arrays.
Report: I get the report using TestNG's customized report format.
Dependency management: I have used Maven.
I have integrated this framework with Jenkins for continuous integration.
SCM: I have used Git for that. | 1 | 3 | 0 | I am planning to have an API automation framework built on top of Python + the Requests library
Expected flow:
1) Read Request Specification From input file "csv/xml"
2) Make API Request & get Response & analyse the same
3) Store Test Results
4) Communicate the same
Initial 'smoke test' to be performed with basic cases, then the detailed ones. There will be 'n' number of APIs with respective cases. | API Test Automation Framework Structure | 0.379949 | 0 | 1 | 1,451
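A minimal sketch of the data-driven flow described in this question, reading request specs from a CSV file and issuing them with the requests library; the file name and column names are illustrative assumptions:
import csv
import requests

results = []
with open("cases.csv") as f:  # assumed columns: name, method, url, expected_status
    for row in csv.DictReader(f):
        resp = requests.request(row["method"], row["url"])
        ok = resp.status_code == int(row["expected_status"])
        results.append((row["name"], resp.status_code, ok))

for name, status, ok in results:
    print(name, status, "PASS" if ok else "FAIL")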
28,765,398 | 2015-02-27T12:42:00.000 | 1 | 0 | 0 | 0 | javascript,python,html,web-scraping,beautifulsoup | 28,852,463 | 2 | false | 1 | 0 | The Python binding for Selenium and phantomjs (if you want to use a headless browser as backend) are the appropriate tools for this job. | 2 | 0 | 0 | I want to fetch few data/values from a website. I have used beautifulsoup for this and the fields are blank when I try to fetch them from my Python script, whereas when I am inspecting elements of the webpage I can clearly see the values are available in the table row data.
When I looked at the HTML source I noticed it's blank there too.
I came up with a reason: the website is using Javascript to populate the values in their corresponding fields from its own database. If so, then how can I fetch them using Python? | How to fetch data from a website using Python that is being populated by Javascript? | 0.099668 | 0 | 1 | 272
28,765,398 | 2015-02-27T12:42:00.000 | 0 | 0 | 0 | 0 | javascript,python,html,web-scraping,beautifulsoup | 28,850,142 | 2 | false | 1 | 0 | Yes, you can scrape JS data, it just takes a bit more hacking. Anything a browser can do, python can do.
If you're using firebug, look at the network tab to see from which particular request your data is coming from. In chrome element inspection, you can find this information in a tab named network, too. Just hit ctrl-F to search the response content of the requests.
If you found the right request, the data might be embedded in JS code, in which case you'll have some regex parsing to do. If you're lucky, the format is xml or json, in which case you can just use the associated builtin parser. | 2 | 0 | 0 | I want to fetch few data/values from a website. I have used beautifulsoup for this and the fields are blank when I try to fetch them from my Python script, whereas when I am inspecting elements of the webpage I can clearly see the values are available in the table row data.
When I looked at the HTML source I noticed it's blank there too.
I came up with a reason: the website is using Javascript to populate the values in their corresponding fields from its own database. If so, then how can I fetch them using Python? | How to fetch data from a website using Python that is being populated by Javascript? | 0 | 0 | 1 | 272
28,775,534 | 2015-02-27T22:37:00.000 | 0 | 0 | 0 | 0 | python-3.x,gtk,glade | 28,878,731 | 1 | true | 0 | 0 | After a lot of searching for a Glade-only solution, I think that Gtk.Menuitem doesn't have a URL-open option. I now just defined on_menuitem_clicked-function that uses:
webbrowser.open_new_tab()
from the standard library. | 1 | 0 | 0 | What is the best way to create a menu item (for the Gtk.MenuBar) that should open the default browser in a new tab and load a URL?
Is it possible to do that in Glade directly or do I need to create that function in the program code itself? Is there a preferred way in Python 3 to do that? | Link to website in Gtk.MenuBar using Glade | 1.2 | 0 | 1 | 178 |
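A minimal PyGObject sketch of the handler described in the accepted answer; the menu item and signal wiring here are illustrative assumptions (in a Glade-based app you would instead connect the handler to the item defined in the .glade file):
import webbrowser
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def on_menuitem_clicked(_item):
    webbrowser.open_new_tab("http://example.com")  # assumed URL

item = Gtk.MenuItem(label="Open website")
item.connect("activate", on_menuitem_clicked)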
28,777,006 | 2015-02-28T01:25:00.000 | 5 | 0 | 1 | 0 | python,python-2.7,pdb,spyder | 54,999,358 | 2 | false | 0 | 0 | You can use ipdb. put ipdb.set_trace() wherever you want to debug. Then press s to step into the function. | 1 | 5 | 0 | Note: To explain this quickly I'm going to talk about this from the perspective of working in Spyder.
If a function is called in my code, I can put a breakpoint next to where it's called, and then when my code gets to that point I can click the "Step into function.." button to see what happens inside this function.
Suppose I'm at some arbitrary breakpoint and want to see what happens inside a function that's not in my code. Is there any way to call this function through the pdb console and "step into" said function call? | python pdb: step into a function called from console | 0.462117 | 0 | 0 | 3,082 |
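Besides the ipdb suggestion above, the standard pdb module can start a call under debugger control so you can step straight into it; a small sketch (the function here is an illustrative assumption):
import pdb

def some_library_function(x):
    return x * 2

# runs the call under the debugger; type "s" at the (Pdb) prompt to step into it
pdb.runcall(some_library_function, 21)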
28,777,882 | 2015-02-28T03:45:00.000 | 0 | 0 | 0 | 0 | python,objectlistview | 28,974,102 | 1 | true | 0 | 1 | There is no real getter.
Use ObjectListView.sortColumnIndex for that.
It will be '-1' if none of the columns are used for sorting.
btw: How can I add code here? The "code snippet" symbol just comes up with an ugly, incomprehensible dialog. | 1 | 0 | 0 | I want to ask an ObjectListView object if and which of its columns
(index would be enough) are sorted and in which direction (asc or desc).
I know
ObjectListView.GetSortColumn()
which returns a ColumnDef object.
But I cannot see a way to ask for the index of the ColumnDef.
In wx.ListCtrl I could not find anything about sorting. | How to ask a ObjectListView which column is ordered and in which direction? | 0 | 0 | 0 | 81
28,780,611 | 2015-02-28T10:20:00.000 | 0 | 0 | 0 | 0 | python,pygame,joystick | 28,937,815 | 1 | false | 0 | 1 | I was not able to figure out why get_axis(1) did not get the position of the left paddle.
However, I was able to use evdev to get all the information I need from the steering wheel. It's not very high level, unlike Pygame, but it does work. | 1 | 0 | 0 | I am using Pygame to get values from a Simraceway steering wheel, which is just seen as a joystick. There are three axes on the joystick -- one for steering, one for the left paddle, and one for the right paddle.
When I do the Pygame command get_numaxes(), I correctly get back 3 axes. But when I do the command get_axis(1), which should return the value for the left paddle, I do not get the correct value. I do get the correct values for get_axis(0) (the steering) and get_axis(2) (the right paddle).
In Windows, the left paddle shows up as Z-rotation rather than a normal axis. I'm not sure if that has anything to do with it.
Why is get_axis(1) not getting the position of the left paddle? Is there any program other than Pygame that will work better at getting a Z-rotation axis from this joystick, like Windows does? | Pygame.not getting value for one of axes on the joystick | 0 | 0 | 0 | 542 |
28,786,932 | 2015-02-28T21:09:00.000 | 0 | 0 | 0 | 0 | python,html,css,python-3.x | 28,787,299 | 2 | false | 1 | 0 | To achieve your goal, you need good knowledge of javascript, the language for dynamic web pages. You should be familiar with dynamic web techniques, AJAX, DOM, JSON. So the main part is on the browser side. Practically any python web server fits. To "bridge the gap" the keyword is templates. There are quite a few for python, so you can choose, which one suites you best. Some frameworks like django bring their own templating engine.
And for your second question: when a web site doesn't offer an API, perhaps the owner of the site does not want his data to be abused by others. | 1 | 1 | 0 | I am working on making a GUI front end for a Python program using HTML and CSS (sort of similar to how a router is configured using a web browser). The program assigns values given by the user to variables and performs calculations with those variables, outputting the results. I have a few snags to work out:
How do I design the application so that the data is constantly updated, in real-time? I.e. the user does not need to hit a "calculate" button nor refresh the page to get the results. If the user changes the value in a field, all other are fields simultaneously updated, et cetera.
How can a value for a variable be fetched from an arbitrary location on the internet, and also constantly updated if/when it is updated at the source? A few of the variables in the program are based on current market conditions (e.g. the current price of gold). I would like to make those values be automatically entered after retrieving them from certain sources that, let us assume, do not have APIs.
How do I build the application to display via HTML and CSS? Considering Python cannot be implemented like PHP, I am seeking a way to "bridge the gap" between HTML and Python, without such a "heavy" framework like Django, so that I can run Python code on the server-side for the webpage.
I have been looking into this for a quite some time now, and have found a wide range of what seem to be solutions, or that are nearly solutions but not quite. I am having trouble picking what I need specifically for my application. These are my best findings:
Werkzeug - I believe Werkzeug is the best lead I have found for putting the application together with HTML and Python. I would just like to be reassured that I am understanding what it is correctly and that is, in fact, a solution for what I am trying to do.
WebSockets - For displaying the live data from an arbitrary website, I believe I could use this protocol. But I am not sure how I would implement this in practice. I.e. I do not understand how to target the value, and then continuously send it to my application. I believe this to be called scraping? | How to make a webpage display dynamic data in real time using Python? | 0 | 0 | 0 | 6,330 |
28,787,814 | 2015-02-28T22:40:00.000 | 1 | 0 | 0 | 0 | python,mysql,sql | 28,787,981 | 2 | false | 1 | 0 | The latter one you need to do and handle in any case, thus I do not see there is much value in querying for duplicates, except to show the user information beforehand - e.g. report "This username has been taken already, please choose another" when the user is still filling in the form. | 2 | 1 | 0 | Python application, standard web app.
If a particular request gets executed twice by error the second request will try to insert a row with an already existing primary key.
What is the most sensible way to deal with it?
a) Execute a query to check if the primary key already exists and do the checking and error handling in the python app
b) Let the SQL engine reject the insertion with a constraint failure and use exception handling to handle it back in the app
From a speed perspective it might seem that a failed request will take the same amount of time as a successful one, making b faster because it's only one request and not two.
However, when you take things into account like read-only db slaves and table write-locks and stuff like that, things get fuzzy in my experience on scaling standard SQL databases. | Let the SQL engine do the constraint check or execute a query to check the constraint beforehand | 0.099668 | 1 | 0 | 61
28,787,814 | 2015-02-28T22:40:00.000 | 2 | 0 | 0 | 0 | python,mysql,sql | 28,788,000 | 2 | true | 1 | 0 | The best option is (b), from almost any perspective. As mentioned in a comment, there is a multi-threading issue. That means that option (a) doesn't even protect data integrity. And that is a primary reason why you want data integrity checks inside the database, not outside it.
There are other reasons. Consider performance. Passing data into and out of the database takes effort. There are multiple levels of protocol and data preparation, not to mention round trip, sequential communication from the database server. One call has one such unit of overhead. Two calls have two such units.
It is true that under some circumstances, a failed query can have a long clean-up period. However, constraint checking for unique values is a single lookup in an index, which is both fast and has minimal overhead for cleaning up. The extra overhead for handling the error should be tiny in comparison to the overhead for running the queries from the application -- both are small, one is much smaller.
If you had a query load where the inserts were really rare with respect to the comparison, then you might consider doing the check in the application. It is probably a tiny bit faster to check to see if something exists using a SELECT rather than using INSERT. However, unless your query load is many such checks for each actual insert, I would go with checking in the database and move on to other issues. | 2 | 1 | 0 | Python application, standard web app.
If a particular request gets executed twice by error the second request will try to insert a row with an already existing primary key.
What is the most sensible way to deal with it?
a) Execute a query to check if the primary key already exists and do the checking and error handling in the python app
b) Let the SQL engine reject the insertion with a constraint failure and use exception handling to handle it back in the app
From a speed perspective it might seem that a failed request will take the same amount of time as a successful one, making b faster because it's only one request and not two.
However, when you take things into account like read-only db slaves and table write-locks and stuff like that, things get fuzzy in my experience on scaling standard SQL databases. | Let the SQL engine do the constraint check or execute a query to check the constraint beforehand | 1.2 | 1 | 0 | 61
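A minimal sketch of option (b), letting the database enforce the constraint and handling the failure in the app; sqlite3 is used here only as a stand-in for whatever DB-API driver the app actually uses:
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

try:
    # the duplicate primary key is rejected by the engine itself
    conn.execute("INSERT INTO users VALUES (1, 'alice-again')")
except sqlite3.IntegrityError:
    pass  # e.g. report "already processed" back to the caller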
28,790,032 | 2015-03-01T04:17:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn | 28,871,022 | 1 | true | 0 | 0 | Currently there is no way to directly get the optimum number of estimators from GradientBoostingClassifier. If you also pass n_estimators in the parameter grid to GridSearchCV it will only try the exact values you give it, and return one of these.
We are looking to improve this by searching over the number of estimators automatically. | 1 | 0 | 1 | With GradientBoostingClassifier, suppose I set n_estimators to 2000 and use GridSearchCV to search across learning_rate in [0.01, 0.05, 0.10] - how do I know the number of boosting iterations that produced the optimal result? Is the model always going to fit 2000 trees for each value of learning_rate, or is it going to use the optimal number of boosting iterations for each of these values? It wouldn't make sense to also include n_estimators in the grid search and search for all values in [1,2000]. | Obtain optimal number of boosting iterations in GradientBoostingClassifier using grid search | 1.2 | 0 | 0 | 1,119
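For reference, a minimal sketch of the grid-search setup the question describes; the toy data and grid values are assumptions, and as noted above every candidate fits the full n_estimators=2000 trees:
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in newer versions

X, y = make_classification(n_samples=200, random_state=0)
grid = GridSearchCV(GradientBoostingClassifier(n_estimators=2000),
                    param_grid={"learning_rate": [0.01, 0.05, 0.10]},
                    cv=3)
grid.fit(X, y)
print(grid.best_params_)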
28,791,278 | 2015-03-01T07:41:00.000 | 0 | 0 | 1 | 0 | python-3.x | 28,791,357 | 1 | false | 0 | 0 | What you describe is what was a very popular type of game a long time ago. The most difficult portion of this type of game is interpreting user input. You can start with a list of possible commands, iterating through each token of the user's input. If I type
look left
the game can begin by calling a method or function which interprets the available visual data. You will want to look into your database implementation for what is left in relation to the current player's position. The game can respond with
You see a wooden box on the floor. It is locked.
You get the idea. As the user, I will probably want to check my inventory for a key. You can implement a subroutine in your parser for searching, categorizing, and using your inventory. The key to writing anything like the described program is spending at least twice as much time testing and debugging the game as you spent coding the first working version. The second key is to make sure you are thinking ahead. Example: "How can I modularize the navigation subroutine so that maps can be modified or expanded easily?"
Comment if you have any questions. Good luck coding! | 1 | 0 | 0 | Metaphorically speaking, I'm learning python in a community college for game programming; and our second assignment is to make a text based game. I'm stuck trying to figure out how to get the code to run if the player has something in their inventory, then display these options or print these options if they don't have that certain item in their inventory. | Key and lock code [game programming]? | 0 | 0 | 0 | 117 |
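A tiny sketch of the kind of inventory/lock check the question asks about; every name here (the inventory list, the command words, the messages) is an illustrative assumption:
inventory = ["map", "key"]

def handle_command(command):
    tokens = command.lower().split()
    if tokens[:1] == ["open"]:
        if "key" in inventory:
            print("You unlock the wooden box.")
        else:
            print("The box is locked. Maybe a key would help.")
    else:
        print("I don't understand that.")

handle_command("open box")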
28,791,639 | 2015-03-01T08:42:00.000 | 2 | 0 | 1 | 0 | python,constructor,initializer | 28,791,729 | 3 | false | 0 | 0 | __init__ is called with an already built up instance of the object as first parameter (normally called self, but that's just a parameter name).
__new__ instead is called passing the class as first parameter and is expected to return an instance (that will be later passed to __init__).
This allows, for example, __new__ to return an already-existing instance for value-based objects that are immutable and for which identity shouldn't play a role. | 1 | 14 | 0 | I have heard that the __init__ function in python is not a constructor but an initializer, and that the __new__ function is actually the constructor; the difference is that __init__ is called after the creation of the object and __new__ is called before. Am I right? Can you explain the difference better, and why do we need both __new__ and __init__? | Initializer vs Constructor | 0.132549 | 0 | 0 | 17,678
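A small sketch of the pattern described in this answer, where __new__ hands back a cached instance while __init__ only initializes; the class and its cache are illustrative assumptions:
class Color(object):
    _cache = {}

    def __new__(cls, name):
        # constructor: may return an already-existing instance
        if name not in cls._cache:
            cls._cache[name] = super(Color, cls).__new__(cls)
        return cls._cache[name]

    def __init__(self, name):
        # initializer: runs on whatever instance __new__ returned
        self.name = name

print(Color("red") is Color("red"))  # True, same cached object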
28,793,857 | 2015-03-01T13:04:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine | 28,804,617 | 1 | true | 1 | 0 | Actually two different App Engine apps cannot see the same items in memcache. Their memcache spaces are totally isolated from each other.
However two different modules of the same app use the same memcache space and can read and write the same items. Modules act like sub-apps. Is that what you meant?
It is also possible to have different versions of an app (or module) running at the same time (for example to do A/B testing), and these also use the same memcache space. | 1 | 1 | 0 | I have 2 Google App Engine applications which share memcache items, one app writes the items and the other apps reads them. This works in production. however - locally using the SDK, items written by one app are not available to the other. Is there a way to make this work? | How to share memcache items across 2 apps locally using google app eninge sdk | 1.2 | 0 | 0 | 63 |
28,794,872 | 2015-03-01T14:49:00.000 | 0 | 0 | 0 | 0 | java,python,xml,linux,groovy | 28,805,524 | 1 | false | 0 | 0 | Looking deeper into the problem and after scanning other posts I found out that inst2xsd is the tool for this kind of tasks. | 1 | 0 | 0 | I have 46000 xml files that all share a common structure but there are variations and the number of files makes it impossible to figure out a common xml schema for all of them. Is there a tool that may help me to extract a schema from all these files or at least something that may give me a close enough idea of what is mandatory and what is optional?
Preferably, of course, a schema according to some standard or a DTD.
And, as I work entirely in Linux, a Linux tool or a program that works in Linux is OK. I am quite fluent in C, Java, Javascript, Groovy, Python (2.7 and 3) and a number of other languages. | extract an XML schema (or equivalent) from a large set of xml files | 0 | 0 | 1 | 76 |
28,798,079 | 2015-03-01T19:32:00.000 | -1 | 0 | 0 | 0 | python,django,cookies,django-rest-framework,django-testing | 28,808,418 | 1 | true | 1 | 0 | I have found my solution. It was rather straightforward actually.
I tried to set my cookie on my request for the test, which made sense since the cookie should be sent in the request. A better way of testing it is to just GET a resource first (which sets the cookie, as would happen in a normal web situation as well) and then POST with the set cookie. This worked just fine. | 1 | 2 | 0 | I'm using get_signed_cookie in one of my views, and it works. The problem is in testing it: I'm manually signing my cookie in my test using Django's django.core.signing module with the same salt as my actual code (I'm using a salt that's in my settings).
When I do non-automated tests, all is well. I've compared the cookie that is set in my test with the cookie that is set by Django itself and they have the same structure:
In my test (I'm manually signing the string 'test_cookie_value' in my test):
{'my_cookie': 'test_cookie_value:s5SA5skGmc4As-Weyia6Tlwq_V8'}
Django:
{'my_cookie': 'zTmwe4ATUEtEEAVs:1YRl0a:Lg3KHfpL93HoUijkZlNphZNURu4'}
But when my view tries to get_signed_cookie("my_cookie", False, salt="my_salt"), False is returned.
I think this is strange, and while I'm trying to obey the testing goat (first-time TDD'er here) I'm tempted to skip testing this.
I hope someone can give me some guidance; I'm stuck ignoring the goat noises in the background for now. | Django get_signed_cookie on manually signed cookie | 1.2 | 0 | 0 | 820
28,799,663 | 2015-03-01T22:02:00.000 | 2 | 1 | 1 | 1 | python,multithreading | 28,799,878 | 2 | true | 0 | 0 | To me, this looks like a pristine application for the subprocess module.
I.e. do not run the test scripts from within the same Python interpreter; rather, spawn a new process for each test script. Is there any particular reason why you would want to run them in the same interpreter instead of spawning a new process? Having a sub-process isolates the scripts from each other, e.g. imports and other global variables.
If you use subprocess.Popen to start the sub-processes, then you have a .terminate() method to kill the process if need be. | 1 | 0 | 0 | I have a python program run_tests.py that executes test scripts (also written in python) one by one. Each test script may use threading.
The problem is that when a test script unexpectedly crashes, it may not have a chance to tidy up all open threads (if any), hence the test script cannot actually complete due to the threads that are left hanging open. When this occurs, run_tests.py gets stuck because it is waiting for the test script to finish, but it never does.
Of course, we can do our best to catch all exceptions and ensure that all threads are tidied up within each test script so that this scenario never occurs, and we can also set all threads to daemon threads, etc, but what I am looking for is a "catch-all" mechanism at the run_tests.py level which ensures that we do not get stuck indefinitely due to unfinished threads within a test script. We can implement guidelines for how threading is to be used in each test script, but at the end of the day, we don't have full control over how each test script is written.
In short, what I need to do is to stop a test script in run_tests.py even when there are rogue threads open within the test script. One way is to execute the shell command killall -9 <test_script_name> or something similar, but this seems to be too forceful/abrupt.
Is there a better way?
Thanks for reading. | Best way to stop a Python script even if there are Threads running in the script | 1.2 | 0 | 0 | 238 |
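A minimal sketch of the Popen idea from the answer above, with an external time budget so rogue threads cannot keep a test alive forever; the script name and the 60-second limit are illustrative assumptions:
import subprocess
import time

proc = subprocess.Popen(["python", "some_test_script.py"])

deadline = time.time() + 60  # fixed time budget for the test script
while proc.poll() is None and time.time() < deadline:
    time.sleep(1)

if proc.poll() is None:
    # rogue threads kept the script alive past the budget: stop it from outside
    proc.terminate()
    proc.wait()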
28,804,395 | 2015-03-02T07:11:00.000 | 0 | 0 | 1 | 0 | python,differential-equations | 28,998,382 | 2 | false | 0 | 0 | RK4 or the classical Runge-Kutta method is one specific integration method. As an explicit method, it is imminently unsuitable for stiff problems. As a one-method method, it has no intrinsic features for step size control.
For stiff problems, you want implicit RK methods with step size control. | 1 | 1 | 0 | I need a solver for stiff Initial-Value Problems (IVPs) in python exploiting RK4, preferably explicit. I have been searching for the past few days but could not find one. Following are my queries:
Does the solver, i.e. any module, exist?
If not, would it be reasonable to code one? I am asking this because I can't find any reference for using explicit RK4 for stiff problems. | Runge-Kutta(RK4) for stiff IVP | 0 | 0 | 0 | 331
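As a pointer toward the implicit-with-step-size-control route recommended above, a minimal sketch using scipy's VODE/BDF integrator on a toy stiff ODE; the equation, tolerances and step loop are illustrative assumptions:
import math
from scipy.integrate import ode

def f(t, y):
    # a classic stiff test problem
    return -1000.0 * y + 3000.0 - 2000.0 * math.exp(-t)

solver = ode(f).set_integrator("vode", method="bdf", atol=1e-8, rtol=1e-6)
solver.set_initial_value(0.0, 0.0)
while solver.successful() and solver.t < 0.5:
    solver.integrate(solver.t + 0.01)
print(solver.t, solver.y)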
28,804,802 | 2015-03-02T07:44:00.000 | 0 | 0 | 0 | 0 | python,django,apache | 33,037,343 | 1 | true | 1 | 0 | The problem for slow response was with the custom views which did not have pagination. After implementing this feature, response became fast :) | 1 | 1 | 0 | I have used django-adminplus to register custom views with django-filters in the admin site. This is slowing down the performance of my django project. I am using Apache as my http webserver as also my static file renderer as I 'HAVE' to do it so. I cannot use gninx nor gunicorn. The views render several 100k records. I have about 30-40 custom views registered with search/ filter option on the admin site. How do I improve the performance. I use linux debian, 64 GB RAM. django 1.6.5 . Thx in advance | Does django adminplus effects performance? | 1.2 | 0 | 0 | 83 |
28,805,401 | 2015-03-02T08:30:00.000 | 0 | 0 | 1 | 0 | python,pycxx | 28,805,402 | 1 | false | 0 | 1 | If NDEBUG is defined you have to use the Debug version of the interpreter python_d.exe.
Furthermore, if the name of the extension is myextension, the name of the DLL in Release must be myextension.pyd, but in Debug the name of the DLL must be myextension_d.pyd
In Release the program works fine and if I add debugging information it still works fine.
How to solve the problem? | Debugging my Python C extension lead to "PyThreadState_Get: no current thread" | 0 | 0 | 0 | 304 |
28,807,448 | 2015-03-02T10:25:00.000 | 0 | 0 | 0 | 0 | python,wxpython | 28,818,633 | 1 | false | 0 | 1 | You can use focus events. Either catch wx.EVT_KILL_FOCUS which fires when you leave the grid widget or catch wx.EVT_SET_FOCUS when you enter a new widget. Then you can do something in your event handler to update grid1. | 1 | 0 | 0 | I have a window with two grids in it. Example for using:
Making some operations on grid1. When making operations on another object in this window(like the other grid) I want to do final operation on grid1. What is the right event for this? | wxpython event on deactivating a grid | 0 | 0 | 0 | 18 |
28,807,894 | 2015-03-02T10:48:00.000 | 0 | 0 | 1 | 0 | python | 28,809,702 | 1 | false | 0 | 0 | My guess is you have multiple versions of python, and Pycharm is using a different one than the version of IDLE you're using. It could also be that some packages like Pygame only work in 32 bit versions... | 1 | 0 | 0 | I primarily use PyCharm but when I try to develop using IDLE it doesn't recognise any of my packages / folders. Even though they have a __init__.py file in them, any reason why and how can I fix it?
Thanks for any help, I can provide more information as and when it is needed. | IDLE doesn't recognise packages | 0 | 0 | 0 | 99 |
28,811,104 | 2015-03-02T13:34:00.000 | 0 | 0 | 0 | 0 | python,gtk,pygtk | 34,956,262 | 1 | false | 0 | 1 | Assuming you want five pixels of spacing, if you would like to have space between the different cells of your table, you can try table.set_row_spacings(5); table.set_col_spacings(5); If you would like to have space around the entire table, you can do table.set_padding(5). | 1 | 1 | 0 | I am trying to use gtk table in one of my projects(GUI based) to show a pair of columns and its values. Since gtk table doesn't have borders by default, my GUI is not coming up well.
So,
Is there anyway I can add borders to gtk table and its cells?
If no can I create a customized widget with borders by extending gtk Table?
How can I create a customized widget in pygtk?
P.S: I have tried gtk Treeview and gtk Frames and that's not what I want. Also, I have created a gtk treeview for each pair of columns and its values, but the GUI works very, very slowly. | Add border to table in pygtk | 0 | 0 | 0 | 1,159
28,811,202 | 2015-03-02T13:39:00.000 | 0 | 0 | 0 | 0 | python,tkinter,ttk | 28,813,514 | 1 | false | 0 | 1 | No, it is likely not possible, depending on what you mean by "everything" and what you mean by "scaled". Any widget can be made to stretch to fill its allotted space. Text widgets and canvas widgets, for example, scale nicely. A button or label will fill the space it's in, but the text inside the widget won't change (that is also true of text and canvas widgets).
It's possible to organize your GUI so that when the window resizes, everything remains in its proper place at its original size. Having buttons that automatically scale is not something most people would expect. Disabling resizing usually results in a poor user experience -- users should have the ability to make a window larger or smaller. | 1 | 0 | 0 | Is it possible that when a Tkinter window is made bigger or larger, everyhting in the window is scaled?
So all the proportions stay the same but their sizes vary.
Now when the window is resized all the buttons etc stay the same so I disabled resize because there is no point, it just looks bad. | Tkinter - scale all components on resize? | 0 | 0 | 0 | 569 |
28,811,688 | 2015-03-02T14:02:00.000 | 0 | 0 | 0 | 0 | python,django,movie,imdb | 28,812,266 | 1 | false | 1 | 0 | If you have a server which is frequently used, you could think about caching the information locally. I suppose it's very likely that most queries will be clustered around a reduced set of movies, so that would speed up your search. | 1 | 0 | 0 | I've tried using OMDb, it gives all the things I need and has a very simple working too, but it is very very slow. I looped about 200 queries and it took about 3 minutes to complete.
Is there any faster API out there that can give me similar results when querying a movie? The details I'm looking for in particular are genre, actors and director.
My base code is written in python (for my django server), so an API with a wrapper would just make my day | API for fetching movie details from IMDB or similar sites? | 0 | 0 | 0 | 1,396 |
28,813,409 | 2015-03-02T15:27:00.000 | -2 | 0 | 0 | 0 | python,postgresql,unicode | 28,813,836 | 3 | false | 0 | 0 | Since a string is basically just data and a pointer, you can save null in it. However, since null represents the end of the string ("null terminator "), there is no way to read beyond the null without knowing the size ahead of reading.
Therefore, it seems that you ought to store your data in binary and read it as a buffer.
Good luck! | 2 | 5 | 0 | Are null bytes allowed in unicode strings?
I don't ask about utf8, I mean the high level object representation of a unicode string.
Background
We store unicode strings containing null bytes via Python in PostgreSQL.
The strings cut at the null byte if we read it again. | Are null bytes allowed in unicode strings in PostgreSQL via Python? | -0.132549 | 1 | 0 | 8,634 |
28,813,409 | 2015-03-02T15:27:00.000 | 1 | 0 | 0 | 0 | python,postgresql,unicode | 28,814,135 | 3 | false | 0 | 0 | Python itself is perfectly capable of having both byte strings and Unicode strings with null characters having a value of zero. However if you call out to a library implemented in C, that library may use the C convention of stopping at the first null character. | 2 | 5 | 0 | Are null bytes allowed in unicode strings?
I don't ask about utf8, I mean the high level object representation of a unicode string.
Background
We store unicode strings containing null bytes via Python in PostgreSQL.
The strings cut at the null byte if we read it again. | Are null bytes allowed in unicode strings in PostgreSQL via Python? | 0.066568 | 1 | 0 | 8,634 |
28,813,775 | 2015-03-02T15:44:00.000 | 0 | 1 | 0 | 1 | python,bash,scripting,path,packing | 28,828,064 | 2 | false | 0 | 0 | Thanks for the answers!
Of course this is not a good approach for providing published code! Sorry if it was confusing. But it is a good approach if you are developing, for example, a scientific idea and you wish to obtain a proof-of-concept result fast, doing similar tasks several times while quickly replacing some parts of the algorithm. Note that sometimes many codes are available for some parts of the task. These codes sometimes need to be modified a bit (a few lines). I am a big believer in re-implementing everything, but first it is good to know whether it is worth doing!
For some compromise: can I call a script externally that is wrapped in some tar or zip and is not compressed?
Thanks again,
J. | 1 | 0 | 0 | For some routine work I found that combining different scripting languages can be the fastest way to get things done. For example, I have a main bash script which calls some awk, python and bash scripts or even some compiled fortran executables.
I can put all the files into a folder that is in the path, but that makes modification a bit slower. If I need a new copy with some modifications I need to add another path to $PATH as well.
Is there a way to merge these files into a single executable?
For example: tar all the files together and explain somehow that the main script is main.sh? This way I could simply vi the file, modify, run, modify, run ... and I could move the file between folders and machines easily. Also, dependencies could be handled properly (executing the tar could set PATH itself).
I hope this dream does exist! Thanks for the comments!
Janos | Multiple executables as a single file | 0 | 0 | 0 | 202 |
28,814,845 | 2015-03-02T16:34:00.000 | 1 | 1 | 1 | 1 | python,git | 28,814,943 | 2 | true | 0 | 0 | You could consider running your script from a separate checkout to where you do your development. That way you would need to commit, push locally, and pull in the 'deployment' location before you could run the updated script. You could probably automate those steps with a shell script or even a git commit hook. | 1 | 1 | 0 | I have a slightly unusually situation: I have scripts that I make small changes to frequently, and that take hours to execute.
I save output logs, but more importantly I need to make sure that the code which produced a given log will not be lost.
Committing changes before each run will work, but I'd like to enforce this automatically by preventing my code from running if git is not up to date.
Is there a simple way to do this, or is running shell commands and scraping output my best bet? | only run python script if it is git committed | 1.2 | 0 | 0 | 132 |
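Running shell commands is indeed a workable route; here is a minimal guard that refuses to start when the working tree has uncommitted changes and records the current commit, offered as one possible sketch rather than the only way:
import subprocess
import sys

dirty = subprocess.check_output(["git", "status", "--porcelain"]).strip()
if dirty:
    sys.exit("Uncommitted changes present - commit before running.")

commit = subprocess.check_output(["git", "rev-parse", "HEAD"]).strip().decode()
print("Running at commit %s" % commit)
# ... long-running work goes here ...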
28,817,249 | 2015-03-02T18:49:00.000 | 2 | 0 | 1 | 0 | python,python-c-api | 28,817,512 | 1 | true | 0 | 1 | PyEval_SetTrace() is exactly the API that you need to use. Not sure why you need some additional way to "pause" the execution; when your callback has been called, the execution is already paused and will not resume until you return from the callback. | 1 | 1 | 0 | I am using Python C Api to embed a python in our application. Currently when users execute their scripts, we call PyRun_SimpleString(). Which runs fine.
I would like to extend this functionality to allow users to run scripts in "Debug" mode, where like in a typical IDE, they would be allowed to set breakpointsm "watches", and generally step through their script.
I've looked at the API specs, googled for similar functionality, but did not find anything that would help much.
I did play with PyEval_SetTrace() which returns all the information I need, however, we execute the Python on the same thread as our main application and I have not found a way to "pause" python execution when the trace callback hits a line number that contains a user checked break point - and resuming the execution at a later point.
I also see that there are various "Frame" functions like PyEval_EvalFrame() but not a whole lot of places that demo the proper usage. Perhaps these are the functions that I should be using?
Any help would be much appreciated! | How to implement breakpoint functionality in a embedding of Python | 1.2 | 0 | 0 | 179 |
28,817,558 | 2015-03-02T19:07:00.000 | 0 | 1 | 0 | 0 | python,django,mercurial | 28,817,753 | 2 | false | 1 | 0 | You can use hg id -i to see the currently checked out revision on your server, and hg status to check if the file has been modified relative to that revision. | 1 | 0 | 0 | I'm using mercurial and I think I may have an older revision of a python file running on my server. How can I tell which revision, of a particular file, is currently being ran on in my web application?
I suspect an older revision because I currently have an error being thrown in sentry that refers to a line of code that no longer exists in my latest code file. I can log directly onto the server and see my latest file, but still my web app runs this old version
I have ruled out browser caching and server caching
Thanks in advance. | HG - Find current running revision of a code file | 0 | 0 | 0 | 119 |
28,818,394 | 2015-03-02T19:56:00.000 | 2 | 0 | 0 | 0 | python,amazon-web-services,amazon-dynamodb,boto | 28,890,074 | 1 | true | 1 | 0 | You can create 2 GSIs: 1 with date as hashKey, 1 with month as hashKey.
Those GSIs will point you to the rows of that month / of that day.
Then you can just query the GSI, get all the rows of that month/day, and do the aggregation on your own.
Does that work for you?
Thanks!
Erben | 1 | 0 | 0 | For a project I have to use DynamoDB(aws) and python(with boto).
I have items with a date and I need to display the count grouped by date or by month.
Something like
by date of the month [1/2: 5, 2/2: 10, 3/2: 7, 4/2: 30, 5/2: 25, ...]
or
by month of the year [January: 5, February: 10, March: 7, ...] | Using group by in DynamoDB | 1.2 | 1 | 0 | 3,106 |
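A small sketch of the GSI query-and-aggregate idea above, written with boto3 (a newer library than the boto package mentioned in the question); the table name, index name and attribute names are illustrative assumptions:
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("items")

# query the GSI whose hash key is the month, then aggregate client-side
resp = table.query(IndexName="month-index",
                   KeyConditionExpression=Key("month").eq("2015-02"))
print(len(resp["Items"]))  # count of items for that month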
28,818,559 | 2015-03-02T20:06:00.000 | 0 | 0 | 0 | 0 | python,urllib,urlparse | 30,655,631 | 1 | false | 1 | 0 | Two dots (..) means go back once in the hierarchy, change the second link to ./v2/meila07a/meila07a.pdf and it should be working fine.
Or you can change the root URL to http://www.jmlr.org/proceedings/papers/v2/ (note the trailing slash); with that change it will no longer drop v2 from the end, because previously the root was not set to a proper directory.
for example the root url is: http://www.jmlr.org/proceedings/papers/v2
and the relative url is: ../v2/meila07a/meila07a.pdf
As I use urljoin in urlparse: the result is odd:
http://www.jmlr.org/proceedings/v2/meila07a/meila07a.pdf
Which is not a valid link. Can anybody help me with that? | joining urls with urljoin in python | 0 | 0 | 1 | 399 |
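A quick demonstration of the behaviour described above, using the Python 2 urlparse module the question refers to (in Python 3 the same function lives in urllib.parse):
from urlparse import urljoin

base = "http://www.jmlr.org/proceedings/papers/v2"
# without a trailing slash, "v2" is treated as a file and dropped before ".." is applied
print(urljoin(base, "../v2/meila07a/meila07a.pdf"))
# -> http://www.jmlr.org/proceedings/v2/meila07a/meila07a.pdf
print(urljoin(base + "/", "../v2/meila07a/meila07a.pdf"))
# -> http://www.jmlr.org/proceedings/papers/v2/meila07a/meila07a.pdf
print(urljoin(base, "./v2/meila07a/meila07a.pdf"))
# -> http://www.jmlr.org/proceedings/papers/v2/meila07a/meila07a.pdf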
28,819,758 | 2015-03-02T21:27:00.000 | 0 | 0 | 0 | 0 | python,opencv,ffmpeg,video-processing | 40,351,890 | 1 | false | 0 | 0 | Had the same problem using opencv2.4 also when using other codec than MJPG. After upgrading to version 3.1.0 the error doesn't occur anymore. | 1 | 1 | 0 | I am new to opencv and I am trying to create a video file with frame size 56x72 using opencv-python. I am using 'MJPG' to encode the video with a frame rate of 20. I get an error which says - [mjpeg @ 0x27ee9e0] buffer smaller than minimum size.
I checked the avcodec.h file and it says that FF_MIN_BUFFER_SIZE = 16384 and it verifies that buf_size is at least FF_MIN_BUFFER_SIZE, and I think buf_size is width*height*4 (I am not really sure of this).
So does it mean that I can't create a video file with frame size 56x72 or smaller? Is there any way around? | mjpeg @ 0x27ee9e0 buffer smaller than minimum size: How to create a video file with size less than the minimum buffer size? | 0 | 0 | 0 | 822 |
28,823,172 | 2015-03-03T02:40:00.000 | 1 | 1 | 0 | 0 | java,python,amazon-s3,thumbnails,aws-lambda | 56,451,012 | 2 | false | 1 | 0 | You can add a SQS queue as an event source/trigger for the Lambda, make the slight changes in the Lambda to correctly process a SQS event as opposed to a S3 event, and then using a local script loop through a list of all objects in the S3 bucket (with pagination given the 2MM files) and add them as messages into SQS. Then when you're done, just remove the SQS event source and queue.
This doesn't get around writing a script to list and find and then call the lambda function but the script is really short. While this way does require setting up a queue, you won't be able to process the 2MM files with direct calls due to lambda concurrency limits.
Example:
Set up SQS queue and add as event source to Lambda.
The syntax for reading a SQS message and an S3 event should be pretty similar
Paginate through list_objects_v2 on the S3 bucket in a for-loop
Create messages using send_message_batch
Suggestion:
Depending on the throughput of new files landing in your bucket, you may want to switch to S3 -> SQS -> Lambda processing anyway instead of direct S3 -> Lambda calls. For example, if you have large bursts of traffic then you may hit your Lambda concurrency limit, or an error may occur and you may want to keep the message (this can be resolved by configuring a DLQ for your lambda). | 1 | 4 | 0 | I'm planning to migrate existing image processing logic to AWS lambda. The Lambda thumbnail generator is better than my previous code, so I want to re-process all the files in an existing bucket using lambda.
Lambda seems to be only event driven; this means that my lambda function will only be called via a PUT event. Since the files are already in the bucket, this will not trigger any events.
I've considered creating a new bucket and moving the files from my existing bucket to a new bucket. This will trigger new PUT events, but my bucket has 2MM files so I refuse to consider this hack as a viable option. | Running aws-lambda function in a existing bucket with files | 0.099668 | 0 | 0 | 2,230
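A minimal boto3 sketch of steps 3-4 from the answer above, paginating the bucket listing and batching keys into SQS; the bucket name and queue URL are illustrative assumptions:
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/reprocess"  # assumed

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-image-bucket"):
    keys = [obj["Key"] for obj in page.get("Contents", [])]
    for i in range(0, len(keys), 10):  # SQS accepts at most 10 messages per batch
        entries = [{"Id": str(n), "MessageBody": json.dumps({"key": k})}
                   for n, k in enumerate(keys[i:i + 10])]
        sqs.send_message_batch(QueueUrl=queue_url, Entries=entries)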
28,824,286 | 2015-03-03T04:47:00.000 | 0 | 0 | 0 | 0 | python,wxpython | 28,824,375 | 1 | false | 0 | 1 | Worked on it for an hour without success before posting, then solved it myself five minutes later...
Here's my solution, creating a ClientDC if the event doesn't have its own DC:
import wx
class Foo(wx.Frame):
    def __init__(self, parent, title):
        wx.Frame.__init__ (self, parent, -1, title, size=(500,300))
        self.panel = wx.Panel(self, -1)
        self.Bind(wx.EVT_ERASE_BACKGROUND, self.OnEraseBackground)
        self.Bind(wx.EVT_ENTER_WINDOW, self.onEnter)
        self.Show()
    def OnEraseBackground(self, e):
        try:
            DC = e.GetDC()
        except:
            DC = wx.ClientDC(self)
        DC.Clear()
    def onEnter(self, e):
        wx.PostEvent(self, wx.PyCommandEvent(wx.wxEVT_ERASE_BACKGROUND))
app = wx.App()
Foo(None, 'foo')
app.MainLoop() | 1 | 0 | 0 | I am having trouble posting an erase background event to draw to the screen. In my full code, I want to draw a bitmap (DC.DrawBitmap()) when a button is clicked. I do that by posting an EVT_ERASE_BACKGROUND event which is caught by a custom bound method. However, once it is in that method, the event.GetDC() method that normally works fails.
Here is a simplified code that has the same result:
import wx
class Foo(wx.Frame):
    def __init__(self, parent, title):
        wx.Frame.__init__ (self, parent, -1, title, size=(500,300))
        self.panel = wx.Panel(self, -1)
        self.Bind(wx.EVT_ERASE_BACKGROUND, self.OnEraseBackground)
        self.Bind(wx.EVT_ENTER_WINDOW, self.onEnter)
        self.Show()
    def OnEraseBackground(self, e):
        DC = e.GetDC()
    def onEnter(self, e):
        wx.PostEvent(self, wx.PyCommandEvent(wx.wxEVT_ERASE_BACKGROUND))
app = wx.App()
Foo(None, 'foo')
app.MainLoop()
This raises:
AttributeError: 'PyCommandEvent' object has no attribute 'GetDC'
How can I solve this? | wxpython Post Erase Background with DC | 0 | 0 | 0 | 499 |
28,829,805 | 2015-03-03T10:46:00.000 | -1 | 0 | 0 | 0 | python,scikit-learn | 28,948,827 | 1 | false | 0 | 0 | As Andreas said above, splitting criteria are coded in cython. | 1 | 0 | 1 | I need other splitting criteria for a Desicion tree than the provided 'gini' and 'entropy. I want to use the wonderful sklearn package as base though. Is there a way to go around the C-implementation of the tree building process? As in implementing the criterion in Python and let the TreeBuilder work with it? | Create splitting criterion for sklearn trees | -0.197375 | 0 | 0 | 601 |
28,839,976 | 2015-03-03T19:10:00.000 | 2 | 0 | 0 | 0 | python,pandas,xlsxwriter | 28,862,593 | 1 | false | 0 | 0 | I found xlwings. It's intuitive and does all the things I want to do. Also, it does well with all pandas data types. | 1 | 1 | 1 | Is it possible to write nice-formatted excel files with dataframe.to_excel-xlsxwriter combo?
I am aware that it is possible to format cells when writing with pure xlsxwriter. But dataframe.to_excel takes so much less space.
I would like to adjust cell width and add some colors to column names.
What other alternatives would you suggest? | How to do formmating with combination of pandas dataframe.to_excel and xlsxwriter? | 0.379949 | 1 | 0 | 100 |
28,840,855 | 2015-03-03T19:58:00.000 | 0 | 0 | 1 | 0 | python,terminal,bioinformatics | 28,840,911 | 1 | false | 0 | 0 | Assuming you are running Ubuntu, run sudo apt-get -yinstall python-cairo python-gobject python-gtk2 to ensure that the packages are installed, as your error seems to indicate that the packages are not installed. Now try running that program again. | 1 | 0 | 0 | I really don't know what I'm doing. I'm trying to run a bioinformatics program that I know is written in python, which I have one class worth of experience in a while ago. It requires installing a few things (Python 2.5.1, GTK+ 2.12.9 with glade support, pycairo-1.2.6-1, pygobject-2.12.3-1, pygtk-2.10.4-1, PIL-1.1.6), which I did with Terminal. Now when I try to run the program it says "ImportError: No module named gtk". It is missing the GTK+ or the pygtk? Is there a way that I can make sure that everything that I installed worked?
Thanks for your help. | Probably obvious - ImportError: No module named gtk | 0 | 0 | 0 | 1,105 |
28,843,234 | 2015-03-03T22:27:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,flask,blobstore,flask-wtforms | 28,857,208 | 2 | false | 1 | 0 | Okey, so the real problem was that I was giving an absolute url to the successpath argument (i.e. the first) of blobstore.create_upload_url(), causing the request notifying about the success caused a csrf error when loading the root path (/). I changed it to a path relative to the root and now just using @csrf.exempt as normal works fine. | 1 | 0 | 0 | I'm running flask on app engine. I need to let users upload some files. For security reasons I have csrf = CsrfProtect(app) on the whole app, with specific url's exempted using the @csrf.exempt decorator in flask_wtf. (Better to implicitly deny than to implicitly allow.)
Getting an upload url from blobstore with blobstore.create_upload_url works fine, but the upload itself fails with a 400; CSRF token missing or incorrect.
This problem is on the development server. I have not tested it on the real server, since it is in production.
How do I exempt the /_ah/ path so the uploads work? | GAE blobstore upload fails with CSRF token missing | 0 | 0 | 0 | 228 |
28,846,059 | 2015-03-04T03:12:00.000 | 0 | 0 | 0 | 0 | python,sockets,namespaces | 67,091,745 | 2 | false | 0 | 0 | I just came across this post while looking into network namespaces and using python to interact with them. In regards to your question about non-root users running the setns(), or similar functions, I believe that is achievable. In a small script that creates the red and blue namespaces mentioned in this post, you could also set a linux capability inside of the new namespace that would allow non-root users to attach and bind. Directly from the man page, we see this description: call setns(2) (requires CAP_SYS_ADMIN in the target namespace);
Capabilities can be added to binaries such as the python2.7 or it can also be added to systemd processes. For example, if you look at the default openvpn-server service file on a centos 7 or RHEL 7 you can see added capabilities so that it can run without root privileges: CapabilityBoundingSet=CAP_IPC_LOCK CAP_NET_ADMIN.....
I know this isn't an answer to the original question, but I do not have enough reputation to reply to comments at the moment. I would suggest looking at capabilities and all of the options provided if you are security conscious and looking to run code as non-root users. | 1 | 8 | 0 | I am running some application in multiple network namespace. And I need to create socket connection to the loopback address + a specific port in each of the name space. Note that the "specific port" is the same across all network namespaces. Is there a way I can create a socket connection like this in python?
Appreciate any pointer! | Can I open sockets in multiple network namespaces from my Python code? | 0 | 0 | 1 | 6,844 |
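A rough sketch of the setns() approach the answers describe, using ctypes. The namespace name "red" (created beforehand with something like ip netns add red) and the libc path are assumptions, and the call normally needs root or CAP_SYS_ADMIN:
import ctypes
import os
import socket

CLONE_NEWNET = 0x40000000
libc = ctypes.CDLL("libc.so.6", use_errno=True)

fd = os.open("/var/run/netns/red", os.O_RDONLY)
if libc.setns(fd, CLONE_NEWNET) != 0:
    raise OSError(ctypes.get_errno(), "setns failed")
os.close(fd)

# sockets created after setns() live inside the "red" namespace
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("127.0.0.1", 8080))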
28,848,106 | 2015-03-04T06:42:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,argparse | 28,848,239 | 2 | false | 0 | 0 | argparse allows you to specify that certain args have their own args, like so:
parser.add_argument("--snap", nargs=1)
or use a + to allow for an arbitrary number of "subargs"
After you call parse_args(), the values will be in a list:
filename = args.snap[0] | 1 | 1 | 0 | I want to write a python code in which, based on some arguments passed from command line I want to make positional argument optional.
For example,
My python program is test.py, and with it I can give --init, --snap, --check options. Now if I have given --snap and --check option, then file name is compulsory i.e.
test.py --snap file1
but if I have given --init option then it should not take any other arguments. i.e. in this case file name is optional:
test.py --init
How to implement this condition | How to make positional argument optional in argparser based on some condition in python | 0.099668 | 0 | 0 | 480 |
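One possible way to enforce the condition from the question is plain post-parse validation; a minimal sketch (option names come from the question, error messages are assumptions):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--init", action="store_true")
parser.add_argument("--snap", action="store_true")
parser.add_argument("--check", action="store_true")
parser.add_argument("filename", nargs="?")  # positional, optional at parse time

args = parser.parse_args()

if args.init and args.filename:
    parser.error("--init does not take a file name")
if (args.snap or args.check) and not args.filename:
    parser.error("--snap and --check require a file name")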
28,848,740 | 2015-03-04T07:27:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,task-queue | 28,849,410 | 1 | false | 1 | 0 | You can specify as many queues as you like in queue.yaml rather than just using the default push queue. If you feel that no more than, say, five users at once are likely to contest for simultaneous use of them then simply define five queues. Have a global counter that increases by one and wraps back to 1 when it exceeds five. Use it to assign which queue a given user gets to push his or her tasks to at the time of the request. With this method, when you have six or more users concurrently adding tasks, you are no worse off than you currently are (in fact, likely much better off).
If you find the server overloading, turn down the default "rate: 5/s" to a lower value for some or all of the queues if you have to, but first try lowering the bucket size, because turning down the rate is going to slow things down when there are not multiple users. Personally, I would first try only turning down the four added queues and leave the first queue fast to solve this if you have performance issues that you can't resolve by tuning the bucket sizes. | 1 | 2 | 0 | In my application, I need to allow only one task at a time per user. I have seen that we can set max_concurrent_requests: 1 in queue.yaml. but this will allows only one task at a time in a queue.
When a user clicks a button, a task will be initiated and it will add 50 tasks to the queue. If 2 users click the button at almost the same time, the total task count will be 100. If I give max_concurrent_requests: 1 it will run only one task from any of these users.
How do I handle this situation? | Task queue: Allow only one task at a time per user | 0.197375 | 0 | 0 | 406
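A sketch of the queue.yaml layout the answer describes: several named push queues, each limited to one task at a time (queue names and rates are assumptions). The application would then pick one of these queue names per user, round-robin, when calling taskqueue.add(queue_name=...):
queue:
- name: user-queue-1
  rate: 5/s
  bucket_size: 1
  max_concurrent_requests: 1
- name: user-queue-2
  rate: 5/s
  bucket_size: 1
  max_concurrent_requests: 1
# ... user-queue-3 to user-queue-5 defined the same way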
28,850,205 | 2015-03-04T08:57:00.000 | 0 | 0 | 0 | 0 | python,windows,visual-studio,opencv,login | 28,854,271 | 1 | false | 0 | 0 | You can store the snapshots in an array, run your recognition on each image and see if the user is recognized as one of the users you have trained your model on.
If not then prompt the user for their name, if the name matches one of the users you trained your model on, add these snapshots to their training set and re-train, so the model is more up to date, if it does not match any names, then you assume this is a new user and create a new label and folder for them, and retrain your recognizer.
Beware though that each time you add more snapshots the training time will increase, so perhaps limit each snapshot capture to 1 FPS or something. | 1 | 0 | 1 | maybe some of you can point me in the right direction.
I've been playing around with OpenCV FaceRecognition for some time with Eigenfaces to let it learn to recognize faces. Now I would like to let it run during windows logon.
Precisely, I want to make Snapshots of Faces when I log into a user so after the software has learned the faces after x logins and to which user it belongs, it will log me in automatically when it recognises me during typing.
As a prototype it would be enough to somehow get a textfield and get textinput working to "simulate" login in. Im using python.
I hope you understand what i want to achieve and can help me out.
Edit: Another Question/Idea: If i want to build an Application in Visual Studio, can I reuse my python Code or do I have to use c++? I could make a Windows Store app or something like that. | opencv FaceRecognition during login | 0 | 0 | 0 | 211 |
28,850,653 | 2015-03-04T09:20:00.000 | 0 | 0 | 0 | 0 | python,plone | 28,862,411 | 2 | false | 1 | 0 | You could override Products/CMFPlone/skins/plone_login/require_login.py and add any logic that you want there to give a custom response.
Or bypass require_login completely and use an own browser view to handle this. In your Plone Site go to acl_users/credentials_cookie_auth/manage_propertiesForm and for the Login Form property replace require_login with your browser view name. | 1 | 1 | 0 | Upon unauthorized access, Plone by default provides a redirect to login form. I need to prevent this for certain subpaths (ie. not globally), and instead return 403 Forbidden, with a custom (short) HTML page, after the (otherwise) normal Plone authentication & authorization has taken place.
I looked into ITraversable but using that takes place too early in the request processing chain - ie. before auth etc.
One possible yet unexplored method is having a custom view injected to URL path that performs auth checks on the object that maps to the remaining subpath. So there could be a request with URL something like http://myplone/somepath/customview/withcustom403, where:
The customview view implements IPublishTraverse, and its publishTraverse() returns itself
The same view then next validates, in its __call__ method, the access to withcustom403 object, ie. by calling getSecurityManager().validate()
If validation fails, the view sets response to 403 and returns the custom HTML
Would this work? Or is there some event triggered after auth takes place, but before Plone calls response.Unauthorized() (triggering redirect), that would provide a cleaner solution?
This is for current Plone 4.3.4. | how to generate custom 403 HTTP responses in Plone (prevent redirect) | 0 | 0 | 0 | 172 |
28,853,923 | 2015-03-04T11:59:00.000 | 0 | 0 | 1 | 1 | python,subprocess,popen,notepad | 28,854,681 | 3 | false | 0 | 0 | Found the exact solution from Alex K's comment. I used pywinauto to perform this task. | 1 | 2 | 0 | I am trying to open Notepad using popen and write something into it. I can't get my head around it. I can open Notepad using command:
notepadprocess=subprocess.Popen('notepad.exe')
I am trying to identify how can I write anything in the text file using python. Any help is appreciated. | Python Subprocess for Notepad | 0 | 0 | 0 | 1,546 |
28,854,821 | 2015-03-04T12:42:00.000 | 0 | 0 | 0 | 0 | python,pandas,binary,hex | 28,858,326 | 1 | false | 0 | 0 | If I understood correctly, column 1 holds 00, column 2 holds 55, and so on.
If that is right, you first need to concatenate the columns into a string, e.g. value = str(col1)+str(col2)+str(col3), and then convert that string to binary. | 1 | 1 | 1 | I am pretty new to Python and the pandas library, I just learned how to read a csv file using pandas.
My data is actually raw packets I captured from sensor networks, to analyze corrupt packets.
What I have now is thousands of rows and hundreds of columns, literally, and the values are all in hex. I need to convert all the values to binary with trailing zeros.
I am at a loss as to how to accomplish that once I have read the CSV file successfully using pandas.
I'd appreciate every kind of help, even a simple direction.
here is my sample data:
00 FF FF 00 00 29 00 89 29 41 88 44 22 00 FF FF 01 00 3F 06 55 55 55
55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55
55 55 0A
00 FF FF 00 00 29 00 89 29 41 88 45 22 00 FF FF 01 00 3F 06 55 55 55
55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55
55 55 0A
00 FF FF 00 00 29 00 89 29 41 88 46 22 00 FF FF 01 00 3F 06 55 55 55
55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55
55 55 0A
00 FF FF 00 00 29 00 89 29 41 88 47 22 00 FF FF 01 00 3F 06 55 55 55
55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55
55 55 0A | converting dataframe from Hex to binary in using python | 0 | 0 | 0 | 778 |
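A minimal sketch of the conversion discussed above, turning every hex byte into a zero-padded 8-bit binary string. The file name and whitespace-delimited layout are assumptions, empty cells are passed through by the isinstance check, and whether the padding should lead or trail depends on what the analysis needs:
import pandas as pd

df = pd.read_csv("packets.csv", delim_whitespace=True, header=None, dtype=str)

# '55' -> '01010101', 'FF' -> '11111111', each padded to 8 bits
df_bin = df.applymap(lambda h: format(int(h, 16), "08b") if isinstance(h, str) else h)
print(df_bin.head())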
28,856,501 | 2015-03-04T14:06:00.000 | 2 | 0 | 0 | 0 | python,macos,oop,tkinter | 28,856,654 | 1 | true | 0 | 1 | No, there is no built-in way to do it. It's your code doing the packing, so you can store a flag in a dictionary, or create your own pack function to do that automatically. | 1 | 0 | 0 | I'm making a GUI with Tkinter, and I need to find a way to know if a widget (let's call it l = Label(root, text="test")) is packed or not. I know I can do if l in Tk.pack_slaves(root):..., but this seems inefficient.
Is there any way of adding a "line" to the widget.pack() method, such as telling it to set an attribute to widget.is_packed = True? Or is there a way of telling a class, On_method_call(pack()) do this?
Cheers. | Tkinter "on-pack" method or "is-packed" attribute | 1.2 | 0 | 0 | 473 |
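A tiny sketch of the "own pack function" idea from the answer: a helper that packs the widget and records the state on it (the attribute name is the one proposed in the question):
def pack_tracked(widget, **options):
    widget.pack(**options)
    widget.is_packed = True

def unpack_tracked(widget):
    widget.pack_forget()
    widget.is_packed = False

# later: getattr(l, "is_packed", False) instead of scanning Tk.pack_slaves(root)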
28,856,908 | 2015-03-04T14:25:00.000 | 0 | 0 | 1 | 0 | python,import,subdirectory | 28,859,755 | 1 | false | 0 | 0 | I did not get any response, so my solution was to make a class in my main directory that stores the variables and replace all references to those variables. I guess the way I designed this code was bad practice anyway. It is working now. | 1 | 0 | 0 | I have a main folder, with a a main method.
This is the structure:
A main directory
subdirectory A
subdirectory B
Now from the B subdirectory, I would like to access the static variables from a class in A. However, they always end up being 0 (initialization value). When I print from the Main method, the values are correct. When I print from B, the values are incorrect.
I have correctly imported everything, so the class in directory B does recognise classes in A. The only problem seems to be that the class is not correctly included in the total project, and so the variables are instantiated twice, even though the variables are static.
How do I include the classes from subdirectory A in subdirectory B correctly?
I hope anyone can help, been searching for hours now to find the answer. | Variables in other folder are not the same (Python) | 0 | 0 | 0 | 32
28,860,799 | 2015-03-04T17:20:00.000 | 2 | 0 | 1 | 0 | python,encryption,data-manipulation | 28,861,578 | 1 | true | 0 | 0 | This is impossible to solve. You should use some kind of encryption or signature scheme to make sure the data is not tampered with in combination with obfuscation so that the secret algorithm or secret key cannot be easily extracted from the ciphertext.
Since this is about tamper protection and not data confidentiality, I would suggest to you to use a digital signature algorithm. You would generate a developer private+public key pair, use the private key to sign the static data and put the public key into the code in an obfuscated fashion.
This protects against manipulation of your data better than any encryption scheme, under the assumption that it is harder to manipulate your program than to simply decompile and read the source code.
If you would for example use a symmetric cipher to encrypt that data and store the key in your code, a malicious user could deduce the algorithm and key from your code and implement this. The user would then decrypt the static data, manipulate it and re-encrypt it again. Your program which was not manipulated would never know that the content was changed.
It also has to be asymmetric (such as RSA signature or ECDSA), so using an HMAC over your static data would have the same issue as symmetric encryption. | 1 | 0 | 0 | I'm fairly new to Python and as a project I'm working on a little game. I'd like to make sure my data is stored in a format that I won't run into problems moving forward.
The data I'll need to store will be integral to generating some in-game elements through procedural generation—essentially storing names, properties, and possible behaviors for these objects. I would not want players to be able to easily edit a file and change values so that a common level 1 creature suddenly drops 2 million gold pieces.
My intentions are to eventually use Py2Exe or PyInstaller. I have considered XML, YAML, and JSON but I'm not sure which direction I should go or what I may not be aware of.
I'm sure if a user wanted to badly enough they could figure it out—but what is the best way to make it inconvenient for the average user to tamper with said data? | How Can I store data so users can't easily tamper with it? | 1.2 | 0 | 0 | 102 |
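A hedged sketch of the sign-and-verify idea from the answer. The third-party cryptography package and the data file name are assumptions (the answer names no library); in practice the key pair would be generated once at build time, the private key kept by the developer, and only the (obfuscated) public key shipped with the game:
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                       backend=default_backend())
public_key = private_key.public_key()

data = open("creatures.dat", "rb").read()  # the static game data

signature = private_key.sign(
    data,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# at game start: raises cryptography.exceptions.InvalidSignature if the file was changed
public_key.verify(
    signature,
    data,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)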
28,864,152 | 2015-03-04T20:22:00.000 | 2 | 1 | 0 | 0 | python,django,testing,python-unittest | 28,864,271 | 1 | true | 1 | 0 | This most likely means you've got some component installed which is trying to make network connections. Possibly something that does monitoring or statistics gathering?
The simplest way to figure out what's going on is to use tcpdump to capture your network traffic and see what's going on. To do that:
Run tcpdump -i any (or tcpdump -i en1 if you're on a mac; the airport is usually en1, but you can double check with ifconfig)
Watch the traffic to get some idea what's normal
Run your test suite
Watch the traffic printed by tcpdump to see if anything obviously jumps out at you | 1 | 2 | 0 | I have a django test suite that builds a DB from a 400 line fixture file. It runs unfortunately slow. Several seconds per test.
I was on the train yesterday developing without internet access, with my wifi turned off, and I noticed my tests ran literally 10x faster without internet. And they are definitely running correctly.
Everything is local, it all runs fine without an internet connection. The tests themselves do not hit any APIs or make any other connections, so it seems it must be something else. | Django Tests run faster with no internet connection | 1.2 | 0 | 0 | 523 |
28,865,200 | 2015-03-04T21:20:00.000 | 2 | 1 | 1 | 0 | python,algorithm,math | 28,866,275 | 1 | false | 0 | 0 | Floating point numbers are stored in binary in python. Accuracy and precision have totally different implications in binary, especially given that computers are constrained to fixed-length representations. You can demonstrate that to yourself simply by noting that the code 1e16+1 == 1e16 returns True.
You linked to Wolfram.com where precision and accuracy are discussed in terms of decimal numbers. It means nothing to talk about the position of the decimal point in a number stored in a computer in binary. To represent and manipulate decimal numbers with their real decimal precision and accuracy, you need to use python's Decimal class.
Note that you must give the Decimal class a string for it to be accurate in decimal terms. If you give it a literal number (e.g. Decimal(1.001)) you are just giving it a float, and the Decimal you create will be the precise decimal representation of python's imprecise floating point representation of 1.001.
Sadly, the Decimal class doesn't have methods to return the number of digits either side of the decimal point. However, assuming your decimal is in decimal form, you can just examine it as a string, using str(x) where x is an instance of Decimal. | 1 | 0 | 0 | Given a double, what is the most efficient way to calculate the precision and accuracy of the number? Ideally, I'd like to avoid for loops. | Precision and Accuracy | 0.379949 | 0 | 0 | 458 |
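A small sketch of the string-based approach the answer describes, counting the digits on either side of the decimal point of an exact decimal value (the sample number is arbitrary):
from decimal import Decimal

d = Decimal("123.4500")                  # pass a string to keep the exact decimal form
int_part, _, frac_part = str(d).partition(".")
digits_before = len(int_part.lstrip("-+"))
digits_after = len(frac_part)            # 4 here, trailing zeros included
print(digits_before, digits_after)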
28,874,356 | 2015-03-05T09:32:00.000 | 7 | 0 | 0 | 1 | python,c++,macos,installation,blpapi | 29,039,670 | 1 | false | 0 | 0 | There is a missing step in the Python SDK README file; it instructs you to set BLPAPI_ROOT in order to build the API wrapper, but this doesn't provide the information needed at runtime to be able to load it.
If you unpacked the C/C++ SDK into '/home/foo/blpapi-sdk' (for example), you will need to set DYLD_LIBRARY_PATH to allow the runtime dynamic linker to locate the BLPAPI library. This can be done as so:
$ export DYLD_LIBRARY_PATH=/home/foo/blpapi-sdk/Darwin | 1 | 3 | 0 | I am trying to install and run successfully Bloomberg API Python 3.5.5 and I have also downloaded and unpacked C++ library 3.8.1.1., both for the Mac OS X. I'm running Mac OS X 10.10.2. I am using the Python native to Mac OS X, Python 2.7.6 and I had already installed, via Xcode, the Command line gcc compiler, GCC 4.2.1.
I did, on an administrator account, sudo python setup.py install. I also had changed the setup.py ENVIRONMENT variable BLPAPI_ROOT to the directory for the C++ headers, blpapi_cpp_3.8.1.1. The setup was successful.
I changed to another directory as suggested by the Python's README file, to avoid 'Import Error: No module named _internals'.
When I go to python and enter the command import blpapi, I obtain the following error:
import blpapi
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/blpapi/__init__.py", line 5, in <module>
from .internals import CorrelationId
File "/Library/Python/2.7/site-packages/blpapi/internals.py", line 50, in <module>
_internals = swig_import_helper()
File "/Library/Python/2.7/site-packages/blpapi/internals.py", line 46, in swig_import_helper
_mod = imp.load_module('_internals', fp, pathname, description)
ImportError: dlopen(/Library/Python/2.7/site-packages/blpapi/_internals.so, 2): Library not loaded: libblpapi3_64.so
Referenced from: /Library/Python/2.7/site-packages/blpapi/_internals.so
Reason: image not found
I check the directory for /Library/Python.../blpapi/ and there is no _internals.so only *.py files. Is that the problem? I don't know how to proceed. | Bloomberg API Python 3.5.5 with C++ 3.8.1.1. on Mac OS X import blpapi referencing | 1 | 0 | 0 | 3,384 |
28,877,499 | 2015-03-05T12:05:00.000 | 0 | 0 | 0 | 0 | python,django,pagination | 28,939,075 | 2 | true | 1 | 0 | You've all been really helpful. However, my confusion came from the fact that paginator was being added to the context, yet there was a statement {% load paginator %} at the top of the template. I thought they were the same, but no. The paginator from the context was unused, and the load statement pulled in the bad paginator, which was registered with the templating engine.
The fix is obvious: remove the load statement, including the context paginator and use that one. | 1 | 0 | 0 | My code imports Paginator from django.core.paginator. (Django 1.6.7)
However, when run, somehow it is calling a custom paginator. I don't want it to do this, as the custom paginator template has been removed in an upgrade. I just want Django's Paginator to be used, but I can't find how to workout where it's overriding Django's Paginator with our broken one.
This may not be so much of a Django question and more of a generic Python question. All the usual things like grepping the code, inserting ipdb's, judicious use of find etc yield no help. | Django keeps calling another package to paginate -- how? | 1.2 | 0 | 0 | 36 |
28,877,646 | 2015-03-05T12:14:00.000 | 0 | 0 | 0 | 1 | python,python-2.7,celery,py2exe | 28,878,537 | 1 | false | 0 | 0 | if __name__ == '__main__':
app.start()
should be added to the entry point of the script. | 1 | 0 | 0 | Is it possible to use Celery to run an already compiled (py2exe) python script, and if yes, how can I invoke it? | Is it possible to use Celery to run an already compiled (py2exe) python script | 0 | 0 | 0 | 94
28,879,391 | 2015-03-05T13:50:00.000 | 3 | 0 | 1 | 0 | python,excel,xlsx | 28,879,986 | 1 | false | 0 | 0 | It depends on the application you are using to view the file. You will have to check the features available in the tools you are using.
For instance, in Excel, this is impossible. When you open an Excel document, it actually creates an invisible copy. You are not editing the original. It is only when the file is saved that the original is updated. So, if you have a file my_excel_file.xlsx, when you open it, another file is created named ~$my_excel_file.xlsx. So, editing the original file will not update the file being viewed in the Excel application.
For text files, on the other hand, there are some applications that will reload changes from disk. Sublime Text is an example of this. If you have a file open in Sublime Text, then make a change to the file with another program, Sublime Text will automatically reload the new version when the application regains focus. | 1 | 2 | 0 | Is it possible to add or remove entries to an excel file or a text file while it is still open (viewing live update of values from python output) instead of seeing the output in terminal? | Editing an open document with python | 0.53705 | 1 | 0 | 2,289 |
28,882,541 | 2015-03-05T16:16:00.000 | 8 | 0 | 1 | 0 | python,spyder | 43,925,127 | 6 | false | 0 | 0 | For Spyder IDE just go to View -> Reset window layout | 3 | 28 | 0 | New to Python and Spyder. How do I reposition the panes in Spyder. I had them set with the editor in the upper left, the object inspector in the upper right, and the ipython console in the lower left. Somehow I messed it up, and can't figure out how to reposition them. Have crawled all over the web, but no joy.
Thanks
jpl | repositioning panes in Spyder panes | 1 | 0 | 0 | 42,169 |
28,882,541 | 2015-03-05T16:16:00.000 | 2 | 0 | 1 | 0 | python,spyder | 46,728,994 | 6 | false | 0 | 0 | just go to view->click on spyder default layout
that should work | 3 | 28 | 0 | New to Python and Spyder. How do I reposition the panes in Spyder. I had them set with the editor in the upper left, the object inspector in the upper right, and the ipython console in the lower left. Somehow I messed it up, and can't figure out how to reposition them. Have crawled all over the web, but no joy.
Thanks
jpl | repositioning panes in Spyder panes | 0.066568 | 0 | 0 | 42,169 |
28,882,541 | 2015-03-05T16:16:00.000 | 1 | 0 | 1 | 0 | python,spyder | 60,206,028 | 6 | false | 0 | 0 | On windows click on the 3 bars at the upright corner of the pane, undock and move the pane where you want. On Linux unselect View > lock panes and toolbars and move your pane. | 3 | 28 | 0 | New to Python and Spyder. How do I reposition the panes in Spyder. I had them set with the editor in the upper left, the object inspector in the upper right, and the ipython console in the lower left. Somehow I messed it up, and can't figure out how to reposition them. Have crawled all over the web, but no joy.
Thanks
jpl | repositioning panes in Spyder panes | 0.033321 | 0 | 0 | 42,169 |
28,882,759 | 2015-03-05T16:27:00.000 | 0 | 0 | 1 | 0 | python,environment-variables,virtualenv | 28,882,976 | 2 | false | 0 | 0 | Depending on what operating system you are using you could edit the activate file and set an environment variable there. For example, a Windows virtualenv folder has a sub-folder called Scripts. Inside scripts is the activate.bat file. Edit activate.bat and alter the path variable. One thing to consider though, is you might want to save the original path variable in another temporary environment variable and restore from that temporary environment variable in the deactivate.bat file. | 2 | 0 | 0 | I'd like to alter my $PATH only in a Python virtual environment. Is it possible to have the $PATH change when I activate a virtual environment? | python virtualenv -- possible to augment $PATH or add other environment variables? | 0 | 0 | 0 | 155 |
28,882,759 | 2015-03-05T16:27:00.000 | 1 | 0 | 1 | 0 | python,environment-variables,virtualenv | 28,882,980 | 2 | true | 0 | 0 | You can write an activation script that sources virtualenv's activate (on linux, or calls the bat file on windows) and then updates PATH, PYTHONPATH and other environment variables. Use the virtualenv bootstrap hooks to install the script when the virtualenv is created and call it instead of activate. | 2 | 0 | 0 | I'd like to alter my $PATH only in a Python virtual environment. Is it possible to have the $PATH change when I activate a virtual environment? | python virtualenv -- possible to augment $PATH or add other environment variables? | 1.2 | 0 | 0 | 155 |
28,885,814 | 2015-03-05T19:13:00.000 | 1 | 0 | 0 | 0 | python-2.7,igraph | 28,891,088 | 2 | false | 0 | 0 | Pickle is a serializer from the standard library in Python. These guesses seem quite likely to me:
When igraph was started they did not want to create their own file format, so they used pickle. Now the default behavior for saving graphs is not pickle but igraph's own format.
When saving objects with igraph in graphml, the library knows what is important and what is not and will use minimal memory. Pickle, however, can serialize many Python objects to strings and will save every object in a list or dictionary in case it is reused or has cyclic references. | 2 | 1 | 1 | I am a beginner in igraph.
I have a graph data of 60000 nodes and 900K edges. I could successfully create the graph using python-igraph and write to disk. My machine has 3G memory.
When I wrote the graph to disk in graphml format, the memory usage was around 19%; with write_pickle, the usage went up to 50% and took significantly more time.
What is the reason behind this behavior of igraph? When should and when should I not use the pickle format?
Please shed light into this. | python-igraph pickling efficiency | 0.099668 | 0 | 0 | 650 |
28,885,814 | 2015-03-05T19:13:00.000 | 1 | 0 | 0 | 0 | python-2.7,igraph | 28,895,245 | 2 | true | 0 | 0 | Pickling is a generic format to store arbitrary objects, which may reference other objects, which may in turn also reference other objects. Therefore, when Python is pickling an object, it must keep track of all the objects that it has "seen" and serialized previously to avoid getting stuck in an infinite loop. That's probably the reason why pickling is slower (and uses more memory).
The advantage of using pickling is that the pickled representation will preserve the exact Python type of every single graph, vertex or edge attribute (provided that you use types that support pickling). GraphML won't keep the exact types because there is no unambiguous mapping from Python types to GraphML types; for instance, all numeric attributes would be converted to doubles in the GraphML representation, irrespectively of whether the original attributes were Python ints, longs, or floating-point numbers. | 2 | 1 | 1 | I am a beginner in igraph.
I have a graph data of 60000 nodes and 900K edges. I could successfully create the graph using python-igraph and write to disk. My machine has 3G memory.
When I wrote the graph to disk in graphml format, the memory usage was around 19%; with write_pickle, the usage went up to 50% and took significantly more time.
What is the reason behind this behavior of igraph? When should and when should I not use the pickle format?
Please shed light into this. | python-igraph pickling efficiency | 1.2 | 0 | 0 | 650 |
28,888,290 | 2015-03-05T21:40:00.000 | 0 | 0 | 1 | 0 | python,import,python-import | 28,888,521 | 1 | false | 0 | 0 | I needed to add import conda to the script above the other imports because I have tables installed in anaconda | 1 | 0 | 0 | If I start python and input import tables it works fine but when I run python m_BlackrockLib.py (which has import os, struct, tables, pickle, re, shutil as it's first line) I get the error ImportError: No module named tables. I can see that tables is present in /usr/local/anaconda/lib/python2.7/site-package so I'm not sure what is causing this error. Can anyone see the issue? | Python failing to import module from script | 0 | 0 | 0 | 74 |
28,891,305 | 2015-03-06T02:13:00.000 | 0 | 1 | 0 | 1 | python,linux,ssh,terminal | 28,891,431 | 2 | false | 0 | 0 | Things to try:
nohup, or
screen | 1 | 1 | 0 | From my home pc using putty, I ssh'ed into a remote server, and I ran a python program that takes hours to complete, and as it runs it prints stuff. Now after a while, my internet disconnected, and I had to close and re-open putty and ssh back in. If I type 'top' I can see the python program running in the background with its PID number. Is there a command I can use to basically re-open that process and see it printing its stuff again?
Thanks | How to open process again in linux terminal? | 0 | 0 | 0 | 1,139 |
28,892,506 | 2015-03-06T04:47:00.000 | 2 | 0 | 1 | 0 | python,travis-ci,anaconda,conda | 28,909,473 | 2 | true | 0 | 0 | The requirements in the meta.yaml can be any conda package (which doesn't have to just be Python packages). If you have a conda package for your dependency, you can specify it. | 1 | 3 | 0 | I am writing a meta.yaml file for a python package to be used in conda packages in a way that works with CI systems. how can i specify external software requirements for the package? meaning software that is not a python library but is required for the package unit tests to pass? to clarify: the required module is not a python package, but the python package depends on it. | how to specify external software requirements with conda/meta.yaml in Python? | 1.2 | 0 | 0 | 1,203 |
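A sketch of what that looks like in a conda-build meta.yaml, listing a non-Python conda package as a run/test requirement (package names and version are placeholders):
package:
  name: mypackage
  version: "0.1.0"

requirements:
  build:
    - python
    - setuptools
  run:
    - python
    - graphviz        # external, non-Python software available as a conda package

test:
  requires:
    - graphviz
  imports:
    - mypackage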
28,894,339 | 2015-03-06T07:53:00.000 | 1 | 0 | 0 | 0 | python,django,django-forms,django-formwizard | 28,894,444 | 2 | false | 1 | 0 | One approach that you could consider is splitting up your model into semantic slices, each one being a model on its own with a more digestable number of fields.
Then map these "slice-models" back to your main object using a one-to-one relationship (implemented by OneToOneField).
In your wizard you could start a transaction at the beginning and commit only if everything ran through nicely. | 1 | 0 | 0 | I'm new to django and would like to hear your opinion on how to create a form for
a table with 540 fields. What would be the best approach? Is it best to split the modelform into multiple components or create a template that collects the input in parts (multiple inputs per page) and proceeds through all fields? It would be great if you could point me to some information and/or examples.
Thanks. | create input form for 540 fields in django. Best approach? | 0.099668 | 0 | 0 | 50 |
28,894,741 | 2015-03-06T08:22:00.000 | 2 | 0 | 1 | 0 | python,code-generation,abstract-syntax-tree,grako | 28,953,859 | 1 | false | 0 | 0 | Grako-style code generation is done through in-line templates using string.Formatter.
Look into the grako.codegen and grako.model modules/packages, and take a look at the examples/antlr2grako example.
Code generation is one of the least documented parts of Grako (in the README), but it is all there. | 1 | 3 | 0 | I am using the grako utility of Python to parse my OIL file into an AST, but I want to regenerate the source code back from the AST after modifying the AST. Does grako have a feature to do this, or is there any other utility in Python available for this source code regeneration? | Source code generation | 0.379949 | 0 | 0 | 157
28,896,055 | 2015-03-06T09:43:00.000 | 1 | 0 | 1 | 0 | python,multiprocessing | 28,896,285 | 2 | false | 0 | 0 | The Queue has no way of knowing when it does not have any possible writers anymore. You could pass the object to any number of subprocesses, and it does not know if you passed it to any given subprocess. So it will have to wait, even if a subprocess dies. A queue is not a file descriptor that is automatically closed when the child dies.
What you are looking for is some kind of supervisor in the parent process that notices when children die unexpectedly and handles that situation in whatever way you think appropriate. You can do this by catching the SIGCHLD signal, checking Process.is_alive or using Process.join in a thread. A simple implementation would use the timeout parameter in the Queue.get call and do a Process.is_alive check when that returns.
If you have a bit more control over the death of the child process, it should send an "EOF"-type object (None, or some kind of marker that it is done) to the queue so your parent process can handle it correctly. | 1 | 4 | 0 | I have a subprocess via multiprocessing.Process and a queue via multiprocessing.Queue.
The main process is using multiprocessing.Queue.get() to get some new data. I don't want to have a timeout there and I want it to be blocking.
However, when the child process dies for whatever reason (manually killed by user via kill, or segfault, etc.), Queue.get() just will hang forever.
How can I avoid that? | multiprocessing.Queue hanging when Process dies | 0.099668 | 0 | 0 | 1,360 |
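A minimal sketch of the timeout-plus-is_alive loop the answer suggests (the one-second timeout and the exception raised are arbitrary choices):
try:
    from queue import Empty          # Python 3
except ImportError:
    from Queue import Empty          # Python 2

def wait_for_item(q, child):
    # q is the multiprocessing.Queue, child is the multiprocessing.Process writing to it
    while True:
        try:
            return q.get(timeout=1)  # wake up periodically instead of blocking forever
        except Empty:
            if not child.is_alive():
                raise RuntimeError("child died without sending data")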
28,898,827 | 2015-03-06T12:29:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,google-cloud-datastore | 29,608,781 | 1 | true | 1 | 0 | It's not recommended to dynamically create a new table. You need to redesign your database relation structure.
For example, in a user messaging app, instead of making a new table for every new message [which contains the message and user name], you should rather create a User table and a Messages table separately and implement a many-to-one relation between the two tables. | 1 | 0 | 0 | I wish to implement this:
There should be an entity A with column 1 having values a,b,c...[dynamically increases by user's input]
There should be another entity B for each values of a , b , c..
How should I approach this problem?
Should I dynamically generate other entities as user creates more [a,b,c,d... ] ?
If yes , how?
Any other way of implementing the same thing? | How to create multiple entities dynamically in google app engine using google data storage(python) | 1.2 | 0 | 0 | 165
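A rough sketch of the redesign the answer suggests, assuming the App Engine NDB client library (kind and property names are made up):
from google.appengine.ext import ndb

class EntityA(ndb.Model):
    # one record per value the user adds (a, b, c, ...)
    value = ndb.StringProperty(required=True)

class EntityB(ndb.Model):
    # many B records can point at the same A record
    a_key = ndb.KeyProperty(kind=EntityA, required=True)
    payload = ndb.StringProperty()

# adding a new user value never needs a new "table"/kind:
a_key = EntityA(value="d").put()
EntityB(a_key=a_key, payload="something").put()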
28,900,091 | 2015-03-06T13:43:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,caching,flask,gunicorn | 50,258,834 | 2 | false | 1 | 0 | For your use case with gunicorn, there is no multi-threading issue since each service run single-threadedly in its own process. But a potential problem would be "dirty" read of the data.
Think about the following case:
process1 reads from the db and populates its own cache, cache1
process2 reads from the same table using the same query and populates its own cache, cache2
process2 updates the table with new data and invalidates the old cache2
process1 executes the same query again, reading the outdated data from cache1! This is when the problem happens, because process1/cache1 is not aware of the database update | 8 | 0 | 0 | We are using the following setup: NGINX+Gunicorn+Flask. We need to add just a little bit of caching, no more than 5Mb per Flask worker. SimpleCache seems to be simplest possible solution - it uses memory locally, inside the Python process itself.
Unfortunately, the documentation states the following:
"Simple memory cache for single process environments. This class
exists mainly for the development server and is not 100% thread safe."
However, I fail to see where thread safety would matter at all in our setup. I think that Gunicorn keeps several Flask workers running, and each worker has its own small cache. What can possibly go wrong? | What can go wrong if I use SimpleCache in my Flask app | 0.099668 | 0 | 0 | 3,322 |
28,903,187 | 2015-03-06T16:28:00.000 | 0 | 0 | 0 | 0 | django,facebook,rest,django-rest-framework,python-social-auth | 38,586,945 | 1 | false | 1 | 0 | I didn't understand the first case. When you are using Facebook login, Facebook does the authentication and we register the user with the access token it provides. Whenever the user logs in, we are not worried about the password; authentication is not done on our end. So whenever the user tries to log in, it contacts Facebook, and if everything goes well there, it will give you a token with which the user can log in. | 1 | 1 | 0 | I have to implement a REST backend for mobile applications.
I will have to use Django REST Framework.
Among the features that I need to implement there will be the registration user and login.
Through the mobile application the user can create an account using ONLY the Facebook login.
Then, the application will take the information from Facebook using the token-facebook and it will send this data to my server.
I tried using python_social about Facebook authentication and user registration using the Facebook token.
At this point I have doubts:
I think there could be two choices:
1:
The mobile application use the Facebook-login to retrieve user data and will send a request to my server to create a new user with the Facebook data user and passing the Facebook-token.
In this case, in the server side, it will not be integrated python_social and facebook-token is a simple profile field.
Doubts: how can you implement the next login (which password Is necessary to use?)
2:
The second possibility is to use python_social. In this way there are no problems for subsequent logins. The token Facebook will be used to retrieve the data (and validate the user) by calling: do_auth
But in this case, for each user, the server will have to make a request to Facebook (which actually is possible to avoid: the mobile application has already recovered all the data)
What is the best case? What do you usually use for authentication backend rest with Facebook? | Django REST Backend for mobile app with Facebook login | 0 | 0 | 0 | 779 |
28,903,499 | 2015-03-06T16:44:00.000 | 1 | 0 | 0 | 1 | python,ubuntu,networking,hostname,mininet | 29,895,418 | 2 | false | 0 | 0 | I don't think you can get different names by running the "hostname" on each host. Only networking-related commands will produce different results on different hosts because the hosts run on separated namespaces.
So perhaps one way to get the hostname is to run ifconfig and interpret the hostname from the interfaces' names. | 2 | 2 | 0 | I need to emulate a network with n hosts connected by a switch. The perfect tool for this seems to be mininet. The problem is that I need to run a python script in every host that makes use of the hostname. The script acts differently depending on the hostname, so this is very important for me :)
But the hostname seems to be the same in every host! Example:
h1 hostname
outputs "simon-pc"
h2 hostname
outputs "simon-pc"
"simon-pc" is the hostname of my "real" underlying ubuntu system.
I don't find a possibility to change the hostname on the host.
Is this even possible? And if yes, how? If no, why not?
I read about mininet using one common kernel for every host. Might this be the problem? | Change hostname in mininet host | 0.099668 | 0 | 0 | 1,568 |
28,903,499 | 2015-03-06T16:44:00.000 | 1 | 0 | 0 | 1 | python,ubuntu,networking,hostname,mininet | 43,475,651 | 2 | false | 0 | 0 | I finally figured out how to do it: first you need to run the "ifconfig" command from inside the program and store its output in a variable, and second you use the regular expression module 're' to grab the text. I use it to grab the addresses of my hosts... you can do the same for the hostname.
code:
import re
import subprocess

def getip():
    # run ifconfig, decode the bytes, and grab the first dotted-quad IPv4 address
    ifconfig_output = subprocess.check_output('ifconfig').decode()
    match = re.search(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", ifconfig_output)
    return match.group(0) if match else None
The first line of the function will get the output from the terminal (check_output returns bytes, so decode() turns it into a text string).
The second line will grab the IP in the text using a regular expression.
The third line pulls the matched text out of the match object with group(0) instead of slicing the string representation of the object. | 1 | 2 | 0 | I need to emulate a network with n hosts connected by a switch. The perfect tool for this seems to be mininet. The problem is that I need to run a python script in every host that makes use of the hostname. The script acts differently depending on the hostname, so this is very important for me :)
But the hostname seems to be the same in every host! Example:
h1 hostname
outputs "simon-pc"
h2 hostname
outputs "simon-pc"
"simon-pc" is the hostname of my "real" underlying ubuntu system.
I don't find a possibility to change the hostname on the host.
Is this even possible? And if yes, how? If no, why not?
I read about mininet using one common kernel for every host. Might this be the problem? | Change hostname in mininet host | 0.099668 | 0 | 0 | 1,568 |
28,904,066 | 2015-03-06T17:15:00.000 | 4 | 0 | 1 | 1 | python,api,anaconda,blpapi | 28,947,897 | 1 | false | 0 | 0 | This is not true. The Anaconda Python and Python extension modules are built using Visual Studio (2008 for Python 2 and 2010 for Python 3, the same as the Python installers from python.org). | 1 | 1 | 0 | I spent hours yesterday trying to get the blapi up and running and finally gave in and emailed their support, this is the response:
"Unfortunately our BLPAPI SDKs are not compatible with the Anaconda
distribution of Python. That Python is built using GCC, and it is not
capable of loading DLLs that were built using Microsoft Visual Studio;
our DLLS were built with MSVS.
This means you'll need to use the Python distribution from Python.org,
which is also built with MSVS."
I cannot download the normal Python (from Python.org) due to security constraints, but for some reason I can do Anaconda. Honestly it's preferable for me anyhow because I don't want to mess with having to download 15 diff packages I need afterwards.
Does anybody have any idea if it is even possible to work around this? It seems ridiculous that Bloomberg would force you to use the straight distribution and then have to go download all the packages you want individually by making this incompatible with GCC builds. | Bloomberg API SDK not compatible with Anaconda Python | 0.664037 | 0 | 0 | 1,122 |
28,904,565 | 2015-03-06T17:44:00.000 | 0 | 1 | 1 | 0 | php,python,matlab,import | 28,905,871 | 2 | false | 0 | 0 | I'm not sure about your question, but maybe your IDE can help you to get rid of those imports. For example, in Spyder you can execute a startup script that will run when a new console is launched. Placing your imports there would provide direct access to those files. | 1 | 1 | 0 | As I've mentioned before I originally came from a Matlab background and then moved onto PHP before discovering Python. In both Matlab and PHP there were ways to create a scripts that when run all the variables got dumped into your current workspace. That workspace in Matlab being the interpreter when called from there or the function's workspace when called from a function. I would use this capability for constants - for example, plotting tools where you want to define a set of default fonts, line widths, etc.
Now in Python I can import a module but then all of the references constants in that module require either from {module} import * or {module}.{constant}.
This doesn't really limit me it is just an inconvenience. However, it would be nice to be able to import a constants file and just have the constants available to whatever called them.
I suspect Python does not allow this but I thought I would ask to see if someone had a clever work around. | Python module variables into current workspace | 0 | 0 | 0 | 1,278 |
28,908,468 | 2015-03-06T22:04:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn | 28,910,572 | 1 | true | 0 | 0 | Not with anything built into scikit-learn, as removing rows is something that is not easily done in the current API.
It should be quite easy to write a custom function / class that does that based on the output of DictVectorizer. | 1 | 0 | 1 | I am setting up a classification system using scikit-learn. After training a classifier I would like to save it for reuse along with the necessary transforms such as the DictVectorizer.
I am looking for a way to filter the incoming stream of unclassified data that will feed into the feature transforms and classifier. Ideally, I would like to remove and flag vectors that contain new values for categorical attributes and/or altogether new attributes.
I have used the DictVectorizer.restrict() method to filter input data but this only results in the vectorizer filtering new attributes and zeroing new values, where I would also like to put aside inconsistent data. Is there an easy way to pull out rows with values and attribute that were not in the initial data set? | Identify Data Vectors with New Attributes and/or Values | 1.2 | 0 | 0 | 34 |
28,909,360 | 2015-03-06T23:26:00.000 | 0 | 0 | 1 | 0 | python,xlsxwriter | 28,909,616 | 2 | false | 0 | 0 | Instead of assigning the variable worksheet to workbook.add_worksheet(thename), have a list called worksheets. When you normally do worksheet = workbook.add_worksheet(thename), do worksheets.append(workbook.add_worksheet(thename)). Then access your latest worksheet with worksheets[-1]. | 1 | 1 | 0 | I've written some code that iterates through a flat file. After a certain section is completed reading, I take the data and put it into a spreadsheet. Then, I go back and continue reading the flat file for the next section and write to a new worksheet...and so on and so forth.
When looping through the python code, I create a new worksheet for each section read above. During this looping, I create the new worksheet as such:
worksheet = workbook.add_worksheet(thename)
The problem is that the second time through the loop, python crashes when re-assigning the worksheet object above to a new worksheet. Is there a way to "close the worksheet object", then re-assign it?
FYI: If I can't use the same object name, "worksheet" in this case, the code is going to become tremendously long and messy in order to handle "worksheet1", "worksheet2", "worksheet3", etc... (as you might imagine)
Thank you in advance! | python xlsxwriter worksheet object reuse | 0 | 1 | 0 | 1,050 |
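A short sketch of the list-based approach from the answer (file and section names are placeholders; note that each sheet name passed to add_worksheet must be unique):
import xlsxwriter

workbook = xlsxwriter.Workbook("sections.xlsx")
worksheets = []

for thename in ["section1", "section2", "section3"]:    # one sheet per section read
    worksheets.append(workbook.add_worksheet(thename))
    worksheets[-1].write(0, 0, "data for " + thename)

workbook.close()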
28,909,929 | 2015-03-07T00:27:00.000 | 0 | 0 | 1 | 0 | python,pandas,datanitro | 28,910,528 | 1 | false | 0 | 0 | Right click in the shell, click "mark", highlight the region you want to copy, and press enter. It'll then be in your clipboard and you can paste it into an editor.
(This is the way to copy things from a windows command prompt in general, which is where the shell is running.) | 1 | 2 | 0 | I'm using DataNitro and would like to be able to copy output from the Excel Shell and paste into my editor. However, I can't find a way to do that. Any idea how I might do that? | DataNitro - copying from the shell into an editor | 0 | 0 | 0 | 110 |
28,911,280 | 2015-03-07T04:12:00.000 | -2 | 0 | 1 | 0 | python,list,sorting | 28,911,603 | 3 | false | 0 | 0 | For the range command you give a start and a stop value, where the stop is exclusive; for example range(0,11) will produce the numbers 0 through 10. This is probably why you can use range(). Also you need to use a colon to follow range():. | 1 | 1 | 0 | I have to alphabetically sort a text file where each new word is on a new line. I currently have the whitespace stripped and my program prints the first letter attaching it to an integer with the ord function. I cannot use sort and I know I have to put it into a list just not sure how. This is what I wrote for the lists.
lists = [ [] for _ in range( 26 ) ] | How to alphabetically sort a text file in python without sort function? | -0.132549 | 0 | 0 | 1,361 |
28,913,310 | 2015-03-07T09:23:00.000 | 3 | 0 | 1 | 0 | python,string,eof | 28,913,362 | 1 | false | 0 | 0 | There is no EOF character.... Usually it is just when you run out of data. If you really want one, you could just make some arbitrary string combination up, but you are probably better off just reading each line until the file is out of data. | 1 | 6 | 0 | how can I store in a variable, a string containing the EOF character? In other words, how can I represent EOF in a python string?
Thanks in advance | String with EOF in python | 0.53705 | 0 | 0 | 16,285 |
28,915,681 | 2015-03-07T14:04:00.000 | 1 | 0 | 0 | 0 | python,qt,pyqt | 28,923,741 | 3 | true | 0 | 1 | QWidget.sizeHint holds the recommended size for the widget. It's default implementation returns the layout's preferred size if the widget has a layout. So if your dialog has a layout, just use sizeHint to get the recommended size which is the default one. | 1 | 0 | 0 | When I create a QMainWindow without explicitly specifying its dimensions, PyQt will give it a -let's say- "standard size" which is not the minimum that the window can get.
Can I set this size at will in any way?
My goal is to get this "standard size" according to the currently visible widgets, when I set the visibility of some widgets on/off. | how to get the default size of a window in pyqt4 | 1.2 | 0 | 0 | 1,830 |
28,915,998 | 2015-03-07T14:38:00.000 | 0 | 0 | 1 | 0 | python,pyscripter,abaqus | 29,386,384 | 2 | false | 0 | 0 | I don't think its possible. You can only import abaqus modules when python is run from the abaqus executable, i.e. by issuing the 'abaqus python' command. I don't know why this is, perhaps someone else can enlighten us.
I didn't want to use the abaqus PDE either and spent ages trying to get around it. I have resorted to writing the code in another environment and using a command prompt to run it with 'abaqus python ###.py' etc. | 1 | 1 | 0 | How to install Pyscripter for using with Abaqus finite element program? I don't want to use Abaqus PDE. The pyscripter works ok on it's own, but it shows Import Error "only" for modules related to Abaqus finite element software. I have directed the Python path to Abaqus Python executable, however it doesn't help. | Pyscripter doesn't work with Abaqus Python | 0 | 0 | 0 | 234 |
28,921,545 | 2015-03-07T23:58:00.000 | 17 | 0 | 1 | 1 | python,pyinstaller | 35,067,170 | 1 | false | 0 | 0 | PyInstaller 3.0 includes the --uac-admin option! | 1 | 5 | 0 | I need to create an executable for windows 8.1 and lower, so I tried Py2Exe (no success) and then PyInstaller, and damn it, it worked.
Now I need to run it as admin (everytime since that uses admin tasks).
My actual compiling script looks like this :
python pyinstaller.py --onefile --noconsole --icon=C:\Python27\my_distrib\ico_name --name=app_name C:\Python27\my_distrib\script.py
Is there an option to use UAC or things like this ? (its mess to me)
Also, everytime I start it on my laptop windows 8.1 (my desktop computer is windows 7) it says that is dangerous... Anything to make it a "trust" exe?
Thanks in advance | Create executable that uses admin rights with Pyinstaller | 1 | 0 | 0 | 7,070 |
28,921,711 | 2015-03-05T18:15:00.000 | 1 | 0 | 1 | 0 | python | 28,921,713 | 2 | false | 0 | 0 | In Python, the append method mutates the list it was called on but does not return it. There's not much reason to either, since you already have a reference to the list. | 1 | 0 | 0 | A=['1']
C=A.append('1')
print C
Why does the above code print None and not ['1', '1'] in Python? | Why C=A.append('1') doesn't work in Python | 0.099668 | 0 | 0 | 85
28,921,997 | 2015-03-08T01:11:00.000 | 1 | 0 | 0 | 0 | python,android,pygame,renpy | 28,922,150 | 1 | false | 0 | 1 | I e-mailed PyTom, the creator of Ren'py, and he responded back instantly:
"Edit the rapt/blacklist.txt file, and delete the line that says **.py. "
It worked! Thanks, PyTom. | 1 | 0 | 0 | I have this game which I have coded in python with pygame.
When I launch my game through the renpy launcher, it works just fine.
When I emulate my game through the renpy android emulator, it works just fine.
When I go through each of the build steps (install sdk, configure, build package, install package) it completes successfully.
The package is installed on my connected android device.
When I open the game on my android device, it says:
"ImportError: No module named renpygame."
When I build the game and play it on my desktop I don't have this problem.
I'm guessing somehow renpygame is not getting included in the package when I go through the build process in renpy for Android, but I don't know why or how to fix it. I'm not even getting an error message, I don't know where to start. | Ren'Py with Renpygame for Android | 0.197375 | 0 | 0 | 1,556 |
28,923,393 | 2015-03-08T05:24:00.000 | -1 | 1 | 0 | 1 | python,linux,centos | 37,664,878 | 5 | false | 0 | 0 | -bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied then
first remove Python with the following command line:
-- sudo rpm -e python
second, check which package is installed with this command line:
-- sudo rpm -q python
then install the package:
-- sudo yum install python*
I think this solves the problem. | 3 | 5 | 0 | I am new in centos.I am try to do an application on it.For my application I need to install python 2.7.But the default one on server was python 2.6. So tried to upgrade the version .And accidentally I deleted the folder /usr/bin/python.After that I Installed python 2.7 through make install.I created the folder again /usr/bin/python and run command sudo ln -s /usr/bin/python2.7 /usr/bin/python. After this when I tried to run YUM commands I am getting the error
-bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied
drwxrwxrwx 2 root root 4096 Mar 8 00:19 python
this is permission showing for the directory /usr/bin/python | -bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied | -0.039979 | 0 | 0 | 45,325 |
28,923,393 | 2015-03-08T05:24:00.000 | 3 | 1 | 0 | 1 | python,linux,centos | 40,200,244 | 5 | false | 0 | 0 | yum doesn't work with python2.7.
You should do the following
vim /usr/bin/yum
change
#!/usr/bin/python
to
#!/usr/bin/python2.6
If your python2.6 was deleted, then reinstall them and point the directory in /usr/bin/yum to your python2.6 directory. | 3 | 5 | 0 | I am new in centos.I am try to do an application on it.For my application I need to install python 2.7.But the default one on server was python 2.6. So tried to upgrade the version .And accidentally I deleted the folder /usr/bin/python.After that I Installed python 2.7 through make install.I created the folder again /usr/bin/python and run command sudo ln -s /usr/bin/python2.7 /usr/bin/python. After this when I tried to run YUM commands I am getting the error
-bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied
drwxrwxrwx 2 root root 4096 Mar 8 00:19 python
this is permission showing for the directory /usr/bin/python | -bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied | 0.119427 | 0 | 0 | 45,325 |
28,923,393 | 2015-03-08T05:24:00.000 | 0 | 1 | 0 | 1 | python,linux,centos | 62,297,081 | 5 | false | 0 | 0 | this problem is that yum file start head write #!/usr/local/bin/python2.6, write binary file, is not dir, is python binary file | 3 | 5 | 0 | I am new in centos.I am try to do an application on it.For my application I need to install python 2.7.But the default one on server was python 2.6. So tried to upgrade the version .And accidentally I deleted the folder /usr/bin/python.After that I Installed python 2.7 through make install.I created the folder again /usr/bin/python and run command sudo ln -s /usr/bin/python2.7 /usr/bin/python. After this when I tried to run YUM commands I am getting the error
-bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied
drwxrwxrwx 2 root root 4096 Mar 8 00:19 python
this is permission showing for the directory /usr/bin/python | -bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied | 0 | 0 | 0 | 45,325 |
28,927,247 | 2015-03-08T13:57:00.000 | 0 | 0 | 0 | 0 | python,django,python-3.4,django-1.7,photologue | 31,394,483 | 3 | false | 1 | 0 | I had exactly the same problem. I suspected some problem with django-sortedm2m package. To associate photo to gallery, it was using SortedManyToMany() from sortedm2m package. For some reason, the admin widget associated with this package did not function well. (I tried Firefox, Chrome and safari browser).
I actually did not care for the order of photos getting uploaded to Gallery, so I simply replaced that function call with Django's ManyToManyField(). Also, I noticed that SortedManyToMany('Photo') was called with constant string Photo. Instead it should be called with SortedManyToMany(Photo) to identify Photo class. Although it did not resolve my problem entirely. So I used default ManyToMany field and it is showing all the photos from Gallery. | 2 | 0 | 0 | after I put the photologue on the server, I have no issue with uploading photos.
The issue is when I am creating a Gallery from the admin site, I can choose only one photo to be attached to the Gallery. Even if I select many photos, only one of them will be linked to the Gallery.
The only way to add photos to a gallery is by adding them manually to photologue_gallery_photos table in the database :(
Does anyone know how to solve it? | Gallery in Photologue can have only one Photo | 0 | 0 | 0 | 184