Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
24,482,222 | 2014-06-30T02:29:00.000 | 0 | 1 | 0 | 1 | python,cx-freeze,panda3d | 24,698,264 | 1 | false | 0 | 0 | No, it is not possible to recover the original source code.
If the application used CPython, though, it is always possible to recover the CPython bytecode, which you can run through a disassembler to reconstruct an approximation of the Python code. A lot of information will be lost, however; the resulting code will look rather unreadable and obfuscated, depending on the degree to which the bytecode was optimised.
If you want to go down that path, though, I advise looking into CPython's "dis" module. There are also numerous other utilities available that can reconstruct Python code from CPython bytecode. | 1 | 1 | 0 | I need help with something...
I had this Python program which I made.
The thing is, I need its source, but the HDD I had it on is dead,
and when I tried to look up any backups, it wasn't there.
The only thing I have is the binary, which I think was compiled with cx_Freeze. I'm really desperate about it, and I tried all the available ways to do it, and there were none or almost none.
Is there a way to ''unfreeze'' the executable or at least get the .pyc out of it? | cx_Freeze Unfreeze. Is it possible? [python] | 0 | 0 | 0 | 1,104 |
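The answer points at CPython's "dis" module; below is a minimal hedged sketch of disassembling a recovered .pyc. The file name is a placeholder, and the 8-byte header skip assumes a CPython 2.x .pyc layout.

```python
# Minimal sketch (CPython 2.x): disassemble a recovered .pyc with the
# standard-library dis module. 'recovered_module.pyc' is a placeholder name.
import dis
import marshal

with open('recovered_module.pyc', 'rb') as f:
    f.read(8)               # skip the magic number + timestamp header
    code = marshal.load(f)  # the module's top-level code object

dis.dis(code)               # print human-readable bytecode
```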
24,487,444 | 2014-06-30T09:56:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,amazon-s3,aws-cli | 42,559,759 | 3 | false | 1 | 0 | One possible issue could be that the proxy might not have been set in your instance service role. Configure the env to point to your proxy servers via HTTP_PROXY / HTTPS_PROXY (since the above error displays 443, it should be HTTPS_PROXY). | 1 | 4 | 0 | I am trying to copy a file from my AWS EC2 instance to an S3 bucket folder, but I am getting an error.
Here is the command sample
aws s3 cp /home/abc/icon.jpg s3://mybucket/myfolder
This is the error I am getting:
upload failed: ./icon.jpg to s3://mybucket/myfolder/icon.jpg HTTPSConnectionPool(host='s3-us-west-1b.amazonaws.com', port=443): Max retries exceeded with url: /mybucket/myfolder/icon.jpg (Caused by : [Errno -2] Name or service not known)
I have already configured the config file for aws cli command line
Please suggest the solution to this problem | HTTPSConnectionPool(host='s3-us-west-1b.amazonaws.com', port=443): Max retries exceeded with url | 0 | 0 | 1 | 11,516 |
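A hedged sketch of the proxy fix the answer describes, driven from Python; the proxy host and port are made-up placeholders, not real values.

```python
# Set the proxy environment variable, then invoke the AWS CLI.
import os
import subprocess

os.environ['HTTPS_PROXY'] = 'http://proxy.example.com:3128'  # placeholder proxy
subprocess.check_call(['aws', 's3', 'cp', '/home/abc/icon.jpg',
                       's3://mybucket/myfolder'])
```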
24,491,143 | 2014-06-30T13:22:00.000 | 0 | 0 | 0 | 0 | python,algorithm,minimax | 24,496,933 | 3 | false | 0 | 0 | The simplest way to select your move is to choose your move that has the maximum number of winning positions stemming from that move.
I would, for each node in your search tree (game state), keep a record of possible win states that can be created from the current game state. | 3 | 0 | 0 | So far I have successfully been able to use the Minimax algorithm in Python and apply it to a tic-tac-toe game. I can have my algorithm run through the whole search tree, and return a value.
However, I am confused as to how to take this value and transform it into a move. How am I supposed to know which move to make?
Thanks. | How to Get Move From Minimax Algorithm in Tic-Tac-Toe? | 0 | 0 | 0 | 767 |
24,491,143 | 2014-06-30T13:22:00.000 | 0 | 0 | 0 | 0 | python,algorithm,minimax | 24,491,647 | 3 | false | 0 | 0 | In using the MM algorithm, you must have had a way to generate the possible successor boards; each of those was the result of a move. As has been suggested, you can modify your algorithm to include tracking of the move that was used to generate a board (for example, adding it to the definition of a board, or using a structure that has the board and the move); or, you could have a special case for the top level of the algorithm, since that is the only one in which the particular move is important.
For example, if your function currently returns just the computed value of the board it was passed, it could instead return a dict (or a tuple, which isn't as clear) with both the value and the first move used to obtain that value, and then modify your code to use whichever bit is needed. | 3 | 0 | 0 | So far I have successfully been able to use the Minimax algorithm in Python and apply it to a tic-tac-toe game. I can have my algorithm run through the whole search tree, and return a value.
However, I am confused as to how to take this value and transform it into a move. How am I supposed to know which move to make?
Thanks. | How to Get Move From Minimax Algorithm in Tic-Tac-Toe? | 0 | 0 | 0 | 767 |
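Following the answer's suggestion of returning both the value and the move, here is a minimal hedged sketch; get_moves, apply_move, evaluate and game_over are hypothetical helpers, not part of the question's code.

```python
# Minimax that returns a (score, move) pair so the caller knows which
# move produced the best value. All helper functions are hypothetical.
def minimax(board, player, maximizing=True):
    if game_over(board):
        return evaluate(board), None          # leaf node: score only, no move
    best = (float('-inf'), None) if maximizing else (float('inf'), None)
    for move in get_moves(board):
        score, _ = minimax(apply_move(board, move, player),
                           -player, not maximizing)
        if (maximizing and score > best[0]) or (not maximizing and score < best[0]):
            best = (score, move)
    return best
```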
24,491,143 | 2014-06-30T13:22:00.000 | 0 | 0 | 0 | 0 | python,algorithm,minimax | 24,491,495 | 3 | false | 0 | 0 | Conceptualize the minimax algorithm like a graph, where every vertex is a possible configuration of the board, and every edge from a vertex to its neighbor is a transition/move from one board configuration to the next.
You need to look at the heuristic value of each board state neighboring your current state, then choose the state with the best heuristic value, then update your screen to show that board state.
If you are doing animations/transitions between board states, then you would have to look at the edge, figure out which piece is different between the two states, and animate that piece accordingly. | 3 | 0 | 0 | So far I have successfully been able to use the Minimax algorithm in Python and apply it to a tic-tac-toe game. I can have my algorithm run through the whole search tree, and return a value.
However, I am confused as to how to take this value and transform it into a move. How am I supposed to know which move to make?
Thanks. | How to Get Move From Minimax Algorithm in Tic-Tac-Toe? | 0 | 0 | 0 | 767 |
24,493,849 | 2014-06-30T15:36:00.000 | -1 | 0 | 0 | 0 | python,python-2.7 | 24,494,507 | 1 | false | 0 | 0 | make a matrix object and use Crammer's method | 1 | 2 | 1 | I want to solve systems of linear inequalities in 3 or more variables. That is, to find all possible solutions.
I originally found GLPK and tried the python binding, but the last few updates to GLPK changed the APIs and broke the bindings. I haven't been able to find a way to make it work.
I would like to have the symbolic answer, but numeric approximations will be fine too.
I would also be happy to use a library that solves maximization problems. I can always re-write the problems to be solved that way. | Solving system of linear inequalities in 3 or more variables - Python | -0.197375 | 0 | 0 | 1,233 |
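For the numeric-approximation route, a hedged sketch using scipy.optimize.linprog (available in reasonably recent SciPy versions); the coefficients are made up for illustration, and variables default to being non-negative.

```python
# Find an optimal point of a system of linear inequalities in x, y, z by
# rewriting it as a maximization problem, as the question suggests.
from scipy.optimize import linprog

c = [-1, -1, -1]          # maximize x + y + z  ==  minimize -(x + y + z)
A_ub = [[1, 2, 1],        # x + 2y + z <= 4
        [2, 0, 1]]        # 2x     + z <= 3
b_ub = [4, 3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x)              # one vertex of the feasible region
```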
24,496,445 | 2014-06-30T18:18:00.000 | 0 | 0 | 0 | 0 | python,paypal | 24,497,485 | 1 | false | 0 | 0 | The merchant is generally responsible for paying transaction fees and conversion fees. As far as I know there is no way to programmatically force the cardholder to pay them. | 1 | 0 | 0 | We have following scenario:
Our service shows the bill amount to the user in INR (Indian Rupees). For example Rs 1000
We want to receive full payment in INR in our bank account. That is, we should get full Rs 1000 in our account.
User selects his preferred currency as one of the following: USD, GBP, CAD etc based on the credit card he is carrying.
All the extra charges, like the PayPal fee and currency conversion charges, should be deducted from the user's credit card.
How can this be done via REST API? We are using Python REST SDK. | User's preferred currency is different from the product amount currency | 0 | 0 | 1 | 59 |
24,497,219 | 2014-06-30T19:09:00.000 | 0 | 0 | 0 | 1 | javascript,python,google-maps,google-app-engine | 24,501,164 | 2 | false | 1 | 0 | You didn't say how frequently the data points are updated, but assuming 1) they're updated infrequently and 2) there are only hundreds of points, then consider just querying them all once, and storing them sorted in memcache. Then your handler function would just fetch from memcache and filter in memory.
This wouldn't scale indefinitely but it would likely be cheaper than querying the Datastore every time, due to the way App Engine pricing works. | 1 | 0 | 0 | I am developing a web app based on the Google App Engine.
It has some hundreds of places (name, latitude, longitude) stored in the Data Store.
My aim is to show them on google map.
Since there are many of them, I have registered a javascript function to the idle event of the map and, when executed, it posts the map boundaries (minLat, maxLat, minLng, maxLng) to a request handler which should retrieve from the data store only the places in the specified boundaries.
The problem is that it doesn't allow me to execute more than one inequality filter in the query (i.e. Place.lat > minLat, Place.lng > minLng).
How should I do that? (trying also to minimize the number of required queries) | Google App Engine NDB Query on Many Locations | 0 | 1 | 0 | 144 |
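A hedged sketch of the memcache approach from the accepted answer; Place is assumed to be the asker's datastore model with name/lat/lng properties.

```python
# Cache all places once, then filter by the posted map bounds in memory,
# sidestepping the one-inequality-per-query datastore restriction.
from google.appengine.api import memcache

def places_in_bounds(min_lat, max_lat, min_lng, max_lng):
    places = memcache.get('all_places')
    if places is None:
        places = [(p.name, p.lat, p.lng) for p in Place.query()]
        memcache.set('all_places', places)
    return [p for p in places
            if min_lat <= p[1] <= max_lat and min_lng <= p[2] <= max_lng]
```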
24,497,239 | 2014-06-30T19:10:00.000 | -1 | 0 | 0 | 0 | python,ajax,django | 32,511,257 | 2 | false | 1 | 0 | Just think of the Web as a platform for building easy-to-use, distributed, loosely couple systems, with no guarantee about the availability of resources as 404 status code suggests.
I think that creating tightly coupled solutions such as your idea goes against web principles and the usage of REST. xhr.abort() is client-side programming; it's completely different from server side. It's a bad practice to try to tie client-side technology to server-side internal behavior.
Not only is this a waste of resources, but there is also no guarantee on the processing status of the request by the web server. It may lead to data inconsistency too.
If your request generates no server-side side effects for which the client can be held responsible, it is better just to ignore it, since these kinds of requests do not change server state and the response is usually cached for better performance.
If your request could cause changes in server state or data, then for the sake of data consistency you can check whether the changes have taken effect using an API, and if they have, try to roll back using another API. | 2 | 11 | 0 | I initiate a request client-side, then I change my mind and call xhr.abort().
How does Django react to this? Does it terminate the thread somehow? If not, how do I get Django to stop wasting time trying to respond to the aborted request? How do I handle it gracefully? | How do I terminate a long-running Django request if the XHR gets an abort()? | -0.099668 | 0 | 0 | 1,318 |
24,497,239 | 2014-06-30T19:10:00.000 | 1 | 0 | 0 | 0 | python,ajax,django | 52,607,897 | 2 | false | 1 | 0 | Due to how HTTP works, and the fact that you usually have a frontend in front of your Django gunicorn app processes (or uWSGI etc.), your HTTP cancel request is buffered by nginx. The gunicorn workers don't get a signal; they just finish processing and then write the output to the HTTP socket. But if that socket is closed they will get an error (which is caught as a closed connection, and they move on).
So it's easy to DOS a server if you can find a way to spawn many of these requests.
But to answer your question it depends on the backend, with gunicorn it will keep going until the timeout. | 2 | 11 | 0 | I initiate a request client-side, then I change my mind and call xhr.abort().
How does Django react to this? Does it terminate the thread somehow? If not, how do I get Django to stop wasting time trying to respond to the aborted request? How do I handle it gracefully? | How do I terminate a long-running Django request if the XHR gets an abort()? | 0.099668 | 0 | 0 | 1,318 |
24,499,602 | 2014-06-30T21:59:00.000 | 0 | 0 | 0 | 0 | python,tkinter,installation | 57,996,033 | 4 | false | 0 | 1 | Just change your code to
from tkinter import *
In python3, the issue is with capitalization, so you should use tkinter instead of Tkinter. | 1 | 3 | 0 | How the hell do I install Tkinter on my PC?
I tried for a week to install it. I cannot figure it out. Please help.
How does it work? Is there a site you install it from, or what? | How to install Tkinter? | 0 | 0 | 0 | 10,084 |
24,500,025 | 2014-06-30T22:44:00.000 | 3 | 0 | 0 | 1 | python,terminal,google-chrome-os | 25,711,026 | 4 | false | 0 | 0 | No, developer mode does not disable automatic updates. My Chromebook has been in dev mode for over a year and I haven't missed an update yet. | 2 | 4 | 0 | I want to enable full access to the terminal (to install Python), so I need to enable developer mode. But I don't want to lose automatic updates to ChromeOS.
Does enabling developer mode in ChromeOS disable automatic updates? | Does enabling developer mode in ChromeOS disable automatic updates? | 0.148885 | 0 | 0 | 1,800 |
24,500,025 | 2014-06-30T22:44:00.000 | 2 | 0 | 0 | 1 | python,terminal,google-chrome-os | 25,073,166 | 4 | false | 0 | 0 | I receive automatic canary updates every day in dev mode. That info must be outdated. | 2 | 4 | 0 | I want to enable full access to the terminal (to install Python), so I need to enable developer mode. But I don't want to lose automatic updates to ChromeOS.
Does enabling developer mode in ChromeOS disable automatic updates? | Does enabling developer mode in ChromeOS disable automatic updates? | 0.099668 | 0 | 0 | 1,800 |
24,500,190 | 2014-06-30T23:01:00.000 | 0 | 0 | 1 | 0 | python,client-server | 24,532,689 | 1 | true | 0 | 0 | One solution is to have the executable scripts all at the top folder like this:
server
server specific code
client
client specific code
common
common code
server.py (executable script that imports from server and common)
client.py (executable script that imports from client and common)
When deploying the server I just copy server.py, the server folder and the common folder. Similarly for the client.
It's not the ideal solution and I'd be thankful if someone comes up with a better one but that is how I use it now. | 1 | 0 | 0 | I'm working on a client/server application in Python, where client and server share a lot of code.
What should the folder structure look like?
My idea is to have three folders with the code files in them
server
server.py
etc.
client
client.py
etc.
common
common.py
etc.
But how can I import from common.py in server.py when server.py has to be executable (can't be a package)?
Currently we have all files in the same folder but since the project got more complex this isn't manageable anymore. | Layout for Client/Server project with common code | 1.2 | 0 | 1 | 249 |
24,500,522 | 2014-06-30T23:43:00.000 | 0 | 0 | 0 | 0 | python,matplotlib | 24,500,552 | 1 | false | 0 | 0 | Use the numpy function histogram, which returns arrays with the bin locations and sizes. | 1 | 0 | 1 | I'm creating a histogram(which is NOT normalized) using matplotlib.
I want to get the exact size of each bin. That is, not the width but the count.
In other words, the number of data points contained in each bin.
Any tips? | Histogram bin size(matplotlib) | 0 | 0 | 0 | 148 |
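A minimal sketch of the numpy.histogram call the answer refers to; the data here is synthetic.

```python
# np.histogram returns the count per bin (the 'size' asked about)
# together with the bin edges.
import numpy as np

data = np.random.randn(1000)            # synthetic example data
counts, bin_edges = np.histogram(data, bins=10)
print(counts)      # number of data points in each bin
print(bin_edges)   # the 11 edges delimiting the 10 bins
```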
24,500,665 | 2014-07-01T00:02:00.000 | 3 | 0 | 0 | 0 | python,mysql,sqlalchemy,mysql-python | 24,614,258 | 1 | true | 0 | 0 | session.connection().connection.thread_id() | 1 | 3 | 0 | I'm used to creating connections using MySQLdb directly so I'm not sure this is at all possible using sqlalchemy, but is there a way to get the mysql connection thread id from a mysql Session a la MySQLdb.connection.thread_id()? I've been digging through and can't seem to find a way to access it. I'm not creating a connection from the Engine directly, and would like to avoid doing so.
For reference this is a single-thread application, I just need to get the mysql thread id for other purposes. | Getting a mysql thread id from sqlalchemy Session | 1.2 | 1 | 0 | 707 |
24,502,362 | 2014-07-01T04:16:00.000 | 0 | 0 | 0 | 0 | mysql,python-2.7,mysql-python | 24,502,420 | 2 | false | 0 | 0 | Yes, it is possible to have that many MySQL connections. It depends on a few variables. The maximum number of connections MySQL can support depends on the quality of the thread library on a given platform, the amount of RAM available, how much RAM is used for each connection, the workload from each connection, and the desired response time.
The number of connections permitted is controlled by the max_connections system variable. The default value is 151 to improve performance when MySQL is used with the Apache Web server.
The important part is to handle the connections properly and close them appropriately. You do not want redundant connections occurring, as they can cause slow-down issues in the long run. Make sure when coding that you properly close connections. | 2 | 0 | 0 | I made a program that receives user input and stores it on a MySQL database. I want to implement this program on several computers so users can upload information to the same database simultaneously. The database is very simple, it has just seven columns and the user will only enter four of them.
There would be around two-three hundred computers uploading information (not always at the same time but it can happen). How reliable is this? Is that even possible?
It's my first script ever so I appreciate if you could point me in the right direction. Thanks in advance. | simultaneous connections to a mysql database | 0 | 1 | 0 | 1,497 |
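A hedged sketch of opening and closing a MySQLdb connection cleanly per insert, as the answer recommends; the credentials and table layout are placeholders.

```python
# Open a connection, insert the four user-entered values, and always
# close the connection so it is not left dangling on the server.
import MySQLdb

conn = MySQLdb.connect(host='dbhost', user='user', passwd='secret', db='mydb')
try:
    cur = conn.cursor()
    cur.execute("INSERT INTO entries (c1, c2, c3, c4) VALUES (%s, %s, %s, %s)",
                ('v1', 'v2', 'v3', 'v4'))
    conn.commit()
finally:
    conn.close()
```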
24,502,362 | 2014-07-01T04:16:00.000 | 0 | 0 | 0 | 0 | mysql,python-2.7,mysql-python | 24,502,484 | 2 | true | 0 | 0 | Having simultaneous connections from the same script depends on how you're processing the requests. The typical choices are by forking a new Python process (usually handled by a webserver), or by handling all the requests with a single process.
If you're forking processes (new process each request):
A single MySQL connection should be perfectly fine (since the total number of active connections will be equal to the number of requests you're handling).
You typically shouldn't worry about multiple connections since a single MySQL connection (and the server), can handle loads much higher than that (completely dependent upon the hardware of course). In which case, as @GeorgeDaniel said, it's more important that you focus on controlling how many active processes you have and making sure they don't strain your computer.
If you're running a single process:
Yet again, a single MySQL connection should be fast enough for all of those requests. If you want, you can look into grouping the inserts together, as well as multiple connections.
MySQL is fast and should be able to easily handle 200+ simultaneous connections that are writing/reading, regardless of how many active connections you have open. And yet again, the performance you get from MySQL is completely dependent upon your hardware. | 1 | 0 | 0 | I made a program that receives user input and stores it on a MySQL database. I want to implement this program on several computers so users can upload information to the same database simultaneously. The database is very simple, it has just seven columns and the user will only enter four of them.
There would be around two-three hundred computers uploading information (not always at the same time but it can happen). How reliable is this? Is that even possible?
It's my first script ever so I appreciate if you could point me in the right direction. Thanks in advance. | simultaneous connections to a mysql database | 1.2 | 1 | 0 | 1,497 |
24,503,344 | 2014-07-01T05:58:00.000 | 3 | 0 | 0 | 0 | python,scipy,scikit-learn,statsmodels,pymc | 24,505,486 | 2 | true | 0 | 0 | The scikit-learn SGDRegressor class is (iirc) the fastest, but would probably be more difficult to tune than a simple LinearRegression.
I would give each of those a try, and see if they meet your needs. I also recommend subsampling your data - if you have many gigs but they are all samples from the same distribution, you can train/tune your model on a few thousand samples (dependent on the number of features). This should lead to faster exploration of your model space, without wasting a bunch of time on "repeat/uninteresting" data.
Once you find a few candidate models, then you can try those on the whole dataset. | 1 | 4 | 1 | I'm performing a stepwise model selection, progressively dropping variables with a variance inflation factor over a certain threshold.
In order to do this, I'm running OLS many, many times on datasets ranging from a few hundred MB to 10 gigs.
What would be the quickest implementation of OLS for larger datasets? The statsmodels OLS implementation seems to be using numpy to invert matrices. Would a gradient descent based method be quicker? Does scikit-learn have an especially quick implementation?
Or maybe an mcmc based approach using pymc is quickest...
Update 1: Seems that the scikit learn implementation of LinearRegression is a wrapper for the scipy implementation.
Update 2: Scipy OLS via scikit learn LinearRegression is twice as fast as statsmodels OLS in my very limited tests... | Quickest linear regression implementation in python | 1.2 | 0 | 0 | 3,098 |
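A hedged sketch comparing the two scikit-learn options discussed above on a synthetic subsample; SGDRegressor's learning rate and iterations should be tuned before trusting its coefficients.

```python
# LinearRegression solves least squares exactly; SGDRegressor trades
# exactness for speed on large data. The data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, SGDRegressor

X = np.random.randn(10000, 20)
y = X.dot(np.random.randn(20)) + 0.1 * np.random.randn(10000)

ols = LinearRegression().fit(X, y)   # exact solution
sgd = SGDRegressor().fit(X, y)       # gradient-descent approximation
```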
24,504,172 | 2014-07-01T06:57:00.000 | 2 | 0 | 1 | 0 | python,packages | 24,504,533 | 3 | false | 0 | 0 | Imports in Python are really just another form of name assignment. There is really no difference between an object that has been imported into foo and one that has been defined in foo - they are both visible internally and externally in exactly the same way. So no, there is no way to prevent this.
I don't really see how this is cluttering the namespace, though. You've still only imported one name, foo, into your other module. | 2 | 3 | 0 | If I create a package named foo that imports bar, why is bar visible under foo as foo.bar when I import foo in another module? Is there a way to prevent this; to keep bar hidden so as not to clutter the namespace? | Why are imported packages visible inside other packages? | 0.132549 | 0 | 0 | 173 |
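A runnable demonstration of the point, using a standard-library example: os imports sys internally, so sys is reachable as an attribute of os, exactly like foo.bar.

```python
# Imported names become ordinary attributes of the importing module.
import os
import sys

print(os.sys)           # <module 'sys' (built-in)>
print(os.sys is sys)    # True: the very same module object
```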
24,504,172 | 2014-07-01T06:57:00.000 | 0 | 0 | 1 | 0 | python,packages | 24,504,645 | 3 | false | 0 | 0 | TL;DR: Python Imports create named bindings for pieces of code so they can be referenced and used.
An import is essentially binding a piece of code to a name. So the namespace should always reflect what has been imported. If you hide that you may end up causing unexpected problems for someone else or yourself.
If you are importing the wrong modules, importing modules you don't use, or have a ton of imports because you have 10 classes in one file, you should consider fixing the underlying issue(s), not trying to hide it by messing with how modules are imported.
24,504,185 | 2014-07-01T06:58:00.000 | 1 | 0 | 1 | 1 | python,shell,virtualenv | 24,505,623 | 1 | true | 0 | 0 | Use a function instead of a separate script. A function executes in the context of your current shell. | 1 | 0 | 0 | I'm working on a Python project that's wrapped in a virtualenv. I'd like to have a script that does all the "footwork" of getting set up as soon as I clone my git repo -- namely make the virtualenv, download my requirements, and stay in the virtualenv after exiting. However, once the shell script finishes, I'm no longer in my virtualenv, since the changes it makes to its shell don't propagate to mine.
How can I have the virtualenv "stick" to the parent shell that ran the script? | Start a virtualenv inside a shell script | 1.2 | 0 | 0 | 267 |
24,504,231 | 2014-07-01T07:01:00.000 | 21 | 0 | 1 | 0 | python,flask,virtualenv | 24,504,466 | 3 | true | 0 | 0 | At a glance it looks like you need admin permissions to install packages on your system. Try starting pip as admin or your OS equivalent. | 1 | 15 | 0 | I'm trying to install a virtual environment using the command:
pip install virtualenv
but I get the following error:
IOError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/virtualenv.py'
How do I fix this? | What's causing this error when I try and install virtualenv? IOError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/virtualenv.py' | 1.2 | 0 | 0 | 23,177 |
24,506,949 | 2014-07-01T09:35:00.000 | 0 | 1 | 0 | 0 | python,python-2.7,module,cryptography | 24,516,137 | 1 | true | 0 | 0 | If you really want to, you could generate a key and then hard-code the key value using hexadecimals. You could try to hide the value in the code, but it would amount to obfuscation, adding little to no security. | 1 | 0 | 0 | I have a module which uses AES-128 encryption/decryption. I need it to automatically generate a secret key once in the module (if one hasn't been initialized yet) for every user, then save it and disallow changes. How can I do this? | Generate secret key for cryptography in module | 1.2 | 0 | 0 | 52 |
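A hedged sketch of the generate-once-then-persist behaviour the question asks for; the key-file path is a placeholder, and securing its file permissions is left out.

```python
# Generate a random AES-128 key the first time, then reuse the saved one.
import os

KEY_FILE = 'secret.key'                # placeholder path

def get_or_create_key():
    if os.path.exists(KEY_FILE):
        with open(KEY_FILE, 'rb') as f:
            return f.read()
    key = os.urandom(16)               # 16 random bytes = AES-128 key
    with open(KEY_FILE, 'wb') as f:
        f.write(key)
    return key
```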
24,514,129 | 2014-07-01T15:32:00.000 | 2 | 0 | 1 | 1 | python | 24,514,261 | 2 | false | 0 | 0 | subprocesses run in the background. In the subprocess module, there is a class called Popen that starts a process in the background. It has a wait() method you can use to wait for the process to finish. It also has a communicate() helper method that will handle stdin/stdout/stderr plus wait for the process to complete. It also has convenience functions like call() and check_call() that create a Popen object and then wait for it to complete.
So, subprocess implements a non-blocking model but also gives you blocking helper functions. | 1 | 1 | 0 | Do subprocess calls in Python hang? That is, do subprocess calls operate in the same thread as the rest of the Python code, or is it a non-blocking model? I couldn't find anything in the docs or on SO on the matter. Thanks! | Python subprocess calls hang? | 0.197375 | 0 | 0 | 877 |
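Minimal sketches of the blocking and non-blocking styles the answer describes; the POSIX commands are used only for illustration.

```python
import subprocess

# Blocking helper: waits for the command to finish before returning.
subprocess.call(['echo', 'hello'])

# Non-blocking: Popen returns immediately; wait() blocks later, on demand.
p = subprocess.Popen(['sleep', '2'])
# ... do other work while the child runs ...
p.wait()
```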
24,515,783 | 2014-07-01T17:06:00.000 | 2 | 0 | 0 | 0 | python,matplotlib,plot | 24,524,359 | 1 | true | 0 | 0 | If your linear SVM classifier works quite well, then that suggests there is a hyperplane which separates your data. So there will be a nice 2D geometric representation of the decision boundary.
To understand the "how" you need to look at the support vectors themselves, see which ones contribute to which side of the hyperplane, e.g., by feeding individual support vectors into the trained classifier. In general, visualising text algos is not straightforward. | 1 | 3 | 1 | I have a scikits-learn linear svm.SVC classifier designed to classify text into 2 classes (-1,1). The classifier uses 250 features from the training set to make its predictions, and it works fairly well.
However, I can't figure out how to plot the hyperplane or the support vectors in matplotlib. All the examples online use only 2 features to derive the decision boundary and the support vector points. I can't seem to find any that plot hyperplanes or support vectors that have more than 2 features or lack fixed features. I know that there is a fundamental mathematical step that I am missing here, and any help would be appreciated. | How do you plot the hyperplane of an sklearn svm with more than 2 features in matplotlib? | 1.2 | 0 | 0 | 1,876 |
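A hedged sketch of the inspection the answer suggests; clf is assumed to be an already-fitted sklearn.svm.SVC with a linear kernel.

```python
# See which side of the hyperplane each support vector contributes to.
import numpy as np

sv = clf.support_vectors_                  # the support vectors themselves
side = np.sign(clf.decision_function(sv))  # -1 or +1 per vector
for vec, s in zip(sv, side):
    print(s, vec)
```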
24,516,396 | 2014-07-01T17:47:00.000 | 2 | 0 | 0 | 0 | python,arrays,performance,numpy,histogram | 24,516,539 | 3 | false | 0 | 0 | First, fill in your 16 bins without considering date at all.
Then, sort the elements within each bin by date.
Now, you can use binary search to efficiently locate a given year/month/week within each bin. | 1 | 3 | 1 | I have a 2D numpy array consisting of ca. 15'000'000 datapoints. Each datapoint has a timestamp and an integer value (between 40 and 200). I must create histograms of the datapoint distribution (16 bins: 40-49, 50-59, etc.), sorted by year, by month within the current year, by week within the current year, and by day within the current month.
Now, I wonder what might be the most efficient way to accomplish this. Given the size of the array, performance is a conspicuous consideration. I am considering nested "for" loops, breaking down the arrays by year, by month, etc. But I was reading that numpy arrays are highly memory-efficient and have all kinds of tricks up their sleeve for fast processing. So I was wondering if there is a faster way to do that. As you may have realized, I am an amateur programmer (a molecular biologist in "real life") and my questions are probably rather naïve. | efficient, fast numpy histograms | 0.132549 | 0 | 0 | 4,163 |
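A hedged numpy sketch of the bin-then-sort-then-binary-search recipe from the answer; timestamps and values are assumed to be parallel 1D arrays, and period_start/period_end mark a date range of interest.

```python
import numpy as np

bin_idx = np.clip((values - 40) // 10, 0, 15)    # 16 value bins: 40-49 ... 190+

order = np.argsort(timestamps)                   # sort everything by date once
timestamps, bin_idx = timestamps[order], bin_idx[order]

lo = np.searchsorted(timestamps, period_start)   # binary-search the date range
hi = np.searchsorted(timestamps, period_end)
counts = np.bincount(bin_idx[lo:hi], minlength=16)   # histogram for that period
```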
24,516,745 | 2014-07-01T18:10:00.000 | 4 | 0 | 1 | 0 | python,environment-variables,scrapy | 24,516,837 | 1 | false | 1 | 0 | Windows uses the environment variable called PATH to identify a command in comand prompt and directs to the folder in which the command is associated with. For instance, when you install Python, it appends it's location in your system to the PATH variable, so that when you call it in cmd (type in python), it knows where to look and calls the appropriate program/s at that location. | 1 | 5 | 0 | I have just got Scrapy set up on my machine (Windows Vista 64 bit, Python.org version 2.7, 64 bit shell). I have tried running the command 'scrapy startproject myproject' and got the seemingly standard error message of 'scrapy is not a recognised command.
A lot of the other people who have asked this question have been advised that they need to set up environment variables for Python in Windows. I'm not entirely sure why I am supposed to do this to be honest. Could someone please explain? | Why do I need to set environment variables for Python to make Scrapy work? | 0.664037 | 0 | 0 | 1,891 |
24,517,722 | 2014-07-01T19:18:00.000 | 4 | 0 | 1 | 0 | python,nlp,nltk | 24,518,289 | 3 | false | 0 | 0 | The goal of a stemmer is to remove as much of the word as possible to allow it to cover as many cases as possible, yet retain the core of the word. One reason profile might go to profil is to cover the case of profiling. You would need a conditional or another stemmer in order to guard against this, although I would imagine the majority of them will remove the trailing 'e'. (Especially giving the number of 'e' -> 'ing' cases) | 2 | 7 | 1 | I'm using NLTK stemmer to remove grammatical variations of a stem word.
However, the Porter or Snowball stemmers remove the trailing "e" of the original form of a noun or verb, e.g., Profile becomes Profil.
How can I prevent this from happening? I know I can use a conditional to guard against this. But obviously it will fail on different cases.
Is there an option or another API for what I want? | How to stop NLTK stemmer from removing the trailing "e"? | 0.26052 | 0 | 0 | 3,819 |
24,517,722 | 2014-07-01T19:18:00.000 | 8 | 0 | 1 | 0 | python,nlp,nltk | 24,521,458 | 3 | true | 0 | 0 | I agree with Philip that the goal of stemmer is to retain only the stem. For this particular case you can try a lemmatizer instead of stemmer which will supposedly retain more of a word and is meant to remove exactly different forms of a word like 'profiles' --> 'profile'. There is a class in NLTK for this - try WordNetLemmatizer() from nltk.stem.
Beware that it's still not perfect (like everything when working with text) because I used to get 'physic' from 'physics'. | 2 | 7 | 1 | I'm using an NLTK stemmer to remove grammatical variations of a stem word.
However, the Porter or Snowball stemmers remove the trailing "e" of the original form of a noun or verb, e.g., Profile becomes Profil.
How can I prevent this from happening? I know I can use a conditional to guard against this. But obviously it will fail on different cases.
Is there an option or another API for what I want? | How to stop NLTK stemmer from removing the trailing "e"? | 1.2 | 0 | 0 | 3,819 |
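A minimal sketch of the WordNetLemmatizer suggestion (it requires the NLTK WordNet corpus to be downloaded first).

```python
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize('profiles'))            # 'profile' - trailing 'e' kept
print(lemmatizer.lemmatize('profiling', pos='v'))  # 'profile' with a verb POS tag
```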
24,517,793 | 2014-07-01T19:22:00.000 | 3 | 0 | 0 | 0 | python,machine-learning,nlp,scikit-learn,vectorization | 24,519,951 | 2 | true | 0 | 0 | Intrinsically you can not use TF IDF in an online fashion, as the IDF of all past features will change with every new document - which would mean re-visiting and re-training on all the previous documents, which would no-longer be online.
There may be some approximations, but you would have to implement them yourself. | 2 | 2 | 1 | I'm looking to use scikit-learn's HashingVectorizer because it's a great fit for online learning problems (new tokens in text are guaranteed to map to a "bucket"). Unfortunately the implementation included in scikit-learn doesn't seem to include support for tf-idf features. Is passing the vectorizer output through a TfidfTransformer the only way to make online updates work with tf-idf features, or is there a more elegant solution out there? | Online version of scikit-learn's TfidfVectorizer | 1.2 | 0 | 0 | 2,712 |
24,517,793 | 2014-07-01T19:22:00.000 | 4 | 0 | 0 | 0 | python,machine-learning,nlp,scikit-learn,vectorization | 24,841,469 | 2 | false | 0 | 0 | You can do "online" TF-IDF, contrary to what was said in the accepted answer.
In fact, every search engine (e.g. Lucene) does.
What does not work is assuming you have all TF-IDF vectors in memory.
Search engines such as lucene naturally avoid keeping all data in memory. Instead they load one column at a time (which due to sparsity is not a lot). IDF arises trivially from the length of the inverted list.
The point is, you don't transform your data into TF-IDF, and then do standard cosine similarity.
Instead, you use the current IDF weights when computing similarities, using a weighted cosine similarity (often modified with additional weighting, boosting terms, penalizing terms, etc.)
This approach will work essentially with any algorithm that allows attribute weighting at evaluation time. Many algorithms will do, but very few implementations are flexible enough, unfortunately. Most expect you to multiply the weights into your data matrix before training, unfortunately. | 2 | 2 | 1 | I'm looking to use scikit-learn's HashingVectorizer because it's a great fit for online learning problems (new tokens in text are guaranteed to map to a "bucket"). Unfortunately the implementation included in scikit-learn doesn't seem to include support for tf-idf features. Is passing the vectorizer output through a TfidfTransformer the only way to make online updates work with tf-idf features, or is there a more elegant solution out there? | Online version of scikit-learn's TfidfVectorizer | 0.379949 | 0 | 0 | 2,712 |
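A hedged sketch of the pipeline the question describes: hashing is stateless and fully online, while the IDF step is refit on accumulated counts, as the answers explain. The non_negative flag matches scikit-learn versions of that era.

```python
# Hash features document-by-document, then apply tf-idf weighting in batch.
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer

docs = ["the cat sat", "the dog barked", "the cat barked"]

hv = HashingVectorizer(non_negative=True)   # counts must be non-negative
counts = hv.transform(docs)                 # stateless; works one doc at a time
tfidf = TfidfTransformer().fit_transform(counts)
```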
24,519,113 | 2014-07-01T20:54:00.000 | 0 | 0 | 0 | 0 | python,pdf,matplotlib,plot | 24,519,425 | 2 | false | 0 | 0 | If you don't have a requirement to use PDF figures, you can save the matplotlib figures as .png; this format just contains the data on the screen, e.g. I tried saving a large scatter plot as PDF, its size was 198M; as png it came out as 270K; plus I've never had any problems using png inside latex. | 1 | 1 | 1 | I have a problem with Matplotlib. I usually make big plots with many data points and then, after zooming or setting limits, I save in pdf only a specific subset of the original plot. The problem comes when I open this file: matplotlib saves all the data into the pdf making not visible the one outside of the range. This makes almost impossible to open afterwards those plots or to import them into latex.
Any idea of how I could solve this problem is really welcome.
Thanks a lot | Matplotlib saves pdf with data outside set | 0 | 0 | 0 | 247 |
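A hedged sketch of one workaround for the PDF-size problem: mask the data to the zoomed range before plotting, so the PDF only carries the visible points. x, y are assumed to be numpy arrays and (x0, x1) the desired x-limits.

```python
import numpy as np
import matplotlib.pyplot as plt

m = (x >= x0) & (x <= x1)       # keep only the data inside the view
plt.plot(x[m], y[m])
plt.xlim(x0, x1)
plt.savefig('zoomed.pdf')       # PDF now contains only the visible subset
```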
24,520,176 | 2014-07-01T22:25:00.000 | 0 | 0 | 0 | 0 | python,database,amazon-s3,amazon-dynamodb,boto | 24,642,053 | 2 | false | 0 | 0 | From what you described, I think you just need to create one table with a hash key. The hash key should be the object id. And you will have columns such as "date", "image pointer", "text pointer", etc.
DynamoDB is schema-less so you don't need to create the columns explicitly. When you call GetItem the server will return you a dictionary with the column name as key and the value.
Being schema-less also means you can create new columns dynamically. Assuming you already have a row in the table with only the "date" column, and you now want to add the "image pointer" column: you just need to call UpdateItem and give it the hash key and an image-pointer key-value pair.
I have all of the files stored in s3 however I would like to change some of that.
I have an object id with properties such as a date, etc which I know how to create a table of in dynamo. My issue is that each object also contains images, text files, and the original file. I would like to have the key for s3 for the original file in the properties of the file:
Ex: FileX, date, originalfileLoc, etc, images pointer, text pointer.
I looked online but I'm confused how to do the nesting. Does anyone know of any good examples? Is there another way? I assume I create an images and a text table. Each with the id and all of the file's s3 keys. Any example code of how to create the link itself?
I'm using python boto btw to do this. | How to correctly nest tables in DynamoDb | 0 | 1 | 0 | 477 |
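A hedged sketch of the one-item-per-file layout the answers describe, using boto's dynamodb2 API; the table name, keys and S3 pointers are all placeholders.

```python
from boto.dynamodb2.table import Table

files = Table('files')    # assumed to already exist with hash key 'file_id'
files.put_item(data={
    'file_id': 'FileX',
    'date': '2014-07-01',
    'original_s3_key': 'mybucket/original/FileX.bin',
    'image_s3_keys': set(['mybucket/img/1.png', 'mybucket/img/2.png']),  # SS
    'text_s3_keys': set(['mybucket/txt/1.txt']),                         # SS
})
```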
24,520,176 | 2014-07-01T22:25:00.000 | 0 | 0 | 0 | 0 | python,database,amazon-s3,amazon-dynamodb,boto | 24,536,208 | 2 | true | 0 | 0 | If you stay within DynamoDB's limit of 64 KB per item,
you can have one item (row) per file.
DynamoDB has a String type (for file name, date, etc.) and also a StringSet (SS) for lists of attributes (for text files, images).
From what you write, I assume you will only save pointers (keys) to the binary data in S3.
You could also save binary data and binary sets in DynamoDB but I believe you will reach the limit AND have an expensive solution in terms of throughput. | 2 | 0 | 0 | I'm attempting to store information from a decompiled file in Dynamo.
I have all of the files stored in s3 however I would like to change some of that.
I have an object id with properties such as a date, etc which I know how to create a table of in dynamo. My issue is that each object also contains images, text files, and the original file. I would like to have the key for s3 for the original file in the properties of the file:
Ex: FileX, date, originalfileLoc, etc, images pointer, text pointer.
I looked online but I'm confused how to do the nesting. Does anyone know of any good examples? Is there another way? I assume I create an images and a text table. Each with the id and all of the file's s3 keys. Any example code of how to create the link itself?
I'm using python boto btw to do this. | How to correctly nest tables in DynamoDb | 1.2 | 1 | 0 | 477 |
24,521,661 | 2014-07-02T01:42:00.000 | -2 | 0 | 0 | 1 | python,django,multithreading,gunicorn,gevent | 24,544,667 | 3 | true | 1 | 0 | I have settled for using a synchronous (standard) worker and making use of the multiprocessing library. This seems to be the easiest solution for now.
I have also implemented a global pool, abusing a memcached cache to provide locks, so only two tasks can run at once.
On our website, we have a few tasks that take a while to do. Longer than 30 seconds.
Preamble
We did the whole celery thing already, but these tasks are run so rarely that it's just not feasible to keep celery and redis running all the time. We just do not want that. We also do not want to start celery and redis on demand. We want to get rid of it. (I'm sorry for this, but I want to prevent answers that go like: "Why don't you use celery, it's great!")
The tasks we want to run asynchronously
I'm talking about tasks that perform 3000 SQL queries (inserts) that have to be performed one after the other. This is not done all too often. We limited it to running only 2 of these tasks at once as well. They should take like 2-3 minutes.
The approach
Now, what we are doing now is taking advantage of the gevent worker and gevent.spawn the task and return the response.
The problem
I found that the spawned threads are actually blocking. As soon as the response is returned, the task starts running and no other requests get processed until the task stops running. The task will be killed after 30s, the gunicorn timeout.
In order to prevent that, I use time.sleep() after every other SQL query, so the server gets a chance to respond to requests, but I don't feel like this is the point.
The setup
We run gunicorn, django and use gevent. The behaviour described occurs in my dev environment and using 1 gevent worker. In production, we will also run only 1 worker (for now). Also, running 2 workers did not seem to help in serving more requests while a task was blocking.
TLDR
We consider it feasible to use a gevent thread for our 2 minute task
(over celery)
We use gunicorn with gevent and wonder why a thread
spawned with gevent.spawn is blocking
Is the blocking intended or is our setup wrong?
Thank you! | Gunicorn, Django, Gevent: Spawned threads are blocking | 1.2 | 0 | 0 | 5,406 |
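A hedged sketch of the multiprocessing approach the asker settled on: spawn a separate OS process for the long task so the gevent worker stays free to serve requests; run_inserts stands in for the 3000-insert job.

```python
from multiprocessing import Process
from django.http import HttpResponse

def run_inserts():
    pass  # placeholder for the long-running 3000-insert job

def start_long_task(request):
    p = Process(target=run_inserts)
    p.start()                          # returns immediately
    return HttpResponse('task started')
```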
24,521,661 | 2014-07-02T01:42:00.000 | 0 | 0 | 0 | 1 | python,django,multithreading,gunicorn,gevent | 24,769,760 | 3 | false | 1 | 0 | It would appear no one here gave an actual answer to your question.
Is the blocking intended or is our setup wrong?
There is something wrong with your setup. SQL queries are almost entirely I/O bound and should not be blocking any greenlets. You are either using a SQL/ORM library that is not gevent-friendly, or something else in your code is causing the blocking. You should not need to use multiprocessing for this kind of task.
Unless you are explicitly doing a join on the greenlets, then the server response should not be blocking. | 2 | 8 | 0 | we recently switched to Gunicorn using the gevent worker.
On our website, we have a few tasks that take a while to do. Longer than 30 seconds.
Preamble
We did the whole celery thing already, but these tasks are run so rarely that its just not feasible to keep celery and redis running all the time. We just do not want that. We also do not want to start celery and redis on demand. We want to get rid of it. (I'm sorry for this, but I want to prevent answers that go like: "Why dont you use celery, it's great!")
The tasks we want to run asynchronously
I'm talking about tasks that perform 3000 SQL queries (inserts) that have to be performed one after each other. This is not done all too often. We limited to running only 2 of these tasks at once as well. They should take like 2-3 minutes.
The approach
Now, what we are doing now is taking advantage of the gevent worker and gevent.spawn the task and return the response.
The problem
I found that the spawned threads are actually blocking. As soon as the response is returned, the task starts running and no other requests get processed until the task stops running. The task will be killed after 30s, the gunicorn timeout.
In order to prevent that, I use time.sleep() after every other SQL query, so the server gets a chance to respond to requests, but I dont feel like this is the point.
The setup
We run gunicorn, django and use gevent. The behaviour described occurs in my dev environment and using 1 gevent worker. In production, we will also run only 1 worker (for now). Also, running 2 workers did not seem to help in serving more requests while a task was blocking.
TLDR
We consider it feasible to use a gevent thread for our 2 minute task
(over celery)
We use gunicorn with gevent and wonder why a thread
spawned with gevent.spawn is blocking
Is the blocking intended or is our setup wrong?
Thank you! | Gunicorn, Django, Gevent: Spawned threads are blocking | 0 | 0 | 0 | 5,406 |
24,523,468 | 2014-07-02T05:27:00.000 | 0 | 0 | 0 | 0 | python,3d,render,panda3d | 25,160,655 | 1 | false | 0 | 1 | Have a look at other games made with Panda3D. see the tutorials, demos, etc. That will give you a good overview of what can be possibly made. Check for the keyword technologies that you need - SSAO, Skeletal-Animation, Physics, etc.
In general, you might want to flesh out your idea in more detail and see how other games implement parts of it. If there are some unique things that are nowhere to be found - then ask about how to make them. Otherwise they are certainly viable (as someone has already made them).
Without much more info in the original question, the answer can only be this detailed. | 1 | 0 | 0 | I'm considering attempting a game built using Panda3D where no objects are built using a 3D editor. It would all be made and rendered using geometric functions. This includes multiple characters running around, spells being cast and buildings and other objects being around.
How viable of an idea is this? Would rendering all of that in real-time be too inefficient?
I have a very vague idea myself of what the game will consist of at this point or else I'd give more details, but I'm really just wondering if the general idea is possible. | Real-time Geometry Rendering with Panda3D -- Efficient? | 0 | 0 | 0 | 344 |
24,525,861 | 2014-07-02T07:59:00.000 | 2 | 0 | 1 | 0 | python,cython,pyinstaller | 62,512,529 | 2 | false | 0 | 0 | Just in case someone's looking for a quick fix.
I ran into the same situation and found a quick/dirty way to do the job. The issue is that pyinstaller is not adding the necessary libraries in the .exe file that are needed to run your program.
All you need to do is import all the libraries (and the .so files) needed into your main.py file (the file which calls file_a.py and file_b.py). For example, assume that file_a.py uses opencv library (cv2) and file_b.py uses matplotlib library. Now in your main.py file you need to import cv2 and matplotlib as well. Basically, whatever you import in file_a.py and file_b.py, you have to import that in main.py as well. This tells pyinstaller that the program needed these libraries and it includes those libraries in the exe file. | 1 | 24 | 0 | I am trying to build a Python multi-file code with PyInstaller. For that I have compiled the code with Cython, and am using .so files generated in place of .py files.
Assuming the 1st file is main.py and the imported ones are file_a.py and file_b.py, I get file_a.so and file_b.so after Cython compilation.
When I put main.py, file_a.so and file_b.so in a folder and run it by "python main.py", it works.
But when I build it with PyInstaller and try to run the executable generated, it throws errors for imports done in file_a and file_b.
How can this be fixed? One solution is to import all standard modules in main.py and this works. But if I do not wish to change my code, what can be the solution? | Building Cython-compiled python code with PyInstaller | 0.197375 | 0 | 0 | 14,305 |
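A minimal sketch of the workaround described in the answer: main.py explicitly imports everything the compiled modules need, so PyInstaller's analysis picks the libraries up even though the .so files hide them. cv2 and matplotlib are just the examples the answer used.

```python
# main.py
import cv2            # used inside file_a.so - declared here for PyInstaller
import matplotlib     # used inside file_b.so - declared here for PyInstaller

import file_a
import file_b

if __name__ == '__main__':
    file_a.run()      # hypothetical entry point
```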
24,529,197 | 2014-07-02T10:47:00.000 | 1 | 0 | 0 | 0 | python,kivy | 24,530,629 | 1 | false | 0 | 1 | Leaving aside the question of whether you should be using threading or something instead (which possibly you should), the answer is just that you should move your cpu calculations to somewhere else. Display something simple initially (i.e. returning a simple widget from your build method), then do the calculations after that, such as by clock scheduling them.
Your calculations will still block the gui in this case. You can work around this by doing them in a thread or by manually breaking them up into small pieces that can be sequentially scheduled.
It might be possible to update the gui by manually calling something like Clock.tick(), but I'm not sure if this will work right, and even if so it won't be able to display graphics before they have been initialised. | 1 | 1 | 0 | I am writing an app in kivy which does cpu-heavy calculations at launch. I want the app to display what it's doing at the moment along with the progress, however, since the main loop is not reached yet, it just displays empty white screen until it finishes working. Can I force kivy to update the interface?
Basically I'm looking for kivy's equivalent of Tkinter's root.update()
I could create a workaround by defining a series of functions with each calling the next one through Clock.schedule_once(nextFunction, 1), but that would be very sloppy.
Thanks in advance. | Force update GUI in kivy | 0.197375 | 0 | 0 | 2,011 |
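A hedged sketch of the scheduling approach from the answer: return a simple widget first, then start the heavy work one frame later so the window paints. As the answer warns, the calculation still blocks the UI while it runs unless it is chunked or threaded.

```python
from kivy.app import App
from kivy.clock import Clock
from kivy.uix.label import Label

class MyApp(App):
    def build(self):
        self.status = Label(text='Loading...')
        Clock.schedule_once(self.heavy_setup, 0)   # runs after the first frame
        return self.status

    def heavy_setup(self, dt):
        # ... CPU-heavy calculations go here (hypothetical) ...
        self.status.text = 'Done'

MyApp().run()
```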
24,529,823 | 2014-07-02T11:22:00.000 | 1 | 0 | 0 | 0 | image,python-2.7,matplotlib,pdf-generation | 24,531,535 | 1 | true | 0 | 0 | If you can do it the other way round, it is easier:
plot the image
load the logo from file with, e.g. Image module (PIL)
add the logo with plt.imshow, use the extent keyword to place it correctly
save the image into PDF
(You may even want to plot the logo first, so that it stays in the background.)
Unfortunately, this does not work with vector graphics, but as logos usually are not that large, you may use a .png or even a .jpg.
If you already have the PDFs then this is not a matplotlib or python question. You need some PDF editing tools or libraries to add the logo. Possible, but an entirely different thing. | 1 | 0 | 1 | I am using matplotlib to draw a graph using some data and I have saved it in PDF format. Now I want to add a logo to this file. How can I do this?
Thanks in advance | I have generated a pdf file using matplotlib and I want to add a logo to this pdf file. How can I do it | 1.2 | 0 | 0 | 141 |
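A hedged sketch of the plot-then-overlay recipe from the accepted answer; 'logo.png' and the extent coordinates are placeholders to adapt to your axes.

```python
import matplotlib.pyplot as plt

plt.plot(range(10))                      # the actual graph
logo = plt.imread('logo.png')            # load the logo as a pixel array
plt.imshow(logo, extent=(7, 9, 0, 2),    # place it in data coordinates
           aspect='auto', zorder=-1)     # keep it behind the plotted data
plt.savefig('graph_with_logo.pdf')
```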
24,532,338 | 2014-07-02T13:24:00.000 | 0 | 0 | 1 | 1 | python,function,dll,loadlibrary | 24,601,961 | 2 | true | 0 | 0 | I was able to modify the export table, changing the base address of an already exported routine to my own routine.
This allowed me to execute the subroutine I was interested in via Python by using the exported name. | 1 | 2 | 0 | I would like to know if it is possible (and if so, how) to call a routine from a DLL by the Proc address instead of by name - in Python.
Case in point: I am analyzing a malicious dll, and one of the routines I want to call is not exported (name to reference it by), however I do know the address to the base of the routine.
This is possible in C/C++ by casting the function pointer to a typedef'ed function prototype.
Is there a similar way to do this in Python?
If not, are there any concerns with modifying the export table of the dll to make a known exported name map to the address? | Python - Calling a Procedure in a Windows DLL (by address, not by name) | 1.2 | 0 | 0 | 391 |
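A hedged sketch of the C-style cast done in Python with ctypes: build a function prototype and instantiate it at a raw address. The DLL name, the routine's offset and its signature are all assumptions for illustration.

```python
import ctypes

dll = ctypes.WinDLL('sample.dll')      # placeholder DLL name
base = dll._handle                     # on Windows, HMODULE == module base

proto = ctypes.WINFUNCTYPE(ctypes.c_int, ctypes.c_int)  # assumed int f(int)
func = proto(base + 0x1A2B)            # base + offset of the unexported routine
print(func(42))
```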
24,533,128 | 2014-07-02T14:00:00.000 | 0 | 0 | 0 | 1 | java,python,psexec | 24,534,244 | 1 | true | 1 | 0 | Are you sure the remote python script flushes the stdout?
It should get flushed every time you print a new line, or when you explicitly call sys.stdout.flush(). | 1 | 0 | 0 | I have two files on a remote machine that I am running with PsExec, one is a Java program and the other Python.
For the Python file any outputs to screen (print() or sys.stdout.write()) are not sent back to my local machine until the script has terminated; for the Java program I see the output (System.out.println()) on my local machine as soon as it is created on the remote machine.
If anyone can explain to me why there is this difference and how to see the Python outputs as they are created I would be very grateful!
(Python 3.1, Remote Machine: Windows Server 2012, Local: Windows 7 32-bit) | Console output delay with Python but not Java using PsExec | 1.2 | 0 | 0 | 156 |
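A minimal sketch of the explicit flushing the accepted answer asks about, so each line crosses the redirected pipe immediately instead of sitting in Python's block buffer (running the script with python -u has a similar effect).

```python
import sys
import time

for i in range(5):
    print("step", i)
    sys.stdout.flush()   # push the output through the pipe now
    time.sleep(1)
```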
24,534,542 | 2014-07-02T15:01:00.000 | 0 | 0 | 1 | 0 | python | 24,535,327 | 2 | false | 0 | 0 | You could try using dir() to view the contents of your local scope, or dir(foo) to show the contents of foo. This won't display their values, but you could use locals() or globals() which return dictionaries of the contents of the local or global scope.
Since they're dictionaries, you can do something like locals()['foo']. | 1 | 0 | 0 | Is it possible (without using from foo import *) to access a variable declared in foo without writing foo.variable or using from foo import variable?
I need, on one hand, to access the variables easily, and it would be nice if I could look at their values (I'm using spyder, a MATLAB-inspired workspace which displays all variables and enables you to look at their values)
on the other hand, I can't use from foo import * because I need to use a lot of reload | Accessing variables from module without having to specify module name in python | 0 | 0 | 0 | 81 |
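A tiny runnable demonstration of the introspection calls the answer mentions.

```python
answer = 42
print(dir())               # names in the current scope, including 'answer'
print(locals()['answer'])  # 42 - scopes are just dictionaries
```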
24,535,601 | 2014-07-02T15:48:00.000 | 0 | 0 | 1 | 0 | python,debugging,python-idle | 28,057,981 | 2 | false | 0 | 0 | The pyshell.py file opens during the debugging process when the function under review is found in Python's library - for example print() or input(). If you want to bypass this file/process, click Over and it will step over the review of that function in Python's library. | 2 | 0 | 0 | I have recently started to learn Python 3 and have run into an issue while trying to learn how to debug using IDLE. I have created a basic program following a tutorial, which then explains how to use the debugger. However, I keep running into an issue while stepping through the code, which the tutorial does not explain (I have followed the instructions perfectly), nor does hours of searching on the internet. Basically, if I step while already inside a function, usually following print(), the debugger steps into pyshell.py, specifically PyShell.py:1285: write(). If I step out of pyshell, the debugger will simply step back in as soon as I try to move on, and if this is repeated the Step, Go, etc. buttons will grey out.
Any help will be greatly appreciated.
Thanks. | Python 3 Debugging issue | 0 | 0 | 0 | 243 |
24,535,601 | 2014-07-02T15:48:00.000 | 0 | 0 | 1 | 0 | python,debugging,python-idle | 28,735,903 | 2 | false | 0 | 0 | In Python 3.4, I had the same problem. My tutorial is from Invent with Python by Al Sweigart, chapter 7.
New file editor windows such as pyshell.py and random.py open when built-in functions are called, such as input(), print(), random.randint(), etc. Then the STEP button starts stepping through the file it opened.
If you click OVER, you will have to click it several times, but if you click OUT, pyshell.py will close immediately and you'll be back in the original file you were trying to debug.
Also, I encountered problems confusing this one--the grayed-out buttons you mentioned--if I forgot to click in the shell and give input when the program asked. I tried Wing IDE and it didn't run the program correctly, although the program has no bugs. So I googled the problem, and there was no indication that IDLE is broken or useless.
Therefore, I kept trying till the OUT button in the IDLE debugger solved the problem. | 2 | 0 | 0 | I have recently started to learn Python 3 and have run into an issue while trying to learn how to debug using IDLE. I have created a basic program following a tutorial, which then explains how to use the debugger. However, I keep running into an issue while stepping through the code, which the tutorial does not explain (I have followed the instructions perfectly), nor does hours of searching on the internet. Basically, if I step while already inside a function, usually following print(), the debugger steps into pyshell.py, specifically PyShell.py:1285: write(). If I step out of pyshell, the debugger will simply step back in as soon as I try to move on, and if this is repeated the Step, Go, etc. buttons will grey out.
Any help will be greatly appreciated.
Thanks. | Python 3 Debugging issue | 0 | 0 | 0 | 243 |
24,536,004 | 2014-07-02T16:08:00.000 | 0 | 1 | 1 | 0 | python,python-3.x,python-c-api | 24,562,179 | 1 | false | 0 | 0 | Based on your comment:
If you are trying to remove characters from your string use the .strip() method.
If you want the byte count of the string compared to the character count you need to change the encoding.
If you are just trying to remove the \0 character use the .replace() method. | 1 | 0 | 0 | I can't easily find out the exact size of the string I will produce. I only know the upper bound which should be within 1-2 characters of the final size. How do I shrink the string after filling it? | Using the python C API, is it possible to shrink a PyUnicode object? | 0 | 0 | 0 | 77 |
24,536,240 | 2014-07-02T16:19:00.000 | 2 | 0 | 0 | 0 | python,django | 24,553,843 | 2 | false | 1 | 0 | little explanation with fun
The answer is simply no,
because only the language has the authority to reserve anything. Python is the owner of the house;
the Django guy is paying rent to the Python guy. So how can the Django guy reserve the objects of the house?
The same logic applies here too. | 1 | 3 | 0 | I'm developing a django project for agriculture. I want to name an app "fields" and inside the app "fields" I want to name a model "Field" (referring to a farmer field).
I tried it and it works, so I assume that "fields" and "Field" are not reserved words in Django or Python. But I was just wondering if using these words could be problematic in the future, or if it's just fine.
And the general question:
Is there any way to check if a word is reserved in Django or Python? | Are "Field" and "Fields" reserved words in Django or Python? | 0.197375 | 0 | 0 | 1,219 |
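For the general Python half of the question, the standard library's keyword module lists the language's reserved words; Django itself reserves nothing, although shadowing Model methods such as save() or clean() can cause trouble. A quick check:

    import keyword

    print(keyword.iskeyword("field"))   # False - not reserved in Python
    print(keyword.iskeyword("class"))   # True
    print(keyword.kwlist)               # every reserved word in this Python version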
24,536,552 | 2014-07-02T16:36:00.000 | 5 | 0 | 0 | 0 | python,opencv,image-processing,dwt | 45,240,779 | 2 | false | 0 | 0 | Navaneeth's answer is correct, but with two corrections:
1- OpenCV reads and saves images as BGR, not RGB, so you should use cv2.COLOR_BGR2GRAY to be exact.
2- The maximum level of _multilevel.py is 7, not 10, so you should do: w2d("test1.png", 'db1', 7) | 1 | 7 | 1 | I need to do image processing in Python. I want to use the wavelet transform as the filter bank. Can anyone suggest which library I should use?
I have PyWavelets installed, but I don't know how to combine it with OpenCV. If I use the wavedec2 command, it raises ValueError("Expected 2D input data.")
Can anyone help me? | How to Combine pyWavelet and openCV for image processing? | 0.462117 | 0 | 0 | 11,865 |
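Putting the two corrections together, a short sketch (the file name and decomposition level are placeholders):

    import cv2
    import pywt

    img = cv2.imread("test1.png")                  # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # OpenCV loads BGR; wavedec2 wants 2D data
    coeffs = pywt.wavedec2(gray, 'db1', level=2)   # multilevel 2D DWT
    cA = coeffs[0]                                 # coarsest approximation
    print(cA.shape, len(coeffs) - 1)               # shape and number of detail levels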
24,538,053 | 2014-07-02T18:05:00.000 | 0 | 0 | 0 | 0 | python,wxpython | 44,200,976 | 5 | false | 0 | 1 | But if you use log.Disable(), the scrollbar won't work
log = wx.TextCtrl(panel, wx.ID_ANY, size=(300,100),
style = wx.TE_MULTILINE|wx.TE_READONLY|wx.HSCROLL|wx.TE_DONTWRAP)
I'm writing logs into this box by redirecting stdout.
How do I make the cursor disappear for the TextCtrl? It appends logs based on the position of the cursor right now. Basically, I don't want to give the user the privilege of placing the cursor at a particular spot in the box | Disable cursor in TextCtrl - wxPython | 0 | 0 | 0 | 3,622
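One way around the caret problem: AppendText always writes at the end of the control, ignoring where the user clicked. A minimal, self-contained sketch (to be adapted into your existing frame):

    import sys
    import wx

    class LogRedirector(object):
        """File-like object that appends stdout text to a wx.TextCtrl."""
        def __init__(self, text_ctrl):
            self.text_ctrl = text_ctrl

        def write(self, text):
            # AppendText ignores the insertion point, so the user can click
            # anywhere in the box without breaking the log output.
            self.text_ctrl.AppendText(text)

        def flush(self):
            pass

    app = wx.App(False)
    frame = wx.Frame(None, title="Log")
    log = wx.TextCtrl(frame, wx.ID_ANY, size=(300, 100),
                      style=wx.TE_MULTILINE | wx.TE_READONLY)
    sys.stdout = LogRedirector(log)
    print("logging goes to the TextCtrl now")
    frame.Show()
    app.MainLoop()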
24,540,192 | 2014-07-02T20:10:00.000 | 1 | 0 | 1 | 1 | python,cygwin | 24,540,295 | 1 | false | 0 | 0 | Cygwin has its own option to install its own version of Python. If you run setup.exe and poke through the Development packages, you'll find it. You probably installed Python here as well, and are running it in Bash. If you use CMD, you're running a different version. The fact that the version numbers overlap is just coincidental. | 1 | 2 | 0 | Background:
I am a .NET developer trying to set up a python programming environment.
I have installed python 2.7.5. However I changed my mind and uninstalled 2.7.5 and installed python 2.7.6.
If I run python in the Windows command prompt, the Python version is 2.7.6
When I start the cygwin shell and type:
python --version
It says 2.7.5 - the version that was uninstalled.
How do I get Cygwin to understand it should use the new version, 2.7.6?
I believe there are commands to type in the Cygwin shell to solve this? Thanks in advance! | Cygwin not same python version as windows | 0.197375 | 0 | 0 | 736
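To confirm which interpreter each shell actually launches, run this from both CMD and the Cygwin shell; Cygwin's packaged Python (the 2.7.5 you're seeing) will report its own path, e.g. /usr/bin/python:

    import sys
    print(sys.version)      # the version of the interpreter that is running
    print(sys.executable)   # the full path to that interpreter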
24,541,330 | 2014-07-02T21:28:00.000 | 0 | 0 | 1 | 0 | python,pymongo,anaconda | 25,455,812 | 3 | false | 0 | 0 | You should use pip to do this: pip install <path_to_file>.
Alternatively, if your package is available on PyPI, you can just do a pip install <packagename> (do a pip search <packagename> to see if it's on PyPI).
For instance, I wanted to install pymongo - was easy - pip install pymongo.
Caveat:
I installed Anaconda as root into /opt/anaconda - so I had to sudo su, then add /opt/anaconda/bin to the start of PATH, then run pip install pymongo so that it would install the package into the Anaconda dist, and not the existing Ubuntu Python dist.
The situation is the following:
I have a python compressed package *.tar.gz
This package can not be found in conda list
if I uncompress it and use 'python setup.py install', the package is installed into the system Python - namely /usr/local/lib/python2.7/site-packages - and not the Anaconda distribution, which causes a problem: if I start Python in the Anaconda distribution, this installed package cannot be accessed.
So is there any direct solution to this problem?
Secondly, I am confused about the difference between ~anaconda/env and virtualenv
thank you very much | How to install 3. Party library into anaconda if it is not in conda list | 0 | 0 | 0 | 5,647 |
24,541,330 | 2014-07-02T21:28:00.000 | 2 | 0 | 1 | 0 | python,pymongo,anaconda | 43,233,683 | 3 | false | 0 | 0 | Unzip the *tar.gz and place the file folders into the directory Anaconda\Lib\site-packages | 2 | 1 | 0 | I have a general problem about module importation. Thank you very much.
The situation is the following:
I have a python compressed package *.tar.gz
This package can not be found in conda list
if I uncompress it and use 'python setup.py install', the package is installed into the system Python - namely /usr/local/lib/python2.7/site-packages - and not the Anaconda distribution, which causes a problem: if I start Python in the Anaconda distribution, this installed package cannot be accessed.
So is there any direct solution to this problem?
Secondly, I am confused about the difference between ~anaconda/env and virtualenv
thank you very much | How to install 3. Party library into anaconda if it is not in conda list | 0.132549 | 0 | 0 | 5,647 |
24,544,934 | 2014-07-03T04:27:00.000 | 4 | 0 | 1 | 0 | python,vim,vim-plugin | 65,147,373 | 2 | false | 0 | 0 | This simply is not true. There is no mandate for whitespace in python and in fact Google's style guidelines for python recommend a shiftwidth of 2 spaces. 4 spaces is a PEP8 standard and is entirely a formatting preference. The ftplugin in vim defaults to use the PEP8 standard.
The cleanest way to override this is to add a python.vim file to the directory chain ~/after/ftplugin, where ~ is the location of your Vim configuration folder (usually .vim on Unix and vimfiles on Windows). You could then add your preferred formatting configuration to this file.
The simplest example would be:
set shiftwidth=2
set tabstop=2 | 2 | 1 | 0 | I have set tabstop and shiftwidth to 2 in my vimrc, but still when I go to a new line vim uses 4-space indents. I also get this warning if I save a file that uses 2 spaces for indentation:
E111 indentation is not a multiple of four [pep8]
How can I make vim use 2 spaces for Python? It seems to be ignoring my vimrc for Python.
I am using vim 7.2. | How to change vim Python indent size? | 0.379949 | 0 | 0 | 2,809 |
24,544,934 | 2014-07-03T04:27:00.000 | 0 | 0 | 1 | 0 | python,vim,vim-plugin | 24,545,447 | 2 | true | 0 | 0 | Whitespace is an integral part of the Python language, and four spaces is the mandate. You could, however, use only shifts and have your editor read a shift as only two spaces, but I would not recommend this. This could easily cause problems with mixing spaces and tabs, which can cause a giant hassle. | 2 | 1 | 0 | I have set tabstop and shiftwidth to 2 in my vimrc, but still when I go to a new line vim uses 4-space indents. I also get this warning if I save a file that uses 2 spaces for indentation:
E111 indentation is not a multiple of four [pep8]
How can I make vim use 2 spaces for Python? It seems to be ignoring my vimrc for Python.
I am using vim 7.2. | How to change vim Python indent size? | 1.2 | 0 | 0 | 2,809 |
24,548,398 | 2014-07-03T08:15:00.000 | 7 | 0 | 1 | 0 | python,intellij-idea,pycharm,intellij-plugin | 45,478,266 | 4 | false | 0 | 0 | Ubuntu 16.04 defines Ctrl + Alt + Left as a workspace switch shortcut
So the shortcut does nothing in PyCharm.
You have to either disable the Ubuntu shortcut under:
Dash > Keyboard > Shortcuts > Navigation
or redefine the PyCharm shortcuts to something else.
Linux distro desktop devs: please make all desktop system-wide shortcuts contain the Super key. | 1 | 37 | 0 | While browsing code in PyCharm (Community Edition), how do I go back to the previously browsed section? I am looking for Eclipse-style back-button functionality in PyCharm. | How to go back in PyCharm while browsing code like we have a back button in eclipse? | 1 | 0 | 0 | 17,353
24,552,964 | 2014-07-03T11:51:00.000 | 0 | 0 | 0 | 0 | python,django | 24,554,222 | 1 | true | 1 | 0 | First you need to create the page in the admin console. Then add the placeholder in your template,
as the tutorial says:
{% get_page "news" as news_page %}
{% for new in news_page.get_children %}
<li>
{{ new.publication_date }}
{% show_content new body %}
{% endfor %} | 1 | 0 | 0 | I have installed django-page-cms successfully i think. Like other cms, it is also for creating new pages. But I already have html pages in my project. How to integrate with that?
They want me to put a placeholder in the HTML page, like:
{% load pages_tags %}
but I think this will bring in the content from the page already created in the admin
Can anyone tell me how to integrate with my existing pages? | integrating with existing html page django-page-cms | 1.2 | 0 | 0 | 64 |
24,557,707 | 2014-07-03T15:25:00.000 | 0 | 0 | 0 | 1 | android,python,solr,webserver,tokyo-tyrant | 24,558,290 | 4 | false | 1 | 0 | Both "localhost" and "127.0.0.1" are local loopback interfaces only: they only make sense within the same machine. From your Android device, assuming it's on the same wifi network as your machine, you'll need to use the actual IP address of your main machine: you can either find that from the network settings of that machine, or from your router's web interface. | 3 | 1 | 0 | I have an Echoprint local webserver (uses tokyotyrant, python, solr) set up on a Linux virtual machine.
I can access it through the browser or curl in the virtual machine using http://localhost:8080, and on the non-virtual machine (couldn't find out how to say it better) I use the virtual machine's IP, also with port 8080.
However, when I try to access it through my android on the same wifi I get a connection refused error. | Difficulty accessing local webserver | 0 | 0 | 1 | 184 |
24,557,707 | 2014-07-03T15:25:00.000 | 0 | 0 | 0 | 1 | android,python,solr,webserver,tokyo-tyrant | 24,635,024 | 4 | true | 1 | 0 | In case someone has the same problem, I solved it.
The connection has to be by cable, and in the VMware Player settings the network connection has to be bridged; also, you must click "Configure adapters" and uncheck the "VirtualBox Host-Only Ethernet Adapter".
I can access it through the browser or curl in the virtual machine using http://localhost:8080, and on the non-virtual machine (couldn't find out how to say it better) I use the virtual machine's IP, also with port 8080.
However, when I try to access it through my android on the same wifi I get a connection refused error. | Difficulty accessing local webserver | 1.2 | 0 | 1 | 184 |
24,557,707 | 2014-07-03T15:25:00.000 | 0 | 0 | 0 | 1 | android,python,solr,webserver,tokyo-tyrant | 24,557,803 | 4 | false | 1 | 0 | Is the server bound to localhost or 0.0.0.0?
Maybe your host resolves that IP to some kind of localhost as well, due to bridging.
I can access it through the browser or curl in the virtual machine using http://localhost:8080, and on the non-virtual machine (couldn't find out how to say it better) I use the virtual machine's IP, also with port 8080.
However, when I try to access it through my android on the same wifi I get a connection refused error. | Difficulty accessing local webserver | 0 | 0 | 1 | 184 |
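A small sketch to test reachability from another machine on the LAN (host and port are placeholder values - substitute the VM's IP and the server's port):

    import socket

    host, port = "192.168.1.50", 8080   # hypothetical VM address
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(3)
    try:
        sock.connect((host, port))
        print("port is reachable")
    except socket.error as exc:
        print("connection refused or timed out: %s" % exc)
    finally:
        sock.close()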
24,562,068 | 2014-07-03T19:40:00.000 | 0 | 1 | 0 | 0 | python,mysql,django,rest,sqlalchemy | 24,677,758 | 2 | false | 1 | 0 | You want to see what queries are generated by the Django ORM or Tastypie?
I think one easy way is to write a wrapper around the DB class: run the DB class method, analyse the results, and print or save them to a file.
Another way to do this is to use the MySQL slow_query_log with 0 seconds, to log all the queries being made to MySQL.
You can use a different user or schema to parse the results more easily.
Not a good idea to test on production services :) | 1 | 2 | 0 | I want to test web applications that were developed using the Django framework and Tastypie.
My plan was to test the REST API calls of the web apps against the queries they perform on the MySQL DB. In order to do so, I've investigated DB access frameworks a bit and encountered the SQLAlchemy framework and its reflection approach.
My thought was to access the web apps' REST API in the same way and test the results from both sources.
Can you please suggest a different approach for examining this problem? Is there a framework that will help with this task? | Projection of a Tastypie REST API into python objects | 0 | 0 | 0 | 276
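A rough sketch of the comparison itself (endpoint URL, credentials, and table name are all assumptions; Tastypie wraps list results in an "objects" key and paginates by default, hence the limit parameter):

    import requests
    from sqlalchemy import create_engine, text

    api_rows = requests.get(
        "http://localhost:8000/api/v1/article/?format=json&limit=0").json()["objects"]

    engine = create_engine("mysql://user:password@localhost/mydb")
    with engine.connect() as conn:
        db_rows = conn.execute(text("SELECT id FROM myapp_article")).fetchall()

    # both sources should describe the same rows
    assert len(api_rows) == len(db_rows)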
24,562,961 | 2014-07-03T20:40:00.000 | 1 | 0 | 0 | 0 | python,django,abstract-class | 24,563,099 | 1 | true | 1 | 0 | Since an abstract base class is not a registered model, it makes absolutely no difference where it lives. It can be in any Python file that can be imported by the models.py files in each app. | 1 | 1 | 0 | Sorry for the silly question, I can't seem to find a definitive answer.
In almost all of the models in all of the apps in my Django project, there are two common fields - last_updated and date_created. I want to cut down on code by putting them into an abstract base class, which all of my models extend.
Is there some way to use a single Abstract Base Class across all of my apps - and if so, is there a natural place for that class to live?
Thanks. | Django Models Abstract Base Class - can they be extended across apps? | 1.2 | 0 | 0 | 123 |
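A minimal sketch - the module path and model names are up to you, e.g. a common/models.py that every app can import:

    from django.db import models

    class TimeStampedModel(models.Model):
        date_created = models.DateTimeField(auto_now_add=True)
        last_updated = models.DateTimeField(auto_now=True)

        class Meta:
            abstract = True   # no table is created for this class

    # in any app's models.py:
    class Article(TimeStampedModel):
        title = models.CharField(max_length=200)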
24,563,148 | 2014-07-03T20:53:00.000 | 1 | 0 | 1 | 0 | python,beautifulsoup,pickle | 24,563,375 | 3 | false | 1 | 0 | In fact, as suggested by dekomote, you have only to take advantadge that you can allways convert a soup to an unicode string and then back again the unicode string to a soup.
So IMHO you should not try to pass soup object through the multiprocessing package, but simply the strings representing the soups. | 1 | 9 | 0 | I have a soup from BeautifulSoup that I cannot pickle. When I try to pickle the object the python interpreter silently crashes (such that it cannot be handled as an exception). I have to be able to pickle the object in order to return the object using the multiprocessing package (which pickles objects to pass them between processes). How can I troubleshoot/work around the problem? Unfortunately, I cannot post the html for the page (it is not publicly available), and I have been unable to find a reproducible example of the problem. I have tried to isolate the problem by looping over the soup and pickling individual components, the smallest thing that produces the error is <class 'BeautifulSoup.NavigableString'>. When I print the object it prints out u'\n'. | BeautifulSoup Object Will Not Pickle, Causes Interpreter to Silently Crash | 0.066568 | 0 | 0 | 3,438 |
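A sketch of the round trip (Python 2 / BeautifulSoup 3, matching the BeautifulSoup.NavigableString in the question):

    from BeautifulSoup import BeautifulSoup

    soup = BeautifulSoup("<html><body><p>hello</p></body></html>")
    html = unicode(soup)               # a plain unicode string - picklable
    soup_again = BeautifulSoup(html)   # rebuild the soup on the other side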
24,563,337 | 2014-07-03T21:09:00.000 | 0 | 0 | 0 | 0 | python,opengl,text,fonts,pyopengl | 24,563,549 | 1 | false | 0 | 1 | Not without modifying your GLUT implementation to add additional enums and font bitmaps. | 1 | 0 | 0 | I'm using glutBitmapCharacter of pyOpenGL in Python, but the only fonts I can choose to use are helvetica and times_new_roman. Is it possible to add more fonts? | python more fonts in glut (glutBitmapCharacter) | 0 | 0 | 0 | 571 |
24,563,782 | 2014-07-03T21:48:00.000 | 1 | 0 | 0 | 0 | python,sublimerepl | 24,564,624 | 1 | true | 0 | 0 | You could try download and install SublimeREPL using Package Control on a computer with an internet connection and then in Sublime Text go to preferences > Browse packages… where you should find a folder named SublimeREPL. Copy that folder to the same directory on the other computer. That should work. | 1 | 0 | 0 | I'm trying to install SublimeREPL on an offline computer (it has secure data and so can't be Internet-connected). Any ideas for how to do so?
I can copy any installation files to a USB drive, but haven't found any--everywhere I've seen insists on using the Package Manager (which requires a connection to function properly). | Installing SublimeREPL offline | 1.2 | 0 | 0 | 1,408
24,579,269 | 2014-07-04T18:21:00.000 | 0 | 0 | 0 | 0 | python,numpy,random,distribution | 61,270,549 | 3 | false | 0 | 0 | Use numpy.random.zipf and just reject any samples greater than or equal to m | 1 | 6 | 1 | What function can I use in Python if I want to sample a truncated integer power law?
That is, given two parameters a and m, generate a random integer x in the range [1,m) that follows a distribution proportional to 1/x^a.
I've been searching around numpy.random, but I haven't found this distribution. | Sample a truncated integer power law in Python? | 0 | 0 | 0 | 2,482 |
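A sketch of the rejection approach (parameter values are arbitrary examples; numpy.random.zipf requires a > 1):

    import numpy as np

    def truncated_zipf(a, m, size):
        # Sample integers in [1, m) from a Zipf(a) law by rejection.
        out = []
        while len(out) < size:
            draws = np.random.zipf(a, size)
            out.extend(int(x) for x in draws if x < m)
        return np.array(out[:size])

    samples = truncated_zipf(2.0, 10, 1000)
    print(samples.min(), samples.max())   # always within [1, 10)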
24,583,777 | 2014-07-05T06:44:00.000 | 0 | 1 | 1 | 1 | python,virtualenv,pythonpath,virtualenvwrapper,zshrc | 39,600,194 | 3 | false | 0 | 0 | The $PYTHONPATH appears in your virtualenv because that virtualenv is just a part of your shell environment, and you (somewhere) told your shell to export the value of PYTHONPATH to child shells.
One of the joys of working in virtual environments is that there is much less need to put additional directories on your PYTHONPATH, but it appears as though you have unwittingly been treating it as a global (for all shells) setting, when it's more suited to being a per-project setting. | 1 | 7 | 0 | So I'm migrating all my tools from python2 to python3.4 on an Ubuntu 14.04 machine. So far I've done the following:
aliased python to python3 in my zshrc for just my user
installed pip3 on the system itself (but I'll just be using virtualenvs for everything anyway so I won't really use it)
changed my virtualenvwrapper "make" alias to mkvirtualenv --python=/usr/bin/python3 ('workon' is invoked below as 'v')
Now curiously, and you can clearly see it below, running python3 from a virtualenv activated environment still inherits my $PYTHONPATH which is still setup for all my python2 paths. This wreaks havoc when installing/running programs in my virtualenv because the python3 paths show up AFTER the old python2 paths, so python2 modules are imported first in my programs. Nulling my $PYTHONPATH to '' before starting the virtualenv fixes this and my programs start as expected. But my questions are:
Is this inheritance of $PYTHONPATH in virtualenvs normal? Doesn't that defeat the entire purpose?
Why set $PYTHONPATH as an env-var in the shell when python already handles it's own paths internally?
Am I using $PYTHONPATH correctly? Should I just be setting it in my 'zshrc' to only list my personal additions ($HOME/dev) and not the redundant '/usr/local/lib/' locations?
I can very easily export an alternate python3 path for use with my virtualenvs just before invoking them, and reset them when done, but is this the best way to fix this?
○ echo $PYTHONPATH
/usr/local/lib/python2.7/site-packages:/usr/local/lib/python2.7/dist-packages:/usr/lib/python2.7/dist-packages:/home/brian/dev
brian@zeus:~/.virtualenvs
○ python2
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys, pprint
>>> pprint.pprint(sys.path)
['',
'/usr/local/lib/python2.7/dist-packages/pudb-2013.3.4-py2.7.egg',
'/usr/local/lib/python2.7/dist-packages/Pygments-1.6-py2.7.egg',
'/usr/local/lib/python2.7/dist-packages/urwid-1.1.1-py2.7-linux-x86_64.egg',
'/usr/local/lib/python2.7/dist-packages/pythoscope-0.4.3-py2.7.egg',
'/usr/local/lib/python2.7/site-packages',
'/usr/local/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages',
'/home/brian/dev',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-x86_64-linux-gnu',
'/usr/lib/python2.7/lib-tk',
'/usr/lib/python2.7/lib-old',
'/usr/lib/python2.7/lib-dynload',
'/usr/lib/python2.7/dist-packages/PILcompat',
'/usr/lib/python2.7/dist-packages/gst-0.10',
'/usr/lib/python2.7/dist-packages/gtk-2.0',
'/usr/lib/pymodules/python2.7',
'/usr/lib/python2.7/dist-packages/ubuntu-sso-client',
'/usr/lib/python2.7/dist-packages/ubuntuone-client',
'/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol',
'/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode']
>>>
brian@zeus:~/.virtualenvs
○ v py3venv
(py3venv)
brian@zeus:~/.virtualenvs
○ python3
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys, pprint
>>> pprint.pprint(sys.path)
['',
'/usr/local/lib/python2.7/site-packages',
'/usr/local/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages',
'/home/brian/dev',
'/home/brian/.virtualenvs/py3venv/lib/python3.4',
'/home/brian/.virtualenvs/py3venv/lib/python3.4/plat-x86_64-linux-gnu',
'/home/brian/.virtualenvs/py3venv/lib/python3.4/lib-dynload',
'/usr/lib/python3.4',
'/usr/lib/python3.4/plat-x86_64-linux-gnu',
'/home/brian/.virtualenvs/py3venv/lib/python3.4/site-packages']
>>>
(py3venv) | Why does virtualenv inherit $PYTHONPATH from my shell? | 0 | 0 | 0 | 8,833 |
24,584,249 | 2014-07-05T07:50:00.000 | 2 | 0 | 1 | 0 | python | 24,584,318 | 3 | true | 0 | 0 | You could use introspection. Here are the most important ones (for me anyway):
dir(object) returns all methods and attributes of an object.
module.__doc__ should return the module's doc string
type(object) returns the object type
help(object) might be useful too | 1 | 3 | 0 | I've been checking out docs.python when I need something, is that the right site to be using? I come from java, and docs.python looks more like a tutorial site than a documentation site.
For example, when I look up a class in the Java API reference, at a glance I know all of its return types, method names, and params - a very simple, very effective website. With docs.python I have to read ALL of the method descriptions if I want to find a method that returns X... they don't have a short list of all the methods with no descriptions, and the method descriptions don't even tell you what exceptions are raised... there has to be a better site.
I've been playing around with Python; I like the way it has less bloat and more tricks, but not being able to quickly look things up is killing me - reading full pages of mostly useless information interrupts my train of thought.
EDIT
Downvoters, I'm really trying to use this language here, so... if you think I'm doing it wrong and have any suggestions... maybe explain how you roll, because I've googled, binged, yandex'd and duckduckgo'd and found no good suggestions. You can't be using docs.python... sure, I could use a combo of dir(), help() and docs.python, but that's beyond a joke.
EDIT
Ok, much to learn I still have; maybe when I learn Python a little better I'll understand why the docs are the way they are. I suppose I should be happy there even is documentation :P
Thanks for the input folks | Alternative python documentation | 1.2 | 0 | 0 | 463 |
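A quick demonstration of those tools against a standard module:

    import math

    print(dir(math))       # every name the module exposes
    print(math.__doc__)    # the module's doc string
    print(type(math.pi))   # the type of an object
    help(math.sqrt)        # signature and description of one function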
24,591,132 | 2014-07-05T22:37:00.000 | 0 | 1 | 1 | 0 | python,raspberry-pi,real-time,interrupt,interrupt-handling | 24,591,388 | 1 | true | 0 | 0 | From a theoretical perspective, Python running as a userspace process on a Linux kernel makes no realtime guarantees whatsoever.
In practice, interrupt response times will usually be in the low millisecond range. In all probability, the pump will take considerably longer to shut off than the rPi will take to respond. | 1 | 2 | 0 | I want to use an RPi to control some water pumps. My question is, what kind of guarantees can I make about the "real-timeness"? I have a pump filling a container, and when a sensor signals the RPi that it is full, the pump should turn off. How much extra room in the container needs to be left for the worst-case response time? | Python/Raspberry Pi guaranty about interrupt response time | 1.2 | 0 | 0 | 582
24,592,036 | 2014-07-06T01:50:00.000 | 0 | 0 | 0 | 0 | python,beautifulsoup | 27,249,435 | 4 | false | 1 | 0 | I'm new to this as well.
I agree - it's not always clear from forum answers where to type the various suggestions when you're new to the subject.
Start by opening a command prompt. You can do this by typing CMD into the searchbox after pressing the start button.
At the command prompt, type "python -m pip install beautifulsoup4"
At this point, the module should be downloaded and installed.
To check that it's installed correctly, you can open the python command line program - (the one that gives you ">>>" prompts) and type "from bs4 import BeautifulSoup"
Note that everything is case sensitive, so be careful when you type.
If no errors are reported, the module is correctly installed on your machine and you can proceed to your next hurdle. :)
Good luck. | 3 | 12 | 0 | I want to try and make a program that downloads images from the internet, and I have found a guide that uses Beautiful soup. I have heard of Beautiful Soup before, so I figured that I would try it out. My only issue is that I can't seem to find a version for Python 3. I went to their website, but I was unable to find a version that worked with Python 3.
Whenever I would run the setup.py file, I would get an error that was too quick to read, but it looked like it was saying syntax error.
So I looked at the code and realized that there weren't any parentheses before or after strings that were supposed to be printed.
I have tried numerous different webpages and different searches, but was unable to find an answer.
I'm also sorry if this is not a question related to programming; if it is not, please leave a comment on this question and I will delete it ASAP. | Is Beautiful Soup available for Python 3.4.1? | 0 | 0 | 0 | 11,995
24,592,036 | 2014-07-06T01:50:00.000 | 0 | 0 | 0 | 0 | python,beautifulsoup | 37,222,073 | 4 | false | 1 | 0 | Try using "python -m pip install beautifulsoup4"
This line is working perfectly fine for me on Python 3.4 | 3 | 12 | 0 | I want to try and make a program that downloads images from the internet, and I have found a guide that uses Beautiful soup. I have heard of Beautiful Soup before, so I figured that I would try it out. My only issue is that I can't seem to find a version for Python 3. I went to their website, but I was unable to find a version that worked with Python 3.
Whenever I would run the setup.py file, I would get an error that was too quick to read, but it looked like it was saying syntax error.
So I looked at the code and realized that there weren't any parentheses before or after strings that were supposed to be printed.
I have tried numerous different webpages and different searches, but was unable to find an answer.
I'm also sorry if this is not a question related to programming; if it is not, please leave a comment on this question and I will delete it ASAP. | Is Beautiful Soup available for Python 3.4.1? | 0 | 0 | 0 | 11,995
24,592,036 | 2014-07-06T01:50:00.000 | 0 | 0 | 0 | 0 | python,beautifulsoup | 37,579,583 | 4 | false | 0 | 0 | The steps below work for me.
The source of the content is quora.com.
Make sure you download the most recent BeautifulSoup version.
For Windows: how to install beautifulsoup4 for Python 2 or Python 3.
Place the file you downloaded in any file directory on your computer.
First locate your Python installation directory; below I am using C:\Python27.
To open the command prompt, type cmd into the Run box.
in command prompt do this.
type cd Python27
then type
pip install beautifulsoup4.
You may have to use the full path:
C:\Python27\Scripts\pip install beautifulsoup4
or even
C:\Python27\Scripts\pip.exe install beautifulsoup4
for Python3
in command prompt do this.
type cd Python34
then type
pip install beautifulsoup4
You may have to use the full path:
C:\Python34\Scripts\pip install beautifulsoup4
or even
C:\Python34\Scripts\pip.exe install beautifulsoup4 | 3 | 12 | 0 | I want to try and make a program that downloads images from the internet, and I have found a guide that uses Beautiful soup. I have heard of Beautiful Soup before, so I figured that I would try it out. My only issue is that I can't seem to find a version for Python 3. I went to their website, but I was unable to find a version that worked with Python 3.
Whenever I would run the setup.py file, I would get an error that was too quick to read, but it looked like it was saying syntax error.
So I looked at the code and realized that there weren't any parentheses before or after strings that were supposed to be printed.
I have tried numerous different webpages and different searches, but was unable to find an answer.
I'm also sorry if this is not a question related to programming; if it is not, please leave a comment on this question and I will delete it ASAP. | Is Beautiful Soup available for Python 3.4.1? | 0 | 0 | 0 | 11,995
24,598,160 | 2014-07-06T16:54:00.000 | 3 | 0 | 0 | 0 | python,opencv,ubuntu,uninstallation | 24,598,296 | 1 | true | 0 | 0 | The procedure depends on whether or not you built OpenCV from source with CMake, or snatched it from a repository.
From repository
sudo apt-get purge libopencv* will cleanly remove all traces. Substitute libopencv* as appropriate in case you were using an unofficial ppa.
From source
If you still have the files generated by CMake (the directory from where you executed sudo make install), cd there and sudo make uninstall. Otherwise, you can either rebuild them with the exact same configuration and use the above command, or recall your CMAKE_INSTALL_PREFIX (/usr/local by default) and remove everything with opencv in its name within that directory tree. | 1 | 5 | 1 | I'm using OpenCV on Ubuntu 14.04, but some of the functions that I require, particularly in the cv2 library (cv2.drawMatches, cv2.drawMatchesKnn), do not work in 2.4.9. How do I uninstall 2.4.9 and install 3.0.0 from their git? I know the procedure for installing 3.0.0, but how do I make sure that 2.4.9 gets completely removed from disk? | Uninstall opencv 2.4.9 and install 3.0.0 | 1.2 | 0 | 0 | 18,476
24,598,269 | 2014-07-06T17:08:00.000 | 0 | 0 | 1 | 1 | python,executable | 24,598,315 | 1 | false | 0 | 0 | py2exe is what you need for windows. | 1 | 1 | 0 | I have an extremely basic text based game that I have been coding and want to turn into an executable for both Windows and Mac. I am an extreme beginner and am not quite sure how this works.
Thus far, in the coding process, I've been running the game in terminal (I have a Mac), in order to test it and debug.
I've installed PyInstaller to my computer, tried to follow the directions to make it work, yet when I finally get the Game.app (again, for a Mac because I was testing the process), it does not open.
The game is all contained between two files, ChanceGame.py (the one with the actual game), and ChanceGameSetup.py (one that contains a command to setup the game.) ChanceGame.py imports ChanceGameSetup.py at the start so that it can use the functions in ChanceGameSetup.py where needed. My point in this is that I don't actually have to be able to run ChanceGameSetup.py, it only needs to be able to be imported by ChanceGame.py.
Is there a way to turn ChanceGame.py into an executable? Or is it just too simple of a file? I'm an extreme beginner, therefore I have no experience on the subject.
Thanks in advance for any help!
P.S. I just want to be able to email the game to some friends to try out, and I assume this is the only way of doing so without them having their own compiler, etc. If there is actually an easier way, I would appreciate hearing that as well. Thanks! | Turning .py file into an executable? | 0 | 0 | 0 | 237 |
24,602,022 | 2014-07-07T01:43:00.000 | 0 | 0 | 0 | 0 | python,mysql,django,django-models | 24,602,097 | 2 | false | 1 | 0 | It is not possible with an auto-incrementing field, exclusively. You could set up another column inside each table with a single letter identifying the table, have an auto-incrementing column, and then make the key the combination of the two. You can also set this up using a trigger.
However, this doesn't fully make sense because the letter really isn't needed within a single table.
I suspect that you are trying to solve a different problem, which is to have multiple "types" of an entity in different tables (say a table of contacts, with one for email, one for mail, one for telephone contact). If so, another approach is to have a master table of everything with an auto-incrementing id, and then to use this id in the subtables, defined with a 1-1 relationship. | 1 | 0 | 0 | What I would like to do is have my primary key field (or another field as long as it accepts auto incrementing) auto increment with a letter in front of it. For example:
I would like to be able to make pk equal A1, A2, A3 in one table and, if I choose, B1, B2, B3 in another table. Is this possible with Django? MySQL? The field doesn't have to be the primary key field as long as it auto-increments. Please let me know. | Is it possible to add a letter or some other character in front of an incrementing field? | 0 | 0 | 0 | 113
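If a derived value is acceptable rather than a true stored key, a hedged Django sketch (the model name and prefix are hypothetical) keeps the normal integer pk and exposes a prefixed id:

    from django.db import models

    class Invoice(models.Model):
        PREFIX = "A"   # use a different letter per model/table

        @property
        def display_id(self):
            # yields A1, A2, A3, ... from the auto-incremented pk
            return "%s%d" % (self.PREFIX, self.pk)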
24,602,844 | 2014-07-07T04:01:00.000 | 0 | 0 | 0 | 0 | python,algorithm | 24,602,897 | 2 | false | 0 | 0 | My first stab at this is recognizing this schedule will be periodic, so determine what the schedule period is (the LCM of all the periods). From there, you can think of everything like a Gantt chart. For each task, you need to pick a phase offset that maximizes the distance between start times across tasks. I'm not sure you can compute it algebraically, but you could run gradient descent on it. | 2 | 2 | 0 | Problem:
Given a set of recurring tasks, and knowing how long each task will take and how often they must recur, create a schedule that most evenly spreads out the time to do the tasks.
For example, I have to "run antivirus", "defragment hard drive", and "update computer" every 2 days, I have to "update hardware", "clean internals" every 3 days, and I have to "replace hard drive", "replace graphics card", "replace cpu" every 5 days. Each chore takes a certain amount of time, in minutes.
Given this information, what days in the month should I do each chore?
Attempts:
I have "solved" this problem twice, but each solution will take over 1 million years to compute.
The first algorithm I used generated all possible "days", i.e., every combination of tasks that could be run. Then, for every possible combination of days in a month, I checked the month to see if it fit the constraints (tasks run every x days, days are evenly spread). This is the most naive approach and would have taken Python longer than the age of the universe to check.
The algorithm I'm currently running would also work. It generated every possible "day plan" across the month for each task, i.e., I could run an every-5-day task starting on the 1st, 2nd, 3rd, 4th or 5th. Then, for every combination of every day plan, I add up the day plans to form a month plan. I then sum up the times of each task in every day of that month to see how it fares. This algorithm must run through 1e14 combinations, which might be possible if written in a compiled language and executed in parallel across a huge cluster for a few days...
Summary
I would like to know if there's a better way to do this. To summarize, I have some tasks that must recur every x days, and I want to spread them across a given month so that each day is filled with the same amount of time on these tasks.
Thank you. | Algorithm for scheduling multiple recurring jobs evenly | 0 | 0 | 0 | 1,255 |
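The first step from the answer above - the schedule repeats with period LCM(task periods) - as a short Python 3 sketch (math.gcd needs 3.5+; on 2.7 use fractions.gcd):

    from functools import reduce
    from math import gcd

    def lcm(a, b):
        return a * b // gcd(a, b)

    periods = [2, 3, 5]                     # recurrence periods in days
    schedule_period = reduce(lcm, periods)  # 30: the plan repeats monthly
    print(schedule_period)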
24,602,844 | 2014-07-07T04:01:00.000 | 0 | 0 | 0 | 0 | python,algorithm | 24,610,071 | 2 | false | 0 | 0 | You could try out some local search algorithms (e.g. hill-climbing, simulated annealing).
Basically you start out with a candidate solution (a random assignment of tasks), evaluate it (you'll need to come up with an evaluation function), then check each of the neighbor states' values and move to the one with the highest value. The neighbor states could be all states in which one of the tasks is moved one day into the future or past. You repeat this until there is no more improvement in the state's value.
Now you should have found a local maximum of the evaluation function, which is probably not yet the optimal solution. However, since the algorithm is very fast, you can repeat it very often with varying starting assignments and find a good solution anyway. You can "soften" the greediness of the algorithm by including random steps as well.
Given a set of recurring tasks, and knowing how long each task will take and how often they must recur, create a schedule that most evenly spreads out the time to do the tasks.
For example, I have to "run antivirus", "defragment hard drive", and "update computer" every 2 days, I have to "update hardware", "clean internals" every 3 days, and I have to "replace hard drive", "replace graphics card", "replace cpu" every 5 days. Each chore takes a certain amount of time, in minutes.
Given this information, what days in the month should I do each chore?
Attempts:
I have "solved" this problem twice, but each solution will take over 1 million years to compute.
The first algorithm I used generated all possible "days", i.e., every combination of tasks that could be run. Then, for every possible combination of days in a month, I checked the month to see if it fit the constraints (tasks run every x days, days are evenly spread). This is the most naive approach and would have taken Python longer than the age of the universe to check.
The algorithm I'm currently running would also work. It generated every possible "day plan" across the month for each task, i.e., I could run an every-5-day task starting on the 1st, 2nd, 3rd, 4th or 5th. Then, for every combination of every day plan, I add up the day plans to form a month plan. I then sum up the times of each task in every day of that month to see how it fares. This algorithm must run through 1e14 combinations, which might be possible if written in a compiled language and executed in parallel across a huge cluster for a few days...
Summary
I would like to know if there's a better way to do this. To summarize, I have some tasks that must recur every x days, and I want to spread them across a given month so that each day is filled with the same amount of time on these tasks.
Thank you. | Algorithm for scheduling multiple recurring jobs evenly | 0 | 0 | 0 | 1,255 |
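A generic hill-climbing skeleton with random restarts; random_state, evaluate and neighbors are callbacks you would supply for your own schedule representation (all hypothetical here):

    def hill_climb(random_state, evaluate, neighbors, restarts=20):
        # Repeated greedy ascent from random starting assignments.
        best, best_score = None, float("-inf")
        for _ in range(restarts):
            state = random_state()
            score = evaluate(state)
            while True:
                candidates = neighbors(state)
                if not candidates:
                    break
                nxt = max(candidates, key=evaluate)
                nxt_score = evaluate(nxt)
                if nxt_score <= score:
                    break               # local maximum reached
                state, score = nxt, nxt_score
            if score > best_score:
                best, best_score = state, score
        return best, best_score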
24,604,661 | 2014-07-07T06:59:00.000 | 2 | 1 | 0 | 0 | python,algorithm,gps,reverse-geocoding | 24,612,105 | 3 | true | 0 | 0 | Use a point-in-polygon algorithm to determine if the coordinate is inside a state (represented by a polygon with GPS coordinates as points). Practically speaking, it doesn't seem like you would be able to improve much upon simply checking each state one at a time, though some optimizations can be made if it's too slow.
However, parts of Alaska are on both sides of the 180th meridian, which causes problems. One solution would be to offset the coordinates a bit by adding 30 degrees modulo 180 to the longitude for each GPS coordinate (user coordinates and state coordinates). This has the effect of moving the 180th meridian about 30 degrees west and should be enough to ensure that the entire US is on one side of the 180th meridian.
Does anyone have any suggestions? I am writing the script in Python, so if anyone knows of a Python library that I can use, that would be great. Or if anyone can point me to a research paper or an efficient algorithm I can implement to accomplish this, that would also be very helpful. I have found some data that represents the state boundaries in GPS coordinates, but I can't think of an efficient way to determine which state the user's coordinates are in. | Determine the US state from GPS coordinates without using online service | 1.2 | 0 | 1 | 2,493
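A self-contained ray-casting (even-odd rule) test; the square "state" is a toy example, not real boundary data:

    def point_in_polygon(lon, lat, polygon):
        # polygon is a list of (lon, lat) vertex tuples
        inside = False
        j = len(polygon) - 1
        for i in range(len(polygon)):
            xi, yi = polygon[i]
            xj, yj = polygon[j]
            # toggle each time the edge crosses a ray cast from the point
            if ((yi > lat) != (yj > lat)) and \
               (lon < (xj - xi) * (lat - yi) / (yj - yi) + xi):
                inside = not inside
            j = i
        return inside

    square = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
    print(point_in_polygon(5.0, 5.0, square))    # True
    print(point_in_polygon(15.0, 5.0, square))   # False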
24,606,305 | 2014-07-07T08:38:00.000 | 1 | 0 | 0 | 0 | python,fft,dft,spectrum | 24,606,561 | 1 | true | 0 | 0 | If you look at a DC signal 1 1 1 1, its DFT is 4 0 0 0.
To normalize that back into [0,1], you need to divide by 4, i.e. the number of points. | 1 | 1 | 0 | I have a list of samples of a wave with all values between -1 and +1. Those values have been read from a music file. I will now:
apply the discrete Fourier transform (scipy.fftpack.rfft),
normalize the values by dividing them by the square root of the number of samples,
calculate the power for each item in the list. (sqrt(real^2 + imag^2))
What are the maximum values I can expect to be in this list after all of this? I would have expected the maximum power to be 1, as the maximum amplitude in the input data is also 1. However, this is only the case for a simple sine wave. As soon as I start doing this with real music, I get higher values.
How would I "normalize" the power to get values between 0and 1? Is it even possible to find out the maximum value? If not, what is the best practice to scale the results? | Maximum value of a direct fourier transform | 1.2 | 0 | 0 | 1,046 |
24,606,855 | 2014-07-07T09:07:00.000 | 1 | 0 | 1 | 0 | python,pycharm | 24,606,972 | 2 | true | 0 | 0 | Try this,
Go to File -> Settings -> File and Code Templates -> Python Script, and then change the template as you want. | 1 | 1 | 0 | Every time I create a new Python file, it starts with __author__ = 'username'. I don't want this; instead, I want imports of some modules, for example from __future__ import division.
How can I do this? | Add import statements to the beginning of new file | 1.2 | 0 | 0 | 50 |
24,609,197 | 2014-07-07T11:09:00.000 | 1 | 0 | 0 | 0 | python,wxpython,wxgrid | 24,620,168 | 3 | true | 0 | 1 | The Grid class has AutoSizeColumn/AutoSizeRow and AutoSizeColumns/AutoSizeRows methods that do a fairly good job of generically resizing rows or cols to be large enough for the contents of the row or col. They can be fairly expensive operations for large grids however, so they should be used with care. | 1 | 1 | 0 | Basically if I drag a cell I can size it to whatever size I want. Is there functionality in wxPython where cells expand automatically to fit the size of the text inside of them? | How to make text fit in cells in a wxPython grid? | 1.2 | 0 | 0 | 2,930 |
24,610,784 | 2014-07-07T12:36:00.000 | 2 | 0 | 1 | 0 | python | 24,610,865 | 2 | false | 0 | 0 | It's trivial to do so with a user-defined object: just set a flag each time you modify the object, and have the iterator check that flag each time it tries to retrieve an item.
Generally, you should not modify a set while iterating over it, as you risk missing an item or getting the same item twice. | 1 | 1 | 0 | I'm new to Python, coming from a C++ background. I was just playing around with sets, trying to calculate prime numbers, and got a "Set changed size during iteration" error.
How does Python internally know the set changed size during iteration?
Is it possible to do something similar in user-defined objects? | Set changed size during iteration | 0.197375 | 0 | 0 | 5,941
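A minimal sketch of the flag idea for a user-defined container - count modifications and let the iterator compare the count against what it saw at creation:

    class WatchedList(object):
        def __init__(self, items):
            self._items = list(items)
            self._mod_count = 0

        def add(self, item):
            self._items.append(item)
            self._mod_count += 1        # the "flag": bump on every mutation

        def __iter__(self):
            start = self._mod_count
            for item in self._items:
                if self._mod_count != start:
                    raise RuntimeError("changed size during iteration")
                yield item

    w = WatchedList([1, 2, 3])
    for x in w:
        w.add(4)   # raises RuntimeError on the next step of the loop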
24,610,812 | 2014-07-07T12:38:00.000 | 0 | 0 | 0 | 1 | python,tcp,scapy | 24,612,638 | 1 | false | 0 | 0 | I don't know if I understand you correctly. Is there any difference between your two SYN packets? If so, just create the two SYNs as you want and then send them together. If not, send the same packet twice using scapy.send(pkt, 2). I don't remember the specific parameters, but I'm sure scapy.send can send as many packets, and as fast, as you like. | 1 | 0 | 0 | I am doing a lot of network development and I am starting new research.
I need to send a packet which will then cause another SYN packet to be sent.
This is how I want it to look:
I send syn --> --> sends another SYN before SYN/ACK packet.
How can I cause this?
I am using Scapy + Python. | How do I create a double SYN packet? | 0 | 0 | 1 | 270 |
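A hedged Scapy sketch (the destination address and port are placeholders; send's count keyword repeats the identical packet):

    from scapy.all import IP, TCP, send

    syn = IP(dst="192.0.2.10") / TCP(dport=80, flags="S")
    send(syn, count=2)   # emit the same SYN twice, back to back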
24,611,812 | 2014-07-07T13:32:00.000 | 0 | 0 | 0 | 0 | python,sql-server,browser,sync | 24,659,493 | 2 | false | 1 | 0 | So in summary, you have a website / SQL Server application. Then some of your users have a separate local database with a Python front end. And you need to bridge the two applications.
You can expose your SQL Server database with a REST API (using whatever tech you choose). Then create a Python app that calls that API (either via a button or on an automatic schedule) and then executes the needed Python code for the reporting.
Another approach would be to add the needed reporting functionality to your web app: look at what the 3rd-party database and reporting tool provides, and add that functionality to your website so there's no longer a need to use the 3rd-party application at all.
Those are your two paths to go down based on the info provided. | 1 | 3 | 0 | I have made an online website which acts as the frontend for a database into which my customers can save sales information. So each customer logs onto the online website with their own credentials and only sees their own sales records. The database comes in the form of SQL Server 2008.
Some of these customers have a third-party Windows tool on their PCs which itself acts as a frontend for a database with specific sales records. This tool is used by them for printing receipts. This tool comes with a Python interface which can be used to update the database, if the tool itself is not used. I've installed the tool on my PC and successfully added records to this tool's database by running a simple Python script.
At the moment, customers are adding sales information from my website to the tool on the PC by hand. I would like to provide them with an automatic way of doing this. The syncing should only occur when the customer requests it - and indeed they have all requested that it work this way. This is to ensure that they get an opportunity to validate the information.
How might I solve this problem? Should I develop a PC application which they install on their local computer or can I do this via the browser? Either solution will need to execute Python code in order to update the database on their PC and then there are of course security issues. | Syncing PC data with online data | 0 | 1 | 0 | 995 |
24,615,687 | 2014-07-07T16:46:00.000 | 0 | 0 | 0 | 0 | python,artificial-intelligence,genetic-algorithm | 24,616,160 | 3 | false | 0 | 0 | To start, let's make sure I understand your problem.
You have a set of sample data, each element containing a time series of a binary variable (we'll call it V). When V is set to True, a function (A, B, or C) is applied which returns V to its False state. You would like to apply a genetic algorithm to determine which function (or solution) will return V to False in the least amount of time.
If this is the case, I would stay away from GAs. GAs are typically used for some kind of function optimization / tuning. In general, the underlying assumption is that what you permute is under your control during the algorithm's application (i.e., you are modifying parameters used by the algorithm that are independent of the input data). In your case, my impression is that you just want to find out which of your (I assume) static functions perform best in a wide variety of cases. If you don't feel your current dataset provides a decent approximation of your true input distribution, you can always sample from it and permute the values to see what happens; however, this would not be a GA.
Having said all of this, I could be wrong. If anyone has used GAs in verification like this, please let me know. I'd certainly be interested in learning about it. | 2 | 0 | 1 | I'm a data analysis student and I'm starting to explore Genetic Algorithms at the moment. I'm trying to solve a problem with GA but I'm not sure about the formulation of the problem.
Basically I have a variable whose state is 0 or 1 (0 means it's in the normal range of values, 1 is a critical state). When the state is 1, I can apply 3 solutions (let's consider Solutions A, B and C), and for each solution I know the time when the solution was applied and the time when the state of the variable goes back to 0.
So I have for the problem a set of data that have a critical event at 1, the solution applied and the time interval (in minutes) from the critical event to the application of the solution, and the time interval (in minutes) from the application of the solution until the event goes to 0.
With a genetic algorithm, I want to know which is the best solution for a critical event and which is the fastest one. And, if possible, to rank the solutions acquired, so that if in the future one solution can't be applied, I can always apply the second best, for example.
I'm thinking of developing the solution in Python since I'm new to GA.
Edit: Specifying the problem (responding to AMack)
Yes, it is more or less that, but with some nuances. For example, function A can be more suitable for making the variable go to F, but because other problems exist with the variable, more than one solution is applied. So in the data that I receive for an event of V, sometimes 3 or 4 functions are applied, but only 1 or 2 of them are specialized for the problem that I want to analyze. My objective is to provide decision support on which solution to use when a given problem appears. But the optimal solution can be more than one, because for some events function A acts very fast, while in other cases of the same event function A doesn't produce a fast response and function C is better. So in the end I want a solution that indicates the best solutions to the problem, not only the fastest, because the one that is fastest in the majority of cases is sometimes not the fastest for the same issue with a different background.
24,615,687 | 2014-07-07T16:46:00.000 | 1 | 0 | 0 | 0 | python,artificial-intelligence,genetic-algorithm | 24,616,007 | 3 | false | 0 | 0 | I'm unsure of what your question is, but here are the elements you need for any GA:
A population of initial "genomes"
A ranking function
Some form of mutation, crossing over within the genome
and reproduction.
If a critical event is always the same, your GA should work very well. That being said, if you have a different critical event but the same genome, you will run into trouble. GAs evolve functions towards the best possible solution for a single set of conditions. If you constantly re-run the GA so that it adapts to each unique situation, you will get a greater degree of adaptability, but you'll have a speed issue.
You have a distinct advantage using Python because string manipulation (which you'll probably use for the genome) is easy; however...
Python is slow.
If the genome is short, the initial population is small, and there are very few generations, this shouldn't be a problem. You possibly lose better solutions that way, but it will be significantly faster.
have fun... | 2 | 0 | 1 | I'm a data analysis student and I'm starting to explore Genetic Algorithms at the moment. I'm trying to solve a problem with GA but I'm not sure about the formulation of the problem.
Basically I have a variable whose state is 0 or 1 (0 means it's in the normal range of values, 1 is a critical state). When the state is 1, I can apply 3 solutions (let's consider Solutions A, B and C), and for each solution I know the time when the solution was applied and the time when the state of the variable goes back to 0.
So I have for the problem a set of data that have a critical event at 1, the solution applied and the time interval (in minutes) from the critical event to the application of the solution, and the time interval (in minutes) from the application of the solution until the event goes to 0.
With a genetic algorithm, I want to know which is the best solution for a critical event and which is the fastest one. And, if possible, to rank the solutions acquired, so that if in the future one solution can't be applied, I can always apply the second best, for example.
I'm thinking of developing the solution in Python since I'm new to GA.
Edit: Specifying the problem (responding to AMack)
Yes, it is more or less that, but with some nuances. For example, function A can be more suitable for making the variable go to F, but because other problems exist with the variable, more than one solution is applied. So in the data that I receive for an event of V, sometimes 3 or 4 functions are applied, but only 1 or 2 of them are specialized for the problem that I want to analyze. My objective is to provide decision support on which solution to use when a given problem appears. But the optimal solution can be more than one, because for some events function A acts very fast, while in other cases of the same event function A doesn't produce a fast response and function C is better. So in the end I want a solution that indicates the best solutions to the problem, not only the fastest, because the one that is fastest in the majority of cases is sometimes not the fastest for the same issue with a different background.
24,617,063 | 2014-07-07T18:13:00.000 | 0 | 1 | 0 | 0 | python,django,python-social-auth | 24,626,165 | 1 | false | 0 | 0 | You can still use the @login_required decorator | 1 | 0 | 0 | I can build a Facebook login with Python Social Auth. But in order to access the full content of the site, I want users to be authorised first. Would it be possible to get guidelines on how such a solution should be built? | User authorization in Python Social Auth | 0 | 0 | 1 | 145
24,619,330 | 2014-07-07T20:31:00.000 | 1 | 0 | 1 | 1 | python,bash,multiprocessing,numerical-methods | 24,635,814 | 2 | false | 0 | 0 | I would think they are about the same. I would prefer screen just because I have an easier time managing it. Depending on the scripts usage, that could also have some effect on time to process. | 1 | 4 | 0 | Running a python script on different nodes at school using SSH. Each node has 8 cores. I use GNU Screen to be able to detach from a single process.
Is it more desirable to:
Run several different sessions of screen.
Run a single screen process and use & in a bash terminal.
Are they equivalent?
I am not sure if my experiments are poorly coded and taking an inordinate amount of time (very possible) OR my choice to use 1. is slowing the process down considerably. Thank you! | Multiprocessing with Screen and Bash | 0.099668 | 0 | 0 | 869 |
24,620,225 | 2014-07-07T21:31:00.000 | 0 | 0 | 0 | 1 | python,django,subprocess,popen,django-celery | 24,620,400 | 1 | true | 1 | 0 | I suggest using Celery.
subprocess, multiprocessing, and threading all are powerful tools, but are in general hard to get working. They're more useful if you already have a working system, are running at the limit of the hardware, and don't mind spending a good deal of effort to get lower latency or parallel processing or higher throughput. | 1 | 0 | 0 | I am trying to execute a python script from a webpage through a Django view. Other questions related to a known script from within the Django project directory. I need to be able to execute a script anywhere on the system given the file path. Eventually, multiple scripts will be run in parallel using Celery or a similar method. Should I be using some permutation of popen or sub-processing? | Execute Python Script from Django | 1.2 | 0 | 0 | 292 |
24,621,580 | 2014-07-07T23:36:00.000 | -2 | 0 | 1 | 0 | python,random | 24,621,616 | 2 | false | 0 | 0 | random.choice() is your friend. | 1 | 6 | 0 | How to make python choose randomly between multiple strings? | How to make python choose randomly between multiple strings? | -0.197375 | 0 | 0 | 12,744 |
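For completeness, a one-line demonstration (the strings are arbitrary examples):

    import random
    print(random.choice(["red", "green", "blue"]))   # picks one string at random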
24,622,614 | 2014-07-08T02:03:00.000 | 0 | 0 | 0 | 1 | python,mercurial,cmd,executable | 24,623,014 | 1 | true | 0 | 0 | So you have Mercurial calling a hook that runs a Python script that launches an executable that is a Python script compiled to an exe? Likely the 3-layer-deep script is being run without a "terminal" (headless), but it sounds like if you un-snarled a few of those layers you might be better off. | 1 | 0 | 0 | I have tied a Python (2.7) script to a commit in Mercurial. In this script, a .exe is called (via the subprocess module), which has previously been generated via cx_Freeze.
When I run a commit through the hg workbench, everything works as intended... the Python script runs, calls the executable, and does its stuff, and the commit works without a hitch.
However, when running a commit via "hg commit" in an initial cmd prompt, the executable portion of this setup never appears. I know the Python script still runs. No errors are ever displayed/returned.
Am I missing something obvious, and is there a simple way to get this executable to run properly even when called from a commit in a cmd prompt? | Run .exe from a python script called by mercurial in cmd shell | 1.2 | 0 | 0 | 136
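One way to un-snarl the headless layer is to give the frozen exe its own console window when the hook launches it. A minimal sketch; the hook signature follows Mercurial's in-process hook convention, and the exe path is hypothetical:

    # hook module - a minimal sketch; the exe path is hypothetical
    import subprocess

    CREATE_NEW_CONSOLE = 0x00000010  # win32 process-creation flag

    def pre_commit(ui, repo, **kwargs):
        # launch the frozen exe in its own console so its prompts are visible
        proc = subprocess.Popen([r'C:\tools\prompt.exe'],
                                creationflags=CREATE_NEW_CONSOLE)
        return proc.wait()  # a non-zero return aborts the commit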
24,622,714 | 2014-07-08T02:16:00.000 | 11 | 0 | 0 | 0 | python,django | 24,644,610 | 5 | false | 1 | 0 | A Django app doesn't really map to a page; rather, it maps to a function. Apps would be for things like a "polls" app or a "news" app. Each app should have one main model with maybe a couple of supporting ones. For example, a news app could have a model for articles, with supporting models like authors and media.
If you wanted to display several apps at once, you would need an integration app. One way to do this is to have a "project" app next to your polls and news apps. The project app is for your specific website - it holds the logic that is specific to this application. It would have your main urls.py, your base template(s), things like that. If you needed information from multiple apps on one page, you would have a view that returns info from multiple apps. Say, for example, that you have a view that returns the info for a news article, and one that returns info for a poll. You could have a view in your project app that calls those two view functions and sticks the returned data into a different template that has spots for both of them.
In this specific example, you could also have your polls app set up so that its returned info could be embedded, and then embed the info into a news article. In this case you wouldn't really have to link the apps together at all as part of your development; it could be done as needed on the content creation end. | 1 | 15 | 0 | I've looked all over the net and found no answer.
I'm new to Django. I've done the official tutorial and read many more, but unfortunately they all focus on creating only one application. Since it's not common for a page to be a single app, I would like to ask some Django guru to explain how I can have multiple apps on a webpage. Say I go to mysite.com and I see a poll app displaying a poll, a gallery app displaying some pics, a news app displaying the latest news, etc., all accessed via one URL. I know I do the displaying in the template, but I obviously need access to the data. Do I create a view that returns multiple views? Any advice, links, and examples are much appreciated. | Django - Multiple apps on one webpage? | 1 | 0 | 0 | 11,743
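A minimal sketch of such an integration view, pulling data from two apps into one template; the model and field names are hypothetical:

    # project/views.py - a minimal sketch; model and field names are hypothetical
    from django.shortcuts import render
    from news.models import Article
    from polls.models import Poll

    def home(request):
        context = {
            'articles': Article.objects.order_by('-published')[:5],
            'poll': Poll.objects.order_by('-created').first(),
        }
        return render(request, 'home.html', context)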
24,627,525 | 2014-07-08T08:51:00.000 | 20 | 0 | 1 | 1 | python,pip | 34,416,503 | 28 | false | 0 | 0 | python -m pip really works for the problem "Fatal error in launcher: Unable to create process using '"'". Worked on Windows 10. | 15 | 231 | 0 | Searching the net, this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 1 | 0 | 0 | 338,555
24,627,525 | 2014-07-08T08:51:00.000 | 1 | 0 | 1 | 1 | python,pip | 36,456,213 | 28 | false | 0 | 0 | I solved my problem on Windows. If you have both Python 2 and Python 3 installed, go into one interpreter's \Scripts folder and rename each file.exe to file27.exe so the two versions don't collide. For example, in my D:\Python27\Scripts I renamed django-admin.exe to django-admin27.exe, and that solved it. | 15 | 231 | 0 | Searching the net, this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0.007143 | 0 | 0 | 338,555
24,627,525 | 2014-07-08T08:51:00.000 | 2 | 0 | 1 | 1 | python,pip | 41,300,328 | 28 | false | 0 | 0 | I had the same issue on Windows 10. After trying all the previous solutions the problem persisted, so I decided to uninstall my Python 2.7 and install version 2.7.13, and it works perfectly. | 15 | 231 | 0 | Searching the net, this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0.014285 | 0 | 0 | 338,555
24,627,525 | 2014-07-08T08:51:00.000 | 5 | 0 | 1 | 1 | python,pip | 25,314,022 | 28 | false | 0 | 0 | Here's how I solved it:
1. Open pip.exe in 7-Zip and extract __main__.py to the Python\Scripts folder.
   In my case it was C:\Program Files (x86)\Python27\Scripts.
2. Rename __main__.py to pip.py.
3. Run it! python pip.py install something
EDIT:
If you want to be able to do pip install something from anywhere, do this too:
4. Rename pip.py to pip2.py (to avoid import pip errors).
5. Create C:\Program Files (x86)\Python27\pip.bat with the following contents:
   python "C:\Program Files (x86)\Python27\Scripts\pip2.py" %1 %2 %3 %4 %5 %6 %7 %8 %9
6. Add C:\Program Files (x86)\Python27 to your PATH (if it is not already).
7. Run it! pip install something | 15 | 231 | 0 | Searching the net, this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0.035699 | 0 | 0 | 338,555
24,627,525 | 2014-07-08T08:51:00.000 | 1 | 0 | 1 | 1 | python,pip | 32,795,747 | 28 | false | 0 | 0 | Please add this address:
C:\Program Files (x86)\Python33
to the Windows PATH variable. First make sure this is the folder where the Python executable resides; only then add the path to the PATH variable.
To append addresses to the PATH variable, go to Control Panel -> System -> Advanced System Settings -> Environment Variables -> System Variables -> Path -> Edit, then append the above-mentioned path and click Save. | 15 | 231 | 0 | Searching the net, this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0.007143 | 0 | 0 | 338,555
24,627,525 | 2014-07-08T08:51:00.000 | -2 | 0 | 1 | 1 | python,pip | 32,889,820 | 28 | false | 0 | 0 | Instead of calling ipython directly, load it through Python, such as:
$ python "full path to ipython.exe" | 15 | 231 | 0 | Searching the net, this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | -0.014285 | 0 | 0 | 338,555
24,627,525 | 2014-07-08T08:51:00.000 | 0 | 0 | 1 | 1 | python,pip | 72,440,994 | 28 | false | 0 | 0 | I had this problem when using Django REST framework and simplejwt. All I had to do was upgrade pip and reinstall the packages. | 15 | 231 | 0 | Searching the net, this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0 | 0 | 0 | 338,555
24,627,525 | 2014-07-08T08:51:00.000 | 0 | 0 | 1 | 1 | python,pip | 60,451,992 | 28 | false | 0 | 0 | You can remove the previous Python folder and its environment-variable path entry from your PC, then reinstall Python; that will solve it. | 15 | 231 | 0 | Searching the net, this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0 | 0 | 0 | 338,555