Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
30,020,944 | 2015-05-04T00:23:00.000 | 2 | 0 | 1 | 0 | python,database,concurrency | 30,021,940 | 2 | false | 0 | 0 | In short, yes it is perfectly reasonable (and actually preferred) to let your database worry about the concurrency of your database operations.
Any relevant database driver (MongoDB included) will handle concurrent operations for you automatically. | 1 | 6 | 0 | I'm using the Python multiprocessing library to generate several processes that each write to a shared (MongoDB) database. Is this safe, or will the writes overwrite each other? | Can concurrent processes write to a shared database? | 0.197375 | 0 | 0 | 1,753 |
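A minimal sketch of the pattern described above, assuming pymongo 3+ and a locally running mongod; the database and collection names are placeholders. Each worker opens its own client, since client instances are not fork-safe:

```python
from multiprocessing import Pool
from pymongo import MongoClient

def worker(n):
    # one client per process; the server serializes concurrent writes
    client = MongoClient('localhost', 27017)
    client.testdb.results.insert_one({'n': n, 'square': n * n})
    client.close()

if __name__ == '__main__':
    Pool(4).map(worker, range(100))
```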
30,023,529 | 2015-05-04T06:04:00.000 | 0 | 0 | 1 | 0 | python,pygame,pip | 30,042,074 | 1 | false | 0 | 0 | Okay, I have found out why it's not working - nothing to do with pygame or pip. It seems the issue is permissions. The __init__.py files aren't able to be read by student users. If I manually change the permissions for the pygame lib folder, it seems to work. | 1 | 0 | 0 | I'm trying to install Pygame for my students. I'm using pip to install the Windows 64-bit binaries .whl file. Unfortunately, I'm finding that although it installs for myself (the administrator), all other users of the computer receive an AttributeError: 'module' object has no attribute 'init' when they try to call pygame.init()
To install pygame, I'm calling:
c:\Python34\scripts\pip install pygame-1.9.2a0-cp34-none-win_amd64.whl | Installing Pygame for Windows 64 for multiple users | 0 | 0 | 0 | 486
30,026,870 | 2015-05-04T09:35:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,hyperlink,pycharm | 40,915,552 | 1 | false | 1 | 0 | As outlined above, you need to use a terminal that supports clicking on URLs.
On Linux, most terminals do this, e.g. GNOME Terminal, Terminator, etc.
On Mac, try iTerm2. | 1 | 4 | 0 | My python program outputs a set of URL links. When I run this in PyCharm, I can directly click on the links and that will open them up in the browser. However, when I run the python file by double-clicking on the .py file, the links are not clickable. I want the links to be clickable so they take me to the browser directly.
Please support solutions with explanations as I am still learning. Thanks! | Clickable links in terminal output | 0.197375 | 0 | 1 | 3,643 |
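As an illustration (not from the original answer): printing the full URL with its scheme lets such terminals auto-detect it, and terminals that support the OSC 8 escape sequence can even render a labelled hyperlink. The URL here is a placeholder:

```python
# most terminals linkify complete URLs automatically
print('https://example.com/results?id=42')

# explicit hyperlink via OSC 8, for terminals that support it
url, label = 'https://example.com', 'open in browser'
print('\033]8;;%s\033\\%s\033]8;;\033\\' % (url, label))
```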
30,029,827 | 2015-05-04T12:10:00.000 | 0 | 0 | 0 | 0 | python,sqlite,date,datetime,sqlalchemy | 30,030,232 | 3 | false | 0 | 0 | Can you try sqlalchemy.extract('year', Task.timestamp) == ... ? | 1 | 0 | 0 | I have a table that stores tasks submitted by users, with timestamps. I would like to write a query that returns certain rows based on when they were submitted (was it this day/week/month..).
To check if it was submitted this week, I wanted to use the date.isocalendar()[1] function. The problem is that my timestamps are datetimes, so I would need to transform those to dates.
Using func:
filter(func.date(Task.timestamp) == datetime.date(datetime.utcnow()))
works properly.
But I need the date object's isocalendar() method, so I try
filter(func.date(Task.timestamp).isocalendar()[1]==datetime.date(datetime.utcnow()).isocalendar()[1])
and it's no good, I get AttributeError: Neither 'Function' object nor 'Comparator' object has an attribute 'isocalendar'
If I make a simple query and try datetime.date(task.timestamp).isocalendar()[1] it works properly.
How do I get it to work in the query's filter? | SQLAlchemy func issue with date and .isocalendar() | 0 | 1 | 0 | 1,710 |
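One hedged workaround for the question above, assuming the SQLite backend implied by the tags: compute the week number inside the database with strftime instead of calling isocalendar() on a column. Note that SQLite's %W is a Monday-based week number, not exactly the ISO week, and session/Task come from the question's own models:

```python
from datetime import datetime
from sqlalchemy import func

this_week = datetime.utcnow().strftime('%W')   # zero-padded week number
tasks = session.query(Task).filter(
    func.strftime('%W', Task.timestamp) == this_week)  # week computed in SQL
```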
30,030,659 | 2015-05-04T12:51:00.000 | 9 | 0 | 1 | 0 | python,random,uniform | 40,606,389 | 5 | false | 0 | 0 | With random.random() the output lies in [0.0, 1.0), and it takes no input parameters,
whereas random.uniform() takes two parameters, with which you specify the range of the random number.
e.g.
import random as ra
print ra.random()
print ra.uniform(5,10)
OUTPUT:-
0.672485369423
7.9237539416 | 1 | 63 | 1 | In python for the random module, what is the difference between random.uniform() and random.random()? They both generate pseudo random numbers, random.uniform() generates numbers from a uniform distribution and random.random() generates the next random number. What is the difference? | In python, what is the difference between random.uniform() and random.random()? | 1 | 0 | 0 | 131,400 |
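To make the relationship concrete: uniform(a, b) is documented as equivalent to scaling and shifting random(), so the two differ only in range and arguments:

```python
import random

x = random.random()        # float in [0.0, 1.0), no arguments
y = random.uniform(5, 10)  # float in [5, 10]; same as 5 + (10 - 5) * random.random()
```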
30,032,693 | 2015-05-04T14:28:00.000 | 0 | 0 | 1 | 0 | python,file,class | 30,033,258 | 3 | false | 0 | 0 | Variables are simply names that are bound to object references. Variables have no type. The type lives within the object itself.
Almost everything in python is an object.
When you open a file, you are creating a file object in memory. In order to prevent python from automatically garbage collecting this object, you bind it to a variable name that holds a reference to it. If all you are doing is processing a file, then having the file object temporarily live in memory might be desirable, since it will be cleaned up after the process is run.
The filename you pass to open is simply a string used to locate the file on disk. | 1 | 2 | 0 | What is the difference between a file object and a filename of a class? I'm slightly confused about this. My current answer to this question is: A file object is an object that can alter a file and a filename is just the name of the file that is being altered. But I don't think I have it quite right. | file object Vs. a filename | 0 | 0 | 0 | 2,257
30,032,693 | 2015-05-04T14:28:00.000 | 6 | 0 | 1 | 0 | python,file,class | 30,032,847 | 3 | true | 0 | 0 | There seems to be more confusion than you're aware of, so let's go through them all:
File Object: an object returned by a call to open (or in python 2, file)
File-like Object: an object that is not necessarily returned by open but still has the member functions read, write, etc. just like a real File Object.
Filename: the name of a file, usually passed as an argument to open.
Filename of a Class: the name of the python source file in which the class was defined. | 2 | 2 | 0 | What is the difference between a file object and a filename of a class? I'm slightly confused about this. My current answer to this question is: A file object is an object that can alter a file and a file name is just the name of the file that is being altered. But I don't think I have it quite right. | file object Vs. a filename | 1.2 | 0 | 0 | 2,257 |
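A short illustration of the distinction (the file name is hypothetical):

```python
filename = 'notes.txt'       # a filename: just a string naming the file
file_obj = open(filename)   # a file object: has read(), write(), close(), ...
text = file_obj.read()
file_obj.close()
```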
30,035,123 | 2015-05-04T16:31:00.000 | 0 | 0 | 0 | 0 | python,matplotlib,mayavi | 30,125,166 | 1 | false | 0 | 0 | Mayavi is not really good at plotting 2D diagrams; you can cheat a little by setting your camera position parallel to a 2D image. If you want to plot 2D diagrams, try using matplotlib. | 1 | 0 | 1 | I have a dataset of a tennis game. This dataset contains the ball positions in each rally and the current score. I already 3D-visualized the game and ball positions in mayavi.
Now I want to plot 2D line diagrams in mayavi that visualize the score development after specific events (such as after: a break, a set-win, a set-loss, ...).
I came up with some ideas, but none of them are satisfying:
I could use imshow and "draw" the diagram
I could use points3d to plot the diagram
Maybe I can somehow use pyplot to plot the diagram, then make a screenshot and then plot this screenshot in mayavi... Any idea if this is possible?
Do you have any other idea how I could plot a 2d line diagram in mayavi? | Plot 2d line diagram in mayavi | 0 | 0 | 0 | 783 |
30,036,175 | 2015-05-04T17:26:00.000 | 0 | 1 | 0 | 0 | python-2.7,packet,scapy | 47,197,343 | 1 | false | 0 | 0 | pkt.time gives you the epoch time that is included in the FRAME layer of the packet in wireshark.
By the same notation, pkt[IP].time would give you the time included in the IP layer of the packet in wireshark. But the IP layer has no time field, so I don't think this command will work.
I was also wondering how to interpret packet time such as 1430123453.564733
If anyone has an idea or knows where I can find such information it would be very helpful.
Thanks. | Scapy packet time interpretation | 0 | 0 | 1 | 501 |
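On interpreting a value such as 1430123453.564733: it is a Unix epoch timestamp (seconds since 1970-01-01 UTC), which the standard library converts directly:

```python
from datetime import datetime

ts = 1430123453.564733
print(datetime.utcfromtimestamp(ts))  # 2015-04-27 08:30:53.564733 (UTC)
```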
30,037,065 | 2015-05-04T18:15:00.000 | 0 | 0 | 0 | 1 | python,scheduled-tasks,celery,background-process | 30,047,898 | 2 | false | 0 | 0 | One possible method I thought of, though not ideal, is to patch the celery.worker.heartbeat Heart() class.
Since we already use heartbeats, the class allows for a simple modification to its start() method (add another self.timer.call_repeatedly() entry), or an additional self.eventer.on_enabled.add() __init__ entry which references a new method that also uses self.timer.call_repeatedly() to perform a periodic task. | 1 | 1 | 0 | Is there any Celery functionality or preferred way of executing periodic background tasks locally when using a single worker? Sort of like a background thread, but scheduled and handled by Celery?
celery.beat doesn't seem suitable as it appears to be simply tied to a consumer (so could run on any server) - that's the type of scheduling I was after, but just a task that is always run locally on each server running this worker (the task does some cleanup and stats relating to the main task the worker handles).
I may be going about this the wrong way, but I'm confined to implementing this within a celery worker daemon. | Celery worker: periodic local background task | 0 | 0 | 0 | 541 |
30,041,371 | 2015-05-04T22:55:00.000 | 0 | 1 | 1 | 0 | python,cython,python-internals,python-extensions,function-signature | 30,050,402 | 2 | false | 0 | 0 | "This system fails for classes defined in cython because X"
Doesn't this mean that the answer you're looking for is X?
To know if a class is of the kind that crashes some function such as inspect.getargspec(X.__init__), just call inspect.getargspec(X.__init__) in a try/except block. | 1 | 1 | 0 | Given a class object in Python, how can I determine if the class was defined in an extension module (e.g. C, C++, Cython), as opposed to being defined in standard python?
inspect.isbuiltin returns True for functions defined in an extension module, and False for functions defined in python, but it unfortunately does not have the same behavior for classes -- it returns False for both kinds of classes.
(The larger goal here is that we've got a system that generates a command line API for a set of classes based on parsing the docstring and signature of their __init__ functions. This system fails for classes defined in cython because inspect.getargspec doesn't work correctly on these classes, so I'm trying to figure out a workaround) | Check if class object is defined in extension module | 0 | 0 | 0 | 462 |
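A sketch of the try/except approach suggested above; getargspec raises TypeError for __init__ methods implemented in C or Cython:

```python
import inspect

def init_argspec(cls):
    """Return the argspec of cls.__init__, or None if it is not introspectable."""
    try:
        return inspect.getargspec(cls.__init__)
    except TypeError:  # raised for extension-module classes
        return None
```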
30,043,202 | 2015-05-05T02:16:00.000 | 0 | 1 | 1 | 0 | python,png,appdata | 30,076,292 | 1 | true | 0 | 0 | The problem is file size. I split the file into ten parts, and each file can be opened. Thanks for all the answers. | 1 | 0 | 0 | I have used seqdiag to generate a sequence diagram, and it generates a 3MB png file. That sounds great, right? But something goes wrong when I open it. When I open the file, appdata/local/temp gains 3GB and generates big files named ~PI*.tmp. After I send the png file to others, they can't open the file on their computer. What is the root cause, and how can I send this kind of file to others? | python seqdiag file is big and can't be opened on another computer? | 1.2 | 0 | 0 | 118
30,045,659 | 2015-05-05T06:26:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi,gpio | 30,187,490 | 1 | false | 0 | 0 | There is a built-in function GPIO.cleanup() that cleans up all the ports you've used.
For the power and ground pins, they are not under software control. | 1 | 0 | 0 | Basically, I need to disable or turn off a GPIO pin whenever I execute a method in python.
Does anyone know how to disable the pins? | How to disable GPIO pins on the RaspberryPi? | 0 | 0 | 0 | 4,582
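A minimal sketch with RPi.GPIO; pin 18 is just an example channel:

```python
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)
GPIO.output(18, GPIO.LOW)
GPIO.cleanup()  # releases every channel this script configured
```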
30,047,341 | 2015-05-05T08:03:00.000 | 1 | 0 | 0 | 0 | python,nlp,tf-idf | 30,048,240 | 1 | false | 0 | 0 | Did you check the stop_words and max_features? If you provide values for either of these two, it will exclude some words. | 1 | 0 | 1 | I am trying to build a TF-IDF model with TfidfVectorizer. The feature name list (i.e., the number of columns of the sparse matrix) is shorter than the number of distinct words in the documents, even though I set min_df to 1. What happened? | TfidfVectorizer does not use the whole set of words in all documents? | 0.197375 | 0 | 0 | 525
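For illustration, the parameters that can shrink the vocabulary (toy documents made up here; get_feature_names() is get_feature_names_out() in newer scikit-learn releases):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ['the cat sat', 'the dog sat', 'the cat ran']
vec = TfidfVectorizer(min_df=1, stop_words='english', max_features=None)
X = vec.fit_transform(docs)
print(vec.get_feature_names())  # only words surviving stop_words/max_features
```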
30,048,364 | 2015-05-05T08:55:00.000 | 3 | 0 | 0 | 0 | python,django,django-rest-framework,django-authentication,http-token-authentication | 30,053,316 | 2 | true | 1 | 0 | In my case I have written custom middle-ware to handle the situation.
1. When I am login using API and api path is **/api/accounts/login**. So when request comes on this url then I am removing sessionid and csrftoken both.
When HTTP_AUTHORIZATION is available in request, I remove the session and csrftoken.
Using above two removal situation can be handled in my case.
Thanks to everyone for helping.!! | 1 | 2 | 0 | I want to use both token and session based authentication in my application with the priority of token. I have created two portal with the same URL one is using session and other is using token. So when session is available in cookie then token based request goes failed with "CSRF Token is missing" error message.
One solution I have in mind is middleware where I can give priority to the token. If both are available in the request, then the custom middleware will remove the session-related stuff, keep only the token-related information, and proceed.
If anyone has a solution to this problem, please post it as an answer.
Thanks in advance. | How to handle session and token based authentication simultaneously using middleware in django? | 1.2 | 0 | 0 | 755
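A hedged sketch of such a middleware (old-style Django middleware; the class name is mine, and it must be listed before the session middleware so the cookies are gone before authentication runs):

```python
class TokenPriorityMiddleware(object):
    def process_request(self, request):
        # if a token is supplied, drop session/CSRF cookies so that
        # token authentication takes priority
        if 'HTTP_AUTHORIZATION' in request.META:
            request.COOKIES.pop('sessionid', None)
            request.COOKIES.pop('csrftoken', None)
```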
30,049,051 | 2015-05-05T09:25:00.000 | 0 | 0 | 0 | 0 | python,arrays,sorting | 30,059,056 | 1 | false | 0 | 0 | I figured out why it didn't work with np.sort().
I misused the structured array function.
With the following dtype, I have created my array with the following line:
Data = np.zeros((78000,11),dtype=dtype2)
I thought that I had to create one column per structured field. Wrong! The right line is: Data = np.zeros((78000,1),dtype=dtype2) | 1 | 0 | 1 | I'm facing something strange: the function sort and the attribute argsort don't give me the same results.
I have a Data array (CFD results) with the following structure:
dtype([('nodenumber', '<f8'), (' x-coordinate', '<f8'), (' y-coordinate', '<f8'), (' z-coordinate', '<f8'), (' pressure', '<f8'), (' total-pressure', '<f8'), (' x-velocity', '<f8'), (' y-velocity', '<f8'), (' z-velocity', '<f8'), (' temperature', '<f8'), ('total-temperature\n', '<f8')])
So, each column contains one measured parameter at one point. I would like to sort the array by increasing 'z-coordinate' AND of course move line by line during the sorting (1 line <=> 1 point and its corresponding values).
I tried this function:
Data_sorted = np.sort(Data,axis=0,kind='mergesort',order=' z-coordinate')
It returns me a sorted array but the lines are completely messed up. For example, the previous point 1 has now a completely different z-coordinate. I don't want that.
Then I used this function (the 3rd column is the z-coordinate):
order = Data[:, 3].argsort(kind='mergesort')
Data_sorted = np.take(Data, order, axis=0)
And... it works ! The array has been sorted by increasing z-coordinate and the points are still coherent (it seems, at least).
Do you have an idea why these two similar functions provide different results?
Because in a 2nd step, I will need to do something like that:
Data_sorted = np.sort(Data,axis=0,kind='mergesort',order=(' z-coordinate',' y-coordinate'))\= | Python: np.sort VS array.argsort() | 0 | 0 | 0 | 906 |
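A small demonstration of the fix described above: with a 1-D structured array, np.sort(..., order=...) keeps each record intact, which is exactly what the (78000, 11) shape broke (toy field names here):

```python
import numpy as np

dt = np.dtype([('z', '<f8'), ('y', '<f8')])
data = np.zeros(5, dtype=dt)            # 1-D: one structured record per row
data['z'] = [3.0, 1.0, 2.0, 5.0, 4.0]
data['y'] = [30.0, 10.0, 20.0, 50.0, 40.0]

data_sorted = np.sort(data, order=('z', 'y'))  # records move as whole units
```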
30,049,490 | 2015-05-05T09:45:00.000 | 0 | 0 | 1 | 0 | python,opencv,training-data | 32,845,800 | 2 | true | 0 | 0 | I later found the answer and would like to share it in case someone faces the same challenges.
You need pictures only of the different people you are trying to recognise. I created my training set with 30 images of every person (6 persons) and figured out that histogram equalisation can play an important role when creating the training set and later when recognising faces. Using histogram equalisation, model accuracy was greatly increased. Another thing to consider is eye-axis alignment, so that all pictures have their eye axes aligned before they enter face recognition. | 1 | 0 | 1 | I am using python and openCV to create face recognition with Eigenfaces. I stumbled on a problem, since I don't know how to create the training set.
Do I need multiple faces of the people I want to recognize (myself, for example), or do I need a lot of different faces to train my model?
First I tried training my model with 10 pictures of my face and 10 pictures of ScarJo face, but my prediction was not working well.
Now I'm trying to train my model with 20 different faces (mine is one of them).
Am I doing it wrong and if so what am I doing wrong? | What does eigenfaces training set have to look like? | 1.2 | 0 | 0 | 247 |
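A sketch of the histogram-equalisation preprocessing mentioned above (file names are placeholders; cv2.equalizeHist expects a single-channel 8-bit image):

```python
import cv2

img = cv2.imread('face_01.png', cv2.IMREAD_GRAYSCALE)
img = cv2.equalizeHist(img)           # normalize lighting across the set
cv2.imwrite('face_01_eq.png', img)
```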
30,049,523 | 2015-05-05T09:47:00.000 | 0 | 0 | 1 | 0 | python,compression | 30,050,024 | 2 | false | 0 | 0 | If you want to compress a file without writing to it, you could run a shell command such as gzip using the Python facilities subprocess, os.popen, or os.system. | 1 | 0 | 0 | I have a text file which I constantly append data to. When processing is done I need to gzip the file. I tried several options like shutil.make_archive, tarfile, and gzip, but could not get it to work. Is there no simple way to compress a file without actually writing to it?
Let's say I have a mydata.txt file and I want it to be gzipped and saved as mydata.txt.gz. | How to compress a processed text file in Python? | 0 | 0 | 0 | 991
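A sketch of that subprocess route (by default the gzip command replaces mydata.txt with mydata.txt.gz once it finishes):

```python
import subprocess

subprocess.check_call(['gzip', 'mydata.txt'])  # produces mydata.txt.gz
```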
30,049,608 | 2015-05-05T09:50:00.000 | 0 | 0 | 1 | 1 | python,linux,python-idle | 30,049,734 | 3 | false | 0 | 0 | Install the "idle" package. Many distros break the core python installation into several pieces, and idle is usually one that is packaged separately. | 2 | 2 | 0 | Is there a way to have a python IDLE shell, like how it is seen on windows? Sorry if this topic has been raised before, I did not find it.
Using Linux mint 17.1 rebecca, Cinnamon 32 bit | Python shell in linux | 0 | 0 | 0 | 4,968 |
30,049,608 | 2015-05-05T09:50:00.000 | 0 | 0 | 1 | 1 | python,linux,python-idle | 30,049,783 | 3 | false | 0 | 0 | If you really want IDLE, just run python -m idlelib.idle | 2 | 2 | 0 | Is there a way to have a python IDLE shell, like how it is seen on windows? Sorry if this topic has been raised before, I did not find it.
Using Linux mint 17.1 rebecca, Cinnamon 32 bit | Python shell in linux | 0 | 0 | 0 | 4,968 |
30,051,770 | 2015-05-05T11:29:00.000 | 0 | 0 | 1 | 0 | python,localization,windows-installer,installation,multiple-languages | 30,155,408 | 2 | false | 0 | 0 | A lot of this seems to be a question about how bdist_msi works, and it seems to be a tool that nobody here knows anything about. I would get some clarification from that tool somehow. The docs seem non-existent to me.
It might generate only one MSI in English. If so then you need to use a tool like Orca to translate the MSI text into each language and save each difference as a transform, an .mst file. Then you'd write a program that gets the language from the user and installs the MSI with a TRANSFORMS= command line that refers to the .mst file for the language.
It might work like Visual Studio, where each language has its own separate MSI file. Again, you'd need a setup program asks the user what language and you fire off the appropriate MSI.
In general, there's no need to ask the user what language to use. I have seen those dialogs but I don't know why they bother. I think it's better to assume the current user language rather than show a dialog that says "Choose a language". You'd need to localise that "Choose a language" text to the user's language anyway unless you assume that everyone already understands English.
You might be able to use something like WiX Burn to package your MSI and provide localisation, not sure. | 1 | 0 | 0 | I have created a python application and created a .msi installer for it, so that it can be installed on other machines.
I would like to know how the user can change the language during the installation, i.e. the localization of the MSI. | How to localize a msi setup installer file to support various languages? | 0 | 0 | 0 | 381
30,054,372 | 2015-05-05T13:28:00.000 | 3 | 0 | 1 | 0 | python,conda,miniconda | 30,086,036 | 1 | true | 0 | 0 | activate currently doesn't support Powershell. You'll need to modify your path manually, or else call the full path to the Python in the environment. | 1 | 3 | 0 | When I create a virtual environment in Miniconda on Windows 8 and activate it in PowerShell ("activate env"), it says the environment is being activated, but upon typing "conda env list", it shows me that I'm still in the root environment. I checked the envs folder in Miniconda, and the folder with the env is there and seems to be fine with a Python distribution and everything, but for some reason I'm unable to go into the environment itself. For some reason, it works when I use the Command Prompt instead, but this isn't an ideal solution, since I'd like to be able to do everything in PowerShell.
Any help would be appreciated. | Can't activate virtual environment in Miniconda | 1.2 | 0 | 0 | 1,391 |
30,056,331 | 2015-05-05T14:50:00.000 | 3 | 0 | 0 | 0 | python,scikit-learn | 56,863,216 | 4 | false | 0 | 0 | AdaBoostClassifier
BaggingClassifier
BayesianGaussianMixture
BernoulliNB
CalibratedClassifierCV
ComplementNB
DecisionTreeClassifier
ExtraTreeClassifier
ExtraTreesClassifier
GaussianMixture
GaussianNB
GaussianProcessClassifier
GradientBoostingClassifier
KNeighborsClassifier
LabelPropagation
LabelSpreading
LinearDiscriminantAnalysis
LogisticRegression
LogisticRegressionCV
MLPClassifier
MultinomialNB
NuSVC
QuadraticDiscriminantAnalysis
RandomForestClassifier
SGDClassifier
SVC
_BinaryGaussianProcessClassifierLaplace
_ConstantPredictor | 2 | 22 | 1 | I need a list of all scikit-learn classifiers that support the predict_proba() method. Since the documentation provides no easy way of getting that information, how can I get this programmatically? | How to list all scikit-learn classifiers that support predict_proba() | 0.148885 | 0 | 0 | 8,716
30,056,331 | 2015-05-05T14:50:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn | 72,497,753 | 4 | false | 0 | 0 | If you are interested in a specific type of estimator (say, classifier), you could go with:
import sklearn
estimators = sklearn.utils.all_estimators(type_filter="classifier")
for name, class_ in estimators:
if hasattr(class_, 'predict_proba'):
print(name) | 2 | 22 | 1 | I need a list of all scikit-learn classifiers that support the predict_proba() method. Since the documentation provides no easy way of getting that information, how can I get this programmatically? | How to list all scikit-learn classifiers that support predict_proba() | 0 | 0 | 0 | 8,716
30,056,603 | 2015-05-05T15:02:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn,roc,precision-recall | 30,084,402 | 1 | false | 0 | 0 | The threshold values have two major differences.
The orders are different. roc_curve has thresholds in decreasing order, while precision_recall_curve has thresholds in increasing order.
The numbers are different. In roc_curve, n_thresholds = len(np.unique(probas_pred)), while in precision_recall_curve the number n_thresholds = len(np.unique(probas_pred)) - 1. In the latter, the smallest threshold value from roc_curve is not included. At the same time, the last precision and recall values are 1. and 0. respectively, with no corresponding threshold. Therefore, the numbers of items for tpr, fpr, precision and recall are the same.
So, back to your question, how to make a table to include tpr, fpr, precision and recall with corresponding thresholds? Here are the steps:
Discard the last precision and recall values
Reverse the precision and recall values
Compute the precision and recall values corresponding to the lowest threshold value from the thresholds of roc_curve
Put all the values into the same table | 1 | 0 | 1 | I need to make a table with the TPR and FPR values, as well as precision and recall. I am using the roc_curve and precision_recall_curve functions from the sklearn.metrics package in python. My problem is that each function gives me a different vector for the thresholds, and I need only one, to merge the values as columns in a single table. Could anyone help me?
Thanks in advance | How to get the same threshold values for both functions precision_recall_curve and roc_curve in sklearn.metrics | 0 | 0 | 0 | 1,042
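A sketch of steps 1-2 above, with toy labels and scores; exact threshold contents can vary across scikit-learn versions:

```python
import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve

y_true = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])

fpr, tpr, roc_thr = roc_curve(y_true, scores)               # decreasing thresholds
prec, rec, pr_thr = precision_recall_curve(y_true, scores)  # increasing thresholds

# drop the terminal (precision=1, recall=0) pair, then flip to match
# roc_curve's decreasing-threshold order
prec, rec, pr_thr = prec[:-1][::-1], rec[:-1][::-1], pr_thr[::-1]
```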
30,056,836 | 2015-05-05T15:11:00.000 | 0 | 1 | 0 | 1 | php,python,iis | 30,057,339 | 2 | false | 0 | 0 | Since you provide no example code or description of what you are doing... there are a few things to consider.
Anything running in the context of a webpage in IIS is running in a different context than a logged in user.
The first part of that is simply that file system level permissions might be different for the IIS user account. The proper way you want to handle that is by assigning the necessary changes at the filesystem level for the IIS user. Do not change the IIS user if you do not understand the ramifications of doing that.
The next part is that certain operations cannot be done in the context of the IIS user account (regardless of account permissions), because there are certain things that only a logged in user with access to the console/desktop can do.
Certain operations called from IIS are purposely blocked (shell.execute) regardless of permissions, account used, etc. This occurs in versions of IIS in Windows Server 2008 and later and is done for security. | 2 | 0 | 0 | I have a server A where some logs are saved, and another server B with a web server (IIS) on it.
I can access serverA from Windows Explorer with zero problems, but when I want to access it from serverB with some PHP code, it doesn't work.
I made a python script that accesses the file from serverA on serverB. It works if I run that script from CMD, but when I run that script from PHP code it doesn't work anymore.
I run the IIS server as a domain account that has access to serverA.
I tried running it as LocalService, NetworkService, System, and LocalUser, but with no success.
That script is a simple open command, so the problem is not from python. | IIS access to a remote server in same domain | 0 | 0 | 0 | 78
30,056,836 | 2015-05-05T15:11:00.000 | 0 | 1 | 0 | 1 | php,python,iis | 30,117,742 | 2 | true | 0 | 0 | Resolved.
Uninstall IIS and use XAMPP.
No problem found till now, everything works okay.
So use XAMPP/WAMP! | 2 | 0 | 0 | I have a server A where some logs are saved, and another server B with a web server (IIS) on it.
I can access serverA from Windows Explorer with zero problems, but when I want to access it from serverB with some PHP code, it doesn't work.
I made a python script that accesses the file from serverA on serverB. It works if I run that script from CMD, but when I run that script from PHP code it doesn't work anymore.
I run the IIS server as a domain account that has access to serverA.
I tried running it as LocalService, NetworkService, System, and LocalUser, but with no success.
That script is a simple open command, so the problem is not from python. | IIS access to a remote server in same domain | 1.2 | 0 | 0 | 78
30,057,240 | 2015-05-05T15:30:00.000 | 1 | 1 | 1 | 0 | python,list,python-2.7,io,save | 37,910,499 | 4 | false | 0 | 0 | I've done some profiling of many methods (except the numpy method) and pickle/cPickle is very slow on simple data sets. The fastest way depends on what type of data you are saving. If you are saving a list of strings and/or integers, the fastest way that I've seen is to just write it directly to a file using a for loop and ','.join(...); read it back in using a similar for loop with .split(','). | 1 | 10 | 0 | What's the fastest way to save/load a large list in Python 2.7? I apologize if this has already been asked; I couldn't find an answer to this exact question when I searched...
More specifically, I'm testing out methods for simulating something, and I need to compare the result from each method I test out to an exact solution. I have a Python script that produces a list of values representing the exact solution, and I don't want to re-compute it every time I run a new simulation. Thus, I want to save it somewhere and just load the solution instead of re-computing it every time I want to see how good my simulation results are.
I also don't need the saved file to be human-readable. I just need to be able to load it in Python. | What's the fastest way to save/load a large list in Python 2.7? | 0.049958 | 0 | 0 | 17,077 |
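A sketch of that join/split approach (works for flat lists of numbers or comma-free strings; function names are mine):

```python
def save_list(path, items):
    with open(path, 'w') as f:
        f.write(','.join(str(x) for x in items))

def load_list(path, cast=float):
    with open(path) as f:
        return [cast(tok) for tok in f.read().split(',')]
```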
30,057,321 | 2015-05-05T15:34:00.000 | 0 | 0 | 1 | 0 | python,pep8 | 30,057,386 | 1 | true | 1 | 0 | At the end of the day, what's more important is consistency within your own code. It's important that your convention be constant - or, if you are coding with a group, that your convention be consistent across your project.
With that said, each language will have its own style guides. Style guides offer consistency across different projects and help the community use an agreed-upon convention. Python style guides will be different than Node.js style guides. So I would recommend following PEP when you are using Python. | 1 | 1 | 0 | I created a small project in Python. The main app uses some plugins. I would like to know which naming convention for namespaces is better. PEP-0423 recommends to use the "project" namespace for the main program and "projectcontrib.pluginname" for plugins. But django or node.js often use the "project-pluginname" convention. What are the pros and cons of both conventions? | What is better: projectcontrib.pluginname or project-pluginname as a python project naming convention for small project? | 1.2 | 0 | 0 | 43
30,063,430 | 2015-05-05T21:21:00.000 | 0 | 0 | 1 | 0 | python,arrays,image,image-processing,colors | 30,063,676 | 1 | false | 0 | 0 | The image is being opened as a color image, not as a black and white one. The shape is 181x187x3 because of that: the 3 is there because each pixel is an RGB value. Quite often images in black and white are actually stored in an RGB format. For an image array image, if np.all(image[:,:,0]==image[:,:,1]) and so on, then you can just choose to use any one channel (e.g., image[:,:,0]). Alternatively, you could take the mean with np.mean(image,axis=2).
Note too that the range of values will depend on the format, and so, depending upon what you mean by color intensity, you may need to normalize them. In the case of a jpeg, they are probably uint8s, so you may want image[:,:,0].astype('float')/255 or something similar. | 1 | 0 | 1 | Suppose I have got a black and white image; how do I convert the colour intensity at each point into a numerical value that represents its relative intensity?
I checked somewhere on the web and found the following:
Intensity = np.asarray(PIL.Image.open('test.jpg'))
What's the difference between asarray and array?
Besides, the shape of the array Intensity is '181L, 187L, 3L'. The size of the image test.jpg is 181x187, so what does the extra '3' represent?
And are there any other better ways of extracting the colour intensity of an image?
thank you. | how to extract the relative colour intensity in a black and white image in python? | 0.197375 | 0 | 0 | 473 |
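Putting the answer together (the file name comes from the question):

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open('test.jpg'))  # shape (H, W, 3): R, G, B planes
gray = img.mean(axis=2)                   # collapse the three channels
relative = gray / 255.0                   # uint8 range -> [0.0, 1.0]
```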
30,063,974 | 2015-05-05T21:58:00.000 | 3 | 1 | 1 | 0 | python,camera,raspberry-pi,camera-calibration | 30,064,269 | 3 | false | 0 | 0 | What do you mean by "black and white image," in this case? There is no "true" black and white image of anything. You have sensors that have some frequency response to light, and those give you the values in the image.
In the case of the Raspberry Pi camera, and almost all standard cameras, there are red, green and blue sensors that have some response centered around their respective frequencies. Those sensors are laid out in a certain pattern, as well. If it's particularly important to you, there are cameras that only have an array of a single sensor type that is sensitive to a wider range of frequencies, but those are likely going to be considerable more expensive.
You can get raw image data from the raspi camera with picamera. This is not the "raw" format described in the documentation and controlled by format, which is really just the processed data before encoding. The bayer option will return the actual raw data. However, that means you'll have to deal with processing by yourself. Each pixel in that data will be from a different color sensor, for example, and will need to be adjusted based on the sensor response.
The easiest thing to do is to just use the camera normally, as you're not going to get great accuracy measuring light intensity in this way. In order to get accurate results, you'd need calibration, and you'd need to be specific about what the data is for, how everything is going to be illuminated, and what data you're actually interested in. | 1 | 2 | 0 | Are there any ways to set the camera on the raspberry pi to take a black and white image, like using some commands/code in the picamera library?
Since I need to compare the relative light intensity of a few different images, I'm worried that the camera will already do some adjustments itself when the object is under different illuminations, so even if I convert the image to black and white later on, the object's 'true' black and white image will have been lost.
thanks
edit: basically what I need to do is to capture a few images of an object when the camera position is fixed, but the position of the light source is changed (and so the direction of illumination is changed as well). Then for each point on the image I will need to compare the relative light intensity of the different images. As long as the light intensity, or the 'brightness', of all the images is relative to the same scale, then it's ok, but I'm not sure if this is the case. I'm not sure if the camera will adjust something like contrast automatically itself when an image is 'inherently' darker or brighter. | how to set the camera in raspberry pi to take black and white image? | 0.197375 | 0 | 0 | 16,954
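Regarding the automatic-adjustment worry in the question: picamera lets you freeze exposure and white balance between shots. A sketch along the lines of the library's consistent-capture recipe:

```python
import time
import picamera

with picamera.PiCamera() as camera:
    camera.iso = 100
    time.sleep(2)                             # let gains settle
    camera.shutter_speed = camera.exposure_speed
    camera.exposure_mode = 'off'              # lock exposure gains
    gains = camera.awb_gains
    camera.awb_mode = 'off'                   # lock white balance
    camera.awb_gains = gains
    for i in range(3):
        camera.capture('image%d.jpg' % i)     # comparable brightness scale
```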
30,065,905 | 2015-05-06T01:22:00.000 | 1 | 0 | 1 | 0 | python,numpy,matrix,matrix-multiplication | 30,066,780 | 1 | false | 0 | 0 | Numpy dot is implemented in multiarraymodule.c as PyArray_MatrixProduct2. The implementation it actually uses is dependent upon a number of factors.
If you have numpy linked to a BLAS implementation, your dtypes are all double, cdouble, float, or cfloat, and your arrays have 2 or fewer dimensions each, then numpy hands off the array to the BLAS implementation. What it does then is dependent upon the package you're using.
Otherwise, no, it doesn't do this. However, at least on my machine, doing this (or just a dot product in general) with a transpose and einsum is ten times slower than just using dot, because dot pushes to BLAS. | 1 | 1 | 1 | While multiplying large matrices (say A and B, A.dot(B)), does numpy use spatial locality by computing the transpose of the B and using row wise multiplication, or does it access the elements of B in column-wise fashion which would lead to many cache misses. I have observed that memory bandwidth is becoming a bottleneck when I run multiple instances of the same program. For example, if I run 4 independent instances of a program which does matrix multiplication (for large matrices) on a 20 core machine, I only see a 2.3 times speedup. | Does numpy use spatial locality in memory while doing matrix multiplication? | 0.197375 | 0 | 0 | 312 |
30,067,051 | 2015-05-06T03:39:00.000 | 14 | 0 | 0 | 0 | python,numpy,pandas,scipy,data-analysis | 30,067,640 | 3 | true | 0 | 0 | Pandas is not particularly revolutionary and does use the NumPy and SciPy ecosystem to accomplish its goals along with some key Cython code. It can be seen as a simpler API to the functionality with the addition of key utilities like joins and simpler group-by capability that are particularly useful for people with table-like data or time-series. But, while not revolutionary, Pandas does have key benefits.
For a while I had also perceived Pandas as just utilities on top of NumPy for those who liked the DataFrame interface. However, I now see Pandas as providing these key features (this is not comprehensive):
Array of Structures (independent-storage of disparate types instead of the contiguous storage of structured arrays in NumPy) --- this will allow faster processing in many cases.
Simpler interfaces to common operations (file-loading, plotting, selection, and joining / aligning data) make it easy to do a lot of work in little code.
Index arrays which mean that operations are always aligned instead of having to keep track of alignment yourself.
Split-Apply-Combine is a powerful way of thinking about and implementing data-processing
However, there are downsides to Pandas:
Pandas is basically a user-interface library and not particularly suited for writing library code. The "automatic" features can lull you into repeatedly using them even when you don't need to and slowing down code that gets called over and over again.
Pandas typically takes up more memory as it is generous with the creation of object arrays to solve otherwise sticky problems of things like string handling.
If your use-case is outside the realm of what Pandas was designed to do, it gets clunky quickly. But, within the realms of what it was designed to do, Pandas is powerful and easy to use for quick data analysis. | 2 | 7 | 1 | I have been using numpy/scipy for data analysis. I recently started to learn Pandas.
I have gone through a few tutorials and I am trying to understand what the major improvements of Pandas over Numpy/Scipy are.
It seems to me that the key idea of Pandas is to wrap up different numpy arrays in a Data Frame, with some utility functions around it.
Is there something revolutionary about Pandas that I just stupidly missed? | Python - What are the major improvement of Pandas over Numpy/Scipy | 1.2 | 0 | 0 | 3,241 |
30,067,051 | 2015-05-06T03:39:00.000 | 1 | 0 | 0 | 0 | python,numpy,pandas,scipy,data-analysis | 30,096,156 | 3 | false | 0 | 0 | A main point is that it introduces new data structures like dataframes, panels, etc., and has good interfaces to other structures and libs. So in general it's more a great extension to the python ecosystem than an improvement over other libs. For me it's a great tool among others like numpy and bcolz. Often I use it to reshape my data and get an overview before starting to do data mining, etc. | 2 | 7 | 1 | I have been using numpy/scipy for data analysis. I recently started to learn Pandas.
I have gone through a few tutorials and I am trying to understand what the major improvements of Pandas over Numpy/Scipy are.
It seems to me that the key idea of Pandas is to wrap up different numpy arrays in a Data Frame, with some utility functions around it.
Is there something revolutionary about Pandas that I just stupidly missed? | Python - What are the major improvement of Pandas over Numpy/Scipy | 0.066568 | 0 | 0 | 3,241 |
30,067,124 | 2015-05-06T03:46:00.000 | 1 | 0 | 0 | 0 | javascript,python,django,angularjs,django-rest-framework | 30,067,648 | 1 | false | 1 | 0 | I think the second one is better. In a restful-style project, the front-end code is completely decoupled from the back-end code.
Besides, separating them into two projects is good for deployment. If you want to upgrade the front-end code, just upload it and restart nginx; the front-end code is totally static. | 1 | 1 | 0 | I am very new to Django Rest Framework (DRF) and AngularJs. What I am wondering here is the best way to work with these two.
DRF and AngularJs together (most tutorials showed me this) in one project
DRF as backend and AngularJs in frontend as 2 different projects
I am very confused, though I feel the 2nd approach is better, but am still not sure. Can anyone please help me with the pros and cons of both approaches? | Django Rest Framework and Angular | 0 | 0 | 0 | 292
30,073,603 | 2015-05-06T10:07:00.000 | 0 | 0 | 0 | 0 | python,selenium | 66,779,985 | 7 | false | 1 | 0 | As @Epiwin mentioned, it is a horrible bug in TightVNC, or in the combination of ChromeDriver and it. I removed it completely, installed TigerVNC instead, and eventually it worked.
BTW, I don't know why, but the speed of the remote connection increased after migrating to TigerVNC, which was another good point for me. | 1 | 6 | 0 | So I see this issue on the Google Selenium site, but it has not been resolved yet.
When you call element.send_keys('12345'),
it will return '123'. The 5 is parsed as backspace....
Is there a workaround for this?
Using latest selenium, chrome, chromedriver, python 2.7, ubuntu 12.04 | Python: Selenium send_key can't type numbers like 5 or 6 | 0 | 0 | 1 | 4,525 |
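One hedged workaround while the environment bug persists: bypass key events and set the value through JavaScript (the element id is hypothetical, and this skips any keystroke-driven page logic; find_element_by_id is the older Selenium API matching this era):

```python
element = driver.find_element_by_id('amount')
driver.execute_script("arguments[0].value = arguments[1];", element, "12345")
```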
30,075,234 | 2015-05-06T11:20:00.000 | 0 | 1 | 0 | 1 | python,django,openshift | 30,107,482 | 2 | false | 0 | 0 | This issue should be fixed now. Please open a request at help.openshift.com if you continue to have issues with it. | 1 | 0 | 0 | I have an hourly cron job which has been running on an OpenShift free gear for almost a year with no problem. But for the past 2 days, the cron job has stopped running automatically. I have been googling around and still cannot find what went wrong. Here is what I have checked/done to date:
the service I use to keep the site alive is still up and running as normal. So it is not a case of being idle.
force restarted the app. The cron job still does not start automatically as it used to.
made fake changes to the cron script file and pushed to OpenShift. This still did not fix it.
log files look ok
Mon May 4 13:01:07 EDT 2015: START hourly cron run
Mon May 4 13:01:29 EDT 2015: END hourly cron run - status=0
Any advice or pointers as to why it just stopped working when there was no change to the app? Thank you. | OPENSHIFT Cron job just stopped working | 0 | 0 | 0 | 567
30,076,216 | 2015-05-06T12:05:00.000 | 1 | 0 | 0 | 0 | python,django | 30,156,768 | 1 | false | 1 | 0 | There is no problem with having helpers, but I think you chose the wrong naming strategy.
In your case you should name your helpers after the job they help you do. For example, StringHelpers or UrlUtils. | 1 | 1 | 0 | Sometimes we need auxiliary functions, classes, etc.
Sometimes we put such entities into modules or packages.
Currently I have three variants:
Use helpers package or module.
Use utils package or module (like in Java).
Don't use anything like that, because it is an anti-pattern. If you have helpers, then you have an application design problem.
What do you prefer? | Organising common code in django/python | 0.197375 | 0 | 0 | 214 |
30,079,853 | 2015-05-06T14:36:00.000 | 0 | 0 | 0 | 0 | python,postgresql,openerp,openerp-7 | 52,777,087 | 1 | false | 1 | 0 | If you have frontend or backend instances, make sure that:
All code matches in both instances
Update your module
Check if the column exists in your table | 1 | 0 | 0 | I have migrated my OpenERP 7.0 DB from dev to production, and now every time I try to create a new customer, I get this error:
View error Can't find field 'blocked' in the following view parts composing the view of object model 'res.partner':
res.partner.followup.form.inherit Either you wrongly customized this
view, or some modules bringing those views are not compatible
Any idea of why I see this error?
I am a magento guy so have no clue whatsoever. | View error Can't find field 'blocked' in the following view parts composing the view of object model | 0 | 0 | 0 | 410 |
30,088,739 | 2015-05-06T22:23:00.000 | 3 | 0 | 1 | 0 | python,argparse | 30,090,766 | 1 | true | 0 | 0 | To get the help for a subparser, use a command like python prog.py cmd1 -h. To get the help for a sub-subparser, python prog.py cmd1 cmd12 -h should work.
There isn't a means, with the default help mechanism, to show the help for the main parser and all the subparsers (and sub-subparsers) with one command. It just gets too complicated.
I'd suggest custom usage and description. That includes titles and descriptions for the subparsers, etc. | 1 | 1 | 0 | I have multiple levels of subparsers within subparsers, but when I run the program with the help flag, I see help messages and choices only for top-level options. How can I see help for all suboptions, or for a specific suboption at a deeper level? | Multiple level argparse subparsers | 1.2 | 0 | 0 | 637
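A minimal sketch of the commands mentioned above (command names are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(prog='prog.py')
sub = parser.add_subparsers(dest='cmd')
cmd1 = sub.add_parser('cmd1', help='first-level command')
sub1 = cmd1.add_subparsers(dest='subcmd')
sub1.add_parser('cmd12', help='second-level command')

args = parser.parse_args()
# python prog.py -h             -> top-level options and sub-commands
# python prog.py cmd1 -h        -> cmd1's options and its sub-commands
# python prog.py cmd1 cmd12 -h  -> help for the cmd12 sub-sub-parser
```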
30,088,815 | 2015-05-06T22:30:00.000 | 0 | 0 | 0 | 0 | python,django,postgresql,deployment,web-deployment | 30,089,080 | 2 | false | 1 | 0 | From my point of view, database always should be created before deployment. And the information of the database must be posted to the settings.py
for the application it self, I think virtualenv can be very helpful in these cases with requirements.txt
You run the application in your virtual environment and then export your dependencies using
pip freeze > requirements.txt
Then on the new server you create the database, insert the configuration into your settings, and install the dependencies:
pip install -r /path/to/requirements.txt
Run migrations, and you are done. | 1 | 0 | 0 | Sorry I'm new to this specific topic.
I have a website implemented in django and AskBot; it also has a DB (PostgreSQL). I want to create a deployment package which can be distributed to any customer, such that the customer can have their own server. The deployment package should be platform independent, so it should work on all operating systems.
Can you tell me what tools are available to achieve this? | Deploying websites in django virtual machine | 0 | 0 | 0 | 460
30,089,003 | 2015-05-06T22:48:00.000 | 3 | 0 | 1 | 0 | python,file | 30,089,116 | 2 | false | 0 | 0 | Another way to do it is '\1'. Cheers! | 1 | 2 | 0 | Wondering how to write the unreadable ^A into a file using Python. By unreadable ^A, I mean that when we use the command "set list" in vi, we can see unreadable characters like ^I for '\t' and $ for '\n'.
thanks in advance,
Lin | how to write unreadable ^A into output file in Python? | 0.291313 | 0 | 0 | 243 |
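For illustration: '\1' is the octal escape for byte 0x01 (Control-A), so either spelling writes the character that vi renders as ^A:

```python
with open('out.txt', 'w') as f:
    f.write('\1')     # octal escape for Control-A
    f.write('\x01')   # same byte, hex escape
```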
30,089,374 | 2015-05-06T23:27:00.000 | 3 | 0 | 0 | 1 | python-3.x,ibm-cloud | 30,096,077 | 3 | true | 0 | 0 | You can define the start command in a file called Procfile. Create the Procfile in the root of your app code that you push to Bluemix. The contents of the Procfile should look like this:
web: python3 appname.py
where appname.py is the name of your python script to run | 1 | 1 | 0 | I am trying to push a python3 app to Bluemix, but get the error msg "missing start command". I have tried to add -c "python appname.py" as Python usually has on Windows and -c "python3 appname.py" as for Python on Linux, but neither works for me. Can anyone give me the right start command to use? | What should the Python start command look like in Bluemix? | 1.2 | 0 | 0 | 2,739
30,089,723 | 2015-05-07T00:03:00.000 | 0 | 0 | 1 | 0 | python,command-line,pygame | 31,355,786 | 2 | false | 0 | 1 | It could be that you named your file pygame.py, so now, whenever you try to import pygame, it imports from THAT pygame.py file. Maybe deleting/renaming the file and reinstalling pygame would work; I had the same problem a year ago and it worked. | 2 | 0 | 0 | I have recently encountered an issue whenever I try and use pygame. The window that comes up hangs and just ends up crashing. I know this program works because I have run it many times before (this also occurs with anything else I try and run in pygame). Strangely, in the command prompt window it asks an input question from a completely unrelated program I was working on last night. | Pygame crashing | 0 | 0 | 0 | 190
30,089,723 | 2015-05-07T00:03:00.000 | 0 | 0 | 1 | 0 | python,command-line,pygame | 41,851,946 | 2 | false | 0 | 1 | Seems to be a setup problem.
possible solutions:
rename files (as Mooper suggested)
create the smallest possible use for pygame (once without / once with windows popping up), in order to tell us where the error comes up / does not.
look at the code again; it seems to me like an error with a never-ending loop / overflowing stack / a function not returning after being called.
last possible idea: reinstall pygame | 2 | 0 | 0 | I have recently encountered an issue whenever I try and use pygame. The window that comes up hangs and just ends up crashing. I know this program works because I have run it many times before (this also occurs with anything else I try and run in pygame). Strangely, in the command prompt window it asks an input question from a completely unrelated program I was working on last night. | Pygame crashing | 0 | 0 | 0 | 190 |
30,090,942 | 2015-05-07T02:30:00.000 | 1 | 0 | 1 | 1 | python,windows | 30,090,978 | 1 | true | 0 | 0 | You don't need to create a py2exe executable for this, you can simply run the Python executable itself (assuming it's installed of course), passing the name of your script as an argument.
And one way to do that is to use the Task Scheduler, which can create tasks to be run at boot time, under any user account you have access to. | 1 | 0 | 0 | I want to run a python script which should always start when Windows boots.
I believe I can create an executable Windows file from python by using py2exe... But how do I make it a startup service which will be triggered at boot?
Is there any way? | is there any possible way to run a python script on boot in windows operating system? | 1.2 | 0 | 0 | 111
30,092,249 | 2015-05-07T04:55:00.000 | 3 | 0 | 1 | 1 | python,file | 30,092,316 | 5 | false | 0 | 0 | There are two good reasons.
If your program crashes or is unexpectedly terminated, then output files may be corrupted.
It's good practice to close what you open. | 1 | 9 | 0 | I know it is a good habit to use close to close a file that is not used any more in Python. I have tried opening a large number of files without closing them (in the same Python process), but did not see any exceptions or errors. I have tried both Mac and Linux. So, I am just wondering if Python is smart enough to manage file handles and close/reuse them automatically, so that we do not need to care about closing files?
thanks in advance,
Lin | about close a file in Python | 0.119427 | 0 | 0 | 2,274 |
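The usual way to get deterministic closing is a with block, which closes the file even when an exception is raised (file name is a placeholder):

```python
with open('data.txt') as f:  # closed automatically when the block exits
    data = f.read()
```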
30,095,337 | 2015-05-07T08:10:00.000 | 2 | 0 | 0 | 0 | python,flask,python-sphinx,tableofcontents | 30,135,026 | 2 | true | 1 | 0 | Turns out there is an html file which gets generated in the build folder called http-routingtable.html which does what I'm asking above. Given the purpose of sphinx (documentation) you would think this would be... documented... clearly. Hopefully others experiencing the problem run across this post in the future. | 1 | 3 | 0 | We use Sphinx to document our Flask API. It does a pretty good job, but we are having a problem navigating the documentation it generates.
We document each blueprint separately. Our blueprints are pretty big. Each is about 1000 lines long, and our docstrings are extremely detailed. The result is a Sphinx page which lists endpoints, but with so much intervening documentation between the endpoints that it is very difficult to understand the page. The normal thing to create here would be a table of contents at the top. I believe sphinx autosummary might also be called for here, though I don't know for sure as I have never used it.
Unfortunately autosummary doesn't work, because it doesn't seem to be built to handle the same sort of input as autoflask(sphinxcontrib.autohttp.flask). Does anyone know a way to tell sphinx to create a table of contents which creates within-page links to all the members on the page which is compatible with autoflask? | Summary or toc for autoflask Sphinx | 1.2 | 0 | 0 | 539 |
30,100,641 | 2015-05-07T12:08:00.000 | 0 | 0 | 0 | 0 | python,flask,webapp2,python-babel | 30,123,126 | 1 | false | 1 | 0 | Solved by just using the flask app and the way I wanted to avoid: on every request, there is a callback to the app instance and to the localeselector decorator; the language is set beforehand in an attribute in flask.g. Basically, by the book, I guess. | 1 | 2 | 0 | The setup of the problem is simple enough:
a user selects a language preference (this preference can be read from the user's session);
based on this choice, load the appropriate .mo from the available translations;
(no separate domains are set up, if it makes any difference)
Problem: since this has to be done outside the scope of the flask app, the app cannot be instantiated there to use @babel.localeselector. Instead, I use a simple function based on webapp2's i18n extension which, using Babel's support function, loads a given translation and returns a translation instance (Translations: "PROJECT VERSION"). (inb4 'why not use webapp2 already?': too many libs already).
From this point on, it is not clear to me what to do with this instance. How can I get Babel to use this specific instance? (at the moment, it always uses the default one, no 'best_match' involved). | Indicate a specific .mo file for Babel to load | 0 | 0 | 0 | 116 |
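A sketch of using such an instance directly, bypassing Flask-Babel's selector machinery; the translations directory and messages domain are assumptions, and session is Flask's session from the question:

```python
from babel.support import Translations

lang = session.get('lang', 'en')  # the user's stored preference
translations = Translations.load('translations', [lang], domain='messages')
_ = translations.gettext          # use this instance's gettext directly
```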
30,104,305 | 2015-05-07T14:39:00.000 | 1 | 0 | 0 | 0 | python,dataset,mahout-recommender,delicious-api | 32,162,113 | 1 | false | 0 | 0 | I had the same problem; this works for me:
Before installing the package, change this line:
rss = http_request('url').read()
to:
rss = http_request('http://feeds.delicious.com/v2/rss').read()
It is located in __init__.py
Then you can install the package by running python setup.py install | 1 | 1 | 0 | To experiment with a recommender, I want a good amount of data consisting of a users-bookmarks mapping, optionally with some user info, page tags, etc. I am trying to use pydelicious to do that but am not able to. Following a book I was referring to, I am trying to run get_popular(), but every time it results in a response with the description "something went wrong" | How to use pydelicious to get bookmark data with user mapping | 0.197375 | 0 | 1 | 366
30,107,212 | 2015-05-07T16:49:00.000 | 0 | 0 | 1 | 0 | python,deque | 52,341,854 | 2 | false | 0 | 0 | While you can create a list out of the deque, for elem in list(deque), this is not always optimum if it is a frequently used function: there's a performance cost to it esp. if there is a large number of elements in the deque and you're constantly changing it to an array structure.
A possible alternative without needing to create a list is to use a while loop with some boolean var to control the conditions. This provides for a time complexity of O(1). | 1 | 17 | 0 | I have a deque in Python that I'm iterating over. Sometimes the deque changes while I'm interating which produces a RuntimeError: deque mutated during iteration.
If this were a Python list instead of a deque, I would just iterate over a copy of the list (via a slice like my_list[:], but since slice operations can't be used on deques, I wonder what the most pythonic way of handling this is?
My solution is to import the copy module and then iterate over a copy, like for item in copy(my_deque): which is fine, but since I searched high and low for this topic I figured I'd post here to ask? | Add to a deque being iterated in Python? | 0 | 0 | 0 | 9,550 |
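A small illustration of the snapshot approach:

```python
from collections import deque

d = deque([1, 2, 3])
for item in list(d):         # iterate over a snapshot...
    if item % 2 == 0:
        d.append(item * 10)  # ...so mutating d here is safe
print(d)                     # deque([1, 2, 3, 20])
```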
30,108,107 | 2015-05-07T17:38:00.000 | 1 | 0 | 0 | 0 | python,django,ide,solaris | 30,108,350 | 2 | false | 1 | 0 | I'd suggest using www.pythonanywhere.com.
You can build and host your Django app there for free; it supports Python 2.7 and 3.4 and the most recent Django versions (1.4 to 1.8), and has virtualenv support so you can change whatever you like. It even comes preloaded with many useful libraries.
It doesn't support text completion in the editor, but I don't think that's a deal breaker, considering Python is a lot more concise than most other languages.
Having it on pythonanywhere lets you test it and debug it pretty fast. Also, being a cloud solution, it's 100% portable and you can develop it on any device that lets you access the internet. | 1 | 0 | 0 | Could anyone tell me a well-rounded Python & Django IDE fully compatible with Solaris? I'm only aware of NetBeans, but as far as I know it does not have support for Django, and it also lacks important features for Python like code completion.
Thank you very much | Python/Django IDE for Solaris | 0.099668 | 0 | 0 | 535 |
30,108,404 | 2015-05-07T17:55:00.000 | 2 | 0 | 1 | 0 | python,scope | 30,108,596 | 2 | true | 0 | 0 | You're going against the point of having Scopes at all. We have local and global scopes for a reason. You can't prevent Python from seeing outer scope variables. Some other languages allow scope priority but Python's design principles enforce strong scoping. This was a language design choice and hence Python is the wrong language to try to prevent outer scope variable reading.
Just use better naming methodologies to ensure no confusion, you change up variable names by using the Find-Replace function that most text editors provide. | 1 | 4 | 0 | When defining a python function, I find it hard to debug if I had a typo in a variable name and that variable already exists in the outer scope. I usually use similar names for variables of the same type of data structure, so if that happens, the function still runs fine but just returns a wrong result. So is there a way to prevent python functions to read outer scope variables? | how to prevent python function to read outer scope variable? | 1.2 | 0 | 0 | 841 |
30,111,145 | 2015-05-07T20:33:00.000 | 1 | 0 | 1 | 0 | user-interface,python-3.x | 30,111,555 | 1 | false | 0 | 1 | Tkinter is part of python, and it is quite powerful. You have Notebook, Listbox and such...
There are some limitations, and PyQt and PyGTK are more powerful.
My advice would be to start with Tkinter (typing the widget you want plus "tkinter" into your search engine usually works) and see what you can do.
I am not sure I understand your comment about buttons, but if you are talking about layout, the grid method of Tkinter is very powerful. | 1 | 0 | 0 | I have been using Python for a bit now (5-6 months) and I would finally like to start programming GUIs. Is there a good tutorial for this? Also, I would really like to use something in the default Python distribution rather than something installed separately, unless Py2exe has been updated to 3.x and it supports other modules (modules? add-ons? I'm trying to say third-party GUIs). Thanks!
EDIT: Toolkit? Is that the right word? Also, I have used Tkinter and it feels very limited. I could only get buttons to go at the top and it was just weird. Maybe I was doing it wrong. If you have a good tutorial for that, I would greatly appreciate it. | Integrated GUI for Python3? | 0.197375 | 0 | 0 | 65 |
30,112,333 | 2015-05-07T21:54:00.000 | 0 | 1 | 0 | 0 | python,c++,c,exuberant-ctags | 31,437,333 | 1 | false | 0 | 1 | I do this using UltraEdit, but UltraEdit is not great if you do not like it :-) Its not really an IDE more like an Editor. However the way I do it can most likely be ported to e.g. Eclipse.
I generate the Ctags file my self. and force UE to use the custom generated cTags file. This works like a charm. | 1 | 5 | 0 | What I have is a large amount of C code and a bunch of swig wrappers to export all the functions into python. We like using python for testing, it's great, but my problem is there don't seem to be any editors out there that will share tags between python and C.
What I want is to ctrl+click (or whatever shortcut) on a function in a *.py file and have it go to the function definition in a *.c file.
Geany seems to do an alright job of this but it has some limitations (poor gdb support, etc). Eclipse, netbeans, Qt Creator are all good editors for C (creator being my fav) but they don't support cross-language tags. Eclipse in particular supports python quite well in PyDev but a tag in python is totally separate from a tag in C, and I can't seem to find a way to make them share. Vim/emacs probably do due to the somewhat lower level ctags use but I don't like either of them.
Any suggestions? | Code editor that supports cross-language (c)tags between C and python | 0 | 0 | 0 | 290 |
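For reference, a cross-language tags file like the one the answer above generates by hand can typically be produced with Exuberant Ctags from the project root:
ctags -R --languages=C,Python .
The resulting tags file then contains definitions from both the C sources and the Python wrappers, which is what cross-language go-to-definition needs.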
30,113,538 | 2015-05-07T23:50:00.000 | 0 | 0 | 0 | 0 | python,optimization,scipy,gradient | 30,114,151 | 1 | false | 0 | 0 | What you are getting is the gradient of the cost function with respect to each parameter, in turn.
To picture it, suppose there were only two parameters, x and y. The cost function is a surface z as a function of x and y.
The optimization is finding a minimum point on that surface.
That's where the gradients with respect to both x and y are zero (or close to it).
If either gradient is not zero, you are not at a minimum, and you would descend further.
As a further point, you could well be interested in the curvature, or second derivative, because high curvature means a narrow (precise) minimum, while low curvature means a nearly flat minimum, with very uncertain estimates.
The second derivative in the x,y case would not be a 2-vector, but a 2x2-matrix (called a "Hessian", just to snow your friends).
You might want to think about why it's a 2x2-matrix. | 1 | 0 | 1 | I am using fmin_l_bfgs_b for a bounded minimization on 4 parameters.
I would like to inspect the gradient at the minimum of the cost function and for this I call the d['grad'] parameter as described in the documentation of fmin_l_bfgs_b. My problem is that d['grad'] is an array of size 4 looking like:
'grad': array([ 8.38440428e-05, -5.72697445e-04, 3.21875859e-03, -2.21115926e+00])
I would expect it to be a single value close to zero. Does this have something to do with the number of parameters I am using for the minimization (4)? It's not what I would expect, but any help would be appreciated. | gradient at the minimum in fmin_l_bfgs_b | 0 | 0 | 0 | 255 |
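A minimal sketch tying the answer to the question's setup: the returned gradient has one entry per parameter, and its norm is the single near-zero number one might look at (the quadratic cost function here is a stand-in):
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def cost(p):                               # hypothetical 4-parameter cost
    return np.sum((p - 1.0) ** 2)

x, f, d = fmin_l_bfgs_b(cost, x0=np.zeros(4), approx_grad=True)
print(d['grad'])                           # one partial derivative per parameter
print(np.linalg.norm(d['grad']))           # overall gradient magnitude, ~0 at the minimum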
30,114,069 | 2015-05-08T00:54:00.000 | 1 | 0 | 0 | 0 | python,xml,nested | 30,114,393 | 2 | true | 1 | 0 | Make an empty stack.
Iterating through the list:
if you find a start tag, push it onto the stack.
if you find an end tag, compare it to the entry on top of the stack.
if the stack is empty or the top doesn't match, fail.
if it matches, pop the stack and continue.
At the end of the iteration:
if the stack is empty, declare success.
otherwise fail. | 1 | 0 | 0 | A question about testing proper nesting of XML tags:
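A minimal sketch of this stack check, applied to the tag list from the question below:
def check_nesting(tags):
    stack = []
    for tag in tags:
        if tag.startswith('/'):                      # end tag
            if not stack or stack[-1] != tag[1:]:
                return False                         # mismatch or nothing to close
            stack.pop()
        else:                                        # start tag
            stack.append(tag)
    return not stack                                 # success only if all tags closed

tag_list = ['note', 'to', 'firstname', '/firstname', 'lastname', '/to', '/lastname', '/note']
print(check_nesting(tag_list))                       # False: '/to' arrives while 'lastname' is still open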
I got a list of tags, extracted from top to bottom from an xml file:
Closing tags are clearly indicated by forward slash
The /to and /lastname tags are incorrectly nested; they should be switched: /lastname should appear within the to, /to parent tags.
tag_list = ['note', 'to', 'firstname', '/firstname', 'lastname', '/to', '/lastname', '/note']
What would be the code, or a direction, to spot that the /lastname tag is outside its parent, the to, /to pair?
Cheers. | Detecting incorrect nesting of XML tags in Python | 1.2 | 0 | 1 | 79 |
30,114,325 | 2015-05-08T01:26:00.000 | 1 | 0 | 0 | 0 | python,websocket,real-time,scalability | 30,188,812 | 4 | true | 1 | 0 | If the scenario is
a) The main web server raises a message upon an action (let's say a record is inserted)
b) it notifies the appropriate real-time server
you could decouple these two steps by using an intermediate pub/sub architecture that forwards the messages to the indended recipient.
An implementation would be
1) You have a redis pub-sub channel; upon a client connecting to a real-time socket, you start listening on that channel
2) When the main app wants to notify a user via the real-time server, it pushes a message to the channel; the real-time server gets it and forwards it to the intended user.
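A minimal sketch of this pub/sub decoupling with the redis-py client; the channel name and the user:payload message format are assumptions for illustration:
import redis

r = redis.Redis()                                   # assumes a reachable Redis server

# main-app side: publish a message addressed to one user
r.publish('realtime', 'user42:new_notification')

# realtime-server side: listen and forward to the matching websocket
pubsub = r.pubsub()
pubsub.subscribe('realtime')
for message in pubsub.listen():
    if message['type'] == 'message':
        user_id, _, payload = message['data'].decode().partition(':')
        print('deliver', payload, 'to', user_id)    # stand-in for the websocket send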
This way, you decouple the realtime notification from the main app and you don't have to keep track of where the user is. | 2 | 2 | 0 | Say I have a typical web server that serves standard HTML pages to clients, and a websocket server running alongside it used for realtime updates (chat, notifications, etc.).
My general workflow is when something occurs on the main server that triggers the need for a realtime message, the main server sends that message to the realtime server (via a message queue) and the realtime server distributes it to any related connection.
My concern is, if I want to scale things up a bit, and add another realtime server, it seems my only options are:
1) Have the main server keep track of which realtime server the client is connected to. When that client receives a notification/chat message, the main server forwards that message along to only the realtime server the client is connected to. The downside here is code complexity, as the main server has to do some extra bookkeeping.
2) Or instead have the main server simply pass that message along to every realtime server; only the server the client is connected to would actually do anything with it. This would result in a number of wasted messages being passed around.
Am I missing another option here? I'm just trying to make sure I don't go too far down one of these paths and realize I'm doing things totally wrong. | Scaling a decoupled realtime server alongside a standard webserver | 1.2 | 0 | 1 | 559 |
30,114,325 | 2015-05-08T01:26:00.000 | 0 | 0 | 0 | 0 | python,websocket,real-time,scalability | 30,170,295 | 4 | false | 1 | 0 | Changed the answer because a reply indicated that the "main" and "realtime" servers are alraady load-balanced clusters and not individual hosts.
The central scalability question seems to be:
My general workflow is when something occurs on the main server that triggers the need for a realtime message, the main server sends that message to the realtime server (via a message queue) and the realtime server distributes it to any related connection.
Emphasis on the word "related". Assume you have 10 "main" servers and 50 "realtime" servers, and an event occurs on main server #5: which of the websockets would be considered related to this event?
Worst case is that any event on any "main" server would need to propagate to all websockets. That's a O(N^2) complexity, which counts as a severe scalability impairment.
This O(N^2) complexity can only be prevented if you can group the related connections in groups that don't grow with the cluster size or the total number of connections. Grouping requires state memory to store which group(s) a connection belongs to.
Remember that there's 3 ways to store state:
global memory (memcached / redis / DB, ...)
sticky routing (load balancer configuration)
client memory (cookies, browser local storage, link/redirect URLs)
Where option 3 counts as the most scalable one because it omits a central state storage.
For passing the messages from "main" to the "realtime" servers, that traffic should by definition be much smaller than the traffic towards the clients. There are also efficient frameworks for pushing pub/sub traffic.
My general workflow is when something occurs on the main server that triggers the need for a realtime message, the main server sends that message to the realtime server (via a message queue) and the realtime server distributes it to any related connection.
My concern is, if I want to scale things up a bit, and add another realtime server, it seems my only options are:
1) Have the main server keep track of which realtime server the client is connected to. When that client receives a notification/chat message, the main server forwards that message along to only the realtime server the client is connected to. The downside here is code complexity, as the main server has to do some extra bookkeeping.
2) Or instead have the main server simply pass that message along to every realtime server; only the server the client is connected to would actually do anything with it. This would result in a number of wasted messages being passed around.
Am I missing another option here? I'm just trying to make sure I don't go too far down one of these paths and realize I'm doing things totally wrong. | Scaling a decoupled realtime server alongside a standard webserver | 0 | 0 | 1 | 559 |
30,114,763 | 2015-05-08T02:19:00.000 | 7 | 1 | 1 | 0 | python,c,regex,performance,cython | 30,114,902 | 1 | true | 0 | 0 | This is likely to depend more on the individual implementation than the language.
Just for example, some patterns are O(N^2) with some implementations, but ~O(N) with others. Specifically, most RE implementations are based on NFAs (Non-Deterministic Finite State Automata). To make a long story short, this means they can and will backtrack under some circumstances with some patterns. This gives roughly O(N^2) complexity. A Deterministic Finite State Automaton (DFA) matching the same pattern never backtracks--it always has linear complexity. At the same time, the compilation phase for a DFA is typically more complex than for an NFA (and DFAs don't have all the capabilities of NFAs).
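To illustrate the backtracking behaviour described above, here is a classic pattern that makes Python's NFA-based re engine do heavy backtracking on a near-miss input (the timing grows explosively as the input lengthens):
import re, time

start = time.time()
re.match(r'(a+)+$', 'a' * 25 + 'b')   # near-miss input forces massive backtracking
print(time.time() - start)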
Therefore, with many simple patterns that don't involve backtracking any way, an NFA-based RE engine may easily run faster than a DFA-based engine. But, when the NFA-based RE engine is trying to match a pattern than involves backtracking, it can (and will) slow down drastically. In the latter case, the DFA-based engine may easily be many times faster.
Most RE libraries basically start from a regular expression represented as a string. When you do an RE based search/match, most compile that into a data structure for their NFA/DFA. That compilation step takes some time (not a huge amount, but can become significant, especially if you're working with a lot of different REs). A few RE engines (e.g., Boost XPressive) can compile regular expressions statically--that is, the RE is compiled at the same time as the program's source code. This can eliminate the time to compile the RE from the program's execution time, so if your code spends a significant amount of its time on compiling REs, it could gain a substantial improvement from that (but that's independent of just static typing--at least to my knowledge, you can't get the same in Java or C, or example). A few other languages (e.g., D) provide enough capabilities that you could almost certainly do the same with them, but I'm not aware of an actual implementation for them that you can plan on using right now. | 1 | 4 | 0 | I am looking for benchmarks that compare regular expression speeds between python and statically typed languages like C, Java or C++. I would also like to hear about Cython performance for regular expressions. | How much faster are regular expressions processed in C/Java than in Python? | 1.2 | 0 | 0 | 1,664 |
30,118,631 | 2015-05-08T07:52:00.000 | 1 | 1 | 0 | 0 | python,timer | 30,118,684 | 1 | true | 0 | 0 | Two general ways:
Create a separate timer for each user when they join, do something when the timer fires, and destroy it when the user leaves.
Have one timer set to fire, say, every second (or every ten seconds) and iterate over all the users when it fires to see how long they have been idle (a minimal sketch of this follows).
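A sketch of that second approach, with hypothetical names (joined, on_join, check_idle) and a one-hour reward threshold as an assumption:
import time

joined = {}                                  # nick -> time of last activity

def on_join(nick):
    joined[nick] = time.time()

def on_part(nick):
    joined.pop(nick, None)

def check_idle(reward_after=3600):           # call this from the periodic timer
    now = time.time()
    for nick, since in list(joined.items()):
        if now - since >= reward_after:
            print(nick, 'has idled an hour') # grant the reward here
            joined[nick] = now               # restart that user's timer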
A more precise answer would require deeper insight into your architecture, I’m afraid. | 1 | 0 | 0 | I apologize I couldn't find a proper title, let me explain what I'm working on:
I have a Python IRC bot, and I want to be able to keep track of how long users have been idle in the channel, and allow them to earn things (I have it tied to Skype/Minecraft/my website) for every x hours they're idle in the channel.
I already have everything to keep track of each user and have them validated with the site and so on, but I am not sure how I would keep track of the time they're idle.
I have it capture join/leave/part messages. How can I set up a timer when they join, keep that timer running alongside the timers for all the other users in the channel, and, for each hour they've been idle (not all at the same time), do something and then restart the timer for them? | Keep track of items in array with timer | 1.2 | 0 | 1 | 103 |
30,119,149 | 2015-05-08T08:23:00.000 | 1 | 0 | 0 | 0 | android,python,numpy,scikit-learn | 30,120,067 | 2 | false | 0 | 1 | Depends on what you need....
Python on a server using Flask/Django would allow you to build an HTTP UI or even an API interface for your Android (or any) device.
QPython is a brilliant way to run Python on Android, but it probably won't cope with the whole of scipy, so it depends on which libraries have already been ported across by the QPython team. It's a great tool though and worth a look anyway.
IMHO, learning a bit of Flask for server-side running would be easier and more flexible than using Kivy. | 1 | 1 | 0 | I am having some Python code that heavily relies on numpy/scipy and scikit-learn. What would be the best way to get it running on an Android device? I have read about a few ways to get Python code running on Android, mostly Pygame and Kivy, but I am not sure how these would interact with numpy and scikit-learn.
Or would it be better to consider letting the android application send data to some server where Python is running? | Port Python Code to Android | 0.099668 | 0 | 0 | 2,766 |
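A minimal sketch of the server-side option from the answer above: a Flask endpoint the Android app could POST feature data to. The /predict route is an assumption, and loading of the pre-trained scikit-learn model is elided:
from flask import Flask, request, jsonify

app = Flask(__name__)
# model = ...   assumed: a pre-trained scikit-learn estimator loaded at startup

@app.route('/predict', methods=['POST'])
def predict():
    features = request.get_json()['features']     # sent by the Android client
    result = model.predict([features])            # heavy numpy/sklearn work stays server-side
    return jsonify(prediction=result.tolist())

if __name__ == '__main__':
    app.run(host='0.0.0.0')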
30,124,541 | 2015-05-08T13:00:00.000 | 5 | 0 | 0 | 0 | python,backup,playlist,recovery,grooveshark | 30,194,984 | 2 | false | 0 | 0 | You can access some information left in your browser by checking the localStorage variable.
Go to grooveshark.com
Open dev tools (Right click -> Inspect Element)
Go to Resources -> LocalStorage -> grooveshark.com
Look for library variables: recentListens, library and storedQueue
Parse those variables to extract your songs (a rough sketch follows)
This might not give you your playlists, but it can help retrieve some of your collection. | 1 | 4 | 0 | The Grooveshark music streaming service has been shut down without previous notification. I had many playlists that I would like to recover (playlists I made over several years).
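A rough sketch of that parsing step, assuming you have pasted one of those localStorage values into a local file first; the structure of Grooveshark's data is unknown here, so this only decodes and inspects it:
import json

with open('recentListens.json') as f:      # hypothetical dump of the localStorage value
    data = json.load(f)
print(type(data))
print(str(data)[:500])                     # eyeball the structure before extracting songs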
Is there any way I could recover them? A script or something automated would be awesome. | How can I make a script to recover my Grooveshark playlists now that the service has been shut down? | 0.462117 | 0 | 1 | 12,131 |
30,126,475 | 2015-05-08T14:37:00.000 | 2 | 0 | 1 | 0 | python,oop | 30,126,523 | 4 | false | 0 | 0 | For your question and your requirements, a short answer is "No". | 3 | 2 | 0 | Newbie Python question here - I am writing a little utility in Python to do disk space calculations when given the attributes of 2 different files.
Should I create a 'file' class with methods appropriate to the conversion and then create each file as an instance of that class? I'm pretty new to Python, but ok with Perl, and I believe that in Perl (I may be wrong, being self-taught), from the examples that I have seen, that most Perl is not OO.
Background info - These are IBM z/OS (mainframe) data sets, and when given the allocation attributes for a file on a specific disk type and file organisation (its block size) and then given the allocation parameters for a different disk type & organisation, the space requirements can vary enormously. | Is it Pythonic to use objects wherever possible? | 0.099668 | 0 | 0 | 90 |
30,126,475 | 2015-05-08T14:37:00.000 | 0 | 0 | 1 | 0 | python,oop | 30,126,563 | 4 | false | 0 | 0 | The use of objects is not in itself "object oriented". Functional programming uses objects, too, just not in the same way. Python can be used in a very FP way, even though Python uses objects heavily behind the scenes.
Overuse of primitives can be a problem, but it's impossible to say whether that applies to your case without more data.
I think of OO as an interface design approach: If you are creating tools that are straightforward to interact with (and substitutable) as objects with predictable methods, then by all means, create objects. But if the interactions are straightforward to describe with module-level functions, then don't try too hard to engineer your code into classes. | 3 | 2 | 0 | Newbie Python question here - I am writing a little utility in Python to do disk space calculations when given the attributes of 2 different files.
Should I create a 'file' class with methods appropriate to the conversion and then create each file as an instance of that class? I'm pretty new to Python, but ok with Perl, and I believe that in Perl (I may be wrong, being self-taught), from the examples that I have seen, that most Perl is not OO.
Background info - These are IBM z/OS (mainframe) data sets, and when given the allocation attributes for a file on a specific disk type and file organisation (its block size) and then given the allocation parameters for a different disk type & organisation, the space requirements can vary enormously. | Is it Pythonic to use objects wherever possible? | 0 | 0 | 0 | 90 |
30,126,475 | 2015-05-08T14:37:00.000 | 7 | 0 | 1 | 0 | python,oop | 30,126,539 | 4 | true | 0 | 0 | Definition nitpicking preface: Everything in Python is technically an object, even functions and numbers. I'm going to assume you mean classes vs. functions in your question.
Actually I think one of the great things about Python is that it doesn't embrace classes for absolutely everything as some other languages (e.g., Java and C#).
It's perfectly acceptable in Python (and the built-in modules do this a lot) to define module-level functions rather than encapsulating all logic in objects.
That said, classes do have their place, for example when you perform multiple actions on a single piece of data, and especially when these actions change the data and you want to keep its state encapsulated. | 3 | 2 | 0 | Newbie Python question here - I am writing a little utility in Python to do disk space calculations when given the attributes of 2 different files.
Should I create a 'file' class with methods appropriate to the conversion and then create each file as an instance of that class? I'm pretty new to Python, but ok with Perl, and I believe that in Perl (I may be wrong, being self-taught), from the examples that I have seen, that most Perl is not OO.
Background info - These are IBM z/OS (mainframe) data sets, and when given the allocation attributes for a file on a specific disk type and file organisation (its block size) and then given the allocation parameters for a different disk type & organisation, the space requirements can vary enormously. | Is it Pythonic to use objects wherever possible? | 1.2 | 0 | 0 | 90 |
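Following the accepted answer's rule of thumb, a sketch using the question's block-size setting; the names and the ceiling-division formula are illustrative, not the questioner's actual sizing rules:
# a module-level function is fine for one stateless calculation
def blocks_needed(size_bytes, block_size):
    return -(-size_bytes // block_size)       # ceiling division

# a class earns its keep once several actions share mutable state
class DataSet:
    def __init__(self, size_bytes, block_size):
        self.size_bytes = size_bytes
        self.block_size = block_size

    def reorganise(self, new_block_size):      # an action that changes state
        self.block_size = new_block_size

    def blocks(self):
        return blocks_needed(self.size_bytes, self.block_size)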
30,126,607 | 2015-05-08T14:44:00.000 | 32 | 0 | 0 | 0 | python,flask,flask-admin | 30,292,629 | 2 | true | 1 | 0 | I am the developer of Flask-AppBuilder, so maybe a strong bias here. I will try to give you my most honest view. I do not know Flask-Admin that much, so I will probably make some mistakes.
Flask-Admin and Flask-AppBuilder:
Will both give you an admin interface for Flask with bootstrap.
Will both make their best to get out of your way.
Will both help you develop Flask apps in a more object-oriented style.
Will both let you override almost everything on the admin templates.
Will both support Babel.
Both inspired on Django-Admin.
Pros for Flask-AppBuilder:
Has a nicer look and feel (bias? maybe...).
Security has been taken care of for you, and supports out of the box, database, LDAP, OpenID, Web server integrated (REMOTE_USER), and in the near future OAuth too. Will let you extend the user model and security views.
Granular permissions, creates one permission for each web exposed method and action (you have to try it).
You can easily render Google graphs.
Smaller project, it's easier to request new features, and get your pull requests merged.
MasterDetail views and multiple views can be setup easily.
Backends: supports SQLAlchemy, MongoEngine, GenericInterface (you can integrate with your own builtin data still a bit beta).
Pros for Flask-Admin:
You have to assemble your own security (models, views, auth etc), it's ready though to integrate nicely with flask-security. This can be a pro or a con depending on what you want.
Builtin File Admin.
Bigger project with bigger community.
Backends: supports SQLAlchemy, GeoAlchemy, MongoEngine, Pewee and PyMongo.
Better support for MongoEngine (EmbeddedDocument, ListFields etc..).
Overall, I think Flask-Admin makes no assumptions at all, like Flask itself: you have to code more, but this leaves you freer. Flask-AppBuilder makes some assumptions (on security); you will have to code much less, but some things can get in your way if you're building very specific security models.
Hope this helps you and others; I tried my best to keep the bias out. | 1 | 13 | 0 | I am new to Flask and have noticed that there are two plugins that enable CRUD views and authorized login, Flask-Admin and Flask-AppBuilder.
These two features interest me along with nice Master-Detail views for my model, where I can see both the rows of the master table and the relevant details on the same screen.
Any idea which one to prefer? I see that Flask-AppBuilder has far more commits on GitHub, while Flask-Admin has many more stars.
How can I tell the difference without spending too much time on the wrong choice? | Flask-Admin vs Flask-AppBuilder | 1.2 | 0 | 0 | 8,504 |
30,130,277 | 2015-05-08T18:12:00.000 | 0 | 1 | 0 | 0 | python,numpy,scipy | 30,130,970 | 3 | false | 0 | 0 | Use struct.pack() with the f type code to get them into 4-byte packets. | 1 | 7 | 1 | I need to store a massive numpy vector to disk. Right now the vector that I am trying to store is ~2.4 billion elements long and the data is float64. This takes about 18GB of space when serialized out to disk.
If I use struct.pack() with float32 (4 bytes) I can reduce it to ~9GB. I don't need anywhere near this amount of precision, and disk space is quickly going to become an issue, as I expect the number of values I need to store could grow by an order of magnitude or two.
I was thinking that if I could access the first 4 significant digits I could store those values in an int and only use 1 or 2 bytes of space. However, I have no idea how to do this efficiently. Does anyone have any idea or suggestions? | Binary storage of floating point values (between 0 and 1) using less than 4 bytes? | 0 | 0 | 0 | 934 |
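A sketch of the fixed-point idea from the question, assuming numpy: values in [0, 1] are scaled to 16-bit integers, halving the float32 footprint at a worst-case rounding error of about 8e-6:
import numpy as np

values = np.random.rand(1000)                          # stand-in for the real vector
packed = np.round(values * 65535).astype(np.uint16)    # 2 bytes per value
packed.tofile('values.u16')

restored = np.fromfile('values.u16', dtype=np.uint16) / 65535.0
print(np.abs(values - restored).max())                 # ~ 1 / (2 * 65535)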
30,133,960 | 2015-05-08T22:49:00.000 | 0 | 0 | 1 | 0 | python | 30,134,247 | 1 | false | 0 | 0 | If you're saying that you want to launch the Python interactive terminal after a script finished running (both normally and by keyboard interrupt), then just launch Python with the -i tag. For example: python -i <script-name> <script-args> | 1 | 0 | 0 | I use Ctrl+C each time to kill a running script and go back to the prompt. But then I lose the prompt, only to get "KeyboardInterrupt".
How should I kill a running script and get the python prompt back? | what shall I do to kill a running script and go back to the prompt in python | 0 | 0 | 0 | 48 |
30,134,589 | 2015-05-09T00:10:00.000 | 0 | 1 | 0 | 0 | python | 69,837,261 | 3 | false | 0 | 0 | via the python interface where your python source file is abc.py:
import py_compile
py_compile.compile('abc.py') | 1 | 4 | 0 | I have a python script. Lets say http://domain.com/hello.py, which only prints "Hello, World!".
Is it possible to precompile this Python file?
I get around 300 requests per second and the overhead of compiling is way too high. In Java the server can handle this easily, but for calculations Python is much easier to work with. | Can I pre-compile a python script? | 0 | 0 | 0 | 12,688 |
30,142,780 | 2015-05-09T17:08:00.000 | 2 | 0 | 0 | 0 | python,bayesian,pymc,mcmc | 31,457,037 | 1 | false | 0 | 0 | I recently ran (successfully) a model with 2,958 parameters. It was on a 8 Gb Windows machine. You should be fine with 750. | 1 | 1 | 1 | I want to sample from my posterior distribution using the pymc package.
I am wondering if there is a limit on the number of dimensions such an algorithm can handle. My log-likelihood is the sum of 3 Gaussians and 1 mixture of Gaussians. I have approximately 750 parameters in my model. Can pymc handle such a large number of parameters? | Number of parameters in MCMC | 0.379949 | 0 | 0 | 267 |
30,148,133 | 2015-05-10T05:41:00.000 | 0 | 0 | 0 | 0 | python | 30,148,423 | 1 | false | 0 | 0 | Try to find hashlib module within your system. It is likely that you have two modules and the one that is being imported is the wrong one (remove the wrong one if it is the case) or you should simply upgrade your python version. | 1 | 0 | 0 | Os: Mac 10.9
Python ver: 2.7.9
database: postgresql 9.3
I am putting the following command to install psycopg2 in my virtualenv:
ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future pip install psycopg2
I am getting the following error:
Traceback (most recent call last):
File "/Users/dialynsoto/python_ex/crmeasy/venv/bin/pip", line 7, in <module>
from pip import main
File "/Users/dialynsoto/python_ex/crmeasy/venv/lib/python2.7/site-packages/pip/__init__.py", line 13, in <module>
from pip.utils import get_installed_distributions, get_prog
File "/Users/dialynsoto/python_ex/crmeasy/venv/lib/python2.7/site-packages/pip/utils/__init__.py", line 18, in <module>
from pip.locations import (
File "/Users/dialynsoto/python_ex/crmeasy/venv/lib/python2.7/site-packages/pip/locations.py", line 9, in <module>
import tempfile
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tempfile.py", line 35, in <module>
from random import Random as _Random
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/random.py", line 49, in <module>
import hashlib as _hashlib
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 138, in <module>
_hashlib.openssl_md_meth_names)
AttributeError: 'module' object has no attribute 'openssl_md_meth_names'
Any clues? | psycopg2 error installation in virtualenv | 0 | 1 | 0 | 213 |
30,150,071 | 2015-05-10T10:03:00.000 | 0 | 0 | 0 | 0 | python,r,distribution,frequency-distribution,goodness-of-fit | 30,153,269 | 2 | false | 0 | 0 | Have you tried transforming the data? Simulate multiple transformations and take the best approximation to a distribution amenable to statistical inference. | 1 | 0 | 1 | I need to find the distribution of data which is from a retail chain network (demand of a product across all stores). I tried to fit a distribution using EasyFit (which checks against 82 distributions to find the best fit), but no distribution fits the data. What can be done? Is there any way to find out whether the data distribution is a sum or convolution of multiple distributions? I have removed the spikes, seasonality, and promotional data from the dataset, but still no distribution fits. | I need to find the distribution of data, which is from a retail chain network. No distribution fits the data | 0 | 0 | 0 | 714 |
30,151,258 | 2015-05-10T12:14:00.000 | 1 | 0 | 1 | 0 | ipython,python-3.4,qtconsole | 30,160,102 | 1 | true | 0 | 0 | Reposting as an answer:
If you just run ipython3 in a terminal, what you get is a pure terminal interface, it's not running a kernel that the Qt console can talk to.
If you run ipython3 console, you'll get a similar interface but it will be talking to a kernel, so you can start a Qt console to interact with it. You can either run %qtconsole from inside that interface, or run ipython qtconsole --existing in a shell to start a Qt console and connect to an existing kernel. | 1 | 1 | 0 | When I run %qtconsole from within ipython3 I get ERROR: Line magic function%qtconsolenot found., but ipython3 qtconsole in terminal starts fine. According to this, how can I run qtconsole instance connected to ipython3 instance? And how to run it on a single core -- rc[0].execute(%qtconsole)?
P.S. If someone knows, please tell me how to escape the ` (backquote) symbol in code mode. | How to run qtconsole connected to ipython3 instance? | 1.2 | 0 | 0 | 493 |
30,152,626 | 2015-05-10T14:35:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,python-3.x | 30,152,899 | 1 | false | 0 | 0 | From what you say, these scripts are independent - you can't share variables between 2.7 and 3. How do you communicate between them?
You could:
Convert the 2.7 script to 3, using the 2to3 script provided by Python3. So you don't have to refactor manually. This usually works quite well.
Run one script, and call the other script from there calling, for example, subprocess to execute the other using the correct interpreter. Something like (from the 2.7 script):
subprocess.call(['python3', 'other_script.py'])  # or
result = subprocess.check_output(['python3', 'other_script.py'])  # if you need the script's output
Write a small bash (or .bat) and call one and the other. | 1 | 1 | 0 | I have problem such is I have to use two scripts, one which is compatible only with Python2.7 and second which is compatible only with Python3.
So my question is if it is possible in any way to do that? (not refactoring the code)
I thought about using execfile() but it also use only one compiler. | using both Python3 and Python2.7 in one app | 0.197375 | 0 | 0 | 74 |
30,155,400 | 2015-05-10T18:54:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,module,web.py | 53,768,539 | 1 | false | 1 | 0 | Sometimes the problem is in the PYTHONPATH, and the IDE modify the environment variables so when you run it from there you don't have a problem. | 1 | 1 | 0 | I am trying to import the web module of python to create a simple Hello World program on my browser using web.py.
When I run it from the command line I am getting errors about the web.py package. If I run it from IDLE, it works fine. | Unable to import python module 'web'? | 0 | 0 | 1 | 83 |
30,156,152 | 2015-05-10T20:10:00.000 | 8 | 0 | 1 | 0 | python,virtualenv | 30,156,162 | 1 | true | 0 | 0 | No, anything that can be generated should not be included.
Dependencies should be managed with something like pip, and the requirements.txt file can be included.
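For reference, the usual requirements.txt workflow:
pip freeze > requirements.txt      # capture the environment's packages
pip install -r requirements.txt    # recreate them after cloning the repo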
The only files under source control should be files you absolutely need to get your development environment going. It can include bootstrapping of some sort, i.e., you can script the creation of the virtual environment, and that would be the first thing you run once you have cloned.
Also consider that your virtual environment contains binary files. You absolutely do not want those in your repository.
As pointed out by @JeremyBank below, your virtual environment can also differ from system to system meaning that your virtual environment will not be portable. | 1 | 5 | 0 | Perhaps this is more an opinion-based question, but I was wondering whether the contents of a virtualenv should be included in a GitHub repository. Why should it or should it not be included? | Virtualenv in source control | 1.2 | 0 | 0 | 941 |
30,157,895 | 2015-05-11T00:01:00.000 | 15 | 0 | 1 | 0 | python,multiprocessing | 30,157,933 | 3 | true | 0 | 0 | Whenever you fork, the entire Python process is duplicated in memory (including the Python interpreter, your code and any libraries, current stack etc.) to create a second process - one reason why forking a process is much more expensive than creating a thread.
This creates a new copy of the python interpreter.
One advantage of having two Python interpreters running is that you now have two GILs (Global Interpreter Locks), and therefore can have true multi-processing on a multi-core system.
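A minimal sketch of the duplication described above (POSIX only; os.fork is not available on Windows):
import os

pid = os.fork()                    # duplicates the whole interpreter state
if pid == 0:
    print('child interpreter, pid', os.getpid())
    os._exit(0)                    # leave the child without running parent cleanup
else:
    os.waitpid(pid, 0)
    print('parent interpreter, pid', os.getpid())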
Threads in one process share the same GIL, meaning only one runs at a given moment, giving only the illusion of parallelism. | 1 | 12 | 0 | I understand that threads in Python use the same instance of the Python interpreter. My question is: is it the same with processes created by os.fork? Or does each process created by os.fork have its own interpreter? | Does python os.fork uses the same python interpreter? | 1.2 | 0 | 0 | 3,520 |
30,158,068 | 2015-05-11T00:29:00.000 | 0 | 1 | 0 | 1 | python,python-2.7,amazon-ec2,config,python-2.6 | 31,082,418 | 1 | false | 0 | 0 | Creating an alias in your ~/bashrc is a good approach.
It sounds like you have not run source ~/.bashrc after you have edited it. Make sure to run this command.
Also keep in mind that when you run sudo python your_script.py it will not use your alias (because you are running as root, not as the ec2-user).
Make sure to not change your default python, it could break several programs in your linux distributions (again, using an alias in your ~/bashrc is good). | 1 | 0 | 0 | Upon running python in any dir in my amazon EC2 instance, I get the following printout on first line: Python 2.6.9 (unknown, todays_date). Upon going to /usr/bin and running python27, I get this printout on first line: Python 2.7.9 (default, todays_date).
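A sketch of the alias route from the answer above; the binary path is an assumption, so adjust it to whichever interpreter you actually want:
alias python='/usr/bin/python2.6'   # add this line to ~/.bashrc
Then reload the file so the alias takes effect: source ~/.bashrc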
This is a problem because the code that I have only works with Python 2.6.9, and it seems as though my default is Python 2.7.9. I have tried the following things to set default to 2.6:
1) Editing ~/.bashrc and creating an alias for python to point to 2.6
2) Editing ~/.bashrc and exporting the python path
3) Hopelessly scrolling through the /etc folder looking for any kind of file that can reset the default python
What the hell is going on?!?! This might be EC2 specific, but I think my main problem is that upon running /usr/bin/python27 I see that it is default on that first line.
Even upon running python -V, I get Python 2.6. And upon running which python I get /usr/bin/python, but that is not the default that the EC2 instance runs when it attempts to execute my code. I know this because the EC2 prints out Python/2.7.9 in the error log before showing my errors. | Python Default Confusion | 0 | 0 | 0 | 51 |
30,159,923 | 2015-05-11T04:57:00.000 | 0 | 0 | 1 | 0 | python,pip | 30,159,937 | 1 | true | 0 | 0 | I find it's mainly because a dependency was installed using an earlier version of pip. To fix it, try pip install --upgrade pip followed by using pip to install the package again. | 1 | 1 | 0 | When using pip version 6.0+ to install a package, I got an error like the title. | AttributeError: 'VersionInfo' object has no attribute 'semantic_version' | 1.2 | 0 | 0 | 4,889 |
30,163,728 | 2015-05-11T09:15:00.000 | 1 | 0 | 1 | 0 | ipython | 30,163,899 | 1 | true | 0 | 0 | Press Ctrl+Enter, instead of Shift+Enter | 1 | 0 | 0 | After running a code in given cell, IPython goes down to the next cell below and activate it.
How to set IPython so that it "stays" in the same cell after code is run? | How to prevent IPython from activate to the cell below after running a cell? | 1.2 | 0 | 0 | 15 |
30,164,030 | 2015-05-11T09:29:00.000 | 3 | 0 | 0 | 0 | python,django,django-models | 30,164,129 | 3 | false | 1 | 0 | Just don't include (or comment out) the middleware or apps that you don't need. While the MIDDLEWARE_CLASSES setting is required to be present...it doesn't need to contain the authentication middleware.
In other words, keep MIDDLEWARE_CLASSES and INSTALLED_APPS in your settings.py, but remove the middleware classes and apps that you don't need. | 1 | 2 | 0 | Here is my question, I have to do a new django project but I don't need the database tables that django offers by default, auth_user, table... etc. Is there any way for start a project synchronizing the database but without all this stuff?
I have tried to comment the INSTALLED_APPS by default and MIDDLEWARE_CLASSES but it retrieves me errors or issues. | I don't need django tables by default | 0.197375 | 0 | 0 | 1,066 |
30,164,528 | 2015-05-11T09:52:00.000 | 1 | 1 | 0 | 0 | python,django,apache2,debian,mod-wsgi | 30,165,117 | 1 | true | 1 | 0 | Running a2dismod python did the trick; mod-wsgi 4.0+ doesn't work alongside mod-python. | 1 | 0 | 0 | I have installed libapache2-mod-wsgi on a Debian 8 64-bit server. Whenever I loaded the domain before this install, the default page for Apache2 loaded, but after the install it shows a "The webpage is not available" error; the same error occurs when there is no internet connection on my PC. I have tried a2dismod wsgi to disable it, and then it works again. Can anyone suggest a workaround? | Apache2 doesn't work after installing mod-wsgi | 1.2 | 0 | 0 | 256 |
30,167,799 | 2015-05-11T12:34:00.000 | 0 | 0 | 1 | 0 | python,intel-mkl | 30,167,977 | 1 | true | 0 | 0 | Well they will be still functional (packages don't know the status of your license), but you'll be breaking the license by using them.
If you mean to remove all MKL libraries from your PC when you stop complying with license, then the packages would stop working (or parts of them). | 1 | 0 | 0 | Currently I do not get any fundings for my PhD. As a consequence I can use the free MKL libraries. But that will change soon. I compiled IPOPT and other packages against the MKL libraries. What will haben if I do not have the licence anymore ? Are the packages still functional ? | What happens to compiled python packages if I do not have MKL libraries? | 1.2 | 0 | 0 | 37 |
30,172,686 | 2015-05-11T16:16:00.000 | 0 | 0 | 1 | 0 | python,class,oop,object | 30,173,088 | 2 | false | 0 | 0 | The __*__ attributes of an object are meant to implement internal functions standardized by the Python language. Like __add__ (which is used to provide a result of object + whatever, __repr__ is expected to behave in a defined way, which would include to return (a) certain datatype(s).
While statically typed languages will report a compile-time error, for dynamically typed languages like Python, this might result in unexpected (yet not undefined!) runtime behaviour. This need not even result in an error message. Therefore, never change that behaviour to somethin unexpected.
If you want to return something custom, use a custom method like get_info(self) or similar. (Remember not to use __*__ names for that either) | 1 | 0 | 0 | Usually when outputting an object in Python, you define the string that is returned in __repr__, but what if you want it to return something else instead, like an integer or tuple?
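A minimal sketch combining this advice with the question's Users example below; the sample values are hypothetical:
class Users:
    def __init__(self, username, email):
        self.username = username
        self.email = email

    def __repr__(self):                    # __repr__ must stay a string
        return 'Users(%r, %r)' % (self.username, self.email)

    def get_info(self):                    # a plain method can return any type
        return (self.username, self.email)

u = Users('alice', 'alice@example.com')
print(u.get_info())                        # ('alice', 'alice@example.com')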
I'll give an example. My class is Users and there are member variables self.username and self.email. If I want to output an object of type Users, how do I get it to return (self.username,self.email) as a tuple? | Returning a specific data type when referring to an object in Python | 0 | 0 | 0 | 89 |
30,174,585 | 2015-05-11T18:03:00.000 | 0 | 1 | 0 | 1 | python,linux,python-3.x,xubuntu | 30,175,033 | 1 | true | 0 | 0 | Found the answer.
You have to go to Configuration -> System Settings -> Session and Startup,
and add the program there. | 1 | 1 | 0 | How can I run a Python script at startup with Xubuntu 15.04?
I want to run a script that reminds me of things, like backups, buying things, or calling somebody.
I already have the script; I just need it to start at startup.
(Python 3.4)
As far as I know, Xubuntu 15.04 uses systemd.
All the tutorials I found are for init.d or upstart.
I need one for systemd | Run Python script when I log into my system? [xubuntu 15.04] | 1.2 | 0 | 0 | 517 |
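Since the question above asks specifically for systemd, here is a minimal user-unit sketch as an alternative to the accepted GUI route; the file name and script path are hypothetical:
# ~/.config/systemd/user/reminders.service
[Unit]
Description=Reminder script

[Service]
ExecStart=/usr/bin/python3 /home/you/reminders.py

[Install]
WantedBy=default.target
Enable it with: systemctl --user enable reminders.service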
30,174,910 | 2015-05-11T18:21:00.000 | 0 | 0 | 0 | 1 | python,openstack,openstack-horizon | 30,254,113 | 2 | false | 0 | 0 | If you are running Horizon on Apache, then you need to check the Apache logs to identify the issue. Horizon's logs will not contain anything if Apache was unable to execute the WSGI script for Horizon. | 2 | 0 | 0 | I have the OpenStack UI running. I made some changes in the local-setting.py file, restarted the Horizon service using service httpd restart, and tried to hit the OpenStack UI, but it returns an error: "HTTP 400 Bad request".
When I revert all the changes, restart the service, and try again, the error is still there.
Please help me !! | Unabe to Get OpenStack Login prompt | 0 | 0 | 1 | 808 |
30,174,910 | 2015-05-11T18:21:00.000 | 1 | 0 | 0 | 1 | python,openstack,openstack-horizon | 52,743,233 | 2 | false | 0 | 0 | Totally agree with the comments above: check your Apache or httpd log files.
Most probably the error is caused by improper info added (for CentOS) in the file below:
/etc/openstack-dashboard/local_settings
ALLOWED_HOSTS = ['XX.XX.XX.XX', 'ControllerName', 'localhost']
Hopefully this will resolve the issue. | 2 | 0 | 0 | I have the OpenStack UI running. I made some changes in the local-setting.py file, restarted the Horizon service using service httpd restart, and tried to hit the OpenStack UI, but it returns an error: "HTTP 400 Bad request".
When I revert back all changes, restart the service and try again the error is still there.
Please help me !! | Unabe to Get OpenStack Login prompt | 0.099668 | 0 | 1 | 808 |
30,177,229 | 2015-05-11T20:39:00.000 | 1 | 0 | 1 | 0 | python,opencv | 30,209,202 | 1 | false | 0 | 0 | It doesn't look like the OpenCV's blurring and filtering functions allow masking the input. I suggest applying the filter on a Rect around the circular portion, then assign the elements of the blurred sub matrix to the original image while masking the elements that do not correspond to the circle. | 1 | 0 | 1 | How does one blur a circular portion of an image in the python bindings of Open CV. Can one apply blurring on images without making new images? | Opencv Blur just within Circle | 0.197375 | 0 | 0 | 566 |
30,178,996 | 2015-05-11T23:00:00.000 | 2 | 0 | 1 | 0 | python,python-2.7 | 30,179,054 | 4 | true | 0 | 0 | Python cannot store a 1-bit value in 1 bit of memory, or even 1 byte. There's overhead like the type pointer and the reference count, and its int type always uses a fixed number of bytes to store the actual number it represents, either 4 or 8 bytes depending on your Python build. (There's also the long type, which isn't fixed-size, but even that type uses chunks bigger than one byte.) | 1 | 0 | 0 | If 0b0 represents the 1-bit "0", why does sys.getsizeof(0b0) yield 24? Furthermore, sys.getsizeof always yield 24, not matter how long is the size of the binary value you give it.
Shouldn't sys.getsizeof(0b0) yield 1? | Python - Bit variable's size is always 24, instead of its size | 1.2 | 0 | 0 | 218 |
30,180,138 | 2015-05-12T01:14:00.000 | 2 | 0 | 0 | 0 | python,tkinter | 30,194,049 | 1 | false | 0 | 1 | Basically what @Paul Rooney indicated in his comment above.
You might be able to work around this using a canvas to create your own label. You can then use a canvas text object instead of a label.
If you create an empty canvas and add the text with create_text(), and then place this text canvas 'on top' of the 'main' canvas, it should simulate what you want. The reason for using two canvases is to prevent scrollability.
I cannot think of a way to do this for buttons, though.
Please post your code if you need an example of this :) | 1 | 0 | 0 | I am using Tkinter to create a GUI in Python.
What I did was insert a GIF image into the background and also created labels and buttons.
How do I make the labels' and buttons' backgrounds transparent, as their backgrounds are covering my GIF image? | How to make labels background to be transparent in Tkinter? | 0.379949 | 0 | 0 | 7,124 |
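A minimal sketch of the canvas-text approach from the answer above, drawing text directly over a background image so no opaque label box appears; the background file 'bg.gif' is a hypothetical path:
import tkinter as tk     # module is named Tkinter on Python 2

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300)
canvas.pack()

bg = tk.PhotoImage(file='bg.gif')                                # hypothetical GIF
canvas.create_image(0, 0, image=bg, anchor='nw')
canvas.create_text(200, 150, text='Hello', font=('Arial', 24))  # no box behind it

root.mainloop()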
30,180,448 | 2015-05-12T01:52:00.000 | 0 | 0 | 0 | 0 | python,django,django-admin,email | 30,180,570 | 1 | false | 1 | 0 | I sugest You to stick to the web interface if it will be used by a human. CLI is good when You want to control Your application by another app or some sort of script.
Many people could have a problem using CLI. Remember that potential client will almost always be 'less technical' person than You, so the UI should be user friendly for him. | 1 | 1 | 0 | In my specific case, I'm building a relatively straightforward email-sending webapp. I'll need to add the ability for a human to generate email templates from the last day's worth of updated objects, view the day's last sent emails, etc.
Django provides the Django manage.py CLI, and also the Django Admin Web UI. Both are augmentable. Are there any best-practices or rules of thumb to follow in terms of adding my email admin functionality to one or the other?
Edit: To be clearer, my big concern is the speed of implementing operations on models and such. | In Django, is it better to add admin features to the command line or the admin interface? | 0 | 0 | 0 | 48 |
30,181,471 | 2015-05-12T03:52:00.000 | 0 | 0 | 0 | 0 | python-2.7,postgresql-9.3 | 30,181,819 | 1 | false | 1 | 0 | If you close the connection, you cannot iterate a cursor anymore. There is no connection to the database. | 1 | 0 | 0 | cursor.execute(sql_statement)
conn.close()
return cursor
The above are the closing lines of my program. I have 3 HTML pages (users, workflows, home); returning the cursor is triggering data for the workflows and home pages, but not for the users page.
Whereas, if I do return cursor.fetchall(), then it works for all 3 pages.
The reason why I want to return the cursor is that the client might want to iterate or do other processing on the cursor.
I'm not sure what I am doing differently with the users page. | Returning cursor isn't retrieving data from DB | 0 | 1 | 0 | 24 |
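A minimal sketch of the fix implied by the answer above: materialise the rows before closing the connection, so callers can still iterate (names follow the question's code):
rows = cursor.fetchall()   # fetch while the connection is still open
conn.close()               # now closing is safe
return rows                # callers iterate over a plain list, not a dead cursor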
30,181,545 | 2015-05-12T03:59:00.000 | 0 | 0 | 1 | 0 | biopython,fasta,genbank | 71,990,588 | 2 | false | 0 | 0 | It is possible to convert the FASTA to GenBank format for unsubmitted sequences, which don't have accession numbers and are yet to be submitted to NCBI. | 1 | 5 | 0 | Is there a way to use BioPython to convert FASTA files to a Genbank format? There are many answers on how to convert from Genbank to FASTA, but not the other way around. | Convert FASTA to GenBank | 0 | 0 | 0 | 2,649 |
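A minimal Biopython sketch for the conversion asked about above; the file names are placeholders, and recent Biopython versions require a molecule_type annotation before GenBank output:
from Bio import SeqIO

record = SeqIO.read('input.fasta', 'fasta')
record.annotations['molecule_type'] = 'DNA'   # required for the GenBank writer
SeqIO.write(record, 'output.gb', 'genbank')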
30,194,383 | 2015-05-12T14:49:00.000 | 0 | 0 | 0 | 1 | python,windows | 30,195,117 | 2 | false | 0 | 0 | In Windows, the common way of keeping cmd windows open after the end of a console process is to use cmd /k
Example : in a cmd window, typing start cmd /k echo foo
opens a new window (per start)
displays the output foo
leaves the command window open | 1 | 1 | 0 | In Python, I use subprocess.Popen() to launch several processes. I want to debug those processes, but their windows disappear quickly and I get no chance to see the error messages. I would like to know whether there is any way I can stop the windows from disappearing, or write their contents to a file so that I can see the error messages later.
Thanks in advance! | subprocess window disappear quickly | 0 | 0 | 0 | 425 |
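A sketch of both routes from the question above, using the cmd /k trick from the answer; 'worker.py' is a hypothetical child script:
import subprocess

# Windows: '/k' keeps each console open after the child script finishes
subprocess.Popen('cmd /k python worker.py',
                 creationflags=subprocess.CREATE_NEW_CONSOLE)

# alternatively, redirect the child's output to a file to inspect later
with open('worker.log', 'w') as log:
    subprocess.Popen(['python', 'worker.py'], stdout=log, stderr=log)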
30,195,646 | 2015-05-12T15:43:00.000 | 0 | 0 | 0 | 0 | python,multidimensional-array,matplotlib-basemap | 30,196,520 | 1 | false | 0 | 0 | Maybe try zip?
Calling zip(a, b), where a and b are iterables, returns a sequence of tuples along the lines of [(a[0], b[0]), (a[1], b[1]), ..., (a[n-1], b[n-1])], where n is the number of items in the shorter list.
You could match up the lat/lon into pairs first and then pair them with the temperature. | 1 | 0 | 1 | I have 3 1D-arrays (lat, lon and temperature) and would like to plot the data using Basemap in python. However, Basemap seems to need 2D-arrays to be able to plot the data according to the latitudes and longitudes I have.
How would I do that?
Thanks for you help! | Create a 2D grid of hexagonal cells using lat/lon coordinates in Python | 0 | 0 | 0 | 451 |
30,197,523 | 2015-05-12T17:18:00.000 | 6 | 0 | 1 | 1 | python,shell | 30,197,578 | 2 | true | 0 | 0 | import name is a python keyword for use within a python script and/or the python interactive interpreter (repl).
python filename.py is a shell command for use at a command prompt or within a shell script to run the python interpreter on a given python script file.
The working directory does not matter other than for whether the file listed in python filename.py can be found.
So for python filename.py to work you must be in the same directory as filename.py but you could just as readily use python c:\users\user\some\other\path\filename.py in which case the current directory isn't used to find the file.
If you get Python syntax errors when attempting to run python on a Python file, that's an error in the code itself, and you will need to look at the Python file to see what the error is.
I have edited all the environment variables and I have C:\Python27 folder in the same location. To be able to run the files using python filename.py what are the conditions that must be met? What should be the current working directory? Should the .py files be there in the same working directory? | How do I run the python files from command shell/command prompt? | 1.2 | 0 | 0 | 1,174 |
30,197,523 | 2015-05-12T17:18:00.000 | 0 | 0 | 1 | 1 | python,shell | 30,197,771 | 2 | false | 0 | 0 | Just to be clear, typing python filename.py only works from the Terminal (i.e. cmd.exe, Windows PowerShell, the "Terminal" application on a Linux kernel, etc.), not from the Python interpreter (i.e. python.exe), and only works if you have used the cd command to change into the directory in which the file is saved (so that the terminal knows where to look for filename.py). import filename can be used from the Python interpreter, but is not the ideal method as it creates a compiled version of filename.py and can only be used once (you would have to restart the interpreter to do so again). I'm not sure whether this works in the official Python distribution available from the Python website, but at least in the Anaconda distribution, you can run a file from the Python interpreter using runfile("C:/Users/CurrentUser/Subfolder/filename.py"). | 2 | 1 | 0 | Really frustrated with this as one thing works at one time. Sometimes import filename.py works. But in the tutorials all I see is python filename.py. But when I try to give that, I am facing an error like invalid syntax.
I have edited all the environment variables and I have C:\Python27 folder in the same location. To be able to run the files using python filename.py what are the conditions that must be met? What should be the current working directory? Should the .py files be there in the same working directory? | How do I run the python files from command shell/command prompt? | 0 | 0 | 0 | 1,174 |
30,203,229 | 2015-05-12T23:38:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn | 30,209,508 | 1 | false | 0 | 0 | Nothing will go wrong if you set the same seed. | 1 | 0 | 1 | In scikit-learn, RandomForestClassifier() and train_test_split() both have a random_state parameter.
Statistically, does it matter if I set them to be the same seed? Will that be wrong? Thanks. | Does it matter if I set random_state same for RandomForestClassifier() and train_test_split()? | 0.197375 | 0 | 0 | 105 |
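For reference, a sketch of the setup the question describes; the two random_state arguments seed independent random streams (the split and the forest), so reusing the value is harmless:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)  # seeds the split
clf = RandomForestClassifier(random_state=42)                     # seeds the forest separately
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))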
30,203,785 | 2015-05-13T00:42:00.000 | 0 | 0 | 0 | 0 | python,scipy,sparse-matrix | 33,983,181 | 1 | false | 0 | 0 | If you are using any plugin named "infinite posts scroll" or "jetpack" or anything similar, delete it. | 1 | 0 | 1 | I can find the sum of all (non-zero) elements in a scipy sparse matrix with mat.sum(), but how can I find their product? There's no mat.prod() method. | Product of elements of scipy sparse matrix | 0 | 0 | 0 | 44 |