Q_Id
int64
337
49.3M
CreationDate
stringlengths
23
23
Users Score
int64
-42
1.15k
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
Tags
stringlengths
6
105
A_Id
int64
518
72.5M
AnswerCount
int64
1
64
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
6
11.6k
Available Count
int64
1
31
Q_Score
int64
0
6.79k
Data Science and Machine Learning
int64
0
1
Question
stringlengths
15
29k
Title
stringlengths
11
150
Score
float64
-1
1.2
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
8
6.81M
29,077,131
2015-03-16T12:43:00.000
3
0
1
0
python,python-2.7
29,077,205
1
false
0
0
What are the actual values of the strings you are comparing? If both are the same, this is because of the difference between the identity operator is and the equality operator ==. In short, is yields True when the objects are identical. Because a new string is created in your example, it produces False. If you used ==, a deep comparison of the strings' characters would take place and True would be returned. If the compared strings are not the same, neither == nor is should produce True.
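A minimal sketch of the identity-vs-equality distinction described above, using the song value from the question:

    song = "The rain in Spain..."
    rebuilt = " ".join(song.split())

    print(rebuilt == song)  # True: same characters, compared by equality
    print(rebuilt is song)  # False: rebuilt is a newly created string object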
1
0
0
I was reading from the book How to Think Like A Computer Scientist: Learning with Python when I came across the following question: As an exercise, describe the relationship between string.join(string.split(song)) and song. Are they the same for all strings? When would they be different? (song had been defined as "The rain in Spain...") After checking it out, however, I found that both are different strings. I tried using string.join(string.split(song)) is song and f is song where f had been assigned the value string.join(string.split(song)) and both evaluated to False. Why is it so?
string.split and string.join in Python 2.7
0.53705
0
0
540
29,077,291
2015-03-16T12:51:00.000
0
1
0
0
python,c,dll,cygwin,robotframework
29,102,050
1
true
0
0
Given our discussion in the comments: you can't mix and match like this. The format that Cygwin builds a DLL in is different from the format Windows expects a DLL in. You need to build and run all in one environment.
1
0
0
A C program is compiled into a .dll using the Cygwin compiler. In a Python script it can be loaded using ctypes and its functions called successfully. But when I import that Python script as a library into the Robot Framework automation tool, it can't load the .dll file and the test case fails. Is a dll file created by Cygwin not supported by Robot Framework? Can anyone suggest any other method for this?
Robot Framework can't load .dll file created by cygwin into python script
1.2
0
0
406
29,079,698
2015-03-16T14:45:00.000
2
0
1
1
python,ansible,kvm,libvirt
29,079,838
1
false
0
0
Of course - if you have SSH access to it. Yes, you can run Ansible using its Python API or through a command-line call. About passing a YAML file - also yes.
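A minimal sketch of driving a playbook from Python by shelling out to the CLI, which sidesteps Ansible's internal (and version-specific) Python API; spin_up_vm.yml is a hypothetical playbook that uses the virt module:

    import subprocess

    # Run the playbook and surface its output; raises CalledProcessError on failure.
    subprocess.check_call(["ansible-playbook", "-i", "localhost,", "spin_up_vm.yml"])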
1
1
0
I have a specific requirement where I can use only Ansible on my host machine, without Vagrant. Two questions associated with it: Is it possible to spin up a VM on the host machine with libvirt/KVM as hypervisor using Ansible? I know there is a module called virt in Ansible which is capable of doing this, but I couldn't find any real example of how to use it. I'd appreciate it if someone could point me to an example YAML file through which I can spin up a VM. With Ansible, is it possible to run my playbook from Python code? If I am not wrong, there is a Python API supported by Ansible. But is it possible to give a YAML file as input to this API so that it executes the tasks from the YAML?
Spin up VM using Ansible without Vagrant
0.379949
0
0
561
29,079,923
2015-03-16T14:54:00.000
4
1
0
0
python,lambda,sympy
49,578,768
2
false
0
0
The above works well. In my case with Python 3.6, I needed to explicitly indicate that the saved and loaded files were binary, so I modified the code above to dill.dump(f, open("myfile", "wb")) and, for reading, f_new = dill.load(open("myfile", "rb")).
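A minimal end-to-end sketch of persisting a lambdified expression with dill (assuming dill is installed; the expression and file name are illustrative):

    import dill
    import sympy as sp

    x = sp.symbols("x")
    f = sp.lambdify(x, sp.sin(x) ** 2 + x)  # step 2 from the question

    with open("myfile", "wb") as fh:  # note the binary mode
        dill.dump(f, fh)

    with open("myfile", "rb") as fh:  # later, possibly in another session
        f_new = dill.load(fh)

    print(f_new(1.0))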
1
8
0
Imagine the following three step process: I use sympy to build a large and somewhat complicated expression (this process costs a lot of time). That expression is then converted into a lambda function using sympy.lambdify (also slow). Said function is then evaluated (fast) Ideally, steps 1 and 2 are only done once, while step 3 will be evaluated multiple times. Unfortunately the evaluations of step 3 are spread out over time (and different python sessions!) I'm searching for a way to save the "lambdified" expression to disk, so that I can load and use them at a later point. Unfortunately pickle does not support lambda functions. Also my lambda function uses numpy. I could of course create a matching function by hand and use that, but that seems inefficient and error-prone.
Save/load sympy lambdified expressions
0.379949
0
0
2,216
29,084,631
2015-03-16T18:46:00.000
0
0
0
0
python,linux,shared-libraries
29,085,462
1
false
0
0
Did you link in libglib when building libtest? That is, you need to have -lglib-2.0 in your link command line.
1
0
0
When I try to open libtest.so (dlopen() in Python) I get an error as above. libtest.so has some functions which use g_tree_new, and g_tree_new is defined in libglib-2.0.so.0. I tried setting LD_LIBRARY_PATH to where libglib-2.0.so.0 is, but that does not help! Thank you
OSError: libtest.so: undefined symbol: g_tree_new
0
0
0
177
29,085,298
2015-03-16T19:24:00.000
0
0
0
0
python,windows,matlab,file,shared
29,085,388
3
true
0
0
I am not sure about the Windows API for locking files. Here's a possible solution: while Matlab has the file open, you create an empty file called "data.lock" or something to that effect. When Python tries to read the file, it will check for the lock file, and if it is there, it will sleep for a given interval. When Matlab is done with the file, it can delete the "data.lock" file. It's a programmatic solution, but it is simpler than digging through the Windows API and finding the right calls in Matlab and Python.
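A minimal sketch of the Python side of that lock-file protocol; data.lock and data.csv are the illustrative file names from the answer:

    import os
    import time

    # Block until Matlab has deleted the lock file.
    while os.path.exists("data.lock"):
        time.sleep(0.5)

    with open("data.csv") as fh:
        rows = fh.readlines()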
1
2
1
I have a Matlab application that writes in to a .csv file and a Python script that reads from it. These operations happen concurrently and at their own respective periods (not necessarily the same). All of this runs on Windows 7. I wish to know : Would the OS inherently provide some sort of locking mechanism so that only one of the two applications - Matlab or Python - have access to the shared file? In the Python application, how do I check if the file is already "open"ed by Matlab application? What's the loop structure for this so that the Python application is blocked until it gets access to read the file?
Shared file access between Python and Matlab
1.2
0
0
212
29,088,095
2015-03-16T22:27:00.000
0
0
0
0
python,opencv
29,088,394
1
false
0
0
You could probably use Haar cascades in OpenCV to do this. You will need to train Haar detectors with both positive and negative samples of the logo, but there are already utilities in OpenCV to help you with this. Just read up about Haar cascades in the OpenCV documentation.
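A minimal sketch of running an already-trained cascade; logo_cascade.xml and photo.jpg are hypothetical file names:

    import cv2

    cascade = cv2.CascadeClassifier("logo_cascade.xml")
    img = cv2.imread("photo.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Returns one (x, y, w, h) box per detected logo.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)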
1
2
1
I am currently making an application with OpenCV and a web server that finds certain car brands as part of an ongoing game in my family. However, I don't know where to start. I googled it, but all I found was a post on finding a yellow ball. I want to find a car logo in a picture (which could be angled or glaring) so I can identify the car brand and add points to the score. I know it seems like a tall order, but could anybody help?
Logo recognition in OpenCV
0
0
0
1,164
29,088,344
2015-03-16T22:50:00.000
0
0
0
0
python,web-development-server
37,997,045
1
false
1
0
For that you would need a web framework like Bottle or Flask. Bottle is a simple WSGI-based web framework for Python. Using either of these you may write simple REST-based APIs, one for set and another for get. The "set" one could accept data from your client side and store it in your database, whereas your "get" API should return the data by reading it from your DB. Hope it helps.
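A minimal sketch of those two endpoints in Flask; the route name is illustrative and an in-memory list stands in for the SQL database:

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    comments = []  # stand-in for the real database

    @app.route("/comments", methods=["POST"])
    def set_comment():
        comments.append(request.form["comment"])  # the "set" API
        return "", 201

    @app.route("/comments", methods=["GET"])
    def get_comments():
        return jsonify(comments=comments)  # the "get" API

    if __name__ == "__main__":
        app.run()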
1
0
0
I am in the middle of my personal website development and I am using Python to create a "comment section" where my visitors could leave comments in public (which means everybody can see them, so don't worry about user name registration). I have already set up the SQL database to store the data, but the one thing I haven't figured out yet is how to get the user input (their comments) from the browser. So, are there any modules in Python that could do that? (Like the "CharField" things in Django, but unfortunately I don't use Django.)
How can I get user input from the browser using Python
0
0
1
754
29,088,972
2015-03-16T23:51:00.000
-1
0
1
1
python,anaconda
37,975,575
3
false
0
0
I had a similar problem - I was able to use conda from an Anaconda prompt (found in the Anaconda folder) and install the packages I needed.
2
9
0
All seemed to be working fine in my Anaconda distribution on Mac. Then I tried to install the Postgres library psycopg2 with conda install psycopg2. That threw an error, something about permissions. But now nothing works. Now it can't even find the conda executable or start ipython: -bash: conda: command not found. Shouldn't the conda executable be in ~/anaconda/bin? The directory is there but no conda executable. Anyone know what might have happened or how I can recover from this?
How do I fix my anaconda python distribution?
-0.066568
0
0
14,137
29,088,972
2015-03-16T23:51:00.000
5
0
1
1
python,anaconda
29,105,951
3
false
0
0
You're going to have to reinstall Anaconda to fix this. Without conda, there's not much you can do to clean up the broken install.
2
9
0
All seemed to be working fine in my Anaconda distribution on Mac. Then I tried to install the Postgres library psycopg2 with conda install psycopg2. That threw an error, something about permissions. But now nothing works. Now it can't even find the conda executable or start ipython: -bash: conda: command not found. Shouldn't the conda executable be in ~/anaconda/bin? The directory is there but no conda executable. Anyone know what might have happened or how I can recover from this?
How do I fix my anaconda python distribution?
0.321513
0
0
14,137
29,089,753
2015-03-17T01:16:00.000
0
0
1
1
python,python-multiprocessing
39,339,970
2
false
0
0
You can just run your program and see if there are Python processes alive after the main process has terminated. The correct way to terminate your program is to make sure all the subprocesses have terminated before the main process ends. (Try using the Process.terminate() and Process.join() methods on all subprocesses before the main process exits.)
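A minimal sketch of that shutdown order; worker is a hypothetical child task:

    import multiprocessing
    import time

    def worker():
        time.sleep(60)

    if __name__ == "__main__":
        procs = [multiprocessing.Process(target=worker) for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.terminate()  # ask each child to stop
            p.join()       # wait until it has actually exited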
1
1
0
I am working with the multiprocessing module on a Unix system. I have noticed memory leaks when I terminate one of my programs. I was thinking that this might be because the processes that were started in the main process kept running. Is this correct?
Python: What happens when the main process is terminated?
0
0
0
780
29,090,187
2015-03-17T02:10:00.000
1
0
1
0
python,encoding,character-encoding,decode,decoding
29,090,694
2
true
0
0
Check the first char; if it is 0xFA, then code = second * 256 + third + 376.
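A minimal sketch of that rule, checked against the examples from the question (0xFA 0x73 0xB8 should give 30000, and FA 00 00 should give 376):

    def decode(first, second, third):
        assert first == 0xFA
        return second * 256 + third + 376

    print(decode(0xFA, 0x73, 0xB8))  # 30000
    print(decode(0xFA, 0x00, 0x00))  # 376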
1
1
0
I'm on the last part of a decoding Python exercise, and this is really confusing me. The encoding represents the 376th to 65912th word with three chars: the first char is always (0xFA), the second is ((code - 376) // 256), and the third is ((code - 376) % 256). For example, if the code is 30000, then the first output char would be 0xFA, the second 0x73, and the third 0xB8. (The code for 376 would be FA 00 00.) Now here's my confusion: how can I interpret 0xFA 0x73 0xB8 as 30000? Because this word would be the 30000th word in my dictionary. Any help would be much appreciated, thanks.
How to Apply Reverse Logic for Decoding?
1.2
0
0
63
29,092,291
2015-03-17T06:01:00.000
0
0
0
0
python,python-2.7,web-crawler,scrapy
29,092,568
2
false
1
0
You can use a file to pass the urls from Scrapy to your Python script. Or you can print the urls with a marker in your Scrapy spider, have your Python script catch Scrapy's stdout, and then parse it into a list.
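A minimal sketch of the stdout approach, assuming the spider (myspider here is hypothetical) prints each crawled url prefixed with a "URL: " marker:

    import subprocess

    proc = subprocess.Popen(["scrapy", "crawl", "myspider"],
                            stdout=subprocess.PIPE, universal_newlines=True)
    out, _ = proc.communicate()

    # Keep only the marked lines and strip the marker.
    urls = [line[len("URL: "):] for line in out.splitlines()
            if line.startswith("URL: ")]
    print(urls)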
1
0
0
I am working with a script where I need to crawl websites, and I need to crawl only the base_url site. Does anyone have a good idea how I can launch Scrapy from a custom Python script and get the crawled url links as a list?
How can we get a list of urls after crawling a website with Scrapy in a custom Python script?
0
0
1
343
29,092,778
2015-03-17T06:38:00.000
1
0
1
0
python,python-3.x
35,227,246
6
false
0
0
print(input().lower()[::-1])
2
0
0
I am working on a script that takes the user input and reverses the whole input. For example, if the user inputs "London" it will be printed as "nodnol". I am currently able to reverse the order of a certain number of letters, but not the entire string.
How to reverse user input in Python
0.033321
0
0
25,279
29,092,778
2015-03-17T06:38:00.000
-1
0
1
0
python,python-3.x
36,003,390
6
false
0
0
string = input('Enter String: ')
print(string[::-1])
Input: heaven, output: nevaeh
2
0
0
I am working on a script that takes the user input and reverses the whole input. For example, if the user inputs "London" it will be printed as "nodnol". I am currently able to reverse the order of a certain number of letters, but not the entire string.
How to reverse user input in Python
-0.033321
0
0
25,279
29,095,769
2015-03-17T09:42:00.000
0
0
0
0
python,scikit-learn,bigdata,mixture-model
60,111,168
4
false
0
0
As Andreas Mueller mentioned, GMM doesn't have partial_fit yet, which would allow you to train the model in an iterative fashion. But you can make use of warm_start by setting its value to True when you create the GMM object. This allows you to iterate over batches of data and continue training the model from where you left it in the last iteration. Hope this helps!
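A minimal sketch of that warm_start loop, shown with the modern GaussianMixture class (the successor of sklearn.mixture.GMM); the random batches stand in for data loaded from disk:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    gmm = GaussianMixture(n_components=3, warm_start=True, max_iter=5)

    for _ in range(10):                   # one pass per mini-batch
        batch = np.random.randn(1000, 8)  # stand-in for a chunk of the data-set
        gmm.fit(batch)                    # resumes from the previous parameters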
3
3
1
I have a large data-set (I can't fit the entire data in memory). I want to fit a GMM on this data set. Can I use GMM.fit() (sklearn.mixture.GMM) repeatedly on mini-batches of data?
Sklearn-GMM on large datasets
0
0
0
3,292
29,095,769
2015-03-17T09:42:00.000
0
0
0
0
python,scikit-learn,bigdata,mixture-model
36,488,496
4
false
0
0
I think you can set init_params to the empty string '' when you create the GMM object; then you might be able to train on the whole data set.
3
3
1
I have a large data-set (I can't fit the entire data in memory). I want to fit a GMM on this data set. Can I use GMM.fit() (sklearn.mixture.GMM) repeatedly on mini-batches of data?
Sklearn-GMM on large datasets
0
0
0
3,292
29,095,769
2015-03-17T09:42:00.000
2
0
0
0
python,scikit-learn,bigdata,mixture-model
29,109,730
4
false
0
0
fit will always forget previous data in scikit-learn. For incremental fitting, there is the partial_fit function. Unfortunately, GMM doesn't have a partial_fit (yet), so you can't do that.
3
3
1
I have a large data-set (I can't fit the entire data in memory). I want to fit a GMM on this data set. Can I use GMM.fit() (sklearn.mixture.GMM) repeatedly on mini-batches of data?
Sklearn-GMM on large datasets
0.099668
0
0
3,292
29,095,949
2015-03-17T09:51:00.000
8
0
1
0
python,pydev
38,560,011
2
false
0
0
pip uninstall traitlets just worked for me.
1
6
0
I get the following error when trying to run an interactive Python console in PyDev, and I can't figure out what is wrong. When I google the "Console already exited with value: 1" part of the error, nothing useful comes up. What is stranger is that this occurs in only one of my Python workspace projects. Only one. And everything else is the same as in the others. I want to add an image to the question but it requires more than 10 reputation. Error initializing console. Unexpected error connecting to console. Failed to recive suitable Hello response from pydevconsole. Last msg received: Console already exited with value: 1 while waiting for an answer.
pydev console already ex
1
0
0
1,996
29,097,191
2015-03-17T10:50:00.000
3
0
1
0
python,enthought,canopy
29,098,226
3
true
0
0
This has nothing to do with Canopy per se. It is how Python works in general. Once a module is loaded, it is not reloaded/recompiled if you change it. This can be avoided with reload, as suggested in one of the other answers. There were different attempts in the past at having an auto-reload mechanism, but none of them were particularly robust, causing more trouble than they solved. The caching behaviour will happen in any Python session (Canopy, IPython frontends like the notebook, console, etc., a regular Python shell, a server process, ...). There are other ways to work around the problem. In IPython and Canopy, you can use the !python command to execute your code as if you were on the shell.
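A minimal sketch of the reload approach on Python 2 (which the question's Canopy 1.4.1 ships); A is the module from the question:

    import A
    # ... edit A.py ...
    reload(A)  # recompiles A.py, refreshing A.pyc, without restarting the kernel

    # On Python 3 the equivalent is:
    # import importlib; importlib.reload(A)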
2
0
0
I am using Canopy Version 1.4.1 (64-bit) on Windows 7. I have two files, A.py and B.py. A.py contains some modules and B.py imports one of A.py's modules. When I change something in A.py and then run B.py, a new compiled A.pyc should be created, but this is not the case when using the Canopy IDE. However, A.py does get compiled after restarting the kernel (Ctrl+.). Is there a way to recompile A.py without having to restart the kernel? Please help me. Thanks!
Enthought Canopy does not create .pyc file
1.2
0
0
395
29,097,191
2015-03-17T10:50:00.000
0
0
1
0
python,enthought,canopy
29,097,892
3
false
0
0
Not sure if this will fit the question, but you can also use python -m compileall . from a command line pointed at your modules directory.
2
0
0
I am using Canopy Version 1.4.1 (64-bit) on Windows 7. I have two files, A.py and B.py. A.py contains some modules and B.py imports one of A.py's modules. When I change something in A.py and then run B.py, a new compiled A.pyc should be created, but this is not the case when using the Canopy IDE. However, A.py does get compiled after restarting the kernel (Ctrl+.). Is there a way to recompile A.py without having to restart the kernel? Please help me. Thanks!
Enthought Canopy does not create .pyc file
0
0
0
395
29,100,233
2015-03-17T13:23:00.000
1
0
0
0
wxpython
29,101,131
3
false
0
1
If your trouble is passing an argument to the function, consider the following: self.Bind(wx.EVT_BUTTON, lambda e: otherFunction(arg1, arg2), button_name)
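A minimal sketch of using that trick so each remove button destroys itself; the class and handler names are illustrative:

    import wx

    class ButtonPanel(wx.Panel):
        def __init__(self, parent):
            super(ButtonPanel, self).__init__(parent)
            self.sizer = wx.BoxSizer(wx.VERTICAL)
            add_btn = wx.Button(self, label="Add")
            add_btn.Bind(wx.EVT_BUTTON, self.on_add)
            self.sizer.Add(add_btn)
            self.SetSizer(self.sizer)

        def on_add(self, event):
            btn = wx.Button(self, label="Remove")
            # Capture btn as a default argument so each handler
            # knows which button to remove.
            btn.Bind(wx.EVT_BUTTON, lambda e, b=btn: self.on_remove(b))
            self.sizer.Add(btn)
            self.Layout()

        def on_remove(self, btn):
            btn.Destroy()
            self.Layout()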
1
1
0
I'm using wxpython. I'd like to solve a simple problem for a GUI that consists of a button "Add", which when pressed creates a new button called "Remove". You can press "Add" as many times as you like and many remove buttons are created and added to the panel. What I would like is for when you press one of the remove buttons, that remove button itself is removed from the panel. The problem is when you bind a function to a button with this: self.Bind(wx.EVT_BUTTON, self.remove_function, button_name) you can't pass an argument to the function telling it which button to remove. (Or can you?)
A simple example for adding and removing buttons in wxpython
0.066568
0
0
2,124
29,102,422
2015-03-17T14:56:00.000
1
0
0
0
python,mysql,django,eclipse,pydev
29,493,720
1
true
1
0
I am a mac user. I have luckily overcome the issue with connecting Django to MySQL Workbench. I assume that you have already installed the Django package and created your project directory, e.g. mysite. Initially, after installation of MySQL Workbench, I created a database:

create database djo;

Go to mysite/settings.py and edit the following block. NOTE: keep the engine name "django.db.backends.mysql" while using a MySQL server, and STOP any other Django MySQL service which might be running.

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',  # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'djo',  # Or path to database file if using sqlite3.
        # The following settings are not used with sqlite3:
        'USER': 'root',
        'PASSWORD': '****',  # Replace **** with your set password.
        'HOST': '127.0.0.1',  # Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP.
        'PORT': '3306',  # Set to empty string for default.
    }
}

Now run manage.py to sync your database:

$ python mysite/manage.py syncdb
bash-3.2$ python manage.py syncdb
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_content_type
Creating table django_session
Creating table django_site
You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (leave blank to use 'ambershe'): root
Email address: [email protected]
/Users/ambershe/Library/Containers/com.bitnami.django/Data/app/python/lib/python2.7/getpass.py:83: GetPassWarning: Can not control echo on the terminal.
  passwd = fallback_getpass(prompt, stream)
Warning: Password input may be echoed.
Password: ****
Warning: Password input may be echoed.
Password (again): ****
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)
1
0
0
I am new to this, so a silly question: I am trying to make a demo website using Django, and for that I need a database. I have downloaded and installed MySQL Workbench for the same, but I don't know how to set it up. Thank you in advance :) I tried googling stuff but didn't find any exact solution for this. Please help
Connect MySQL Workbench with Django in Eclipse on a Mac
1.2
1
0
1,235
29,104,675
2015-03-17T16:34:00.000
3
0
0
1
python,logrotate,log-rotation
29,104,883
2
false
0
0
RotatingFileHandler allows a log file to grow up to size N and then immediately and automatically rotates to a new file. logrotate.d usually runs once per day. If you want to limit a log file's size, logrotate.d is not the most helpful, because it only runs periodically.
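A minimal sketch of size-based rotation with the stdlib handler; the file name and limits are illustrative:

    import logging
    from logging.handlers import RotatingFileHandler

    handler = RotatingFileHandler("app.log", maxBytes=10 * 1024 * 1024,
                                  backupCount=5)
    log = logging.getLogger("myapp")
    log.addHandler(handler)
    log.warning("rotates to app.log.1 as soon as app.log reaches 10 MB")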
1
12
0
Python has its own RotatingFileHandler which is supposed to automatically rotate log files. As part of a Linux application which would need to rotate its log file every couple of weeks/months, I am wondering if it is any different from having a config file in logrotate.d and using a WatchedFileHandler instead. Is there any difference in how they operate? Is one method safer, more efficient, or considered superior to the other?
Is there a difference between RotatingFileHandler and logrotate.d + WatchedFileHandler for Python log rotation?
0.291313
0
0
4,420
29,105,684
2015-03-17T17:22:00.000
0
0
0
1
python,build,permissions,chromium
29,161,669
2
true
0
0
Actually the directory was not mounted with execution permission. So I remounted the directory with execution permission using mount -o exec /dev/sda5 /media/usrname and it worked fine.
1
0
0
I am getting the following error while running gclient runhooks for building Chromium:

running '/usr/bin/python src/tools/clang/scripts/update.py --if-needed' in '/media/usrname/!!ChiLL out!!'
Traceback (most recent call last):
  File "src/tools/clang/scripts/update.py", line 283, in
    sys.exit(main())
  File "src/tools/clang/scripts/update.py", line 269, in main
    stderr=os.fdopen(os.dup(sys.stdin.fileno())))
  File "/usr/lib/python2.7/subprocess.py", line 522, in call
    return Popen(*popenargs, **kwargs).wait()
  File "/usr/lib/python2.7/subprocess.py", line 710, in init
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 13] Permission denied
Error: Command /usr/bin/python src/tools/clang/scripts/update.py --if-needed returned non-zero exit status 1 in /media/usrname/!!ChiLL out!!

In order to get permission on the directory "/usr/bin/python src/tools/clang/scripts" I tried chown and chmod, but it returned the same error.
Chromium build gclient runhooks error number 13
1.2
0
1
769
29,106,586
2015-03-17T18:06:00.000
0
0
0
1
java,javascript,android,python,ios
29,106,853
1
false
0
0
Add #!/bin/bash or #!/usr/bin/env bash as the very first line of the script that you're executing.
1
0
0
I installed Docker on Debian Wheezy 64-bit, and when I try to run it these errors are displayed:

/usr/local/bin/docker: line 1: --2015-03-17: command not found
/usr/local/bin/docker: line 2: syntax error near unexpected token ('
/usr/local/bin/docker: line 2: Resolving get.docker.io (get.docker.io)...162.242.195.84'

How do I solve this problem? Thanks
Docker does not run on Debian Wheezy
0
0
0
72
29,107,842
2015-03-17T19:18:00.000
2
0
0
0
python,tkinter,tkinter-canvas
29,107,985
1
true
0
1
It is not possible to reset the id numbers generated by a canvas. Your solution of deleting and recreating the canvas is a reasonable alternative, though it may not be necessary. There are probably better ways to do whatever it is you think is solved by resetting canvas ids. For example, you could generate your own ids, and associate each id with each canvas item as a tag. You can then reset the ids any time you want.
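A minimal sketch of the tag-based scheme suggested above; the id format is illustrative:

    import tkinter as tk  # Tkinter on Python 2

    root = tk.Tk()
    canvas = tk.Canvas(root)
    canvas.pack()

    my_id = 1  # your own counter, resettable whenever you like
    canvas.create_oval(10, 10, 50, 50, tags=("item-%d" % my_id,))
    my_id += 1

    canvas.delete("item-1")  # address items through your own ids
    my_id = 1                # "reset the counter" without touching the canvas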
1
2
0
I am coding a Tkinter GUI for a senior research project at school. I need to, at some point in the code, reset the counter used to create id values for new additions. Does anyone know if this is possible besides replacing the entire canvas? Basically, after adding a bunch of lines and ovals, I need to delete them all and restart the counter at 1 for new ones. Second, I have taken the path of replacing the entire canvas, and afterwards nothing EVER shows up in the winfo_get() method. Why could this be? I know I'm trying to do something that Tkinter isn't supposed to do, but it must be possible.
Tkinter Canvas reset ID's
1.2
0
0
813
29,108,020
2015-03-17T19:28:00.000
0
0
0
1
python,hadoop,mapreduce,hadoop-streaming
34,297,373
1
false
0
0
This may be done by packaging the dependencies and the reducer script in a zip, and adding this zip as a resource in Hive. Let's say the Python reducer script depends on package D1, which in turn depends on D2 (thus resolving OP's query on transitive dependencies), and both D1 and D2 are not installed on any machine in the cluster. Package D1, D2, and the Python reducer script (let's call it reducer.py) in, say, dep.zip. Use this zip like in the following sample query:

ADD ARCHIVE dep.zip;
FROM (some_table) t1
INSERT OVERWRITE TABLE t2
REDUCE t1.col1, t1.col2
USING 'python dep.zip/dep/reducer.py'
AS output;

Notice the first and the last line. Hive unzips the archive and creates these directories. The dep directory will hold the script and dependencies.
1
0
0
I have a hive query with custom mapper and reducer written in python. The mapper and reducer modules depend on some 3rd party modules/packages which are not installed on my cluster (installing them on the cluster is not an option). I realized this problem only after running the hive query when it failed saying that the xyz module was not found. How do I package the whole thing so that I have all the dependencies (including transitive dependencies) available in my streaming job? How do I use such a packaging and import modules in my mapper and reducer? The question is rather naive but I could not find an answer even after an hour of googling. Also, it's not just specific to hive but holds for hadoop streaming jobs in general when mapper/reducer is written in python.
Python packaging for hive/hadoop streaming
0
0
0
496
29,111,073
2015-03-17T22:40:00.000
0
0
1
0
python,module,compilation
31,042,764
1
true
0
1
The problem seems to have been an issue with the version of Python I had installed. A new installation is able to run long modules fine, without any noticeable delay even from pressing F5 to the first print statement.
1
0
0
I have written a Python module designed to solve the Tetris cube tiling puzzle, and it contains a lot of code, but it doesn't want to run. When I press F5 to run the program, the shell restarts but then stops working; even when I enable debugging, nothing happens. I put a print statement at the very start of the program, yet it is not executed, and I cannot terminate the program using Ctrl-C. I was able to fix this problem by putting some of the functions I had defined into a separate module, and now it works fine. But if I create a new function and add enough print statements, the problem reappears, and can be fixed by just removing one statement. The version of Python I am using is Python 2.7.8.
Does python have a limitation on the amount of code in a module?
1.2
0
0
118
29,111,388
2015-03-17T23:06:00.000
0
1
0
0
python,twitter,bots,twitter-follow
29,111,422
1
false
0
0
In from twitter import Twitter, OAuth, TwitterHTTPError, the name "Twitter" does not exist in "twitter". Try to re-download, or double-check whether that name is even within "twitter".
1
0
0
Hi. So I installed Python 2.7.9 and the twitter follow bot from GitHub. I don't really know what I'm doing wrong, but when I try to use a command I get an error. Using from twitter_follow_bot import auto_follow_followers_for_user results in this:

Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    from twitter_follow_bot import auto_follow
  File "twitter_follow_bot.py", line 21, in <module>
    from twitter import Twitter, OAuth, TwitterHTTPError
ImportError: cannot import name Twitter

Any idea what I did wrong? I have never used Python before, so if you could explain it to me step by step it would be great. Thanks
twitter python script generates error
0
0
1
249
29,111,407
2015-03-17T23:09:00.000
1
0
1
1
python,multiprocessing
29,111,506
1
false
0
0
I/O goes to the system cache in RAM before hitting a hard drive. For writes, you may find the copies are fast until you exhaust RAM and then slow down, and that multiple reads of the same data are fast. If you copy the same file to several places, there is an advantage to doing the copies of that file before moving to the next. I/O to a single hard drive (or group of hard drives joined with a RAID or volume manager) is mostly serial, except that the operating system and drive may reorder operations to read/write nearby tracks before seeking for tracks that are further away. There is some advantage to doing parallel copies because there are more opportunities to reorder, but since you are really writing from the system RAM cache sometime after your application writes, the benefits may be hard to measure. There is a greater benefit moving between drives. Those go mostly in parallel, although there is some contention for the buses (e.g., PCIe, SATA) that run the drives. If you have a lot of files to copy, multiprocessing is a reasonable way to go, but you may find that subprocessing to the native copy utilities is faster.
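A minimal sketch of a parallel copy with a worker pool; the source/destination pairs are illustrative, and whether this beats a serial loop depends on the drives involved, as discussed above:

    import shutil
    from multiprocessing import Pool

    def copy(pair):
        src, dst = pair
        shutil.copy(src, dst)

    jobs = [("src/a.bin", "disk1/a.bin"), ("src/b.bin", "disk2/b.bin")]

    if __name__ == "__main__":
        pool = Pool(4)
        pool.map(copy, jobs)
        pool.close()
        pool.join()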
1
0
0
I would like to know if there is any benefit to using Python 2.7's multiprocessing module to asynchronously copy files from one folder to another. Is disk I/O always forced to be serial? Does this change if you are copying from one hard disk to a different hard disk? Does this change depending on operating system (Windows / Linux)? Perhaps it is possible to read in parallel, but not to write? This is all assuming that the files being moved/copied are different files going to different locations.
Is there any benefit to using python2.7 multiprocessing to copy files
0.197375
0
0
53
29,111,443
2015-03-17T23:12:00.000
0
0
1
0
python,regex,whitespace,removing-whitespace
71,220,983
4
false
0
0
Input file filea.txt, with a trailing space on each line:

dragon
dragonballs

test.py:

filea = open('filea.txt', 'r')
fileb = open('fileb.txt', 'a')
for line in filea:
    if ' \n' in line:
        line = line.replace(' \n', '\n')
    fileb.write(line)

Output, fileb.txt:

dragon
dragonballs
1
3
0
How can I check if a Python string at any point has a single space before a new line? And if it does, I have to remove that single space but keep the newline symbol. Is this possible?
Remove Space at end of String but keep new line symbol
0
0
0
3,187
29,112,631
2015-03-18T01:20:00.000
0
0
0
0
python,user-interface,wxpython
29,130,791
1
false
0
1
You probably can't do that in a normal toolbar. However, you can easily just roll your own toolbar by using a horizontally oriented BoxSizer and adding the necessary widgets to it. Then you can add the text control with the correct proportion and the wx.EXPAND flag.
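A minimal sketch of that hand-rolled toolbar row, where proportion=1 plus wx.EXPAND lets the text control take all spare width:

    import wx

    app = wx.App()
    frame = wx.Frame(None)
    panel = wx.Panel(frame)

    bar = wx.BoxSizer(wx.HORIZONTAL)
    bar.Add(wx.Button(panel, label="Go"), 0)
    bar.Add(wx.TextCtrl(panel), 1, wx.EXPAND)  # stretches to 100% of the row

    panel.SetSizer(bar)
    frame.Show()
    app.MainLoop()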
1
0
0
Is there any way to set a TextCtrl in a toolbar to 100% of the horizontal size? I have been looking for this in the documentation and did not find anything.
Is there any way to set a TextCtrl in a toolbar to 100% of the horizontal size?
0
0
0
20
29,114,480
2015-03-18T05:02:00.000
0
0
0
0
python-2.7,scrapy,virtualenv,virtualenvwrapper
29,121,164
1
false
1
0
You need to pip install everything you need (Scrapy included) within the virtualenv.
1
0
0
I created a virtualenv named ScrapyProject. When I use the scrapy command or the pip command it does not work, but the python command works. Here is what it shows me:

(ScrapyProject) C:\Users\Jake\ScrapyProject>scrapy
(ScrapyProject) C:\Users\Jake\ScrapyProject>pip
(ScrapyProject) C:\Users\Jake\ScrapyProject>python
python2.7.6 etc.

Here is how my paths are in the virtualenv: C:\Users\Jake\ScrapyProject\Scripts; then some Windows paths, then some Python paths. There are no extra spaces between them, I am sure! And here is how my Python paths look: C:\python27;C:\Python27\Lib\site-packages. Can anybody help me? If you need some extra information, I will give it. I totally did not understand it!
Commands not working in ScrapyProject
0
0
0
40
29,116,195
2015-03-18T07:20:00.000
0
0
0
0
python-2.7,e-commerce,shopping-cart,turbogears2,satchless
29,133,585
1
true
0
1
Stroller is a pretty old project that is not maintained anymore. The reason why it was not working for you is that it was still pending some changes required to make it compatible with TG2.3+; it was compatible only with <=2.2, and so was looking for some Pylons imports. To solve your problem I just released stroller 0.5.1, which is now compatible with TG2.3. Please keep in mind that as it is a pretty old project it depends on ToscaWidgets1 and doesn't work with TW2, so you should: remove the tw2.forms dependency from your project setup.py; uninstall tw2.core and tw2.forms from your virtualenv (if they are available sprox will use tw2 and some forms in stroller won't work); change base_config.prefer_toscawidgets2 to base_config.use_toscawidgets = True. Those are the required steps to disable ToscaWidgets2 in your project and revert back to ToscaWidgets1, which is required by stroller.
1
0
0
I'm trying to build an e-commerce website using TurboGears. Initially I modified tgapp-photos to make the items appear on the page properly. But then I found it difficult to make a cart from scratch and hence thought to use stroller. However, when I plug it into the config.py file, all of a sudden my app stops. Moreover, it is not showing any error; it just stops. Could someone please tell me what I am doing wrong? Can I use satchless or ShoppingCart in the TurboGears framework?
stroller (turbogears) not working
1.2
0
0
56
29,119,880
2015-03-18T10:43:00.000
0
0
0
0
python,numpy,matrix-inverse
29,120,093
3
false
0
0
linalg is right and you are wrong. The matrix it gave you is indeed the inverse. However, if you are using np.array instead of np.matrix then the multiplication operator doesn't work as expected, since it calculates the component-wise product. In that case you have to do mat.dot(inv(mat)). In any case, what you will get will not be a perfect identity matrix due to rounding errors (when I tried it the off-diagonal matrix entries were on the order of 10 ** (-16)).
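A minimal sketch of that check, using the matrix from the question:

    import numpy as np
    from numpy.linalg import inv

    mat = np.array([[17, 17,  5],
                    [21, 18, 21],
                    [ 2,  2, 19]])

    # dot(), not "*": with np.array, "*" would multiply element-wise.
    print(mat.dot(inv(mat)))  # identity up to ~1e-16 rounding error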
1
1
1
I'm trying to generate the inverse matrix using the numpy package in Python. Unfortunately, I'm not getting the answers I expected. Original matrix: ([17 17 5] [21 18 21] [2 2 19]). Inverting the original matrix by Cramer's rule gives: ([4 9 15] [15 17 6] [24 0 17]). Apparently, using numpy.linalg.inv() gives -3.19488818e-01, 3.80191693e-01, -6.38977636e-03, 3.33333333e-01, -3.33333333e-01, 2.26123699e-18, -2.84345048e-01, 2.68370607e-01, 5.43130990e-02. I expected that multiplying the original matrix and the inverse would have given an identity matrix, but as you can see I get a matrix filled with floating points. Where could the issue be?
How to calculate inverse using cramer's rule in python?
0
0
0
2,359
29,122,343
2015-03-18T12:40:00.000
0
0
1
0
python,csv,xlsx,string-split
29,122,418
2
false
0
0
You can get rid of the newlines in the field by doing a string .replace('\n', ' ') on it.
1
1
0
I have an issue where I am reading a csv file exported from an application as a plain file (it is easier for me to get results than reading it as csv). I have my code working to read that file and import it into a list; then I iterate through the list and save to the appropriate fields. I am using the built-in method .split(',') and it works great up to a point. I am able to get my colors the way I want, but all the way down on line 70 is where my mistake occurs. I found that a certain field having multiple lines causes the program to import each line into the list, creating a column for each line. How can I ignore a field that has multiple lines when trying to do a split?
Python split string(',') issue on a csv field
0
0
0
123
29,124,446
2015-03-18T14:16:00.000
0
0
0
1
python,mongodb,tornado
29,156,089
1
false
0
0
You need to understand how Tornado works asynchronously. Every time you yield a Future object, Tornado suspends the current coroutine and jumps to the next coroutine. Doing queries synchronously or asynchronously depends on the situation. If your query is fast enough, you can use a synchronous driver. Also keep in mind that jumping between coroutines has a cost, too. If the query is not fast enough, you might consider making asynchronous calls.
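A minimal sketch of yielding a query's Future inside a Tornado coroutine; db.users.find_one stands in for any async driver call (Motor, MotorEngine, etc.):

    from tornado import gen

    @gen.coroutine
    def get_user(db, user_id):
        # The coroutine suspends here; Tornado runs other handlers
        # until the driver resolves the Future.
        user = yield db.users.find_one({"_id": user_id})
        raise gen.Return(user)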
1
0
0
Basically, what is a Future in Tornado's approach? I've read in some StackOverflow threads that a Tornado coroutine must return a Future, but if I return a Future, how do my db queries work? Using Futures, will my Tornado app wait for the query to return anything, like blocking I/O, or will it just dispatch the request and switch context until the query returns? And what about the MotorEngine solution? Do I need to use Futures, or just make the queries?
Do I need to use Tornado Futures with Motorengine?
0
0
0
113
29,127,341
2015-03-18T16:22:00.000
4
0
0
1
python,linux,queue,pipe,fifo
30,587,524
8
false
0
0
There are several options: 1) If the daemon should accept messages from other systems, make the daemon an RPC server - use xmlrpc/jsonrpc. 2) If it is all local, you can use either TCP sockets or named pipes. 3) If there will be a huge set of clients connecting concurrently, you can use select.epoll.
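A minimal sketch of the local named-pipe option: the daemon blocks reading a FIFO and greets every name written to it (the paths and file names are illustrative):

    import os

    FIFO = "/tmp/greeter.fifo"
    if not os.path.exists(FIFO):
        os.mkfifo(FIFO)

    # From any shell: echo Bob > /tmp/greeter.fifo
    with open(FIFO) as pipe, open("greetings.log", "a") as log:
        for name in pipe:
            log.write("Hello %s.\n" % name.strip())
            log.flush()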
2
9
0
I have a Python daemon running on a Linux system. I would like to feed information such as "Bob", "Alice", etc. and have the daemon print "Hello Bob." and "Hello Alice" to a file. This has to be asynchronous. The Python daemon has to wait for information and print it whenever it receives something. What would be the best way to achieve this? I was thinking about a named pipe or the Queue library but there could be better solutions.
How to feed information to a Python daemon?
0.099668
0
0
2,615
29,127,341
2015-03-18T16:22:00.000
0
0
0
1
python,linux,queue,pipe,fifo
30,565,140
8
false
0
0
Why not use signals? I am not a Python programmer, but presumably you can register a signal handler within your daemon and then signal it from the terminal. Just use SIGUSR1, SIGHUP, or similar. This is the usual method used to rotate logfiles, for example.
2
9
0
I have a Python daemon running on a Linux system. I would like to feed information such as "Bob", "Alice", etc. and have the daemon print "Hello Bob." and "Hello Alice" to a file. This has to be asynchronous. The Python daemon has to wait for information and print it whenever it receives something. What would be the best way to achieve this? I was thinking about a named pipe or the Queue library but there could be better solutions.
How to feed information to a Python daemon?
0
0
0
2,615
29,129,589
2015-03-18T18:10:00.000
0
0
1
0
python,mysql,linux,amazon-ec2,rds
29,169,114
1
false
0
0
I solved this by upping my instance type to an m3.large instance, which does not have limited CPU credits. Everything works well now.
1
0
0
I have some very peculiar behavior happening when running a data importer using multiprocessor in python. I believe that this is a database issue, but I am not sure how to track it down. Below is a description of the process I am doing: 1) Multiprocessor file that runs XX number of processors doing parts two and three 2) Queue processor that iterates through an sqs queue pulling a company id. This id is used to pull a json string stored in mysql. This json string is loaded as a json object and sent to a parsing file that normalizes the data so that it can be imported into mysql as normalized data. 3) Company parser/importer reads through json object and creates inserts into a mysql database, normalizing the data. These are batch inserted into RDS in batches of XXX size to mitigate IOPS issues. This code is run from a c4.Large instance and works. When it is started, it works fast (~30,000 inserts per min) without maxing out IOPS, CPU, or other resources on either the RDS or ec2 instance. Then, after a certain amount of time (5-30min), the RDS server's CPU drops to ~20% and has a weird heartbeat type of rhythm. I have tried launching additional ec2 instances to speed up this process and the import speed remains unchanged and slow (~2000 inserts per min), so I believe the bottleneck is with the RDS instance. I tried changing the RDS instance's size from medium to large with no change. I also tried changing the RDS instance's IOPS to provisioned SSD with 10k. This also did not fix the problem As far as I can tell, there is some sort of throttling or limitation by the RDS server. But, I don't know where else to look. There are no red flags about what is being limited. Can you please provide other potential reasons for why this type of behavior would be happening? I don't know what else to test. Current setup is 500gb t2.medium RDS instance with ~200 Write IOPS, CPU at ~20%, Read IOPS < 20, Queue < 1, stable 12 db connections(this is not connecting and then disconnecting), and plenty of free memory.
Importing data to mysql RDS with python multiprocessor - RDS
0
1
0
260
29,132,741
2015-03-18T21:10:00.000
2
0
0
0
python,qt,pyqt,pyside
29,132,919
1
false
0
1
One way to do it is to use viewmodels. Have one QAbstractItemModel adapter to your underlying data model. All interaction must pass through that model. When you need to further adapt the data to a view, simply use a proxy view model class that refers to the adapter above and reformats/adapts the data for a view. All the view models will then be automagically synchronized. They can derive from QAbstractProxyModel, although that's not strictly necessary. There is no other way of doing it if the underlying source of data doesn't provide change notification both for the contents and for the structure. If the underlying data source provides relevant notifications, it might as well be a QAbstractItemModel to begin with :)
1
2
0
I've been working really hard trying to find a solution to this for the past few weeks. I have committed to a direction now, but I am still not entirely satisfied with what I have come up with. Asking this now purely out of curiosity and for hope of a more proper solution for next time. How on earth do I keep multiple QAbstractItemModel classes in sync that are referring to the same source data but displaying in different ways in the tree view? One of the main reasons for using model/view is to keep multiple views in sync with one another. However, if each of my views requires different data being displayed at the same column, as far as I can tell I need to then subclass my model to two different models with different implementations that will then cater to each of those unique view displays of the same items. Underlying source items are the same, but data displayed is different. Maybe the flags are different as well, so that the user can only select top level items in one view and then can only select child items in the other view. I'll try to give an example: Lets say my TreeItem has three properties: a, b, c. I have two tree views: TreeView1, TreeView2. Each has two columns. TreeView1 displays data as follows: column1 -> a, column2 -> b TreeView2 displays data as follows: column1 -> a, column2 -> c I then need to create two different models, one for TreeView1 and one for TreeView2, and override the data and flags methods appropriately for each. Since they are now different models, even though they are both referring to the same TreeItem in the background, they are no longer staying in sync. I have to manually call the refresh on TreeView2 whenever I change data on TreeView1, and vice versa. Consider that column1, or property a, is editable and allows the user to set the name of the TreeItem. Desired behaviour would be for the edit that is done in TreeView1 to instantly be reflected in TreeView2. I feel like I am missing some important design pattern or something when approaching this. Can anyone out there see where I am going wrong and correct me? Or is this a correct interpretation? Thanks!
How to keep multiple QAbstractItemModel classes in sync
0.379949
0
0
442
29,133,682
2015-03-18T22:12:00.000
1
0
1
1
python
29,135,637
2
false
0
0
If I were a beginner, I would have my remote script periodically check the value of the variable in a text file. When I needed to update the variable, I would just ssh to my remote machine and update the text file.
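A minimal sketch of that polling loop on the VPS side; param.txt is an illustrative file name:

    import time

    while True:
        with open("param.txt") as fh:
            value = fh.read().strip()  # the variable, updated over ssh
        print("current value:", value)
        time.sleep(10)                 # re-check every 10 seconds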
1
1
0
I have a Python script running on a VPS. Now I just want to change 1 variable in the running script using my desktop computer. What is the simplest way to do that for a beginner?
simplest way to make two python scripts talk to each other?
0.099668
0
0
1,235
29,133,963
2015-03-18T22:33:00.000
3
0
0
0
python,nginx,flask,virtualenv,uwsgi
29,134,999
1
true
1
0
First of all, Nginx never goes in the virtualenv. It is an OS service and has nothing to do with Python. It only serves web requests and knows how to pass them to an upstream service (like uWSGI). Second, don't put things in the virtualenv that don't need separate versions. uWSGI is quite stable now, so you will almost never need separate versions; so don't put it in the venv. Third, when you plan for production deployment, keep things as simple as possible. Any added complexity will only make the chance of failure higher. So do not put a venv on your prod servers until you absolutely need it. And even then you are probably putting too much stuff on that server. Keep your servers single-minded. I find it easier to use multiple machines (especially with cloud services like AWS) that each have one purpose than to cram everything onto one big machine (where one screwball process can eat all the memory from everybody else). Fourth, when you do need more Python projects/services, it is better to separate them with things like Docker, since then they are better maintainable and better isolated from the rest.
1
2
0
Seems like a simple question but I cannot find it addressed anywhere. Every tutorial I look at does things slightly differently, and I'm pretty sure I've seen it done both ways. In my development environment, python, flask, and all other dependencies of my application go inside the Virtual Environment. When configuring a production environment, do Nginx and uWSGI go inside the virtual environment? Thanks!
nginx + uwsgi + virtual environment. What goes inside?
1.2
0
0
445
29,135,426
2015-03-19T00:55:00.000
1
0
0
0
python,gevent,greenlets
31,060,953
2
true
0
0
You are not leaking resources in the sense that all the used memory will be properly cleaned up when the greenlet dies (and the garbage collection happens, which is automatic). So I would not worry about that. Of course, your description of your architecture does not make it appear very robust. So while you do not leak memory per se, if you really start too many greenlets, you might find that your main greenlet executes very rarely. In other words, whenever it yields to the hub (via sleep or any other blocking call), you might find that thousands of greenlets are called and executed before your main greenlet comes back to life. Also be aware of the overhead of switching back and forth between the hub and the greenlets.
1
1
0
We're using gevent in a long-lived Python process, and over time we spawn thousands upon thousands of Greenlets. We're not joining any of these Greenlets; we just spawn-and-forget. (The Greenlet tasks themselves are short-lived and do exit.) Is that all right? Are we leaking any resources by not joining the Greenlets?
Is It Okay Not to Join Any of My Greenlets?
1.2
0
0
520
29,137,043
2015-03-19T04:05:00.000
0
1
1
0
python,hash
29,137,224
1
true
0
0
Search on StackOverflow for code to recursively list full file names in Python, and for code to return the hash checksum of a file. Then list the files using an iterator function. Inside the loop, get the hash checksum of the current file, then iterate through every known hash and compare it with the current file's checksum. Algorithms? Don't worry about it. If you iterate through each line of the file, it will be fine. Just don't load it all at once, and don't load it into a data structure such as a list or dictionary, because you might run out of memory.
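A minimal sketch of that walk-hash-compare loop; hashes.txt stands in for the 2.5 GB file of known hashes, which is scanned line by line rather than loaded into memory:

    import hashlib
    import os

    def file_md5(path):
        h = hashlib.md5()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):  # 1 MB at a time
                h.update(chunk)
        return h.hexdigest()

    def is_known(digest):
        with open("hashes.txt") as fh:  # never load the whole file at once
            return any(line.strip() == digest for line in fh)

    for root, _, files in os.walk("/"):
        for name in files:
            path = os.path.join(root, name)
            print(path, "Known" if is_known(file_md5(path)) else "Unknown")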
1
0
0
I have a text file of size 2.5 GB which contains hash values of some standard known files. My task is to find the hash of all files on my file system and compare it with the hashes stored in the text file. If a match is found I need to print Known on the screen, and if no match is found then I need to print Unknown on the screen. Thus the approach for the task is quite simple, but the main issue is that the files involved in the process are very huge. Can somebody suggest how to accomplish this task in an optimized way? Should I import the text file containing hashes into a database? If yes, then please provide some link which could possibly help me accomplish it. Secondly, what algorithm can I use for searching to speed up the process? My preferred language is Python.
Perform searching on very large file programmatically in Python
1.2
0
0
62
29,138,054
2015-03-19T05:54:00.000
17
0
1
0
python
29,138,079
3
false
0
0
In Python string literals, the '\t' pair represents the tab character. So you would use mystring.replace('\t', 'any other string that you want to replace the tab with').
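A minimal sketch; the replacement text is illustrative:

    mystring = "a\tb c"
    print(mystring.replace('\t', '    '))  # only the tab changes; spaces are untouched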
1
15
0
I need to replace tabs in a string, but only the tabs, not the spaces. If I use the str.replace() function, what would go in the first set of quotes?
How to replace tabs in a string?
1
0
0
51,177
29,141,492
2015-03-19T09:57:00.000
2
0
0
0
python,django,caching
29,141,865
1
false
1
0
You can write a so-called warm-up script. This is just a script that opens the URLs you want to have in the cache. Run this script as a periodic task. The simplest version would be a shell script with curl statements in it that is periodically executed by cron. The interval with which you call it depends on your cache settings. If you have configured pages to stay in the cache for 10 minutes, calling the script every 10 minutes makes sure everything is always in the cache.
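A minimal sketch of the same warm-up idea in Python rather than curl (the URLs are illustrative; schedule it with cron just the same):

    import urllib2  # urllib.request on Python 3

    WARM_URLS = [
        "http://example.com/",
        "http://example.com/expensive-page/",
    ]

    for url in WARM_URLS:
        urllib2.urlopen(url).read()  # re-populates Django's page cache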
1
1
0
I have a problem when using the cache in Django. Is it possible to load the page and update the cache automatically? I don't want my first user to wait when the cache needs to be updated. Thank you.
Auto update cache in Django
0.379949
0
0
537
29,142,050
2015-03-19T10:24:00.000
1
0
1
0
python,python-3.x,virtualenv
29,143,675
1
true
0
0
I have fixed this by just installing a more recent version of Python 3 (Python 3.4.3 to be exact). My virtualenvs running Python 3.4.1 seemed to have upgraded by themselves.
1
1
0
Is there a way to upgrade e.g. Python 3.4.1 to 3.4.3 in my virtualenv? I can't find any Google results teaching how to upgrade from Python 3.x to 3.+x. Thanks! EDIT: I have to emphasize that I am talking about upgrading my virtualenv's Python 3 to a more recent version, not upgrading Python 2.
upgrade virtualenv python 3 to 3.4.3
1.2
0
0
265
29,146,792
2015-03-19T14:08:00.000
21
0
1
1
python,docker,virtualenv
53,656,409
3
false
0
0
Here are my two cents, or rather comments on @gru's answer and some of the comments.
- Neither docker nor virtual environments are virtual machines.
- Every line in your docker file produces overhead. But it's true that at runtime virtual environments have zero impact.
- The idea of docker containers is that you have one process which interacts with other (docker-)services in a client-server relationship. Running different apps in one docker, or calling one app from another inside a docker, is somehow against that idea. More importantly, it adds complexity to your docker, which you want to avoid.
- "Isolating" the python packages that the app sees (inside a virtual environment) from the packages installed in the docker is only necessary if you need to assure a certain version for one or more packages.
- The system installed inside the container only serves as an environment for the one app that you are running. Adjust it to the requirements of your app. There is no need to leave it "untouched".
So in conclusion: there is no good reason for using a virtual environment inside a container. Install whatever packages you need on the system. If you need control over the exact package versions, install them (docker-wide) with pip or alike. If you think that you need to run different apps with different package versions inside a single container, take a step back and rethink your design. You are heading towards more complexity, more difficult maintenance and more headaches. Split the work/services up into several containers.
2
27
0
You can build a container with a Dockerfile in a few seconds. Then why do people need to install a virtual environment inside the docker container? It's like a "virtual machine" in a virtual machine?
Why do people create virtualenv in a docker container?
1
0
0
11,721
29,146,792
2015-03-19T14:08:00.000
30
0
1
1
python,docker,virtualenv
33,150,800
3
true
0
0
I am working with virtualenvs in Docker and I think there are several reasons:
- you may want to isolate your app from the system's python packages
- you may want to run a custom version of python but still keep the system's packages untouched
- you may need fine-grained control over the packages installed for a specific app
- you may need to run multiple apps with different requirements
I think these are all reasonably good reasons to add a little pip install virtualenv at the end of the installation! :)
2
27
0
You can build a container with a Dockerfile in a few seconds. Then why do people need to install a virtual environment inside the docker container? It's like a "virtual machine" in a virtual machine?
Why do people create virtualenv in a docker container?
1.2
0
0
11,721
29,148,746
2015-03-19T15:32:00.000
1
0
0
0
python,scikit-learn,cluster-analysis,mean-shift
29,149,537
1
true
0
0
The standard deviation of the clusters isn't 1. You have 8 dimensions, each of which has a stddev of 1, so you have a total standard deviation of sqrt(8) or something like that. Kernel density estimation does not work well in high-dimensional data because of bandwidth problems.
1
1
1
I am working with the Mean Shift clustering algorithm, which is based on the kernel density estimate of a dataset. I would like to generate a large, high dimensional dataset and I thought the Scikit-Learn function make_blobs would be suitable. But when I try to generate a 1 million point, 8 dimensional dataset, I end up with almost every point being treated as a separate cluster. I am generating the blobs with standard deviation 1, and then setting the bandwidth for the Mean Shift to the same value (I think this makes sense, right?). For two dimensional datasets this produced fine results, but for higher dimensions I think I'm running into the curse of dimensionality in that the distance between points becomes too big for meaningful clustering. Does anyone have any tips/tricks on how to get a good high-dimensional dataset that is suitable for (something like) Mean Shift clustering? (or am I doing something wrong? (which is of course a good possibility))
Generating high dimensional datasets with Scikit-Learn
1.2
0
0
681
29,149,000
2015-03-19T15:42:00.000
0
0
0
0
python,sockets,server-side
29,149,290
1
true
0
0
If you are asking about the function 'listen': the 'backlog' argument is the maximum length to which the queue of pending connections for the socket may grow. If a connection request arrives when the queue is full, the client may receive an error with an indication of ECONNREFUSED or, if the underlying protocol supports retransmission, the request may be ignored so that a later reattempt at connection succeeds. This does not set any limit on the number of socket connections, only on the ones which have not been 'accept'-ed yet (i.e., have not become a 'connection'). When a client tries to connect, the backlog grows by 1. When you call 'accept', the backlog decreases by 1. So if you call 'accept' periodically, you can hold open a large number of connections.
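A minimal sketch of an accept loop illustrating the point: the backlog of 5 only bounds pending, not-yet-accepted connections, while accepted sockets can accumulate up to the OS limits:

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 9000))
    srv.listen(5)  # at most 5 connections waiting to be accepted

    clients = []
    while True:
        conn, addr = srv.accept()  # drains the pending queue
        clients.append(conn)       # as many live sockets as the OS allows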
1
0
0
I recently started learning about socket programming and decided to use Python for testing. I have the following question: as I read, you can only listen for a limited number of connections on a server-side socket, thus you can only have such a number of connections operating at a time. Is there a way to hold as many sockets open as the system can tolerate? That is, e.g., in the case of a chat server (you would not want to only have 5 active users at a time, for example). What's the solution to that? Should one create more sockets to achieve that goal? But then, would the number of ports available to the system be the next limitation?
Arbitrarily large number of sockets - Python
1.2
0
1
103
29,149,806
2015-03-19T16:17:00.000
0
1
0
0
python,azure,oauth-2.0,identity,app-secret
30,823,957
5
false
0
0
When Key Vault returns a 401 response, it includes a www-authenticate header containing authority and resource. You must use both to get a valid bearer token. Then you can redo your request with that token, and if you use the same token on subsequent requests against the same vault, it shouldn't return a 401 until the token expires. You can know the authority and resource in advance, but it's generally more robust to prepare your code to always handle the 401, especially if you use multiple vaults. Be sure to only trust a www-authenticate header received over a valid SSL connection, otherwise you might be a victim of spoofing!
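A rough sketch of that retry-on-401 flow in Python with requests; the header format parsing and the get_token callback are assumptions for illustration, not a verified Key Vault client:

```python
import requests

def call_vault(url, get_token):
    """get_token(authority, resource) should return a bearer token,
    e.g. obtained via an OAuth2/AD library."""
    resp = requests.get(url)
    if resp.status_code == 401:
        # Assumed challenge shape:
        # Bearer authorization="https://login.windows.net/<tenant>", resource="https://vault.azure.net"
        challenge = resp.headers['www-authenticate']
        parts = dict(p.split('=', 1) for p in challenge[len('Bearer '):].split(', '))
        authority = parts['authorization'].strip('"')
        resource = parts['resource'].strip('"')
        token = get_token(authority, resource)
        # Retry with the bearer token attached
        resp = requests.get(url, headers={'Authorization': 'Bearer ' + token})
    return resp
```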
1
2
0
I am very interested in using the new service recently released for secret management within Azure. I have found a few example guides walking through how to interact with Key Vault via PowerShell cmdlets and C#, but haven't found much at all about getting started with the REST API. The thing I am particularly confused about is the handling of OAuth2 with Active Directory. I have written an OAuth2 application listener, built a web application with an AD instance, and can now generate an "access_token". It is very unclear to me how to proceed beyond this, though, as I seem to consistently receive a 401 HTTP response code whenever attempting to use my access_token to perform a Key Vault API call. Any guides / tips on using Azure Key Vault with Python would be greatly appreciated!
Interacting with Azure Key Vault using python w/ rest api
0
0
0
5,475
29,149,900
2015-03-19T16:22:00.000
0
1
0
0
python,xml,matlab,xml-parsing
29,150,179
1
false
0
0
Matlab brings its own Perl installation, located at fullfile(matlabroot, 'sys\perl\win32\bin\'). The additional modules are probably missing from this installation. Navigate to this folder and install the requirements using ppm.
1
0
0
I have a program written in Perl, and I want to execute this program in MATLAB. In this program I am calling an XML file, but on execution I am getting an XML::DOM error: Error using perl (line 80) System error: Can't locate XML/DOM.pm in @INC, etc. How can I get out of this error? The program executes very well in Perl itself...
XML DOM ERROR when executing Python program in perl
0
0
1
56
29,155,374
2015-03-19T21:31:00.000
1
0
0
0
python,nls-lang
29,155,927
1
false
0
0
Apologies all, Further reading of the documentation says: The XXXX Server provides National Language Support (NLS) for all string values returned across the CORBA API. The client application specifies the locale of these strings by calling the setNLSLocale method. If the locale is not set the string values are returned as NLS identifiers that are not interpretable by the client application. Future releases will provide an additional interface for interpreting these NLS identifiers (see Future releases). The client application can query the NLS locale by calling getNLSLocale. Please consider the question closed
1
1
0
I'm using a Corba library where the target returns error messages with NLS encoding, e.g.: wideNlsText=u'$NLS[30822b0b\x01888\x01$NLS[30822ae5\x013\x01Subscriber NPA]NLS$]NLS$' Other than the fact that $NLS starts and NLS$ ends the string, what can I do to translate these into a more accessible text string?
Python: Decoding NLS encoded text
0.197375
0
0
175
29,157,039
2015-03-19T23:50:00.000
1
0
0
1
python,c,mpi,mpi4py
29,256,366
1
true
0
0
MPI_Recvreduce is what you're looking for. Unfortunately, it doesn't exist yet. It's something that the MPI Forum has been looking at adding to a future version of the standard, but hasn't yet been adopted and won't be in the upcoming MPI 3.1.
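Until then, the workaround is the two-step approach from the question: Sendrecv into a temporary buffer, then apply the reduction locally. A minimal mpi4py sketch (the ring-shaped exchange pattern is illustrative):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

local = np.ones(10) * rank      # the existing local array
recvbuf = np.empty_like(local)  # the separate buffer we'd like to avoid

dest = (rank + 1) % size
source = (rank - 1) % size
comm.Sendrecv(sendbuf=local, dest=dest, recvbuf=recvbuf, source=source)
local += recvbuf                # the "reduce" step, done manually
```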
1
2
1
I use the MPI_Sendrecv MPI function to communicate arrays of data between processes. I do this in Python using mpi4py, but I'm pretty sure my question is independent of the language used. What I really want is to add an array residing on another process to an existing local array. This should be done for all processes, so I use the MPI_Sendrecv function to send and receive the arrays in one go. I can then add the received array in the recvbuf to the local array and I'm done. It would be nice however if I could skip the step of having a separate recvbuf array, and simply receive the data directly into the local array without overwriting the existing data, but rather updating it using some operation (addition in my case). I guess what I'm looking for is a combined MPI_Sendrecv/MPI_Reduce function. Does some function like this exist in MPI?
MPI_Sendrecv with operation on recvbuf?
1.2
0
0
127
29,163,495
2015-03-20T09:40:00.000
0
0
0
0
wcf,ironpython
29,182,774
1
true
1
0
Resolved like this: I have a MemoryStream on the server which compiles and executes the code, then reads the data from stdout and sends it back to the client.
1
0
0
Is it possible to register the console output of a client through a server? I'm assuming this can be done through a NetworkStream? Right now, I register the output of a desktop app to stdout through the SetOutput method provided inside Runtime.IO of IronPython. This method accepts a Stream as an argument, but the problem is how I can send that data back to the client through a stream from WCF.
Registering output to stdout through wcf in IronPython?
1.2
0
0
36
29,168,026
2015-03-20T13:45:00.000
1
0
1
0
python,tornado
29,173,394
1
true
0
0
r"/*.jpg" is a file glob pattern, not a regular expression. The equivalent regular expression would be r"/.*\.jpg".
1
0
0
I want to map "/aaa.jpg" and "/bbb.jpg".... with the same Handler I write the code as(r"/*.jpg", ImageHandler"), it that correct ? It doesn't work for me..
tornado, How to map URL and handler with a regular expression way
1.2
0
0
349
29,170,268
2015-03-20T15:33:00.000
0
0
0
0
python,sql
29,170,744
2
false
0
0
For the first question (how do I make sure I'm not duplicating ingredients?), if I understand correctly, the answer is basically to make (i_id, name) the primary key of the ingredients table. This way you guarantee that it is impossible to insert an ingredient with the same key (i_id, name). Now for the second question (how do I insert the data so that the ingredient is linked to the id of the current Recipe object?). I really don't understand this question very well. What I think you want is to link recipes with ingredients. This can be done with the table RecipeIngredients. When you want to do that, you simply insert a new row into that table with the id of the recipe and the id of the ingredient. If this isn't what you want, sorry, but I really don't understand.
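A small sketch of that linking scheme with sqlite3; it uses a UNIQUE constraint on the ingredient name for de-duplication (a simple variant of the composite-key idea above), and table/column names follow the question:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE Recipes (r_id INTEGER PRIMARY KEY, name TEXT)')
db.execute('CREATE TABLE Ingredients (i_id INTEGER PRIMARY KEY, name TEXT UNIQUE)')
db.execute('CREATE TABLE RecipeIngredients (r_id INTEGER, i_id INTEGER)')

def add_recipe(name, ingredients):
    r_id = db.execute('INSERT INTO Recipes (name) VALUES (?)', (name,)).lastrowid
    for ing in ingredients:
        # INSERT OR IGNORE skips duplicates thanks to the UNIQUE constraint
        db.execute('INSERT OR IGNORE INTO Ingredients (name) VALUES (?)', (ing,))
        i_id = db.execute('SELECT i_id FROM Ingredients WHERE name = ?',
                          (ing,)).fetchone()[0]
        db.execute('INSERT INTO RecipeIngredients (r_id, i_id) VALUES (?, ?)',
                   (r_id, i_id))

add_recipe('Pancakes', ['flour', 'milk', 'egg'])
```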
1
0
0
I'm not sure what exactly the wording for the problem is so if I haven't been able to find any resource telling me how to do this, that's most likely why. The basic problem is that I have a webcrawler, coded in Python, that has a 'Recipe' object that stores certain data about a specific recipe such as 'Name', 'Instructions', 'Ingredients', etc. with 'Instructions' and 'Ingredients' being a string array. Now, the problem I have comes when I want to store this data in a database for access from other sources. A basic example of the database looks as follows: (Recipes) r_id, name, .... (Ingredients) i_id, name, .... (RecipeIngredients) r_id, i_id. Now, specifically my problem is, how do I make sure I'm not duplicating ingredients and how do I insert the data so that the ingredient is linked to the id of the current Recipe object? I know my explanation is bad but I'm struggling to put it into words. Any help is appreciated, thanks.
Inserting data into SQL database that needs to be linked
0
1
0
45
29,171,316
2015-03-20T16:23:00.000
0
0
1
0
python,json,geojson
35,938,076
2
false
0
0
Well the easiest way would be to loop through your feature collection and extract the geometry from there. I assume you have found the answer to your question by now, since there hasn't been another post since?
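A minimal sketch of that loop, assuming the FeatureCollection has been parsed into a Python dict (the filename is illustrative):

```python
import json

with open('data.geojson') as f:
    collection = json.load(f)

# Keep only the bare geometry object from each feature
geometries = [feature['geometry'] for feature in collection['features']]
```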
1
0
0
I currently have a GeoJSON FeatureCollection, but the function I need to execute on this file in Python only supports GeoJSON geometry objects such as Point or Polygon (without all of the attribute and coordinate data). Is there a way to simply convert a GeoJSON FeatureCollection to a GeoJSON geometry object in Python?
Converting GeoJSON FeatureCollection to GeoJSON geometry object
0
0
0
1,732
29,178,949
2015-03-21T03:37:00.000
1
0
1
0
python,git
29,179,099
1
true
0
0
Absolute paths that depend on your specific computer do not belong in version control. A good solution would be to have your program read an environment variable and use it as the path. Make sure to set a sensible default if the environment variable is unset.
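A minimal sketch of that pattern; the variable name and the default are purely illustrative:

```python
import os

# Each machine sets IMAGE_DIR in its own environment; the fallback
# keeps the script runnable when the variable is unset.
image_dir = os.environ.get('IMAGE_DIR', 'a/b/')
```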
1
1
0
I am using a python program on 2 different computers. On computer 1 some path (e.g., to an image or something), used by the program, is, say, a/b/ On computer 2, the equivalent path is different, say, b/a/ (the image, e.g., is in a different folder) When I want to run the script on computer 1 I pull the code and set the path to a/b/. Then I make changes and push. Then I go to computer 2 and pull. Now the path is a/b/ but actually I want the pull to not change the path (all the rest should change though of course). Q1: Is there a way to automatically do this (prevent the changes in the path)? Also I keep getting merge conflicts just due to the path being different. Q2: I might not even be doing this in an optimal way, how do people do this? My procedure could be wrong causing these issues.
Ignore part of file in git when using on 2 different computers (python)
1.2
0
0
48
29,179,381
2015-03-21T04:55:00.000
4
0
1
0
python,python-2.7,python-3.x
29,179,603
2
false
0
0
The "right" way is to translate the Py2-only module to Py3 and offer the translation upstream with a pull request (or equivalent approach for non-git upstream repos). Seriously. Horrible hacks to make py2 and py3 packages work together are not worth the effort.
1
2
0
I wish to write a python script for that needs to do task 'A' and task 'B'. Luckily there are existing Python modules for both tasks, but unfortunately the library that can do task 'A' is Python 2 only, and the library that can do task 'B' is Python 3 only. In my case the libraries are small and permissively-licensed enough that I could probably convert them both to Python 3 without much difficulty. But I'm wondering what is the "right" thing to do in this situation - is there some special way in which a module written in Python 2 can be imported directly into a Python 3 program, for example?
What is the correct way (if any) to use Python 2 and 3 libraries in the same program?
0.379949
0
0
67
29,179,631
2015-03-21T05:35:00.000
17
0
1
0
python,contextmenu,edit,python-idle
45,523,858
11
false
0
0
As a newer update, for people that are having the "missing idle" issue with Windows 10 using Python 3.6 (64-bit). From my experience, this happens when you install other python editors, and you change your default app to open with that editor. The easiest way to fix this issue is to click the "start" button, then navigate to settings --> System --> Default Apps --> "Choose default apps by file type". Scroll down till you find ".py" file type and click on the icon and choose "Python"(has a little rocket ship in the icon). This will change your default app back to "Python.exe", and the context menu "edit with idle" will appear once again on your ".py" files. Hope this helps!
6
25
0
I have Python 2.7.5 that installed with ArcGIS 10.2.2. When I first right-clicked a .py script I'd previously written it listed the "Edit with IDLE" option in the context menu. However, this option no longer appears when I right-click a .py file. I have read numerous threads concerning this issue and attempted some of them, such as modifying/removing registry keys and then reinstalling/repairing the software. I am not interested in using an IDE at this point, though many will be happy to know I intend to use an IDE later on. Right now, the purpose is to fix the problem rather than avoid and work around it. I appreciate the help I've gotten from the online community in the past, and I'm confident someone will come through with a solution for me. How do I get "Edit with IDLE" back in the context menu?
"Edit with IDLE" option missing from context menu
1
0
0
60,527
29,179,631
2015-03-21T05:35:00.000
3
0
1
0
python,contextmenu,edit,python-idle
52,921,669
11
false
0
0
I got the "Edit with IDLE" back with the option "Repair" of the deinstallation-menu.
6
25
0
I have Python 2.7.5 that installed with ArcGIS 10.2.2. When I first right-clicked a .py script I'd previously written it listed the "Edit with IDLE" option in the context menu. However, this option no longer appears when I right-click a .py file. I have read numerous threads concerning this issue and attempted some of them, such as modifying/removing registry keys and then reinstalling/repairing the software. I am not interested in using an IDE at this point, though many will be happy to know I intend to use an IDE later on. Right now, the purpose is to fix the problem rather than avoid and work around it. I appreciate the help I've gotten from the online community in the past, and I'm confident someone will come through with a solution for me. How do I get "Edit with IDLE" back in the context menu?
"Edit with IDLE" option missing from context menu
0.054491
0
0
60,527
29,179,631
2015-03-21T05:35:00.000
0
0
1
0
python,contextmenu,edit,python-idle
45,709,793
11
false
0
0
I think the majority of cases are caused by the Py launcher that comes with Python 3. When you install Python 3 alongside Python 2.x, the *.py and *.pyw files are associated to run with the new Py launcher. Since *.py and *.pyw files are no longer associated with Python.exe, that breaks the "Edit with IDLE" and similar context menu options, despite all relevant registry entries being present and correct. Right clicking a file and choosing Python.exe and selecting "always use the selected program to open this kind of file" option fixes the problem (even if Python.exe seems to be already set as the default program) but then you lose the Py launcher functionality. This may well be considered a bug with the Python 3.x installer and I think should be fixed at that level by the Python developers. Meanwhile, I'm sure registry wizards can find a workaround for this but unfortunately, that's beyond me at the moment.
6
25
0
I have Python 2.7.5 that installed with ArcGIS 10.2.2. When I first right-clicked a .py script I'd previously written it listed the "Edit with IDLE" option in the context menu. However, this option no longer appears when I right-click a .py file. I have read numerous threads concerning this issue and attempted some of them, such as modifying/removing registry keys and then reinstalling/repairing the software. I am not interested in using an IDE at this point, though many will be happy to know I intend to use an IDE later on. Right now, the purpose is to fix the problem rather than avoid and work around it. I appreciate the help I've gotten from the online community in the past, and I'm confident someone will come through with a solution for me. How do I get "Edit with IDLE" back in the context menu?
"Edit with IDLE" option missing from context menu
0
0
0
60,527
29,179,631
2015-03-21T05:35:00.000
-1
0
1
0
python,contextmenu,edit,python-idle
51,091,088
11
false
0
0
After uninstalling both 2.7 and 3.6 and reinstalling 3.6, I ran the init.py, main.py, and idle.pyw found in C:\Program Files\python\Lib\idlelib, and the "Edit with" menu reappeared.
6
25
0
I have Python 2.7.5 that installed with ArcGIS 10.2.2. When I first right-clicked a .py script I'd previously written it listed the "Edit with IDLE" option in the context menu. However, this option no longer appears when I right-click a .py file. I have read numerous threads concerning this issue and attempted some of them, such as modifying/removing registry keys and then reinstalling/repairing the software. I am not interested in using an IDE at this point, though many will be happy to know I intend to use an IDE later on. Right now, the purpose is to fix the problem rather than avoid and work around it. I appreciate the help I've gotten from the online community in the past, and I'm confident someone will come through with a solution for me. How do I get "Edit with IDLE" back in the context menu?
"Edit with IDLE" option missing from context menu
-0.01818
0
0
60,527
29,179,631
2015-03-21T05:35:00.000
-1
0
1
0
python,contextmenu,edit,python-idle
47,934,165
11
false
0
0
This issue arises because of a problem in the registry entries of the Python installation. While one may edit the registry and resolve the issue, the simple solution is: DELETE ALL THE REGISTRY ENTRIES pertaining to the .py extensions and re-install Python, letting the installation take its course of action. The problem will definitely resolve. Happy Programming
6
25
0
I have Python 2.7.5 that installed with ArcGIS 10.2.2. When I first right-clicked a .py script I'd previously written it listed the "Edit with IDLE" option in the context menu. However, this option no longer appears when I right-click a .py file. I have read numerous threads concerning this issue and attempted some of them, such as modifying/removing registry keys and then reinstalling/repairing the software. I am not interested in using an IDE at this point, though many will be happy to know I intend to use an IDE later on. Right now, the purpose is to fix the problem rather than avoid and work around it. I appreciate the help I've gotten from the online community in the past, and I'm confident someone will come through with a solution for me. How do I get "Edit with IDLE" back in the context menu?
"Edit with IDLE" option missing from context menu
-0.01818
0
0
60,527
29,179,631
2015-03-21T05:35:00.000
0
0
1
0
python,contextmenu,edit,python-idle
41,958,627
11
false
0
0
When you click the Save button to save your Python code, there are two extensions: 1) .py and 2) .pyw. So for Python 2 you have to save the Python program using the .pyw extension.
6
25
0
I have Python 2.7.5 that installed with ArcGIS 10.2.2. When I first right-clicked a .py script I'd previously written it listed the "Edit with IDLE" option in the context menu. However, this option no longer appears when I right-click a .py file. I have read numerous threads concerning this issue and attempted some of them, such as modifying/removing registry keys and then reinstalling/repairing the software. I am not interested in using an IDE at this point, though many will be happy to know I intend to use an IDE later on. Right now, the purpose is to fix the problem rather than avoid and work around it. I appreciate the help I've gotten from the online community in the past, and I'm confident someone will come through with a solution for me. How do I get "Edit with IDLE" back in the context menu?
"Edit with IDLE" option missing from context menu
0
0
0
60,527
29,183,178
2015-03-21T13:18:00.000
1
0
0
0
python,statistics,scipy,scikit-learn,scikits
29,191,059
1
false
0
0
I think you can make use of a "naive Bayes" classifier here. In that case, the class (M or F) probability is a product of terms, one term for each available feature set, and you just ignore (exclude from the product) any feature set that is missing. Here is the justification. Let's say the feature sets are X1, X2, X3. Each of these is a vector of features. The naive Bayes assumption is that feature sets are independent given the class, i.e., P(X1, X2, X3 | C) = P(X1 | C) P(X2 | C) P(X3 | C). (Remember that this is just a simplifying assumption -- it might or might not be true!) When all feature sets are present, the posterior class probability is just P(C | X1, X2, X3) = P(X1, X2, X3 | C) P(C) / Z = P(X1 | C) P(X2 | C) P(X3 | C) P(C) / Z, where Z is the normalizing constant that makes the probabilities of the 2 classes add up to 1. So to make use of this formulation, you need a density model for each of the feature sets; if this approach makes sense to you, we can talk about those density models. Now what if a feature set (let's say X3) is missing? That means we need to calculate P(C | X1, X2) = P(X1, X2 | C) P(C) / Z. But note that P(X1, X2 | C) = integral P(X1, X2, X3 | C) dX3 = integral P(X1 | C) P(X2 | C) P(X3 | C) dX3 = P(X1 | C) P(X2 | C) integral P(X3 | C) dX3 by the naive Bayes assumption. Note that integral P(X3 | C) dX3 = 1, so P(X1, X2 | C) = P(X1 | C) P(X2 | C), i.e., the naive Bayes assumption still holds for just the observed feature sets, so you can go ahead and calculate P(C | X1, X2) = P(X1 | C) P(X2 | C) P(C) / Z, i.e., when some feature set is missing in a naive Bayes model, just ignore it.
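A toy sketch of that "ignore missing feature sets" rule, with the per-source density models stubbed out as given probabilities (all numbers are made up for illustration):

```python
def posterior(per_source_probs, prior):
    """per_source_probs holds P(X_i | C) for the observed feature sets
    only; missing sources are simply left out of the product."""
    p = prior
    for q in per_source_probs:
        p *= q
    return p

# Example: image data missing, so only text and location terms are used
p_male = posterior([0.8, 0.6], prior=0.5)
p_female = posterior([0.2, 0.4], prior=0.5)
z = p_male + p_female  # the normalizing constant Z
print(p_male / z, p_female / z)
```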
1
1
1
I have some data containing usernames and their respective genders. For example, an entry in my data list may look like: {User: 'abc123', Gender: 'M'} For each username, I am also given a bag of text, images, and locations attached to each of them, although it's not necessary that a user has at least one text, one image and one location attached to them. For each data source, I can translate them into a vector of features, which I then feed into a classifier. I can then confirm if the classifier is effective via 10-fold cross-validation. I want to combine some output from all the classifiers such that I can feed them into a meta-classifier to hopefully improve accuracy. The problem is that since the data is incomplete, I can't simply combine all the vectors generated from each data source and feed them into one classifier. Some users may not have image data, or others may not have location data. My current idea is to use each classifier to obtain some category probability set for each user, something like [Male: 0.75, Female: 0.25] from each data source's classifier, multiply all the categories' values, and use the highest value as the program's predicted category. So if I have 1 classifier for each data source (text, image, location), then I have a total of 3 classifiers. Even if one or two of the data sources are missing for some users, I can still obtain a category probability set for those users. Does scikit-learn have any algorithm that can output a probability weight that the user is of some gender instead of just classifying them? Or is there some other algorithm that satisfies my needs? Thanks for going through my wall of text!
Classifying users by demographic using incomplete data
0.197375
0
0
74
29,186,447
2015-03-21T18:30:00.000
0
0
1
0
python,audio
29,192,486
1
false
0
0
I don't know how you recorded your text, but if you call every word on a separate line you can type import time at line 1, and then after every audio file you could add time.sleep(0.3) (0.3 stands for 0.3 seconds). But that may take a while. It would be useful to see your code; can you send it, maybe?
1
0
0
I want to make a simple script that uses audio files to talk a user through a process. When the user has finished the current step, it should stop trying to explain that step and move into the next. This is easy to do, except that it sounds very ugly and unnatural when the audio stops mid-word. I've noticed in Grand Theft Auto 5, when a character is talking and something unexpected happens, the audio pauses between words and then resumes at the beginning of the sentence a few seconds later -- which sounds very natural, because that's how people really speak. I'd like to find a way to do this with my script. It doesn't need to resume at the beginning of the sentence, because it's responding the the user being ready to move on, but I'd like it to somehow find the spaces between words and pause there. Is there a simple way to do this in Python, or maybe an easier way to do it in something else, assuming that it's not simple in Python? Edit: The audio has yet to be recorded, so if there's some way I should record it to make this happen, I can do that. I'm aware that GTA5 had a team of programmers, and I'm just one guy making a simple script -- but it seems like there should be a simple solution, like something that looks for silence in the audio, or maybe marking the spaces between words by hand?
python - stop audio between words
0
0
0
100
29,190,350
2015-03-22T02:39:00.000
0
0
1
0
python-2.7,pygame,importerror
29,221,166
1
false
0
1
There is really only one way to do this: create a folder, put the Python interpreter you are using in that folder, and put the PyGame module you are using in that same folder. Your problem is now solved. I hope this helps you!
1
0
0
Okay, so I am brand new at this and I really need for this to be dumbed down for me. My python version is 2.7.9 and I downloaded pygame-1.9.1.win32-py2.7.msi and I am on a windows computer. I really need someone to explain why this is not working. I was reading on some of the other questions that you have to change the path of python and I have no idea how to do that. I am literally screwed because I have to turn something in for a major grade and I have literally been trying to figure this out for like about four weeks now. When I try to import pygame, I get this error message and I have absolutely no idea what I am doing wrong: Traceback (most recent call last): File "C:\Users\Tiffany\Documents\NetBeansProjects\NewPythonProject\src\newpythonproject.py", line 3, in import pygame ImportError: No module named pygame
ImportError: No module named pygame and how to change the path of pygame?
0
0
0
3,786
29,190,427
2015-03-22T02:53:00.000
0
0
1
0
python,user-interface,tkinter
29,191,478
2
false
0
1
I would say that if your program is simply accessing data and not interacting with the data, then a GUI seems to be a bit of overkill. GUIs are graphical user interfaces, as you know, and are made for guiding a user through an interface. If the interface is just a status display, as you indicated, then I see nothing wrong with a console program. If, however, your program also interacts with data in a way that would be difficult without a GUI, then the GUI is likely the right choice. Have you considered a GUI in another programming language? Python is known to be a bit slow, even in the console. In my experience, C++ is faster in terms of viewing data. Best of luck!
1
1
0
I have written a monitoring program for the control system at our plant. It is basically a GUI which lets the operator see the current status of the lock of the closed-loop system and alerts the operator in case the lock/loop breaks. Now, the operation is heavily dependent on the responses of the GUI. My seniors told me that they prefer plain console prints instead of a Tkinter-based GUI, as Tkinter lags while working in real time. Can anyone please comment on this aspect? Can this lag be checked and corrected? Thanks in advance.
Python TKinter for real time GUIs
0
0
0
1,676
29,193,127
2015-03-22T10:13:00.000
2
0
1
0
python,list,optimization,slice,micro-optimization
29,193,425
2
true
0
0
It'll depend entirely on how many elements you delete. In CPython, the list type uses a dynamic overallocation strategy to avoid having to resize the underlying C array too often. There is an array to hold the elements, and it is kept slightly too large at all times. Deletion then (using del TruncList[-n:]) could be a virtually free operation, provided n is sufficiently small. In fact, you can safely delete up to half the size of the over-allocated array, before a resize occurs. Resizing requires copying across all existing references to a new array. Using a slice is always going to create new list object, requiring allocation of memory and copying across of the elements involved. This is slightly more work than re-allocation of data. So, without measuring time performance (using timeit), I'd expect the del option to be faster than slicing; in the case of n < len(TruncList) // 2 (less than half the length) in many cases you don't even incur a resize, and even if you did, slightly less work needs to be done as only the internal array has to be recreated. When you remove items from the front, you'll always have to recreate the internal array. The differences won't be a stark then, but creating a slice is still going to result in allocation for an entirely new object.
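The claim is easy to measure with timeit; a minimal sketch (absolute numbers will vary by machine):

```python
import timeit

setup = "lst = list(range(1000000))"
print(timeit.timeit("del lst[-10:]", setup=setup, number=1))    # truncate in place
print(timeit.timeit("lst = lst[:-10]", setup=setup, number=1))  # rebind to a new slice
```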
1
4
0
Suppose I have a list TruncList with some number of elements greater than n. If I want to remove n elements from the end of that list, is it faster to redefine the list as a slice of itself preserving the desired elements, as by TruncList = TruncList[:-n], or to delete the slice of unwanted elements from the list, as by del TruncList[-n:]? Does the answer change if I was removing the first n elements from TruncList instead, as in TruncList = TruncList[n:] versus del TruncList[:n]? Besides speed, is one of these methods more Pythonic than the other? I would imagine that the redefinition method might be slower, since it iterates through TruncList and then reassigns it, while del truncates the list in place, but I'm not sure if either of these are the case. I would also suppose del is the better route, because it seems like the natural use of the function.
Is it faster to truncate a list by making it equal to a slice, or by using del?
1.2
0
0
2,584
29,196,096
2015-03-22T15:21:00.000
1
0
0
0
python,mysql,mysql-python
29,199,028
2
false
0
0
For this purpose you can use a persistent connection or a connection pool. Persistent connection - a very, very, very bad idea. Don't use it! Just don't! Especially when you are talking about web programming. Connection pool - better than a persistent connection, but with no deep understanding of how it works, you will end up with the same problems as with a persistent connection. Don't do optimization unless you really have performance problems. On the web, it's common to open/close a connection per page request. It works really fast. You'd do better to think about optimizing SQL queries, indexes, and caches.
1
0
0
I have a MySQLdb installation for Python 2.7.6. I have created a MySQLdb cursor once and would like to reuse the cursor for every incoming request. If 100 users are simultaneously active and doing a db query, does the cursor serve each request one by one and block others? If that is the case, is there way to avoid that? Will having a connection pool will do the job in a threadsafe manner or should I look at Gevent/monkey patching? Your responses are welcome.
Is a MySQLdb cursor for Python blocking in nature by default?
0.099668
1
0
633
29,197,996
2015-03-22T18:11:00.000
0
0
1
0
c#,python,windows-8,windows-store-apps,ironpython
29,978,493
1
true
0
1
I don't think MS allows this functionality. As an alternative, you can have the user put their mouse over the window and press a keyboard shortcut (which is what I am doing). That is the best one can do.
1
0
0
Background I am working on a program that needs to find a list of open Metro apps. I originally tried using pure python with ctypes and using win32 api. Unfortunately, I couldn't get the names of Metro apps. So I moved on to IronPython thinking I could leverage a .net function or two to get what I want. No luck. Where I am at I can easily get a list of running processes and their PID, but filtering the Metro apps from the non-metro apps is proving near impossible. Also, I need to get the HWND of that window so I can interact with it. What Won't Work EnumWindows doesn't seem to work. FindWindowEx seems promising, but I am not sure how to do this. Thanks in advance. EDIT: I am now able to get what I want using IsImmersiveProcess, but the process list is doesn't include the Windows Apps.
IronPython-List Open Metro Apps only
1.2
0
0
121
29,199,958
2015-03-22T21:16:00.000
0
0
1
0
python,validation,parameters
29,200,223
1
false
0
0
In general, I think it does not hurt to validate input parameters to a method every time it is called, even if it is unlikely that the parameters are wrong. The computational overhead is negligible in most cases (say checking type with if type(x) is not int: raise TypeError takes ~100 ns on my laptop when the condition holds). Besides, I'm not sure that doing conditional validation is worth it with regard to code maintainability (it just makes things more complicated). Of course, this is also problem specific. For instance, if you have a computationally critical function that is called repeatedly (say more than a million times) within a loop, it is probably worth skipping the validation step and checking the parameters beforehand.
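The kind of check under discussion is tiny; a sketch, with the function and messages purely illustrative:

```python
def get_item(items, index):
    # Cheap guard clauses: fail fast with a clear message
    if not isinstance(index, int):
        raise TypeError("index must be an int, got %r" % type(index))
    if not 0 <= index < len(items):
        raise IndexError("index out of range: %d" % index)
    return items[index]
```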
1
2
0
I have a bit of a stylistic question about parameter validation. (Using Python) Say I have a method with a parameter a, which needs to be an int, and maybe needs to be in a certain range - i.e. a list index or something. I could use assertions/other validation to ensure this, but what if I only call the function from one or two places, and the parameter is validated to the proper value/type there? Maybe its possible that the function could be called from other places in the future, but for now, it is 'basically' impossible to have an invalid parameter passed. It feels unnecessary to add validation code to something that doesn't really need it, but it also seems sloppy to leave the function open to raising an uncaught error if its called from somewhere different. Sorry if this is too abstract - I expect the answer may just be "it depends" but I was curious if there was a general consensus about this.
Making assumptions about parameter types and values?
0
0
0
34
29,200,214
2015-03-22T21:43:00.000
0
0
1
0
python,widget
29,200,555
1
false
0
1
Now that I think about it, as long as I do a .place_forget() prior to any .place that should take care of any extra instances in memory. Can anyone confirm?
1
0
0
Up until now to move widgets I have been issuing another .place on it. I wonder if that is creating another instance of the widget in memory or not? If it is creating another copy in memory, what is the correct way to move a widget when using place? Do I need to keep destroying the widget and placing it perhaps over and over?
Python: When using place what is the correct way to move widgets?
0
0
0
13
29,203,302
2015-03-23T04:31:00.000
-1
0
0
1
google-app-engine,google-cloud-storage,google-app-engine-python
29,222,762
1
false
1
0
I think it happens because when the get_serving_url service resizes the image, it always resizes from the longest side of the image, keeping the aspect ratio the same. If you have an image of 1600x2400, then the resized image is 106x160. In your case one of the images is 306x408 (which is correct), as the image is resized from the height, and the other image is 360x270 (in which the orientation changes), where the image is resized from the width. I think in the latter the orientation is changed just to keep the aspect ratio the same.
1
1
0
We are developing an image sharing service using GAE. Many users have reported since last week that "portrait images are oriented in landscape". We found out that from a specific point in time, the specification of images uploaded and distributed through GAE changed. The specs seem to have changed around 3/18 03:25 (UTC). The Exif "orientation" is not properly applied. We are using GAE/Python. We save images uploaded by the users to Google Cloud Storage, then use the URL we get with get_serving_url to distribute them. Is this problem temporary? Also, is it possible to return to the specs before 3/18 03:22 (UTC)?
Orientation error for images uploaded to GAE (GCS + get_serving_url)
-0.197375
0
0
218
29,204,576
2015-03-23T06:44:00.000
2
0
0
0
python,windows,paramiko
29,214,110
1
false
0
0
It depends on the protocol you are using: if the file is big, use UDP; if the file is small, use TCP or SSH. You don't necessarily need paramiko or fabric to communicate with another computer, since they are for SSH connections. If you know the protocol, then it is easier to communicate.
1
0
0
What would be the best way of downloading / uploading files / directory to / from remote windows server and local windows machine using python scripting language? Modules I heard of are paramiko and fabric... Apart from these any other good option/choice?
How to download files from remote windows server
0.379949
0
1
608
29,205,574
2015-03-23T08:07:00.000
0
0
0
0
python,bokeh
61,935,681
2
false
0
0
First you save your grid in an object, let's say grid; then you confirm that it is a grid and export it (here bly and bpl are presumably aliases for bokeh.layouts and bokeh.plotting). Example:

grid = bly.gridplot(graficas, ncols=3)  # here you save your grid
bpl.show(grid)                          # here you show your grid

from bokeh.io import export_png         # you need this library to export

# Export the grid
export_png(grid, filename="your path/name.png")
1
4
1
I am using bokeh (0.8.1) in combination with the ipython notebook to generate GridPlots. I would like to automatically store the generated GridPlots to a single png (hopefully pdf one day) on disk, but when using "Preview/Save" it cycles through all of the figures asking me to store them separately. Is there a more efficient way?
How to save a bokeh gridplot as single file
0
0
0
1,852
29,208,955
2015-03-23T11:20:00.000
0
1
0
0
oauth,openid,python-social-auth
30,144,643
1
false
1
0
You could simply use the Yahoo site's favicon. It's already square and gets as close to official as it can get.
1
0
0
I want my users to be able to log in using all common OpenIDs. But there seems to be a forest of clauses on how to use logos from Google, Facebook, Yahoo and Twitter. Actually I'd prefer a wider button with the text on it, but it seems that all these buttons don't have the same aspect ratio. And some of them I am not allowed to scale. Also some pages switched to JavaScript buttons, which seems incredibly stupid, because it causes the browser to download all the JavaScript and the page looks like 1990, where you can watch individual elements loading. So finally I surrendered and went for the Stack Overflow approach of using square logos with the text "login with" next to them. But now I can't find an official source for a square Yahoo icon. The question: can you recommend any articles or do you have any tips on how to make OpenID look uniform? And: have you seen an official source for a square icon that I'm allowed to use? PS: I'm using python-social-auth for Django.
How to make most common OpenID logins look the same?
0
0
0
21
29,212,385
2015-03-23T14:08:00.000
1
0
0
0
python,numpy
29,212,482
2
false
0
0
Your p array is in the wrong order. You should start with the coefficient of the highest exponent. Try with p=[1,0,2,1].
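To check: with the coefficients in highest-power-first order, numpy returns the expected values:

```python
import numpy as np

p = [1, 0, 2, 1]                 # x**3 + 0*x**2 + 2*x + 1
print(np.polyval(p, [0, 1, 2]))  # -> [ 1  4 13]
```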
1
1
1
I don't know much about Python and I'm trying to use it to do some simple polynomial interpolation, but there's something I'm not understanding about one of the built-in functions. I'm trying to use polyval(p,x) to evaluate a polynomial p at x. I made an example polynomial p(x) = 1 + 2x + x^3, I created an array p = [1,2,0,1] to define it and I want to know the values at x = 0,1,2 so I created another array x = [0,1,2]. Doing polyval(p,x) gave me an output of [1, 4, 17]. p(0) and p(1) are correct, but p(2) should be 13, not 17. Why is it giving me 17?
Polyval(p,x) to evaluate polynomials
0.099668
0
0
5,765
29,214,605
2015-03-23T15:51:00.000
1
1
0
0
python,openshift,flask-restful
29,459,242
1
false
0
0
In OpenShift, if you want your data to be persistent you have to save it in 'OPENSHIFT_DATA_DIR'. I fixed the problem by placing the knowledge base file in this directory instead and referencing this location in my code.
1
1
0
I am working on a customer care chatbot based on aiml using flaskrestful and pyaiml. My problem is when I test it locally it is working, i.e. responding from the aiml knowledge base, when I put it on openshift the python script won't find the aiml file and I don't understand why.
openshift won't load aiml file
0.197375
0
0
59
29,215,850
2015-03-23T16:52:00.000
1
1
0
0
python-2.7,automated-tests,robotframework
29,216,521
1
false
0
0
In the "Run" Tab, put "--noncritical your_tags" in the "Arguments" box. That should do it.
1
0
0
Let's assume I want to set a test case's status to non-critical, so that the generated report displayed after execution excludes this test case from its critical test cases. If I want to do that from the command line, I would write pybot --noncritical, but what if I want to do the same in the RIDE tool?
How to set the criticality of certain test case in RIDE?
0.197375
0
0
677
29,220,054
2015-03-23T20:47:00.000
0
0
0
0
python,graph,reportlab,pydot
29,267,352
2
false
0
0
After an endless search, I figured there is no way. So, in the end I exported the pydot graph as a PDF, imported the PDF via PyPDF2 and then merged it with the ReportLab document. Far from ideal, but it does the job.
2
0
0
I created a graph in pydot, i.e. the Python graphviz interface. Now, I would like to insert this graph into my ReportLab report. Is there any direct way of doing this?
How to insert a pydot chart into reportlab?
0
0
0
90
29,220,054
2015-03-23T20:47:00.000
0
0
0
0
python,graph,reportlab,pydot
31,353,473
2
false
0
0
You can use a similar (but perhaps slightly better) path with pdfrw -- you can run pdfrw in conjunction with reportlab, and import the PDF you exported from pydot, and use it as a form XObject (similar to an image) on the reportlab canvas. There are a few examples showing how to do this sort of thing in the pdfrw examples/rl1 subdirectory. Disclaimer: I am the pdfrw author.
2
0
0
I created a graph in pydot, i.e. the Python graphviz interface. Now, I would like to insert this graph into my ReportLab report. Is there any direct way of doing this?
How to insert a pydot chart into reportlab?
0
0
0
90
29,220,747
2015-03-23T21:27:00.000
1
0
1
1
python,configuration,docker
29,226,897
1
true
0
0
I guess that will very much depend. It might be useful to distinguish between two types of configuration: the one which defines the way the contained application code functions, and the one which defines infrastructure (db credentials, collaborator endpoints, etc.). The functional configuration would more naturally be a part of the image, as often you would like to minimize the variation in the behavior of the resulting containers. The infrastructure configuration, on the other hand, has to be specified at run time for a particular instance (container). The more Docker-like way is to use environment variables, but in the end it can be anything that suits your needs.
1
1
0
Much like data volumes, the configuration for a Python app should persist across changes in the app container. A file in a separate data container? A database in a separate data container? I realize there are multiple ways to store the configuration information. But what patterns are being used in today's Dockerized web apps?
Where to store Dockerized python application configuration
1.2
0
0
58
29,221,836
2015-03-23T22:44:00.000
0
1
0
1
python,openshift
29,236,634
2
false
0
0
You can not use the "yum" command to install packages on OpenShift. What specific issue are you having? I am sure that at least some of those packages are already installed in OpenShift online already (such as wget). Have you tried running your project to see what specific errors you get about what is missing?
1
0
0
Hello, I want to install these dependencies in OpenShift for my app: yum -y install wget gcc zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel libffi-devel libxslt libxslt-devel libxml2 libxml2-devel openldap-devel libjpeg-turbo-devel openjpeg-devel libtiff-devel libyaml-devel python-virtualenv git libpng12 libXext xorg-x11-font-utils. But I don't know how; is it through rhc? If so, how?
How to install dependencies in OpenShift?
0
0
0
461
29,222,699
2015-03-24T00:03:00.000
2
0
0
0
python,wxpython,wxwidgets
29,233,860
1
true
0
1
You might be able to steal focus by calling Raise on the frame. There is also the wx.STAY_ON_TOP style flag that could be applied. Then you might be able to just use the frame's Show and Hide methods to make it work. Depending on what exactly you want to do, you might take a look at the ToasterBox widget. I can't recall if it actually steals focus, but it just pops up and goes away on its own, which is handy in some cases.
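A rough sketch of the Show/Raise/Hide idea (class and method names are illustrative):

```python
import wx

class HUD(wx.Frame):
    def __init__(self):
        super(HUD, self).__init__(None, style=wx.DEFAULT_FRAME_STYLE | wx.STAY_ON_TOP)

    def pop(self):
        self.Show()
        self.Raise()   # ask the window manager to bring the frame to the front

    def dismiss(self):
        self.Hide()

if __name__ == '__main__':
    app = wx.App(False)
    HUD().pop()
    app.MainLoop()
```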
1
1
0
I want to make a heads-up display that pops up it's frame, taking focus from the previous application and then gives focus back to that application when it's done. wxPython frames have Hide and Show methods that work but don't focus the application. Also, there's SetFocus which you'd think would do it but doesn't. Update I found a nasty hack that works. Rather than hide and show the frame, you save the app state and close the app instead of hiding. Then when you want to show again you spark a new one in a new multiprocessing.Process. Hardly ideal though. I hope someone has something better. Clarification To clarify, I'm trying to get the keyboard focus, not just to get the frame on top of the stack. This issue might be specific to OS X.
How can a wxpython frame "steal" and "return" focus similar to the Dash app?
1.2
0
0
542
29,223,787
2015-03-24T02:04:00.000
1
0
0
0
python,text,input,io
29,223,811
2
false
0
0
Try putting input("Press Enter to continue") between printing each wall of text.
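A minimal sketch of that pattern (Python 3's input; on Python 2 use raw_input):

```python
sections = [
    "You wake up in a dark room...",
    "A door creaks open to the north.",
]
for text in sections:
    print(text)
    input("Press Enter to continue...")
```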
1
0
0
I am currently working on a game and I was wondering if there was any way to execute commands like a text file by user input? I would like to make it where the text doesn't pop up all at once, but where you could do something like "Press any key to continue" and when they do that, the next wall of text appears. Any help is appreciated. Thanks.
Python: Execute a line by user input?
0.099668
0
0
196
29,226,186
2015-03-24T06:19:00.000
0
0
1
0
autocomplete,pycharm,ipython-notebook
29,228,795
1
true
0
0
OK, by trial and error I got a fix. For some reason, if you put %pylab inline or %matplotlib inline in the same cell as your imports, this will mess up the code autocompletion. Put %pylab in its own cell and it fixes the problem.
1
0
0
Hi, I've been trying out PyCharm's integrated IPython and at first the code autocompletion was working just fine, but now it has stopped working. It still works in a .py file inside the IDE, but not in .ipynb files. Any idea why this would be happening?
Pycharm 4 code autocompletion does not work in IPython
1.2
0
0
124
29,226,713
2015-03-24T07:03:00.000
4
0
1
0
python,dictionary
29,227,080
1
true
0
0
Python doesn't offer a hybrid dict natively. For the most part, it has no need for a blend between list-style storage and hashtable-style storage. The dict methods are thread-safe in the sense that they don't break the dict invariants when used in a multi-threaded environment (you won't segfault or break the dict). However, a number of the operations aren't atomic (hashing and equality tests, for example, can make callbacks into pure python code), so you can avoid race conditions by using locks around the relevant operations.
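A minimal sketch of wrapping the compound (non-atomic) operations in a lock; the class is illustrative, not a standard-library type:

```python
import threading

class LockedDict:
    """Serializes compound read-modify-write operations with a lock."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def setdefault(self, key, default):
        with self._lock:
            return self._data.setdefault(key, default)

    def increment(self, key):
        with self._lock:  # get + set as one atomic step
            self._data[key] = self._data.get(key, 0) + 1
```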
1
1
0
I am new to Python. Currently I am developing a Python project. For it, there is a need for thread safe hybrid dictionary. Is there any thread safe dictionary for python like hybrid dictionary in C#? I would appreciate some good answers.
Can we have Dictionary that is concurrent and hybrid in Python?
1.2
0
0
230
29,228,969
2015-03-24T09:26:00.000
0
0
1
0
python,analytics,jython,spss,spss-modeler
29,246,574
1
false
0
0
Modeler scripting uses Jython, not standard Python, but you should be able to load any library that is compatible with Jython. SPSS Statistics uses standard Python 2.7.
1
0
0
I'm trying to build an analytic model with SPSS Modeler 16.0, and there are some parts that are easy to do with Python. Now my question is: what are the limits for using Python or Jython scripting in SPSS? Can I import all the libraries that I want, just as in a classic Python program?
Can I add python script to SPSS Modeler 16?
0
0
0
805
29,229,273
2015-03-24T09:42:00.000
3
1
0
1
python,windows,python-2.7
36,078,931
5
false
0
0
Here is another check to make, which helped me figure out what was going on. I switched from the 32bit Anaconda to the 64bit version. I deinstalled, downloaded then reinstalled, but several things didn't get cleaned up properly (quick launch stuff, and some registry keys). The problem on my side was that the default installation path changed, from C:\Anaconda to C:\Anaconda2. I first tried the assoc and ftype tricks, everything was fine there. However, the HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command registry key was pointing to the old Anaconda path. As soon as I fixed this, python.exe showed up when I tried associating with "Open with" and everything went back to normal. I also added the %* at the end in the registry key.
2
3
0
I've installed Python 2.7 (64-bit) on my PC with Win7 (64-bit) without problems, but I'm not able to run *.py scripts via the DOS shell without declaring the full Python path. Let me explain better: if I type D:\ myscript.py it doesn't work; the script is opened with WordPad. If I type D:\ C:\Python27 myscript.py it works and runs correctly. I tried to change the default application for *.py files via the Win7 GUI (control panel etc.) but without success. Python is not present in the list of available software, and in any case, even with a manual setting I'm not able to associate python.exe with *.py files. I've checked my environment variables but found no problem (the Python path is declared in Path = C:\Python27\;C:\Python27\Scripts). I've also tried to modify HKEY_CLASSES_ROOT->Applications->python.exe->shell->open->command: old register value "C:\Python27\python.exe" "%1", new register value "C:\Python27\python.exe" "%1" %*, without success. Any suggestion? Thanks
Impossible to set python.exe to *.py scripts on Win7
0.119427
0
0
1,995
29,229,273
2015-03-24T09:42:00.000
0
1
0
1
python,windows,python-2.7
56,543,680
5
false
0
0
@slv's answer is good and helped me a bit with solving this problem. Anyhow, since I had previous installations of Python before this error occurred for me, I might have to add something to this. One of the main problems here was that the directory of my Python installation changed. So, I opened regedit.exe and followed these two steps: I searched the entire registry for .py, .pyw, .pyx and .pyc (hopefully I did not forget to mention any here). Then, I radically deleted all occurrences I could find. I searched the entire registry for my old Python installation path (e.g. C:\Users\Desktop\Anaconda3). Then I replaced this path with my new installation path (e.g. C:\Users\Desktop\Miniconda3). Thereby, I also came across and replaced HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command, which @slv mentioned. Afterwards, it was possible again to connect a .py file from the Open with... menu with my python.exe.
2
3
0
I've installed Python 2.7 (64-bit) on my PC with Win7 (64-bit) without problems, but I'm not able to run *.py scripts via the DOS shell without declaring the full Python path. Let me explain better: if I type D:\ myscript.py it doesn't work; the script is opened with WordPad. If I type D:\ C:\Python27 myscript.py it works and runs correctly. I tried to change the default application for *.py files via the Win7 GUI (control panel etc.) but without success. Python is not present in the list of available software, and in any case, even with a manual setting I'm not able to associate python.exe with *.py files. I've checked my environment variables but found no problem (the Python path is declared in Path = C:\Python27\;C:\Python27\Scripts). I've also tried to modify HKEY_CLASSES_ROOT->Applications->python.exe->shell->open->command: old register value "C:\Python27\python.exe" "%1", new register value "C:\Python27\python.exe" "%1" %*, without success. Any suggestion? Thanks
Impossible to set python.exe to *.py scripts on Win7
0
0
0
1,995
29,229,735
2015-03-24T10:08:00.000
0
0
0
0
python,django,internationalization,translation,gettext
33,277,961
3
false
1
0
I think the problem lies in your MIDDLEWARE_CLASSES. The thing is, there are some middlewares that might change your request, including adding a language prefix. Especially when you use AJAX calls for querying extra template data translated by ugettext, gettext, etc.
2
1
0
I am trying to translate my Django site into other languages, but translation in Python code doesn't work, while translation in templates using the trans tag works as expected. I have tried ugettext, gettext, gettext_lazy and ugettext_lazy, and every time I got the original untranslated strings. My sources are all in UTF-8 encoding; the original strings are in Ukrainian.
django translation doesn't work but translation in templates works
0
0
0
3,112
29,229,735
2015-03-24T10:08:00.000
0
0
0
0
python,django,internationalization,translation,gettext
33,302,047
3
false
1
0
ugettext_lazy will not work if the string contains non-Latin symbols. So in my case the original strings must be Unicode objects.
2
1
0
I am trying to translate my Django site into other languages, but translation in Python code doesn't work, while translation in templates using the trans tag works as expected. I have tried ugettext, gettext, gettext_lazy and ugettext_lazy, and every time I got the original untranslated strings. My sources are all in UTF-8 encoding; the original strings are in Ukrainian.
django translation doesn't work but translation in templates works
0
0
0
3,112
29,230,333
2015-03-24T10:36:00.000
0
0
0
1
python,django,web-applications,stress-testing,djcelery
29,233,194
1
false
1
0
I think there is no need to write your own stress-testing script. I have used www.blitz.io for stress testing. It is set up in minutes, easy to use, and it makes beautiful graphs. It has a 14-day trial, so you can just test the heck out of your system for 14 days for free. This should be enough to find all your bottlenecks.
1
0
0
I have a web application running on Django, wherein an end user can enter a URL to process. All the processing tasks are offloaded to a celery queue which sends a notification to the user when the task is completed. I need to stress test this app with these goals: to determine breaking points or safe usage limits; to confirm intended specifications are being met; to determine modes of failure (how exactly a system fails); to test stable operation of a part or system outside standard usage. How do I go about writing my script in Python, given the fact that I also need to take the offloaded celery tasks into account?
How to write a stress test script for asynchronous processes in python
0
0
0
988
29,232,101
2015-03-24T11:58:00.000
0
0
1
0
python,windows,anaconda
43,085,233
3
false
0
0
Install the newest version of Anaconda from their website. This will upgrade your Anaconda and Python as well. Also, uninstall the previous version of Anaconda.
1
6
0
After successful installation of Anaconda on Windows 7 I realized the default Python version is 2.7.8. However I need 2.7.9. So how do I upgrade?
How do I upgrade python 2.7.8 to 2.7.9 in Anaconda without conflicting other components in its environment?
0
0
0
6,678