Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
26,266,186 | 2014-10-08T20:48:00.000 | 0 | 0 | 1 | 0 | python | 26,276,515 | 1 | false | 0 | 0 | To answer my own question: you need to create a contradiction. Create two variables (A, B). To make the computation unsolvable, build a boolean expression that no assignment of values can satisfy:
(A or B) AND (A or Not B) AND (Not A or B) AND (Not A or Not B)
A truth table would show that no combination of values of A and B could make the above expression true. To represent this for pycosat it would look like:
[[1, 2], [1, -2], [-1, 2], [-1, -2]] | 1 | 0 | 0 | I have a class assignment in which I need to write a function that tests for SAT using the pycosat library. I'm having difficulty trying to figure out a set of parameters that would return "UNSAT" from the library. Can someone please help me find a set of parameters that are not "solvable"? Looking over the unit tests for the library, the only instance I can find is [[1], [-1]]
The assignment is much more complex, and I'm only looking to understand the SAT solver that is used to test my assignment. | Python SAT with pycosat | 0 | 0 | 0 | 783 |
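A quick way to check such clause lists, assuming the pycosat package is installed: pycosat.solve returns the string "UNSAT" for unsatisfiable input, or a satisfying assignment otherwise. A minimal sketch:

```python
import pycosat

# Each inner list is a clause; positive ints are variables, negatives their negations.
unsat_clauses = [[1, 2], [1, -2], [-1, 2], [-1, -2]]  # forces both A and not-A

print(pycosat.solve(unsat_clauses))   # prints: UNSAT
print(pycosat.solve([[1], [-1]]))     # the unit-test example, also UNSAT
```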
26,268,586 | 2014-10-09T00:38:00.000 | 0 | 1 | 1 | 1 | python,ruby,ipython,ipython-notebook | 33,757,780 | 1 | false | 0 | 0 | require './your_program' works well for me | 1 | 3 | 0 | Is there a way to run a Ruby program with iruby? I want to run a script instead of entering my code in the iruby notebook console.
I assume that iruby works the same way as ipython. | Run a Ruby or Python script from iruby or ipython notebook? | 0 | 0 | 0 | 435 |
26,269,366 | 2014-10-09T02:27:00.000 | 0 | 1 | 1 | 1 | python,multithreading,multiprocessing,ipython,interpreter | 26,269,376 | 1 | false | 0 | 0 | One option is to set a variable (e.g. an environment variable or command-line option) when debugging, as sketched below. | 1 | 0 | 0 | I'm working on a project that spins off several long-running workers as processes. Child workers catch SIGINT and clean up after themselves - based on my research, this is considered a best practice, and works as expected when terminating scripts.
I am actively developing this project, which means that I am regularly testing changes in the interpreter. When I'm working in an interpreter, I often hit CTRL+C to clear currently written text and get a fresh prompt. Unfortunately, if I do this while a subprocess is running, SIGINT is sent to that worker, causing it to terminate.
Is there a solution to this problem other than "never hit CTRL+C in your interpreter"? | Handling SIGINT (ctrl+c) in script but not interpreter? | 0 | 0 | 0 | 47 |
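One way to apply that suggestion is to gate the SIGINT handler behind an environment variable; the variable name here is made up for illustration:

```python
import os
import signal

def cleanup_handler(signum, frame):
    print("worker cleaning up...")
    raise SystemExit(0)

# Only trap SIGINT in normal runs; when debugging interactively,
# export MYAPP_DEBUG=1 so Ctrl+C behaves the way the interpreter expects.
if not os.environ.get("MYAPP_DEBUG"):
    signal.signal(signal.SIGINT, cleanup_handler)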
26,269,514 | 2014-10-09T02:50:00.000 | 1 | 1 | 0 | 0 | python,ssl,network-programming,network-security | 26,291,779 | 3 | true | 0 | 0 | What is the safest/smartest way to securely transmit data from one server to another (think government/bank-level security)
It depends on your threat model, but intrasite VPN is sometimes used to tunnel traffic like this.
If you want to move up in the protocol stack, then mutual authentication with the client pinning the server's public key would be a good option.
In contrast, I used to perform security architecture work for a US investment bank. They did not use anything - they felt the leased line between data centers provided enough security. | 3 | 0 | 0 | I need to build a Python application that receives highly secure data, decrypts it, and processes & stores in a database. My server may be anywhere in the world so direct connection is not feasible. What is the safest/smartest way to securely transmit data from one server to another (think government/bank-level security). I know this is quite vague but part of the reason for that is to not limit the scope of answers received.
Basically, if you were building an app between two banks (this has nothing to do with banks but just for reference), how would you securely transmit the data?
Sorry, I should also add SFTP probably will not cut it since this python app must fire when it is pinged from the other server with a secure data transmission. | Most secure server to server connection | 1.2 | 0 | 1 | 697 |
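For the mutual-authentication option, a server-side sketch with Python's ssl module (certificate file names are placeholders, and this needs a Python new enough to have ssl.SSLContext, i.e. 2.7.9+/3.x; proper key pinning would additionally compare the peer certificate against a stored copy):

```python
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")  # placeholders
ctx.load_verify_locations(cafile="trusted_clients.pem")           # placeholder CA
ctx.verify_mode = ssl.CERT_REQUIRED  # reject peers without a valid client cert

sock = socket.socket()
sock.bind(("0.0.0.0", 8443))
sock.listen(5)

conn, addr = sock.accept()
tls_conn = ctx.wrap_socket(conn, server_side=True)
print("peer cert:", tls_conn.getpeercert())  # inspect / pin the peer here
```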
26,269,514 | 2014-10-09T02:50:00.000 | 1 | 1 | 0 | 0 | python,ssl,network-programming,network-security | 26,269,821 | 3 | false | 0 | 0 | Transmission and encryption need not happen together. You can get away with just about any delivery method, if you encrypt PROPERLY!
Encrypting properly means using large, randomly generated keys, using HMACs (INSIDE! the encryption) and checking for replay attacks. There may also be denial-of-service attacks, timing attacks and so forth, though these may also apply to any encrypted connection. Check for data coming in out of order, late, or more than once. There is also the possibility (again, depending on the situation) that your "packets" will leak data (e.g. transaction volumes, etc).
DO NOT, UNDER ANY CIRCUMSTANCES, MAKE YOUR OWN ENCRYPTION SCHEME.
I think that public key encryption would be worthwhile; that way if someone collects copies of the encrypted data, then attacks the sending server, they will not have the keys needed to decrypt the data.
There may be standards for your industry (e.g. banking industry), to which you need to conform.
There are VERY SERIOUS PITFALLS if you do not implement this sort of thing correctly. If you are running a bank, get a security professional. | 3 | 0 | 0 | I need to build a Python application that receives highly secure data, decrypts it, and processes & stores in a database. My server may be anywhere in the world so direct connection is not feasible. What is the safest/smartest way to securely transmit data from one server to another (think government/bank-level security). I know this is quite vague but part of the reason for that is to not limit the scope of answers received.
Basically, if you were building an app between two banks (this has nothing to do with banks but just for reference), how would you securely transmit the data?
Sorry, I should also add SFTP probably will not cut it since this python app must fire when it is pinged from the other server with a secure data transmission. | Most secure server to server connection | 0.066568 | 0 | 1 | 697 |
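To make "use vetted primitives, don't roll your own" concrete: the third-party cryptography package's Fernet recipe layers AES encryption with an HMAC and timestamps each token, so a ttl check can reject stale or replayed messages. A minimal sketch:

```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()   # large random key; distribute it out-of-band
f = Fernet(key)

token = f.encrypt(b"wire transfer: 100")  # AES-CBC + HMAC, timestamped

try:
    print(f.decrypt(token, ttl=60))   # reject tokens older than 60 seconds
except InvalidToken:
    print("tampered, wrong key, or too old")
```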
26,269,514 | 2014-10-09T02:50:00.000 | 0 | 1 | 0 | 0 | python,ssl,network-programming,network-security | 26,320,472 | 3 | false | 0 | 0 | There are several details to be considered, and I guess the question is not detailed enough to provide a single straight answer. But yes, I agree, the VPN option is definitely a safe way to do it, provided you can set up a VPN. If not, the SFTP protocol (not FTPS) would be the next best choice, as it is PCI-DSS compliant (secure enough for banking) and HIPAA compliant (secure enough to transfer hospital records) and - unlike FTPS - the SFTP protocol is a subsystem of SSH and only requires a single open TCP port on the server side (22). | 3 | 0 | 0 | I need to build a Python application that receives highly secure data, decrypts it, and processes & stores in a database. My server may be anywhere in the world so direct connection is not feasible. What is the safest/smartest way to securely transmit data from one server to another (think government/bank-level security). I know this is quite vague but part of the reason for that is to not limit the scope of answers received.
Basically, if you were building an app between two banks (this has nothing to do with banks but just for reference), how would you securely transmit the data?
Sorry, I should also add SFTP probably will not cut it since this python app must fire when it is pinged from the other server with a secure data transmission. | Most secure server to server connection | 0 | 0 | 1 | 697 |
26,272,360 | 2014-10-10T07:10:00.000 | 1 | 0 | 1 | 0 | python,csv | 26,272,399 | 2 | false | 0 | 0 | Convert the number to a string with the formatting operator; in your case: "%09d" % number. | 1 | 1 | 0 | I'm trying to write a CSV file from JSON data. During that, I want to write '001023472' but it's being written as '1023472'. I have searched a lot but didn't find an answer.
The value is of type string before writing; the problem occurs while writing it into the file.
Thanks in advance. | Write Number as string csv python | 0.099668 | 0 | 0 | 2,718 |
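Applying that to the CSV-writing step, a minimal Python 2 sketch (the filename is arbitrary):

```python
import csv

number = 1023472
with open("out.csv", "wb") as fh:       # "wb" for the Python 2 csv module
    writer = csv.writer(fh)
    writer.writerow(["%09d" % number])  # writes 001023472, zeros preserved
```

Note that a spreadsheet program may still strip the leading zeros on import; the CSV file itself will contain them.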
26,280,838 | 2014-10-09T14:19:00.000 | 0 | 0 | 0 | 0 | python,math,signal-processing | 26,286,979 | 3 | false | 0 | 0 | Assuming that you've loaded multiple readings of the PSD from the signal analyzer, try averaging them before attempting to find the band edges. If the signal isn't changing too dramatically, the averaging process might smooth away the peaks, valleys and noise within the passband, making it easier to find the edges. This is what many spectrum analyzers do to produce a smoother PSD.
In case that wasn't clear, assume that each reading gives you 128 tuples of frequency and power and that you capture 100 of these buffers of data. Now average the 100 samples from bin 0, then the samples from bins 1, 2, ..., 127. Now try to locate the passband on this averaged data. It should be easier than on any single buffer. Note I used 100 as an example; if your data is very noisy it may require more, and if there isn't much noise, fewer.
Be careful when doing the averaging. Your data is in dB. To add the samples together in order to find an average, you must first convert the dB data back to linear power, sum, divide to find the average, and then convert the averaged power back into dB. | 1 | 0 | 1 | The data that I have is stored in a 2D list where one column represents a frequency and the other column is its corresponding dB. I would like to programmatically identify the frequency of the 3 dB points on either end of the passband. I have a few ideas on how to do this, but they all have drawbacks:
1. Find the maximum point, then the average of points in the passband, then find points about 3 dB lower.
2. Use the sympy library to perform numerical differentiation and identify the critical points/inflection points.
3. Use a histogram/bin function to find the amplitude of the passband.
Drawbacks:
1. Sensitive to spikes; not quite sure how to do this.
2. I don't understand the math involved, and the data is noisy, which could lead to a lot of false positives.
3. Correlating the amplitude values with list index values could be tricky.
Can you think of better ideas and/or ways to implement what I have described? | How can I find the break frequencies/3dB points from a bandpass filter frequency sweep data in python? | 0 | 0 | 0 | 1,113 |
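A sketch of the dB-aware averaging from the answer above, using NumPy (random data stands in for the analyzer buffers; the 100x128 shape mirrors the example):

```python
import numpy as np

# Assumed shape: 100 captured buffers x 128 frequency bins, values in dB.
buffers_db = np.random.normal(-40, 3, size=(100, 128))

linear = 10.0 ** (buffers_db / 10.0)   # dB -> linear power
avg_linear = linear.mean(axis=0)       # average each bin across buffers
avg_db = 10.0 * np.log10(avg_linear)   # back to dB

# The smoothed trace is easier to threshold at (peak - 3 dB).
peak = avg_db.max()
passband_bins = np.where(avg_db >= peak - 3.0)[0]
print(passband_bins[0], passband_bins[-1])  # indices of the band edges
```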
26,281,339 | 2014-10-09T14:42:00.000 | 0 | 0 | 1 | 0 | python,windows,virtualenv | 26,281,449 | 1 | false | 0 | 0 | My suggestion would be to find where vcvarsall.bat is located on your computer. Check your path environment variable to see if that directory is there. Then, check path as it stands inside your virtualenv, and see if activating your virtualenv removed that directory from your path. | 1 | 0 | 0 | On my Windows7 (64bit) computer, I installed Python 2.7 from python.org and did a pip install of ipython, pyzmq, jinja2, and tornado in order to use the notebook. I also installed numpy and scipy, which at some point required to install a C++ compiler (I used VCForPython27). Everything worked just fine. Then, I did a pip install of virtualenv and virtualenvwrapper-win. I created a test virtualenv, with "mkvirtualenv test", and inside it I did "pip install ipython", which worked, but then "pip install pyzmq" failed with the message: "error: unable to find vcvarsall.bat". I did some research and the suggestions I found involve installing other software, such as a C++ compiler, which I already did. My question is, why do I need to do that? pyzmq was installed without problems from the "root" python install, but for some reason I cannot pip-install it inside virtualenvs. Perhaps some important environment variable is gone when workon test is called? Any suggestions? | pip install pyzmq in a virtualenv on windows7 | 0 | 0 | 0 | 311 |
26,282,986 | 2014-10-09T16:04:00.000 | 4 | 0 | 1 | 1 | python | 36,884,696 | 2 | false | 0 | 0 | Python 2 and 3 can safely be installed together. They install most of their files in different locations. So if the prefix is /usr/local, you'll find the library files in /usr/local/lib/pythonX.Y/ where X.Y are the major and minor version numbers.
The only point of contention is generally is the file python itself, which is generally a symbolic link.
Currently it seems most operating systems still use Python 2 as the default, which means that python is a symbolic link to python2. This is also recommended in the Python documentation.
It is best to leave it like that for now. Some programs in your distributions may depend on this, and might not work with Python 3.
So install Python 3 (3.5.1 is the latest version at this time) using your favorite package manager or compiling it yourself. And then use it by starting python3 or by putting #!/usr/bin/env python3 as the first line in your Python 3 scripts and making them executable (chmod +x <file>). | 1 | 4 | 0 | My OS is CentOS 7.0. It's embedded python version is 2.7, and I want to update it to Python 3.4.
When I run print sys.path, the output is:
['', '/usr/lib/python2.7/site-packages/setuptools-5.8-py2.7.egg',
'/usr/lib64/python27.zip', '/usr/lib64/python2.7',
'/usr/lib64/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk',
'/usr/lib64/python2.7/lib-old', '/usr/lib64/python2.7/lib-dynload',
'/usr/lib64/python2.7/site-packages',
'/usr/lib64/python2.7/site-packages/gtk-2.0',
'/usr/lib/python2.7/site-packages']
So, if I download Python 3.4, then run ./configure, make, make install, will it override all the Python-related files? Or, if I use
./configure --prefix=***(some path)
then is it safe to remove all the old Python files and directories?
In short, I hope someone can give me instructions on how to update to Python 3 on Linux. Thanks a lot. | how to update python 2.7 to python 3 in linux? | 0.379949 | 0 | 0 | 20,066 |
26,284,375 | 2014-10-09T17:21:00.000 | 0 | 0 | 1 | 0 | ipython-notebook | 28,226,995 | 2 | false | 0 | 0 | CTRL-Enter, Enter
(Release CTRL before the second Enter.)
CTRL-Enter 1) executes the cell, 2) keeps you in the same cell, and 3) puts you into command mode.
Enter puts you into input/editing mode again. | 2 | 0 | 0 | When I execute a cell in iPython by pressing command+enter, it moves to the next cell. Frequently I'd like to quickly go back to the previous cell and modify the code snippet, and would prefer not reaching for my mouse.
The up arrow brings me back to the cell I just executed. Is there a keyboard shortcut to continue typing in that cell?
Thanks. | iPython Notebook keyboard shortcut to continue editing code | 0 | 0 | 0 | 139 |
26,284,375 | 2014-10-09T17:21:00.000 | 1 | 0 | 1 | 0 | ipython-notebook | 26,285,195 | 2 | true | 0 | 0 | Shift-Enter executes the cell and selects the next one; Ctrl-Enter executes the cell and keeps it selected. Please read the Help -> Keyboard Shortcuts dialog. | 2 | 0 | 0 | When I execute a cell in iPython by pressing command+enter, it moves to the next cell. Frequently I'd like to quickly go back to the previous cell and modify the code snippet, and would prefer not reaching for my mouse.
The up arrow brings me back to the cell I just executed. Is there a keyboard shortcut to continue typing in that cell?
Thanks. | iPython Notebook keyboard shortcut to continue editing code | 1.2 | 0 | 0 | 139 |
26,285,393 | 2014-10-09T18:19:00.000 | 0 | 0 | 0 | 0 | c++,python-2.7,windows-xp,gettickcount | 26,286,120 | 1 | false | 0 | 1 | Guess I posted too fast, since I think I've got it sorted myself. For anyone else in the same position: I downloaded the Python source and compiled it with Windows XP flags in VS2005, and all seems well with the world. | 1 | 0 | 0 | I'm attempting to embed a python module within a larger c++ program (relevant details: VS2005, WinXP, Python 2.7). When I create a new instance of the class that includes 'python.h' and attempt to run my program I get the error message "The procedure entry point GetTickCount64 could not be located in the dynamic link library KERNEL32.dll".
I've read online that this happens because GetTickCount64 doesn't exist in XP so I made sure to add the correct windows headers to all of my files. However I still get the error and it occurs even if I comment out everything in the offending class except the include for Python.h.
So to get to an actual question. I was wondering if Python itself could be calling or including GetTickCount64 and if so how to stop it from doing so.
Thanks for any help! | GetTickCount64 error using Python and C++ | 0 | 0 | 0 | 505 |
26,286,599 | 2014-10-09T19:37:00.000 | 0 | 0 | 1 | 0 | python,pip | 26,286,699 | 1 | false | 0 | 0 | Answer: You cannot!
Why would you even consider such a thing?
This is why so many projects still use Python 2.7 instead of 3.4: many modules have not been ported yet.
However, more and more are being ported, especially the popular ones.
The only thing you can do is find an equivalent/replacement module that does the same thing.
Update:
For creating queues, you could leverage collections.deque or find implementations of queues that use deque. | 1 | 0 | 0 | I would like to install the mailer library with pip, but it imports the module Queue, which has been renamed to queue in Python 3. How can I install it using pip? | How to make pip install python 2.x modules in python 3.x | 0 | 0 | 0 | 88 |
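For this specific rename the usual compatibility shim is easy; a sketch (this fixes your own code, not an unported library pulled from PyPI):

```python
# Compatibility shim: try the Python 3 name first, fall back to Python 2's.
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

q = queue.Queue()
q.put("hello")
print(q.get())
```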
26,286,604 | 2014-10-09T19:37:00.000 | 0 | 1 | 0 | 0 | python,python-2.7,openshift,ftplib | 26,286,904 | 1 | false | 0 | 0 | Have you added it to your dependencies for your application?
Python now supports the use of requirements.txt to handle your dependencies, although Python handles things a little differently than PHP/Perl. Your requirements.txt can be in your app's root directory. If both your setup.py and requirements.txt exist within your repo, then both will be processed. | 1 | 0 | 0 | I have an application using Python 2.7 on OpenShift, and I am trying to copy a file using ftplib.
When I try it in a local virtualenv everything is OK, but after deployment on OpenShift I get a 500 on the website. Removing the code related to ftplib makes it work (it is enough to comment out import ftplib).
It looks like OpenShift is missing ftplib. Does anybody have a similar problem? How do I get it there? | ftplib is missing in openshift | 0 | 0 | 0 | 128 |
26,287,047 | 2014-10-09T20:04:00.000 | 3 | 0 | 1 | 0 | python,regex,string,replace | 26,287,152 | 1 | false | 0 | 0 | This might work. (?<!\d)\d{2}(?!\d) | 1 | 0 | 0 | I have a String input and I need to remove two digit numbers from the string, wherever it appears.
Example:
str = "abcdef12efgh abc12 abc12345defg 12 12abc abc123"
The required output should be:
abcdefefgh abc abc12345defg abc abc123
I am able to remove the two digits prefixed/suffixed by '<space>', but not 'abcdef12efgh'.
Is there a regex for doing this, or should I iterate through the string and remove the two-digit numbers, checking whether there is a non-numeric character before/after them? | Python regex to remove two digit number in between strings | 0.53705 | 0 | 0 | 1,049 |
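Applying the answer's lookaround pattern with re.sub, a minimal sketch (note the leftover space where a standalone number is removed):

```python
import re

s = "abcdef12efgh abc12 abc12345defg 12 12abc abc123"
# The lookbehind/lookahead ensure the two digits are not part of a longer run.
result = re.sub(r"(?<!\d)\d{2}(?!\d)", "", s)
print(result)  # "abcdefefgh abc abc12345defg  abc abc123"
```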
26,289,153 | 2014-10-09T22:43:00.000 | 12 | 0 | 1 | 0 | python,multiprocessing,ipython,pdb,ipdb | 35,398,114 | 3 | false | 0 | 0 | Sometimes, for debugging, you can change your code to use multiprocessing.dummy. This way no fork is done; it works with threads and is easier to debug.
Later on (after the bug is squashed...) you can switch back to multiprocessing.
multiprocessing.dummy offers the same API as multiprocessing, so it's an easy change... | 1 | 23 | 0 | I use ipdb.set_trace() whenever I need to set a breakpoint in my code. Right now, I'm trying to use it in a process that I've created using multiprocessing; while the code does stop, I can't type anything to continue debugging. Is there any way to get my stdin directed properly?
Ideally, I would imagine a new console opening every time a forked process is stopped for debugging; however, I don't think this is possible. | How to use ipdb.set_trace in a forked process | 1 | 0 | 0 | 2,836 |
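A sketch of the suggested swap, gated behind a hypothetical DEBUG flag:

```python
# Flip DEBUG to run workers as threads instead of processes while debugging.
DEBUG = True

if DEBUG:
    import multiprocessing.dummy as multiprocessing  # thread-backed, ipdb-friendly
else:
    import multiprocessing

def worker(x):
    # import ipdb; ipdb.set_trace()  # usable when DEBUG is True
    return x * x

if __name__ == "__main__":
    pool = multiprocessing.Pool(4)
    print(pool.map(worker, range(8)))
```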
26,290,757 | 2014-10-10T01:54:00.000 | 0 | 0 | 0 | 0 | wxpython,wxwidgets | 26,307,804 | 4 | false | 0 | 1 | Well then, I would suppose that the code is handling the resize event, allowing you to make the window larger, but not smaller.
Look for something like void handlerFuncName(wxSizeEvent& event)
Also look for wxEVT_SIZE | 2 | 1 | 0 | I have inherited a wxPython app (story of my life of late) and you cannot make the window any smaller (but you can make it larger). What could be preventing it from being resized smaller? What could I grep for to find what is causing this? The window contains a Notebook with 2 tabs. One tab has a Grid and the other has a Panel and 3 Grids. | wxPython: what is preventing me from making a window smaller? | 0 | 0 | 0 | 193 |
26,290,757 | 2014-10-10T01:54:00.000 | 0 | 0 | 0 | 0 | wxpython,wxwidgets | 26,292,854 | 4 | false | 0 | 1 | It is likely a call to SetMinSize for your Dialog/Frame. | 2 | 1 | 0 | I have inherited a wxPython app (story of my life of late) and you cannot make the window any smaller (but you can make it larger). What could be preventing it from being resized smaller? What could I grep for to find what is causing this? The window contains a Notebook with 2 tabs. One tab has a Grid and the other has a Panel and 3 Grids. | wxPython: what is preventing me from making a window smaller? | 0 | 0 | 0 | 193 |
26,293,138 | 2014-10-10T06:22:00.000 | 4 | 1 | 1 | 0 | python,pydev | 26,293,169 | 1 | true | 0 | 0 | Window -> Preferences -> PyDev -> Editor -> Templates: change your Empty template. | 1 | 1 | 0 | I'm using PyDev in Eclipse. When I create a new .py file, file info (author, creation date, etc.) is generated, like below:
"""
Created on Fri Oct 10 13:50:18 2014
@author: XXXX
"""
How to change the format? | auto-generation of python file info(author, create date etc.) | 1.2 | 0 | 0 | 2,065 |
26,293,481 | 2014-10-10T06:49:00.000 | 0 | 0 | 1 | 0 | django,python-2.7,django-nonrel | 26,293,980 | 1 | false | 1 | 0 | Django-nonrel isn't "compatible" with anything. It is actually a fork of Django, currently based on the 1.5 release. | 1 | 1 | 0 | I am using Django 1.7 and I want to use MongoDB, so for that I tried to install django-nonrel. Please let me know whether django-nonrel is compatible with Django 1.7. | does django-nonrel is compatible with django 1.7 | 0 | 1 | 0 | 250 |
26,293,638 | 2014-10-10T07:00:00.000 | 1 | 0 | 0 | 0 | python,checkbox,pyqt,pyqt4,pyside | 26,311,689 | 1 | true | 0 | 1 | Use QButtonGroup to put them in a group (as sketched below); you might want to derive a class from it and override the basic check/uncheck behaviour, depending on how you want the checkboxes to behave. | 1 | 0 | 0 | I am making a GUI for a script in Python, using PySide Qt. I have a couple of checkboxes in the main window and I was wondering if it is possible to make them dependent on each other.
Is there a comfortable way to do this? Otherwise I would just write a function to uncheck the others. | How to allow only one checkbox checked at a time? | 1.2 | 0 | 0 | 2,086 |
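A minimal sketch of the exclusive-group idea (PyQt4 assumed, matching the tags; the widget labels are invented):

```python
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
window = QtGui.QWidget()
layout = QtGui.QVBoxLayout(window)

group = QtGui.QButtonGroup(window)
group.setExclusive(True)  # only one button in the group may be checked

for label in ("Option A", "Option B", "Option C"):
    box = QtGui.QCheckBox(label)
    group.addButton(box)
    layout.addWidget(box)

window.show()
sys.exit(app.exec_())
```

Note that an exclusive group gives radio-button semantics: once a box is checked you can no longer uncheck them all, which is where the answer's suggested subclassing comes in.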
26,295,908 | 2014-10-10T09:16:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,django-paypal,paypal,django-oscar | 27,332,269 | 1 | true | 1 | 0 | The problem was that I was using the GET method for redirecting to PayPal; later I changed that to POST and it worked for me. I used Oscar for this; in its checkout view I wrote the redirection in the post function, which made it work. | 1 | 0 | 0 | I am using django-oscar-paypal and django-oscar 0.7. When I submit for payment, the page gets redirected to the PayPal site. After continuing that step and redirecting back to my site, it shows the basket is empty and the PayPal session is lost. I am stuck here. Please help me with this. | Paypal loses session data | 1.2 | 0 | 0 | 151 |
26,296,993 | 2014-10-10T10:11:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,cmd | 26,297,007 | 1 | false | 0 | 0 | It's totally possible and will work fine. | 1 | 0 | 0 | I have a Python (2.7) script and I would like to start it in multiple CMDs at the same time. Is this possible, or would the script crash? | Running a python script several times at the same time? | 0 | 0 | 0 | 70 |
26,299,460 | 2014-10-10T12:28:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,openssl,fips | 26,355,082 | 3 | false | 0 | 0 | I don't think SSLv3 is supported in FIPS mode. Try using SSLv23_server_method instead of SSLv3_method. | 1 | 0 | 0 | I have built Python with FIPS-capable OpenSSL. All things seem to be working fine, but the call to wrap_socket fails with the error "Invalid SSL protocol variant specified" when FIPS mode is enabled. This call succeeds when not in FIPS mode.
Debugging through the code, it was found that the call to SSL_CTX_new(SSLv3_method()) in _ssl.c returns null in FIPS mode, as a result of which the above-mentioned error occurs.
Any idea what might be causing this? Is it possible that some non-FIPS components are getting called? | call to ssl.wrap_socket fails with the error Invalid SSL protocol variant specified | 0 | 0 | 1 | 593 |
26,299,978 | 2014-10-10T12:55:00.000 | 1 | 0 | 0 | 0 | python,algorithm,pattern-matching,cluster-analysis,data-mining | 26,303,687 | 2 | false | 0 | 0 | If your comparison works with "create a sum of all features and find those with the closest sum", there is a simple trick to get close objects:
Put all objects into an array
Calculate all the sums
Sort the array by sum.
If you take any index, the objects close to it will now have a close index as well. So to find the 5 closest objects, you just need to look from index-5 to index+5 in the sorted array. | 1 | 0 | 1 | I have 1,000 objects, each object has 4 attribute lists: a list of words, images, audio files and video files.
I want to compare each object against:
a single object, Ox, from the 1,000.
every other object.
A comparison will be something like:
sum(words in common+ images in common+...).
I want an algorithm that will help me find the closest 5, say, objects to Ox and (a different?) algorithm to find the closest 5 pairs of objects
I've looked into cluster analysis and maximal matching and they don't seem to exactly fit this scenario. I don't want to use these method if something more apt exists, so does this look like a particular type of algorithm to anyone, or can anyone point me in the right direction to applying the algorithms I mentioned to this? | Algorithm for matching objects | 0.099668 | 0 | 0 | 543 |
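A rough sketch of the sort-by-sum trick from the first answer (the objects and their single-number scores are invented; in practice the score would be your sum over the four attribute lists):

```python
# Hypothetical data: each object reduced to one comparable score.
objects = {"obj%d" % i: {"score": s} for i, s in enumerate([12, 3, 47, 30, 8, 11])}

# Sort object names by score so close scores get close indices.
ranked = sorted(objects, key=lambda name: objects[name]["score"])

def closest(name, k=5):
    i = ranked.index(name)
    window = ranked[max(0, i - k): i + k + 1]  # neighbours in the sorted order
    window.remove(name)
    # Keep the k neighbours whose scores differ least from ours.
    return sorted(window,
                  key=lambda n: abs(objects[n]["score"] - objects[name]["score"]))[:k]

print(closest("obj0"))
```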
26,301,635 | 2014-10-10T14:21:00.000 | 2 | 0 | 0 | 0 | python,linux,qt,installation,pyqt4 | 27,922,827 | 2 | false | 0 | 1 | Search string 'PrintCurrentPage' in files of your PyQt-package. You will find it in 4 files.
Remove corresponded lines with string 'PrintCurrentPage' | 1 | 5 | 0 | I know this is probably something trivial, but I cannot seem to find the answer. I have just completed a fresh install of Scientific Linux 6.5 - which ships with Python 2.6 and Qt 4.6.2. I wish to use the Python interpreter python2.7.8 so downloaded this and installed. I use the QtDesigner for ease when making guis, so then need the PyQt bindings to go with it. I therefore downloaded SIP-4.16.3, configured with:
python2.7 ./configure (in the sip download directory)
to make the bindings for the newer version of python. Everything works fine so far.
I then try to install PyQt4.11.2 in the same way:
python2.7 ./configure --qmake=/usr/lib/qt4/bin/qmake -g (to pick up the qt4 version of qmake with static qt libraries)
the configure script completes fine, but I get the following error during 'make':
error: ‘PrintCurrentPage’ is not a member of ‘QAbstractPrintDialog’
..../Downloads/PyQt-x11-gpl-4.11.2/QtGui/sipQtGuiQAbstractPrintDialog.cpp:1787: error: too many initializers for ‘sipEnumMemberDef’
make[1]: *** [sipQtGuiQAbstractPrintDialog.o] Error 1
make[1]: Leaving directory `..../Downloads/PyQt-x11-gpl-4.11.2/QtGui'
make: *** [all] Error 2
I am at this point a little lost and have been bashing my head for a while, it must be something simple I have missed, any help would be great.
Thanks in advance | Installing PyQt4.11.2 on Scientific Linux 6.5 | 0.197375 | 0 | 0 | 1,861 |
26,302,718 | 2014-10-10T15:18:00.000 | 0 | 0 | 0 | 1 | python,apache,deployment,openerp,mod-wsgi | 26,330,930 | 1 | false | 1 | 0 | The schedulers don't work when running through wsgi because your Odoo instances are just workers. AFAIK you just run a standalone instance on a 127.0.0.1 port and it runs your scheduled tasks. | 1 | 0 | 0 | When I deploy openerp/odoo using mod_wsgi, I found my schedulers stop working, can any one help how can I get my cron/schedulers working. If I deploy it using mod_proxy it will solve the issue but I want to deploy using mod_wsgi. | odoo mod_wsgi schedulers not working | 0 | 0 | 0 | 246 |
26,303,493 | 2014-10-10T16:00:00.000 | 1 | 0 | 0 | 0 | python,colors,crosstab,tibco,spotfire | 26,714,084 | 3 | false | 0 | 0 | One can set a color according to category by using Properties -> Color -> Add Rule, where you can see many conditions to apply to your visualization. | 1 | 0 | 1 | Spotfire 5.5
Hi, I am looking for a way to color code or group columns together in a Spotfire cross-table. I have three categories (nearest, any, all) and three columns associated with each category. Is there a way I can visually group these columns with their corresponding category?
Is there a way to change column heading color?
Is there a way to put a border around the three column groups?
Can I display their category above the three corresponding columns?
Thanks | Spotfire column title colors | 0.066568 | 0 | 0 | 3,241 |
26,304,413 | 2014-10-10T16:57:00.000 | 0 | 0 | 1 | 0 | python,django | 26,305,048 | 1 | false | 1 | 0 | When you say "procedures" I guess you're talking about pages (or views in Django). So I would implement a single "app" to do that.
Remember a project is composed of apps. When you create a project, a main app (with the same name as the project) is created. This is a good place to code the procedures you mentioned.
Think of apps as independent sections of your project (site); maybe a forum, a blog, a custom administration panel, a game and stuff like that. Every one of them could be an independent app.
A project is mostly intended as a single website, so there's no need to create another project in the example you mentioned. | 1 | 1 | 0 | I'm learning Django but it's difficult for me to see how I should divide a project into apps.
I've worked on some Java EE systems, mostly procedures for government services and that sort of thing, but I just can't see how to create a Django project for these purposes.
For example, suppose you have to build a web app to simplify three processes: the procedure to get a passport, the procedure to get a driver's license, and the procedure to get a social security number.
The 3 procedures have steps in common: Personal information, Contact Information, Health Information. Would you do a project for each procedure, an app for each procedure, an app for each step?
I'm sorry if I'm posting this in the wrong Stack Exchange site.
Thank you. | Django apps structure | 0 | 0 | 0 | 73 |
26,306,159 | 2014-10-10T18:53:00.000 | 2 | 0 | 0 | 0 | python,windows,python-3.x | 26,306,346 | 4 | true | 0 | 0 | There are many possible solutions, but my recommendation is to solve this with a database. I prefer MySQL since it is free and easy to set up. It is immediate, and you can avoid simultaneous-update file locking problems because MySQL's InnoDB engine automatically handles row locking. It's actually easier to set up a database than to try to write your own solution using files or other communication mechanisms (unless you already have experience with other techniques). Multiple computers are also not an issue, and security is built in.
Just set up your MySQL server and write a client application to push the data from the multiple computers to your server. Then you can write your server application to process the input; that program can reside on your MySQL server or any other computer.
Python is an excellent programming language and provides full support for MySQL through readily available modules. This solution is also scalable: you can start with basic command-line programs... then create a desktop user interface with PyQt, and later you could add web interfaces if you desire. | 3 | 1 | 0 | I'm looking for a reliable method of sending data from multiple computers to one central computer that will receive all the data, process and analyse it. All computers will be on the same network.
I will be sending text from the machines so ideally it will be a file that I need sending, maybe even an XML file so I can parse it into a database easily.
The main issue I have is I need to do this in near enough real-time. For example if an event happens on pc1, I need to be able to send that event plus any relevant information back to a central pc so it can be used and viewed almost immediately.
The plan is to write a python program that possibly acts as a sort of client that detects the event and sends it off to a server program on the central pc.
Does anyone have suggestions on a reliable way to send data from multiple computers to one central computer/server all on the same network and preferably without going out onto the web. | method of sending data from multiple computers too a central location in real-time using python | 1.2 | 0 | 1 | 252 |
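A client-side sketch of that setup, assuming the pymysql driver, a hypothetical central host, and a pre-created events table:

```python
import datetime
import pymysql  # assumed driver; MySQLdb would work similarly

# Hypothetical connection details and schema:
#   CREATE TABLE events (host VARCHAR(64), ts DATETIME, message TEXT);
conn = pymysql.connect(host="central-server", user="logger",
                       password="secret", database="monitoring")

def report_event(message):
    with conn.cursor() as cur:
        cur.execute("INSERT INTO events (host, ts, message) VALUES (%s, %s, %s)",
                    ("pc1", datetime.datetime.now(), message))
    conn.commit()  # InnoDB row locking handles concurrent writers

report_event("disk almost full")
```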
26,306,159 | 2014-10-10T18:53:00.000 | 0 | 0 | 0 | 0 | python,windows,python-3.x | 26,308,107 | 4 | false | 0 | 0 | I assume you are working only on a secure LAN, since you do not speak about security, only low latency. In fact there are many solutions, each with its advantages and drawbacks.
Simple messages using UDP. Very low overhead on network, client and server. Drawback: with UDP you can never be sure that a message won't be lost. Use case: small pieces of information that can be generated at any time with high frequency, where it is not important if one is lost.
Messages over a pre-established TCP connection. High overhead on server, client and network, because each client will establish and maintain a connection to the server, and the server will simultaneously listen to all its clients. Drawbacks: you need to reopen a connection if it breaks, the server will have to multiplex I/O, and you have to implement a protocol to separate messages. Use case: you cannot afford to lose any message, they must be sent as soon as possible, and each client must serialize its own messages.
Messages over TCP, each message using its own connection. Medium overhead. Drawback: as a new connection is established per message, it may introduce latency and overhead for high-frequency events. Use case: low-frequency events, where a single client PC may send multiple messages simultaneously. | 3 | 1 | 0 | I'm looking for a reliable method of sending data from multiple computers to one central computer that will receive all the data, process and analyse it. All computers will be on the same network.
I will be sending text from the machines so ideally it will be a file that I need sending, maybe even an XML file so I can parse it into a database easily.
The main issue I have is I need to do this in near enough real-time. For example if an event happens on pc1, I need to be able to send that event plus any relevant information back to a central pc so it can be used and viewed almost immediately.
The plan is to write a python program that possibly acts as a sort of client that detects the event and sends it off to a server program on the central pc.
Does anyone have suggestions on a reliable way to send data from multiple computers to one central computer/server all on the same network and preferably without going out onto the web. | method of sending data from multiple computers too a central location in real-time using python | 0 | 0 | 1 | 252 |
26,306,159 | 2014-10-10T18:53:00.000 | 0 | 0 | 0 | 0 | python,windows,python-3.x | 26,306,417 | 4 | false | 0 | 0 | Netcat over TCP for reliability, low overhead and simplicity. | 3 | 1 | 0 | I'm looking for a reliable method of sending data from multiple computers to one central computer that will receive all the data, process and analyse it. All computers will be on the same network.
I will be sending text from the machines so ideally it will be a file that I need sending, maybe even an XML file so I can parse it into a database easily.
The main issue I have is I need to do this in near enough real-time. For example if an event happens on pc1, I need to be able to send that event plus any relevant information back to a central pc so it can be used and viewed almost immediately.
The plan is to write a python program that possibly acts as a sort of client that detects the event and sends it off to a server program on the central pc.
Does anyone have suggestions on a reliable way to send data from multiple computers to one central computer/server all on the same network and preferably without going out onto the web. | method of sending data from multiple computers too a central location in real-time using python | 0 | 0 | 1 | 252 |
26,307,163 | 2014-10-10T19:58:00.000 | 1 | 1 | 0 | 1 | php,python,wordpress,ssh,webpage | 26,307,516 | 2 | false | 1 | 0 | Have a program monitor the file, either locally or via SSH. Have that program push updates into your web backend, via an HTTP API or similar. | 1 | 1 | 0 | I have a temperature logger that measures and records temperature values at specified time intervals. Currently I push these to a Google spreadsheet, but I would like to display the values automatically on a web page.
I have no experience with anything to do with web-pages, except setting up a few Wordpress sites but am reasonably comfortable with C++, Python, Matlab and Java.
A complicating factor is that the machine is in a VPN, so that to access it via SSH I need to join the VPN.
I suspect the best way is to have a Python script that periodically sends an up-to-date file to the web server via FTP, and then some script on the server that plots this.
My initial thought was to use Python via something like CGI to read the data and create a plot on the server. However, I have no idea what the best approach on the server side would be. Is it worth learning some PHP? Or should I write a Java applet? Or is CGI the way to go?
Thank you for your help | Best way to display real-time data from an SSH accesible file on webpage | 0.099668 | 0 | 0 | 856 |
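A sketch of the monitor-and-push idea from the answer above, using the third-party requests library (the log path and endpoint URL are placeholders):

```python
import time
import requests

LOG_FILE = "/var/log/temperature.log"          # hypothetical path
ENDPOINT = "http://example.com/api/readings"   # hypothetical endpoint

def tail_and_push():
    with open(LOG_FILE) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if line:
                requests.post(ENDPOINT, data={"reading": line.strip()})
            else:
                time.sleep(5)  # poll interval for new readings

tail_and_push()
```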
26,307,256 | 2014-10-10T20:06:00.000 | 3 | 1 | 0 | 0 | python,imap,long-polling,gmail-api | 26,307,963 | 2 | true | 0 | 0 | Would definitely recommend against IMAP, note that even with the IMAP IDLE command it isn't real time--it's just polling every few (5?) seconds under the covers and then pushing out to the connection. (Experiment yourself and see the delay there.)
Querying history.list() frequently is quite cheap and should be fine. If this is for a sizeable number of users you may want to do a little bit of optimization, like intelligent backoff for inactive mailboxes (e.g. every time there are no updates, back off by an extra 5s, up to some maximum like a minute or two).
Google will definitely not bust down your door or likely even notice unless you're doing it every second with 1M users. :)
Real push notifications for the API is definitely something that's called for. | 1 | 2 | 0 | I'm building an installation that will run for several days and needs to get notifications from a GMail inbox in real time. The Gmail API is great for many of the features I need, so I'd like to use it. However, it has no IDLE command like IMAP.
Right now I've created a GMail API implementation that polls the mailbox every couple of seconds. This works great, but times out after a while (I get "connection reset by peer"). So, is it reasonable to shut down the session and restart it every half an hour or so to keep it active (like with IDLE)? Is that a terrible, terrible hack that will have Google busting down my door in the middle of the night?
Would the proper solution be to log in with IMAP as well and use IDLE to notify my GMail API module to start up and pull in changes when they occur? Or should I just suck it up and create an IMAP only implementation? | Long polling with GMail API | 1.2 | 0 | 1 | 1,369 |
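A sketch of the intelligent-backoff polling loop suggested above; fetch_changes is a stand-in for whatever performs the history.list() call, not real Gmail client code:

```python
import time

MIN_DELAY, MAX_DELAY, STEP = 5, 120, 5  # seconds

def poll_mailbox(fetch_changes):
    """fetch_changes() is assumed to return a list of new items (maybe empty)."""
    delay = MIN_DELAY
    while True:
        changes = fetch_changes()
        if changes:
            print("new messages:", changes)
            delay = MIN_DELAY                     # activity: poll quickly again
        else:
            delay = min(delay + STEP, MAX_DELAY)  # idle: back off gradually
        time.sleep(delay)
```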
26,308,150 | 2014-10-10T21:17:00.000 | 1 | 0 | 0 | 0 | python,twisted,qnx | 26,308,463 | 2 | true | 0 | 0 | Did you try reactor.run(installSignalHandlers=False)? This limits the reactor's functionality a bit, but it may allow you to limp along. | 2 | 0 | 0 | I'm experimenting with running twisted/crossbar.io on QNX (target:powerpc-unknown-nto-qnx6.5.0), however it appears that QNX does not have siginterrupt() and SA_RESTART flag is not supported. As result signals.siginterrupt() does not exist in embedded python.
Is there any way to run/patch python/twisted on a system like this? Right now it dies when the handlers are installed because the signal module does not have siginterrupt(). Even in the old 2.6 days, when internet/signals were built as a C library, they relied on essentially implementing siginterrupt using SA_RESTART.
Is there any other alternative? | Running twisted on posix (QNX) system that does not have siginterrupt() | 1.2 | 0 | 0 | 169 |
26,308,150 | 2014-10-10T21:17:00.000 | 1 | 0 | 0 | 0 | python,twisted,qnx | 26,308,907 | 2 | false | 0 | 0 | Is there any way to run/patch python/twisted on a system like this?
The general answer is "port Twisted to your target platform". Twisted interacts extensively with the platform it is running on. You might trick it into not dying with an AttributeError in one place with a simple patch, but this doesn't mean that Twisted will actually behave the way it is intended to behave.
Do you have plans to complete a porting effort of Twisted to QNX? Or do you just have your fingers crossed that with signal issues out of the way everything else will Just Work? At minimum, you should be running the test suite to see where there may be problems (though passing tests also don't guarantee Twisted is actually working correctly, since those tests were all written with other platforms in mind).
A more specific answer is that you could grab an older version of the twisted.internet._signals module (try r35834; r35835 deleted a lot of old support code). The Python 3 porting effort removed some of the alternate (not as good but more portable) signal handling strategies from this module. | 2 | 0 | 0 | I'm experimenting with running twisted/crossbar.io on QNX (target:powerpc-unknown-nto-qnx6.5.0), however it appears that QNX does not have siginterrupt() and SA_RESTART flag is not supported. As result signals.siginterrupt() does not exist in embedded python.
Is there any way to run/patch python/twisted on a system like this? Right now it dies when the handlers are installed because the signal module does not have siginterrupt(). Even in the old 2.6 days, when internet/signals were built as a C library, they relied on essentially implementing siginterrupt using SA_RESTART.
Is there any other alternative? | Running twisted on posix (QNX) system that does not have siginterrupt() | 0.099668 | 0 | 0 | 169 |
26,310,822 | 2014-10-11T03:41:00.000 | 0 | 0 | 0 | 0 | python,indexing | 26,310,892 | 1 | false | 0 | 0 | You can define __getitem__ so that it takes arbitrary objects as indices (floating-point numbers in particular). | 1 | 0 | 1 | I am working on a fluid dynamics simulation tool in Python. Traditionally (at least in my field), integer indices refer to the center of a cell. Some quantities are stored on the faces between cells, and in the literature are denoted by half-integer indices. In codes, however, these are shifted to integers to fit into arrays. The problem is, the shift is not always consistent: do you add a half or subtract? If you switch between codes enough, you quickly lose track of the conventions for any particular code. And honestly, there are enough quirky conventions in each code that eliminating a few would be a big help... but not if the remedy is worse than the original problem.
I have thought about using even indices for cell centers and odd for faces, but that is counterintuitive. Also, it's rare for a quantity to exist on both faces and in cell centers, so you never use half of your indices. You could also implement functions plus_half(i) and minus_half(i), but that gets verbose and inelegant (at least in my opinion). And of course floating point comparisons are always problematic in case someone gets cute in how they calculate 1/2.
Does anyone have a suggestion for an elegant way to implement half-integer indices in Python? I'm sure I'm not the first person to wish for this, but I've never seen it done in a simple way that is obvious to a user (without requiring the user to memorize the shift convention you've chosen).
And just to clarify: I assume there is likely to be a remap step hidden from the user to get to integer indices (I intend to wrap NumPy arrays for my data grids). I'm thinking of the interface exposed to the user, rather than how the data is stored. | Half-integer indices | -0.197375 | 0 | 0 | 149 |
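One sketch of that approach: a thin wrapper whose __getitem__ accepts half-integer indices as Fractions (so there are no floating-point comparisons) and maps them onto an integer-indexed NumPy array. The face i+1/2 -> slot i convention below is an arbitrary choice, made explicit in exactly one place:

```python
import numpy as np
from fractions import Fraction

class FaceArray(object):
    """Wraps an array of face-centred values; index with Fraction halves."""
    def __init__(self, data):
        self.data = np.asarray(data)

    def __getitem__(self, idx):
        q = Fraction(idx)
        if q.denominator != 2:
            raise IndexError("face quantities live at half-integer indices")
        # Convention (chosen once, here): face i+1/2 is stored at slot i.
        return self.data[(q - Fraction(1, 2)).numerator]

faces = FaceArray([10.0, 20.0, 30.0])
print(faces[Fraction(1, 2)])   # face between cells 0 and 1 -> 10.0
print(faces[Fraction(3, 2)])   # -> 20.0
```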
26,311,022 | 2014-10-11T04:20:00.000 | 0 | 0 | 0 | 0 | android,python,adb,monkeyrunner | 26,311,559 | 1 | true | 1 | 0 | You are trying to install an APK which is intended for a higher version onto API level 8 and thus the package manager refuses to install it. It has nothing to do with adb or monkeyrunner. | 1 | 0 | 0 | I've downloaded an APK onto a Velocity Cruz Tablet running Android 2.2.1 (API Level 8), and I'm trying to install it via whatever I can manage to make work. I already had ADT on my computer (Windows 8.1 if this helps) for API Level 19 for use with my phone. So I used the SDK Manager to get API Level 8. I can't for the life of me figure out how to make adb or monkeyrunner target API Level 8. I've got the paths right but the problem I'm having is making it target the proper API Level. I've gone through the adb commands, pm commands and MonkeyRunner API Documentation, but I don't see anything helpful. I've decided to come here to see if anyone knows what to do. Thanks. | Android - ADB/MonkeyRunner Setting API Levels | 1.2 | 0 | 0 | 225 |
26,311,527 | 2014-10-11T05:42:00.000 | 1 | 0 | 0 | 0 | django,python-2.7,sublimetext3 | 26,313,332 | 1 | false | 1 | 0 | Please check whether you have permission to write to that directory. | 1 | 0 | 0 | I am a total noob in Python. I am trying to save a .py file which I wrote using the Sublime 3 editor, but it doesn't allow me to save it. It says the file is read-only. I am using the Django framework. Any suggestions? | unable to save a .py file in sublime using django | 0.197375 | 0 | 0 | 619 |
26,314,316 | 2014-10-11T11:50:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,boto,amazon-emr | 56,300,935 | 4 | false | 0 | 0 | My Step Arguments are: bash -c /usr/bin/flink run -m yarn-cluster -yn 2 /home/hadoop/mysflinkjob.jar
I tried executing the same run_job_flow, but I am getting this error:
Cannot run program "/usr/bin/flink run -m yarn-cluster -yn 2
/home/hadoop/mysflinkjob.jar" (in directory "."): error=2, No such
file or directory
Executing the same command from the master node works fine, but not from Python boto3.
It seems the issue is caused by quotation marks that EMR or boto3 adds around the Arguments.
UPDATE:
Split ALL your Arguments on whitespace.
That is, if you need to execute "flink run myflinkjob.jar",
pass your Arguments as this list:
['flink', 'run', 'myflinkjob.jar'] | 1 | 22 | 0 | I'm trying to launch a cluster and run a job, all using boto.
I find lots of examples of creating job_flows, but I can't for the life of me find an example that shows:
How to define the cluster to be used (by cluster_id)
How to configure and launch a cluster (for example, if I want to use spot instances for some task nodes)
Am I missing something? | How to launch and configure an EMR cluster using boto | 0 | 0 | 0 | 20,431 |
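Since the question asks exactly how to launch a configured cluster and how to target an existing one by id, here is a boto3 sketch (the newer AWS SDK; the original boto exposes similar calls via boto.emr). All names, the release label, roles, region, and instance settings are placeholder assumptions:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # assumed region

step = {
    "Name": "run flink job",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",
        # Arguments split on whitespace, per the answer above.
        "Args": ["flink", "run", "-m", "yarn-cluster",
                 "/home/hadoop/mysflinkjob.jar"],
    },
}

# 1) Configure and launch a new cluster, running the step on it.
response = emr.run_job_flow(
    Name="my-cluster",                      # placeholder
    ReleaseLabel="emr-5.0.0",               # placeholder release
    Instances={
        "MasterInstanceType": "m3.xlarge",
        "SlaveInstanceType": "m3.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    Steps=[step],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
cluster_id = response["JobFlowId"]

# 2) Or target an existing cluster by its id instead.
emr.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])
```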
26,315,848 | 2014-10-11T14:44:00.000 | 1 | 0 | 0 | 0 | python,button,user-interface,python-3.x,tkinter | 26,315,912 | 2 | false | 0 | 1 | You need a variable in a global or class-instance scope, and a function with access to that variable which increments it when called. Set the function as the command attribute of the Button so that it is called when the button is clicked. | 1 | 0 | 0 | So I am writing a program for school, and I have to make a maths quiz; the quiz needs to be out of 10 questions. I have made a button that is defined with a command that generates a new question, clears the text box, gets the answer from the dictionary, and inserts the new question into the textbox. At the moment the user can press the button as many times as they want. I don't actually know how to count or monitor the amount of times a button in tkinter has been pressed. I would be very grateful if someone could provide me with some code for Python (3.1.4) that I could use to count the amount of times the button has been pressed. | How to count the number of times a button is clicked Python(tkinter) | 0.099668 | 0 | 0 | 17,843 |
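A minimal sketch of that pattern (Python 3 tkinter, matching the tags; the widget text is invented):

```python
import tkinter as tk

root = tk.Tk()
clicks = [0]  # a list so the nested function can mutate the counter

def on_click():
    clicks[0] += 1
    if clicks[0] >= 10:
        button.config(state=tk.DISABLED)  # stop after 10 questions
    label.config(text="Questions asked: %d" % clicks[0])

button = tk.Button(root, text="New question", command=on_click)
label = tk.Label(root, text="Questions asked: 0")
button.pack()
label.pack()
root.mainloop()
```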
26,316,273 | 2014-10-11T15:34:00.000 | 1 | 0 | 0 | 0 | python,heroku,importerror,psycopg2 | 26,317,020 | 1 | false | 1 | 0 | You could keep the psycopg2 directory in the same directory as your apps, but that's really a hack; you should instead try to fix the psycopg2 installation on Heroku. | 1 | 1 | 0 | I have deployed my local Python web service project to Heroku. I have all my dependencies in a requirement.txt file, but one module, psycopg2, is not getting installed properly and I am getting an installation error. So I removed it from requirement.txt and thought I would push everything to Heroku first, and then manually copy the psycopg2 module folder into the /app/.heroku/python/lib/python2.7/site-packages folder. But I don't know how to access this folder!
Can you please help? | How to manually copy folder to /app/.heroku/python/lib/python2.7/site-packages/? | 0 | 0 | 0 | 248 |
26,318,326 | 2014-10-11T19:16:00.000 | 1 | 0 | 0 | 0 | android,segmentation-fault,android-browser,qpython,pyjnius | 26,428,421 | 2 | true | 1 | 1 | Apparently, this happens only in console mode; in other QPython modes it works fine. | 1 | 3 | 0 | I'm a new developer using QPython (experienced with Python), and I want to open a URL with the user's default browser.
I tried AndroidBrowser().open("...") but, to my surprise, I got Segmentation Fault!
So I said OK, let's try to open it manually as an activity; then I tried to import jnius and got a Segmentation Fault as well.
Any suggestions on how to fix this, or other ways to open the browser? | Open a URL with default broswer? | 1.2 | 0 | 0 | 1,277 |
26,319,099 | 2014-10-11T20:45:00.000 | 1 | 0 | 1 | 0 | python,keyboard-shortcuts,ipython | 26,319,459 | 1 | false | 0 | 0 | You could try using the %edit or %ed commands to enter into your default editor and have much more flexibility. | 1 | 2 | 0 | Occasionally I'll write a 10 or 20-line function in IPython and notice after I try to execute it that I made a few typos. The commands Ctrl+P and Ctrl+N just take me to previous commands rather than lines, meaning that I currently have to retype the entire function to correct a few typos. Obviously this is time consuming.
Is there a built-in IPython command that will let me navigate across lines in a single long command? The official IPython documentation has not been particularly helpful. Thank you! | How do I move between lines in an IPython terminal on a Mac? | 0.197375 | 0 | 0 | 327 |
26,319,382 | 2014-10-11T21:22:00.000 | 3 | 0 | 1 | 0 | python,fonts,truetype | 26,319,577 | 1 | false | 0 | 1 | It's pretty tricky to do analytically.
One way is trial and error: choose a large font size and render the layout to see whether it fits.
Use a bisection algorithm to converge on the largest font size that fits. | 1 | 0 | 0 | I'm trying to fit the font size of my text to a specific width/height context. For instance, say I have an image (512x512 pixels) and 140 characters of text. What would be the ideal font size?
In the above case, a 50-pixel font size seems to be OK, but what happens if there's a lot more text? The text will not fit into the picture, so it needs reduction. I've been trying to calculate this on my own without success.
What I've tried is this:
Get the total number of pixels: with a 512x512 picture that's 262144. Divide that by the length of the text, but that gives a big number, even if I divide it by 4 (thinking of a box-pixel model for the font).
Do you have any solutions for this?
PS. I've been using truetype (if this is somehow useful)
I've been using python for this purpose, and PIL for image manipulation.
Thank you in advance. | Calculate fontsize | 0.53705 | 0 | 0 | 376 |
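A sketch of the render-and-bisect approach using PIL, which the question already mentions (the font path and the fixed characters-per-line wrap are placeholder simplifications; draw.textsize measures one line at a time in PIL-era APIs):

```python
import textwrap
from PIL import Image, ImageDraw, ImageFont

FONT_PATH = "/path/to/font.ttf"  # placeholder

def fits(text, size, box=(512, 512), chars_per_line=30):
    font = ImageFont.truetype(FONT_PATH, size)
    draw = ImageDraw.Draw(Image.new("RGB", box))
    lines = textwrap.wrap(text, chars_per_line)
    width = max(draw.textsize(line, font=font)[0] for line in lines)
    height = sum(draw.textsize(line, font=font)[1] for line in lines)
    return width <= box[0] and height <= box[1]

def best_size(text, lo=4, hi=200):
    while lo < hi:                   # bisect on the largest size that fits
        mid = (lo + hi + 1) // 2
        if fits(text, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

print(best_size("some tweet-sized text " * 7))
```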
26,320,645 | 2014-10-12T00:30:00.000 | 0 | 0 | 0 | 0 | python,deployment,pyramid,wsgi,gunicorn | 26,890,297 | 1 | false | 1 | 0 | I mis-diagnosed the problem. It seems that both Firefox and Chrome perform certain optimizations when loading the same page address multiple times. I thought the server was becoming unresponsive, but in fact there were no requests generated to serve. | 1 | 2 | 0 | It seems to me that my gunicorn workers are restarted each time there is a connection reset by a browser (e.g. by reloading a page while a request is still in progress, or as a result of connectivity problems).
This doesn't seem to be a sensible behaviour. Effectively I can bring down all the workers just by refreshing a page in a browser a few times.
Questions:
What are the possible causes for a gunicorn worker restart?
What would be the right way to debug this behaviour?
I'm using Pyramid 1.4, Gunicorn (tried eventlet, gevent and sync workers - all demonstrate the same behaviour). The server runs behind nginx. | Gunicorn workers restart by connection reset | 0 | 0 | 0 | 882 |
26,320,899 | 2014-10-12T01:17:00.000 | 232 | 0 | 1 | 0 | python,optional-parameters,pylint,optional-arguments | 26,320,917 | 2 | true | 0 | 0 | It's dangerous only if your function will modify the argument. If you modify a default argument, it will persist until the next call, so your "empty" dict will start to contain values on calls other than the first one.
Yes, using None is both safe and conventional in such cases. | 1 | 186 | 0 | I put a dict as the default value for an optional argument to a Python function, and pylint (using Sublime package) told me it was dangerous. Can someone explain why this is the case? And is a better alternative to use None instead? | Why is the empty dictionary a dangerous default value in Python? | 1.2 | 0 | 0 | 83,067 |
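A short demonstration of both the trap and the conventional None fix:

```python
def bad_append(item, bucket={}):      # one dict created at def-time, shared forever
    bucket[item] = True
    return bucket

print(bad_append("a"))  # {'a': True}
print(bad_append("b"))  # {'a': True, 'b': True}  <- 'a' persisted!

def good_append(item, bucket=None):   # conventional, safe pattern
    if bucket is None:
        bucket = {}                   # fresh dict on every call
    bucket[item] = True
    return bucket

print(good_append("a"))  # {'a': True}
print(good_append("b"))  # {'b': True}
```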
26,321,861 | 2014-10-12T04:26:00.000 | 1 | 0 | 0 | 0 | python,macos,sublimetext3 | 26,322,148 | 1 | true | 0 | 1 | You can try to edit the session file and start removing references to the offending code and see if that helps.
On a mac it should be found here:
~/Library/Application Support/Sublime Text 2/Settings/Session.sublime_session
This is just a guess, it might be in another file but it might point you in the right direction. | 1 | 0 | 0 | I am having a severe glitch after accidentally running an infinite loop in sublime text 3 with python.
I was forced to kill the program (force quit) as it became unresponsive. I subsequently tried to reopen Sublime Text 3, however the application became stuck once more in what I can only assume to be an infinite loop, even though I did not build the file again.
I am running Mac OS X version 10.9.5. I believe the problem may be with the fact that OS X reopens closed windows exactly to the state they were and this conflicts with IDEs, but even after unchecking the "Close open windows" option in System Preferences, ST3 remains broken. Any help? Because at this moment, ST3 is completely unusable for me. | Sublime text 3 refuses to load after executing infinite loop in Python | 1.2 | 0 | 0 | 421 |
26,323,645 | 2014-10-12T09:26:00.000 | 1 | 0 | 1 | 0 | python,crash,formatting,usb,python-idle | 26,324,989 | 1 | false | 0 | 0 | Fixed
Method used to fix it:
Go for dinner.
Come back.
Fixed.
Seriously though, I restarted my computer and it was still the same. I left the computer while I showered and had dinner, and when I came back and tried again I was able to open IDLE without it crashing. Part 1 fixed!
The second fix was to change the default program that .py files open with; somehow it had changed to PythonIcon or something I had never seen before. I changed it back to Python Console and the "Edit with IDLE" option returned.
Strange how things change on their own but all is good now, happy coding.
Ben | 1 | 1 | 0 | I've had Python 3.3.3 installed on my computer for nearly a year now and used it very frequently. This morning I have been having trouble with my USB stick with the PY file I was working on (the USB stick was unplugged without ejecting it, and now it needs formatting, along with some other errors).
So I am forced to use a PY file from a few days ago that I made as a backup. The problem is that I can no longer right-click my PY file and "edit with IDLE".
I opened IDLE up separately and that worked.
I tried opening my PY file from IDLE; it opened my file for a brief second then closed.
The opening then closing happens when I try to save a new file in IDLE and when I try to create a new file.
This is rather odd as I have not edited any of the inner workings of my Python for a number of weeks, I've just been editing and running programs. This leads me to believe that it all stems back to my USB problem.
I hope someone can suggest some ideas for me. | Python 3.3.3 IDLE closes on save and new file creation | 0.197375 | 0 | 0 | 297 |
26,323,942 | 2014-10-12T10:03:00.000 | 3 | 0 | 1 | 0 | python | 26,324,214 | 1 | true | 0 | 1 | Do not use import to implement application logic.
In your use case, a room is the classic example of an object in object-oriented programming. You should have a class Room which defines the functionality for rooms. Individual rooms are instances of that class (later you can add subclasses, but I would not worry about that initially).
Your application will have a "current room" as a variable. It will ask the room about its description and display that to the user. When the user types "go Kitchen", your application will ask the current room "hey, do you have a room named 'Kitchen' as a neighbor?" This method will return the appropriate room object, which your application then can set as the current room.
From the above, you can see two functionalities (methods) rooms should have: "Give me your description" and "give me the adjacent room named 'X', if any".
This should get you started. | 1 | 0 | 0 | I was thinking of making a text-based game about detectives, case-solving, with complete freedom, loads of variables, etc.
But before I get serious with it I need to know how to make rooms. E.g. you start in the hall and you type in "Go kitchen" and you go to the kitchen.
I have achieved this by using import file when you type in "Go kitchen" (the file is the kitchen file), but if I want to go back and forth between them it gives an error.
Is there something I am missing about this method? Is there a better way to do it? The simpler, the better, please. | How best to implement rooms for a text-based game? | 1.2 | 0 | 0 | 486 |
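An illustrative sketch of the Room design suggested in the answer above; the class layout and the sample map are assumptions, not the asker's code.

```python
# A Room holds its description and a name -> Room map of neighbors;
# "Go Kitchen" becomes a lookup instead of an import.
class Room:
    def __init__(self, name, description):
        self.name = name
        self.description = description
        self.neighbors = {}                      # room name -> Room

    def connect(self, other):
        self.neighbors[other.name] = other
        other.neighbors[self.name] = self

    def neighbor(self, name):
        return self.neighbors.get(name)          # None if not adjacent

hall = Room('Hall', 'A draughty entrance hall.')
kitchen = Room('Kitchen', 'It smells of old coffee.')
hall.connect(kitchen)

current = hall                                   # player types "Go Kitchen"
nxt = current.neighbor('Kitchen')
if nxt is not None:
    current = nxt
    print(current.description)
```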
26,324,636 | 2014-10-12T11:28:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,python-2.7 | 39,908,883 | 1 | false | 1 | 0 | On my PC it is found under this directory:
C:\Users\Bob\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine
I would assume that on Apple OS it will be similar, based on where you decided to install the Cloud SDK. | 1 | 2 | 0 | I have searched through many Python issues and none seem to help me with mine, or I just don't understand them enough to resolve the issue. Basically I am trying to learn Python, so in the process on my Mac I have installed several versions and had many version issues, which so far I have been able to resolve by looking up solutions. However, I am now at a part in Head First Python that needs Google App Engine installed. The book is a good bit out of date, so I installed the latest App Engine and made the symbolic links, but when I run the application the browser is greyed out. I have seen many references to dev_appserver.py in my long search to resolve this. I cannot find this file anywhere on my machine, so I presume I have an issue with my install of Python 2.7. I have re-installed, then uninstalled and re-installed, Python 2.7 over and over but still cannot find the dev_appserver.py file. Does anyone have a concrete way to ensure dev_appserver.py will be installed? Thanks in advance from a seriously frustrated Python beginner. | google app engine dev_appserver.py file not found | 0.197375 | 0 | 0 | 1,087 |
26,328,818 | 2014-10-12T19:01:00.000 | 0 | 0 | 1 | 1 | python,macos | 26,328,952 | 1 | true | 0 | 0 | Things that are in System are there for a reason: because they're used by the system. You should not change things in there unless you know what you're doing, and even then not unless you have a very good reason. Library is the right place for software you install for your own use. | 1 | 0 | 0 | On my OSX 10.6.8 there was an old version of python installed and it was in:
System/Library/Frameworks/Python.framework/Versions/2.7/bin/python
I downloaded and installed a newer version from the official website and it went to:
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
I was just wondering: which one is the correct path?
And should I move this installation into /System/? | Right path for python on OSX | 1.2 | 0 | 0 | 92 |
26,329,613 | 2014-10-12T20:25:00.000 | 1 | 0 | 1 | 0 | python,mysql,sql,database,nosql | 26,329,692 | 1 | true | 0 | 0 | Yes, you could fake a lot of DB operations with a nested dict structure. Top level is your "tables", each table has entries (use a "primary key" on these) and each entry is a dict of key:value pairs where keys are "column names" and values are, well, values.
You could even write a little sql-like query language on this if you wanted, but you'd want to start by writing some code to manage this. You don't want to be building this DB freehand, it'll be important to define the operations as code. For example, insert should deal with enforcing value restrictions and imposing defaults and setting auto-incrementing keys and so forth (if you really want to be "performing sql like queries" against it) | 1 | 1 | 0 | I have been given a few TSV files containing data, around 800MB total in a couple of files.
Each of them has columns that link up with columns in another file.
I have so far imported all of my data into Python and stored it in an array. I now need to find a way to build a database out of this data without using any SQL, NoSQL, etc.
In the end I will be performing SQL-like queries on it (without SQL) and performing OLAP operations on the data. I can also NOT use any external libraries.
After doing some research, I have come across using dictionaries as a way to do this project, but I am not sure how to go about linking the tables together with dictionaries. Would it be a list of dictionaries? | Database-like operations without any database use | 1.2 | 1 | 0 | 101 |
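A hedged sketch of the nested-dict "database" described in the answer above, with an insert() that enforces auto-incrementing primary keys; the table name, columns, and helper names are illustrative assumptions.

```python
# table name -> {primary key: row dict}; insert/select stand in for SQL.
db = {'people': {}}

def insert(table, row):
    rows = db[table]
    pk = max(rows) + 1 if rows else 1     # auto-increment primary key
    rows[pk] = dict(row)
    return pk

def select(table, predicate):
    return [r for r in db[table].values() if predicate(r)]

insert('people', {'name': 'Ada', 'age': 36})
insert('people', {'name': 'Alan', 'age': 41})
print(select('people', lambda r: r['age'] > 40))   # SQL-like WHERE clause
```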
26,339,828 | 2014-10-13T12:17:00.000 | 0 | 0 | 0 | 0 | python,date,subset | 26,340,459 | 3 | false | 0 | 0 | Assuming you're using Pandas.
dfQ1 = df[(df.date > Qstartdate) & (df.date < Qenddate)] | 1 | 0 | 1 | I have a large dataset with a date column (which is not the index) with the following format %Y-%m-%d %H:%M:%S.
I would like to create quarterly subsets of this data frame i.e. the data frame dfQ1 would contain all rows where the date was between month [1 and 4], dfQ2 would contain all rows where the date was between month [5 and 8], etc... The header of the subsets is the same as that of the main data frame.
How can I do this?
Thanks! | How can I subset a data frame based on dates, when my dates column is not the index in Python? | 0 | 0 | 0 | 1,077 |
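A sketch of the month-based split the asker describes (note their "quarters" are 4-month blocks); it assumes pandas, a 'date' column in the stated format, and df standing in for the question's data frame.

```python
import pandas as pd

# Parse the column once, then filter by calendar month per "quarter".
df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d %H:%M:%S')
dfQ1 = df[df['date'].dt.month.between(1, 4)]
dfQ2 = df[df['date'].dt.month.between(5, 8)]
dfQ3 = df[df['date'].dt.month.between(9, 12)]
```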
26,349,503 | 2014-10-13T21:56:00.000 | 0 | 0 | 0 | 0 | python,emacs,pdb | 26,373,563 | 1 | true | 0 | 0 | Normally there is no "=>" inserted at all. What there is instead is a "=>" that is displayed but which is not part of the buffer's content. Are you sure it's really in the code and are you sure you can delete it as if it were normal text? | 1 | 0 | 0 | I'm trying to learn how to use pdb in emacs.
I run emacs in console mode and in my python file I have something like import pdb and pdb.set_trace() at the beginning of the file. I use C-c C-c to execute the buffer and pdb starts running. It works fine except that I end up with a => inserted into my code on the line that pdb is looking at. When pdb ends, the => characters remain in my code on the last line and I have to manually delete it. How do I prevent this from happening? | Using pdb in emacs inserts => into my code | 1.2 | 0 | 0 | 105 |
26,351,509 | 2014-10-14T01:55:00.000 | 3 | 0 | 0 | 0 | python,django,django-rest-framework,pythonanywhere | 26,351,856 | 1 | true | 1 | 0 | It turns out that it was a python version issue. It was installing rest_framework under python 2.7 and my application was using python 3.3. To install it for python 3.3 I ran the following.
pip3.3 install djangorestframework | 1 | 2 | 0 | I am having trouble deploying my app on pythonanywhere.com. I have followed instructions to get teh django rest frameowrk package installed via pip by running the following
pip install --user djangorestframework
When I go into my console and run pip freeze it outputs djangorestframework==2.4.3 as one of the installed packages.
However, if I go to my Python console and try import rest_framework, or try to add rest_framework as an installed app in my Django settings, I get this error.
ImportError: No module named 'rest_framework'
How do I get it to be recognized by my console and my application? | Module rest_framework not found on pythonanywhere.com app | 1.2 | 0 | 0 | 1,418 |
26,352,386 | 2014-10-14T03:53:00.000 | 2 | 0 | 1 | 0 | python,conventions,convention | 26,352,431 | 1 | true | 0 | 0 | You can use any name even if it is used by a function.
What you can't do is use keywords like def, class, if, else...
But of course it is not good practice to shadow names used by built-in functions, to avoid confusion.
A known practice is to add a _ to the end: input_, class_... | 1 | 0 | 0 | In Python (3.3.3), what is the proper way to name a variable that is already being used?
For example, I want to create the variable input. Obviously shadowing it is problematic, since input is the name of a Python built-in function.
Assuming I need a name similar to input, what is the proper way to name it without deviating much from the word input, that is, without using a name like user_input or answer? | proper way to name a variable already used? | 1.2 | 0 | 0 | 171 |
26,355,152 | 2014-10-14T07:35:00.000 | 0 | 0 | 0 | 0 | python-2.7,scrapy,web-crawler | 26,378,448 | 2 | false | 1 | 0 | For this we have to make a list in fields_to_export in the BaseItemExporter class
field_iter = ['Offer', 'SKU', 'VendorCode'] like this
and then have to pass this list in the field | 1 | 0 | 0 | I have to crawl data from a web page in a specfic order as liked i declared fields in my item class and then have to put them in csv file.problem now occuring is there its stores data not in specfic order as like its scrapping data of any field and putting in csv file but i want it should store data as i declared in my item class. I am newbie in python. can you tell me how to do this
For example,
my item class is:
class DmozItem(Item):
    title = Field()
    link = Field()
    desc = Field()
Now when it stores data in the CSV file, it stores desc first, then link, and then title:
{"desc": [], "link": ["/Computers/Programming/"], "title": ["Programming"]} | How to store data crawled in scrapy in a specific order? | 0 | 0 | 0 | 262 |
26,358,511 | 2014-10-14T10:26:00.000 | 1 | 0 | 0 | 0 | python,django,templates,variables | 26,358,812 | 2 | false | 1 | 0 | You can add variables to the context, e.g. via a custom template context processor, so they are available in every template. | 1 | 0 | 0 | I have a Django app and I wrote some views. I know how to pass my variables to a template, but I also have some external modules with their own views, which I won't modify. Please help me understand how I can get one of my objects, Menu.objects.all(), to exist in all templates. So for example I have django-registration and I want all my menu items to appear at the top when someone visits a URL that is not from my app. I mean it will be a registration app URL, which returns a TemplateResponse (and there I don't have my variable). | Make an object exist in all templates | 0.099668 | 0 | 0 | 58 |
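A hedged sketch of the context-processor approach mentioned in the answer above: Menu.objects.all() becomes available in every template rendered with a RequestContext, including those of third-party apps such as django-registration. The module paths are assumptions.

```python
# myapp/context_processors.py
from myapp.models import Menu

def menu_items(request):
    # injected into the context of every template render
    return {'menu_items': Menu.objects.all()}

# settings.py: add 'myapp.context_processors.menu_items' to
# TEMPLATE_CONTEXT_PROCESSORS (Django <= 1.7) or to the
# 'context_processors' list inside TEMPLATES on newer versions.
```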
26,358,543 | 2014-10-14T10:28:00.000 | -3 | 0 | 1 | 0 | javascript,python | 26,358,654 | 2 | false | 1 | 0 | If you use an IDE such as PhpStorm, it can easily find variables for you. I don't see the use in programming something in Python to do this. | 1 | 1 | 0 | I have a directory with many JavaScript files. Here I want to scan each file and replace each JavaScript variable with a string like 'str1', 'str2', 'str3', ..., 'strn' and so on.
My question is: how to identify a JavaScript variable?
Doubts:
I could say that the name after var is a variable; however, var is not compulsory when declaring a variable.
I could say that the name before an = sign is a variable; however, the file also contains HTML code, and inside an HTML tag there is an = sign between an attribute and its value.
So how can I identify the variables I have to replace? | How to search for JavaScript variables in a file, using Python? | -0.291313 | 0 | 0 | 1,278 |
26,359,839 | 2014-10-14T11:34:00.000 | 0 | 0 | 1 | 0 | python,windows,webcam | 26,442,544 | 1 | true | 0 | 0 | I have investigated a couple of options since then. One of the tools that I could utilize is devcon, through which USB cameras could be disabled/enabled to take snapshots. Another (simpler) option is to use a function in Python OpenCV: cv2.VideoCapture(i) will automatically allow the program to iterate through the different camera devices connected to the computer via USB. | 1 | 0 | 0 | Here is the objective I wish to achieve: I would like to connect multiple web cameras to a single Windows machine and take snapshots from one of them at different instances (so one camera must be active at one point in time, while another must be active at another). From what I have tried, a user has to go to the control panel and set which camera should be active at a given moment to use it. Is there a way to achieve this objective via Python? (If there is a way to keep all cameras active at once, how should I specify which camera should take a snapshot?) | Python Switching Active Cameras | 1.2 | 0 | 0 | 229 |
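A hedged sketch of the cv2.VideoCapture(i) approach from the answer above: open camera i only for the moment its snapshot is due, then release it so the other camera can be used.

```python
import cv2

def snapshot(index, path):
    cam = cv2.VideoCapture(index)     # 0, 1, ... enumerates attached cameras
    ok, frame = cam.read()
    cam.release()                     # free the device for the next camera
    if ok:
        cv2.imwrite(path, frame)
    return ok

snapshot(0, 'cam0.png')               # first camera now...
snapshot(1, 'cam1.png')               # ...second camera at another instance
```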
26,360,362 | 2014-10-14T12:01:00.000 | 0 | 0 | 1 | 0 | visual-studio,ironpython | 27,236,342 | 2 | false | 0 | 0 | After installing the Microsoft Visual Studio 2008 Shell Isolated Mode Redistributable Package, go to the C:\ drive. At the root level you will find the folder VS 2008 Shell Redist\Isolated Mode. In it is the actual installer, named vs_shell_isolated.enu.exe. Install it!
After completing that installation, you will be able to install IronPython Studio 1.0. | 1 | 2 | 0 | I am stuck with the IronPython Studio installation. It needs the VS 2008 Shell Isolated Mode Redistributable Package, so I downloaded that and installed it in my C:\Program Files. Then I ran IronPython Studio Isolated.msi, but it says that
This setup requires Microsoft Visual Studio 2008 Shell Isolated Mode Redistributable Package. Please install Microsoft Visual Studio 2008 Shell Isolated Mode Redistributable Package and run this setup again.
I have Visual Studio 2010 Ultimate on my PC.
OS is Windows 8 32 bit. | How to Install IronPython Studio | 0 | 0 | 0 | 4,670 |
26,361,405 | 2014-10-14T12:55:00.000 | 1 | 0 | 1 | 1 | python,pip | 69,124,029 | 2 | false | 0 | 0 | It's fairly easy in recent versions of pip (the PR in the other answer is now part of pip).
pip freeze --user
This will output a list of packages currently installed to the user's site-packages. | 1 | 2 | 0 | I have dutifully uninstalled all the Python packages I installed with sudo pip install and installed them with pip install --user instead. Yay me :)
On Ubuntu, I know I can find the relevant binaries at /home/<USERNAME>/.local/bin and the packages themselves at /home/<USERNAME>/.local/lib/python2.7/site-packages ... but navigating there is not as simple as good old pip freeze.
How can I pip freeze and get only the packages I installed with pip install --user rather than all the Python packages, including those installed via apt? | How can I pip freeze and get only pip --user installs, no system installs? | 0.099668 | 0 | 0 | 669 |
26,362,010 | 2014-10-14T13:24:00.000 | 0 | 0 | 0 | 0 | python,numpy | 26,362,076 | 3 | false | 0 | 0 | You need to use a compound dtype, with a separate type per column. Or you can use np.genfromtxt without specifying any dtype, and it will be determined automatically for each column, which may give you what you need with less effort (but perhaps slightly less performance and less error checking too). | 2 | 2 | 1 | I'm trying to load a matrix from a file using numpy. When I use any dtype other than float I get this error:
OverflowError: Python int too large to convert to C long
The code:
X = np.loadtxt(feats_file_path, delimiter=' ', dtype=np.int64 )
The problem is that my matrix has only integers, and I can't use a float because the first column in the matrix contains integer "keys" that refer to node IDs. When I use a float, numpy "rounds" the integer ID into something like 32423e^10, and I don't want this behavior.
So my questions:
How to solve the OverflowError?
If it's not possible to solve, then how could I prevent numpy from doing that to the ids? | Numpy's loadtxt(): OverflowError: Python int too large to convert to C long | 0 | 0 | 0 | 2,553 |
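A hedged sketch of the compound-dtype suggestion from the answer above: an unsigned 64-bit column for the IDs and float for the rest. The two-column layout and the file name are assumptions about the asker's data.

```python
import numpy as np

# One dtype per column: IDs stay exact integers, never rounded to float.
dt = np.dtype([('node_id', np.uint64), ('feat', np.float64)])
X = np.loadtxt('feats.txt', delimiter=' ', dtype=dt)
print(X['node_id'])
```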
26,362,010 | 2014-10-14T13:24:00.000 | 0 | 0 | 0 | 0 | python,numpy | 26,362,482 | 3 | false | 0 | 0 | Your number looks like it would fit in the uint64_t type, which is available if you have C99. | 2 | 2 | 1 | I'm trying to load a matrix from a file using numpy. When I use any dtype other than float I get this error:
OverflowError: Python int too large to convert to C long
The code:
X = np.loadtxt(feats_file_path, delimiter=' ', dtype=np.int64 )
The problem is that my matrix has only integers, and I can't use a float because the first column in the matrix contains integer "keys" that refer to node IDs. When I use a float, numpy "rounds" the integer ID into something like 32423e^10, and I don't want this behavior.
So my questions:
How to solve the OverflowError?
If it's not possible to solve, then how could I prevent numpy from doing that to the ids? | Numpy's loadtxt(): OverflowError: Python int too large to convert to C long | 0 | 0 | 0 | 2,553 |
26,363,853 | 2014-10-14T14:51:00.000 | 1 | 0 | 0 | 0 | python,testing,numpy,scipy,integration-testing | 26,368,952 | 1 | true | 0 | 0 | This doesn't completely answer your question, but I think the policy of scipy release management since 0.11 or earlier has been to support all of the numpy versions from 1.5.1 up to the numpy version in development at the time of the scipy release. | 1 | 1 | 1 | Testing against NumPy/SciPy includes testing against several versions of them, since there is the need to support all versions since Numpy 1.6 and Scipy 0.11.
Testing all combinations would explode the build matrix in continuous integration (like travis-ci). I've searched the SciPy homepage for notes about version compatibility or sane combinations, but have not found something useful.
So my question is how to safely reduce the number of combinations, while maintaining maximum testing coverage.
Is it possible to find all combinations in the wild? Or are there certain dependencies between Scipy and Numpy? | Testing against NumPy/SciPy sane version pairs | 1.2 | 0 | 0 | 69 |
26,374,480 | 2014-10-15T04:28:00.000 | 0 | 0 | 1 | 1 | python,python-3.x,windows-8,admin,exe | 54,080,629 | 2 | false | 0 | 0 | So, I know this question is like 5 years old, but you can make it actually kinda unkillable to even the admin. Make the program run as admin, so only admins can kill it. Then, make a loop that kills consent.exe constantly (consent.exe is the UAC pop-up). To kill the process, you need to be an admin, but you can't be an admin, because you can't accept the UAC prompt. But there is a catch. If you disable UAC via UAC settings, you can become an admin and kill the process. This can be used as a kind of failsafe. | 2 | 1 | 0 | For the last several weeks, I have been making a parental controls program (just for my friend and myself), in Python (it's what I know). I used CX_Freeze to get the .exe, and it works wonderfully. Everything is great... But I need a way to make the process unkillable to standard users. (just standard users. I want admins to be able to kill this easily if need be.)
I was pursuing a method in which my .exe was turned into a Windows service, thereby making it "SYSTEM" and unkillable to standard users. So far, the service cannot kill a process by using taskkill /im, and cannot create required setup .txt files.
Since that method appears to be failing, I thought I would ask if anyone knows of a way to make a process untouchable to standard users? I'm not entirely sure what professional parental controls/keyloggers/virus protection software uses to keep the user from killing the process, but perhaps something like that? | Windows unkillable process | 0 | 0 | 0 | 1,170 |
26,374,480 | 2014-10-15T04:28:00.000 | 0 | 0 | 1 | 1 | python,python-3.x,windows-8,admin,exe | 26,374,614 | 2 | false | 0 | 0 | Making it a service is probably the right way to go - because it is the best way to automatically launch a process with some admin rights.
I think the reason that your service wasn't able to kill other processes was the account the service was run under.
A service can run as system, local service, network service, or a specified account/password.
The idea here is to limit services to just be allowed to access what they need. A 'network service' service has very little access to local resources, while "local service" won't have network access. MSDN has all the details.
I don't remember offhand the details of exactly how you specify this during service registration, but I think it is pretty straightforward.
You will also notice that there is a checkbox for "Allow service to interact with the desktop".
Normally you don't want a service to directly control any UI because a UI is very hard to defend from attack - other processes could send messages which cause mischief in the service - potentially allowing them to hack the system.
I think that for your purposes, using a specific login for an admin account will suffice.
In "services.msc", it is simple enough to select your service, and enter a username/password for the service to run in. | 2 | 1 | 0 | For the last several weeks, I have been making a parental controls program (just for my friend and myself), in Python (it's what I know). I used CX_Freeze to get the .exe, and it works wonderfully. Everything is great... But I need a way to make the process unkillable to standard users. (just standard users. I want admins to be able to kill this easily if need be.)
I was pursuing a method in which my .exe was turned into a Windows service, thereby making it "SYSTEM" and unkillable to standard users. So far, the service cannot kill a process by using taskkill /im, and cannot create required setup .txt files.
Since that method appears to be failing, I thought I would ask if anyone knows of a way to make a process untouchable to standard users? I'm not entirely sure what professional parental controls/keyloggers/virus protection software uses to keep the user from killing the process, but perhaps something like that? | Windows unkillable process | 0 | 0 | 0 | 1,170 |
26,375,763 | 2014-10-15T06:21:00.000 | 2 | 1 | 1 | 0 | python,z3,z3py | 26,385,910 | 1 | true | 0 | 0 | Using Z3 over Python is generally pretty slow. It includes parameter checks and marshaling (_coerce_expr among others).
For scalability you will be better off using one of the other bindings or bypassing the Python runtime where possible. | 1 | 4 | 0 | I am using Z3 Python bindings to create an And expression via z3.And(exprs), where exprs is a Python list of 48000 equality constraints over Boolean variables. This operation takes 2 seconds on an MBP with a 2.6GHz processor.
What could I be doing wrong? Is this an issue with z3 Python bindings? Any ideas on how to optimize such constructions?
Incidentally, in my experiments, the construction of such formulae is taking more time than solving the resulting formulae :) | Why is z3.And() slow? | 1.2 | 0 | 0 | 283 |
26,378,344 | 2014-10-15T08:54:00.000 | 20 | 0 | 1 | 0 | python,pip | 26,379,031 | 13 | false | 0 | 0 | Just for completeness:
pip -V
pip --version
pip list, and inside the list you'll also find pip with its version. | 3 | 146 | 0 | Which shell command gives me the actual version of pip I am using?
pip show lists all installed modules with their versions, but it excludes pip itself. | How to know the version of pip itself | 1 | 0 | 0 | 384,428 |
26,378,344 | 2014-10-15T08:54:00.000 | 6 | 0 | 1 | 0 | python,pip | 60,867,180 | 13 | false | 0 | 0 | Start Python and type import pip followed by pip.__version__, which works for all Python packages. | 3 | 146 | 0 | Which shell command gives me the actual version of pip I am using?
pip show lists all installed modules with their versions, but it excludes pip itself. | How to know the version of pip itself | 1 | 0 | 0 | 384,428 |
26,378,344 | 2014-10-15T08:54:00.000 | 0 | 0 | 1 | 0 | python,pip | 69,206,843 | 13 | false | 0 | 0 | py -m pip --version
This worked for Python version 3.9.7. | 3 | 146 | 0 | Which shell command gives me the actual version of pip I am using?
pip show lists all installed modules with their versions, but it excludes pip itself. | How to know the version of pip itself | 0 | 0 | 0 | 384,428 |
26,379,658 | 2014-10-15T09:59:00.000 | 0 | 1 | 0 | 0 | python,c++ | 26,379,890 | 1 | false | 0 | 1 | I'd suggest using Python's struct module to pack your values in Python, which could be viewed as a 'struct' in C++, then send it from Python to C++ using ZeroMQ. | 1 | 0 | 0 | So I have 2 processing units: one runs on Python and the other runs on C++. The first one will generate a set of data of around 3 - 5 values, either as a list of ints or a string. I want this value to be passed to C++; what is the best method? Do I have to create a file in Python and then load it in C++, or is there another way? This process would repeat every second, so I want the transmission to be fast enough. | Transmitting data from Python to C++ | 0 | 0 | 0 | 76 |
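A hedged sketch of the struct + ZeroMQ suggestion above: pack the values into a C-struct layout and push them once a second. The socket address and the fixed 5-int payload are assumptions.

```python
import struct
import time
import zmq

sock = zmq.Context().socket(zmq.PUSH)
sock.connect('tcp://127.0.0.1:5555')        # C++ side binds a PULL socket

while True:
    values = (1, 2, 3, 4, 5)                # the data generated each cycle
    sock.send(struct.pack('<5i', *values))  # little-endian, 5 x int32
    time.sleep(1)
```

On the C++ side the received bytes can then be read directly into a struct of five 32-bit ints with matching layout.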
26,387,529 | 2014-10-15T16:29:00.000 | 1 | 0 | 0 | 0 | python,django | 49,406,541 | 5 | false | 1 | 0 | There is no need to reload the server, but sometimes static files need to be copied to be visible to the server.
Instead of running collectstatic by hand while developing, which copies recently edited static files (like JavaScript) from one directory to the directory used by the server,
here is a trick:
link the source directory in place of the target (this will "override" the target directory),
or run in a loop:
python manage.py collectstatic --noinput
Then your server will see all changes to the files. | 4 | 5 | 0 | By default, Django's runserver command auto reloads the server when python or template files are changed.
Is it possible to configure Django to extend its file monitoring for this purpose to other directories or sets of files, such as JavaScript or CSS files being served statically (during development)?
This would be useful in this scenario: the Django app reads a set of static text files on startup and I would like the server to re-read them when they change, without having to add this specific feature - simply restarting would be fine.
Do I need to start meddling with (or extending) django/utils/autoreload.py ? | Can I configure Django runserver to reload when static or non-python files are changed? | 0.039979 | 0 | 0 | 3,709 |
26,387,529 | 2014-10-15T16:29:00.000 | 1 | 0 | 0 | 0 | python,django | 26,405,259 | 5 | false | 1 | 0 | The static files are automatically served from disk, so there is no need to reload the dev server.
But your browser has its own cache, and it keeps some of your static files in it...
To reload it, use this shortcut:
Ctrl + Shift + r
OR
Ctrl + f5
If you're on a Mac, use the CMD key instead of Ctrl. | 4 | 5 | 0 | By default, Django's runserver command auto reloads the server when python or template files are changed.
Is it possible to configure Django to extend its file monitoring for this purpose to other directories or sets of files, such as JavaScript or CSS files being served statically (during development)?
This would be useful in this scenario: the Django app reads a set of static text files on startup and I would like the server to re-read them when they change, without having to add this specific feature - simply restarting would be fine.
Do I need to start meddling with (or extending) django/utils/autoreload.py ? | Can I configure Django runserver to reload when static or non-python files are changed? | 0.039979 | 0 | 0 | 3,709 |
26,387,529 | 2014-10-15T16:29:00.000 | 0 | 0 | 0 | 0 | python,django | 26,388,429 | 5 | false | 1 | 0 | As the comments on your question say, Django always pulls the file from the filesystem on every request, so they are not cached.
However, there is a check (indjango.views.static) for the mtime of the file if the browser sends an If-Modified-Since header which is why you may be seeing 304 Not Modified.
Regardless, would simply disabling browser caching meet your needs? | 4 | 5 | 0 | By default, Django's runserver command auto reloads the server when python or template files are changed.
Is it possible to configure Django to extend its file monitoring for this purpose to other directories or sets of files, such as JavaScript or CSS files being served statically (during development)?
This would be useful in this scenario: the Django app reads a set of static text files on startup and I would like the server to re-read them when they change, without having to add this specific feature - simply restarting would be fine.
Do I need to start meddling with (or extending) django/utils/autoreload.py ? | Can I configure Django runserver to reload when static or non-python files are changed? | 0 | 0 | 0 | 3,709 |
26,387,529 | 2014-10-15T16:29:00.000 | 0 | 0 | 0 | 0 | python,django | 71,053,904 | 5 | false | 1 | 0 | The answer to this is YES, all you need to do is touch your settings file which will trigger a runserver reload. If all you need to do is source new static files, you don't need this, but if you need to trigger a reload for another reason, it is possible with a simple touch. | 4 | 5 | 0 | By default, Django's runserver command auto reloads the server when python or template files are changed.
Is it possible to configure Django to extend its file monitoring for this purpose to other directories or sets of files, such as JavaScript or CSS files being served statically (during development)?
This would be useful in this scenario: the Django app reads a set of static text files on startup and I would like the server to re-read them when they change, without having to add this specific feature - simply restarting would be fine.
Do I need to start meddling with (or extending) django/utils/autoreload.py ? | Can I configure Django runserver to reload when static or non-python files are changed? | 0 | 0 | 0 | 3,709 |
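A hedged sketch building on the "touch settings.py" answer above: watch extra directories and touch the settings file whenever something changes, so runserver's autoreloader restarts. The directory and project paths are assumptions.

```python
import os
import time

WATCH_DIRS = ['static', 'data']          # assumption: dirs to monitor
SETTINGS = 'myproject/settings.py'       # assumption: project layout

def mtimes():
    stamps = {}
    for d in WATCH_DIRS:
        for root, _, files in os.walk(d):
            for name in files:
                path = os.path.join(root, name)
                stamps[path] = os.path.getmtime(path)
    return stamps

last = mtimes()
while True:
    time.sleep(1)
    now = mtimes()
    if now != last:
        os.utime(SETTINGS, None)         # "touch" -> dev server reloads
        last = now
```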
26,388,938 | 2014-10-15T17:52:00.000 | 2 | 1 | 1 | 0 | python,c++,performance,struct,tuples | 26,389,438 | 1 | true | 0 | 0 | There's almost no relationship between a Python tuple and a struct in C++. The elements of a tuple are neither named nor typed, and must be accessed (in C++) through functions in the Python C API (PyTuple_GetItem, etc.). Internally (although you don't have access to it directly), a tuple is an array of pointers to objects, even for types like int and float.
Because of the function calls and the added levels of indirection, using a Python tuple will be slower than using a struct. The "best practice" is to write a wrapper function, which extracts the information from the tuple, doing the dynamic type checking, etc., and use it to initialize the struct which you then send on to the C++ function. There's really no other way. | 1 | 0 | 0 | I would like to make use of a couple of C++ libraries from within Python to provide a Python API for scripting a C++ application. So I am wondering if there are any performance implications of passing a Python tuple in place of a C++ struct to a C++ function? Also, are the two data structures the same? If the two are the same, how can I prevent a tuple with incompatible member types from being passed? Would that impact the performance? If that impacts performance, what are the best practices for dealing with such situations in Python? Please note that I am clearly not the author of the libraries. | Are there any performance implications of passing a Python tuple in place of a C++ struct function argument? | 1.2 | 0 | 0 | 79 |
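A hedged Python-side sketch of the wrapper-function idea from the answer above: unpack the tuple, convert (and thereby type-check) its elements, and build a C-compatible struct. The Point layout and the library name are assumptions.

```python
import ctypes

class Point(ctypes.Structure):
    _fields_ = [('x', ctypes.c_int), ('y', ctypes.c_double)]

def to_point(tup):
    x, y = tup                        # raises if the tuple is malformed
    return Point(int(x), float(y))    # explicit conversion = type check

# cpp_lib = ctypes.CDLL('./libexample.so')   # hypothetical C++ library
# cpp_lib.process_point(to_point((3, 4.5)))
print(to_point((3, 4.5)).y)
```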
26,391,208 | 2014-10-15T20:09:00.000 | 1 | 0 | 0 | 0 | javascript,python,templates,pyramid | 26,392,168 | 1 | false | 1 | 0 | This was a red herring; the URL was wrong but the log file mentioned a missing template so I was focused in the wrong direction.
I had to get a custom redirection piece of code from one of the developers on this project and I have it working now. | 1 | 0 | 0 | I'm adding a page to a complex Pyramid-based app that uses Handlebar templates.
I need a file download URL that doesn't need a template, but the system is giving me a 404 code for a missing template anyway.
How do I tell a view at configuration time "do not use a handlebar template with this one?" | Pyramid app with handlebar.js: I don't need a template for this view; how to disable? | 0.197375 | 0 | 0 | 68 |
26,393,254 | 2014-10-15T22:33:00.000 | 0 | 0 | 0 | 0 | python,matrix,scipy | 26,393,748 | 1 | false | 0 | 0 | Short answer
A positive semi-definite matrix does not have to have full rank, thus might not be invertible using the normal inverse.
Long answer
If cov does not have full rank, it has some eigenvalues equal to zero and its inverse is not defined (the inverse would need infinitely large eigenvalues). Thus, in order to be able to invert a positive semi-definite covariance matrix ("semi": not all eigenvalues are larger than zero), they use the pseudo-inverse. The latter inverts the non-zero eigenvalues and preserves the zero eigenvalues rather than inverting them to infinity.
The determinant of a matrix without full rank is zero. The pseudo-determinant only considers non-zero eigenvalues, yielding a non-zero result.
If, however, cov does have full rank, the results should be the same as with the usual inverse and determinant. | 1 | 0 | 1 | I was looking into the scipy.stats.multivariate_normal function; there they mention that they are using the pseudo-inverse and pseudo-determinant.
The covariance matrix cov must be a (symmetric) positive semi-definite matrix. The determinant and inverse of cov are computed as the pseudo-determinant and pseudo-inverse, respectively, so that cov does not need to have full rank. | Why we calculate pseudo inverse over inverse | 0 | 0 | 0 | 239 |
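A quick sketch of why the pseudo-inverse is used, as the answer above explains: inv() fails on a rank-deficient (singular) covariance matrix, while pinv() inverts only the nonzero eigenvalues.

```python
import numpy as np

cov = np.array([[1.0, 1.0],
                [1.0, 1.0]])          # positive semi-definite, rank 1
print(np.linalg.pinv(cov))            # works; the zero eigenvalue is preserved
# np.linalg.inv(cov) would raise numpy.linalg.LinAlgError: Singular matrix
```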
26,393,540 | 2014-10-15T23:00:00.000 | 0 | 0 | 0 | 1 | python,parallel-processing,rabbitmq,celery,django-celery | 26,732,448 | 1 | false | 1 | 0 | If queuing up your tasks takes longer than running them, how about increasing the scope of the tasks so they operate on N files at a time? So instead of queuing up 1000 tasks for 1000 files, you queue up 10 tasks that operate on 100 files at a time.
Make your task take a list of files rather than a single file as input. Then when you loop through your list of files you can loop 100 at a time. | 1 | 0 | 0 | I'm having a major problem in my celery + rabbitmq app where queuing up my jobs is taking longer than the time for my workers to perform jobs. No matter how many machines I spin up, my queuing time will always overtake my task time.
This is because I have one celery_client script on one machine doing all the queuing (calling task.delay()) sequentially. I am iterating through a list of files stored in S3. How can I parallelize the queuing process? I imagine this is a widespread basic problem, yet I cannot find a solution.
EDIT: to clarify, I am calling task.delay() inside a for loop that iterates through a list of S3 files (there is a huge number of small files). I need to get the result back so I can return it to the client, so for this reason I iterate through a list of results after the above to see if each result is completed -- if it is, I append it to a result file.
Some solutions I can think of immediately involve some kind of multi-threaded support in my for loop, but I am not sure whether .delay() would work with this. Is there no built-in Celery support for this problem?
EDIT2 More details: I am using one queue in my celeryconfig -- my tasks are all the same.
EDIT3: I came across "chunking", where you can group a lot of small tasks into one big one. Not sure if this can help out my problem, as although I can transform a large number of small tasks into a small number of big ones, my for loop is still sequential. I could not find much information in the docs. | The time to queue tasks in celery bottlenecks my application - how to parallelize .delay()? | 0 | 0 | 0 | 349 |
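A hedged sketch of the batching advice from the answer above: queue one task per chunk of 100 S3 keys instead of one per file. The broker URL and the placeholder task body stand in for the asker's real setup.

```python
from celery import Celery

app = Celery('tasks', broker='amqp://localhost')  # assumption: RabbitMQ broker

@app.task
def process_files(keys):
    return [len(k) for k in keys]     # placeholder for per-file work

def chunks(seq, size):
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

s3_keys = ['file-%d' % i for i in range(1000)]    # assumption: the S3 listing
async_results = [process_files.delay(chunk)       # 10 delays instead of 1000
                 for chunk in chunks(s3_keys, 100)]
```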
26,394,754 | 2014-10-16T01:20:00.000 | 2 | 0 | 0 | 0 | python,google-app-engine,google-oauth | 26,395,567 | 1 | true | 1 | 0 | There is no reason why generating more access tokens from refresh tokens would cause an error. Existing non-expired access tokens are not invalidated when a new one is produced from the refresh token.
Check your code for errors there.
Also, there is no way to generate a long-lived access token. What you ask for is how OAuth1/ClientLogin used to work (tokens expired after 2 weeks instead of 1 hour). In OAuth2 there is no such thing as a long-lived access token. | 1 | 1 | 0 | Our app uses the usual short-lived access + refresh tokens to do a bunch of background services for users. This means that every now and then the services need to refresh the tokens.
We've run into an issue where 2 services try to refresh a token at the same time, thus resulting in an invalid token.
Is there a better way to generate a usable access token that doesn't require a refresh every hour? | Generate long-lived access token from short-lived one? | 1.2 | 0 | 0 | 236 |
26,399,754 | 2014-10-16T08:36:00.000 | 22 | 0 | 1 | 1 | python,virtualenv | 26,399,797 | 3 | true | 0 | 0 | You should not. The other computer can have a different operating system, other packages or package versions installed, so copying the files will not work.
The point of a virtual environment is to be able to replicate it everywhere you need it.
Make a script which installs all necessary dependencies from a requirements.txt file and use it.
Use pip freeze > requirements.txt to get the list of all Python packages installed. Then install the dependencies in another virtual environment on another computer using pip install -r requirements.txt.
If you want the exact environment, including system packages, on another computer, use Docker. | 1 | 15 | 0 | I have created a virtual environment by using virtualenv pyenv on my Linux system. Now I want to use the virtual environment on another computer. Can I directly copy the virtual environment and use it on another computer? Or do I need to do something to set it up? | How to use python virtual environment in another computer | 1.2 | 0 | 0 | 10,942 |
26,405,171 | 2014-10-16T13:02:00.000 | 1 | 1 | 0 | 1 | python,background-process,interprocess,mod-python | 26,405,300 | 1 | false | 0 | 0 | You can use a Celery/Redis task queue. | 1 | 0 | 0 | We're running a Linux server running Apache2 with mod_python. One mod_python script inserts an entry in a database logging table. The logging table is large and can be a point of disk-write contention, or it can be temporarily unavailable during database maintenance. We would like to spin off the logging as an asynchronous background task, so that the user request can complete before the logging is done.
Ideally there would be a single background process. The web handler would pass its log request to the background process. The background process would write log entries to the database. The background process would queue up to a hundred requests and drop requests when its queue is full.
Which techniques are available in Python to facilitate communicating with a background process? | How can you message a background process from mod_python? | 0.197375 | 0 | 0 | 76 |
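A hedged sketch of the Celery/Redis suggestion from the answer above: the web handler enqueues the log entry and returns at once. The broker URL and the placeholder DB write stand in for the real logging-table insert.

```python
from celery import Celery

app = Celery('weblog', broker='redis://localhost:6379/0')

@app.task(ignore_result=True)
def log_request(entry):
    # hypothetical: INSERT INTO request_log ... using your DB driver
    print('would log:', entry)

# inside the mod_python handler (non-blocking):
# log_request.delay({'uri': req.uri, 'ts': time.time()})
```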
26,408,871 | 2014-10-16T15:59:00.000 | 0 | 0 | 1 | 0 | python-3.x,pywin32,win32com | 28,550,799 | 1 | true | 0 | 0 | When you use COM, the language you are accessing it from needs the same "bitness" as the COM .dll or control. So if you have a 32-bit control or COM DLL you have to have a 32-bit win32com. | 1 | 0 | 0 | I've generated a win32com wrapper for a DLL and I'm trying to access it. It works except for one function called ReadPipeBytes. It works on two of my other machines, but I'm using a different Python version. This is the error: '' object has no attribute 'ReadPipeBytes'. I copied over the same DLL to the other machine (it's a driver; I have the same hardware I'm trying to access). I did a compare on the wrapper files and they are almost identical except for the Python versions they were generated with, and the 3.3.5-generated version doesn't put u'FunctionName' where the 2.7 version does. If I copy over the wrapper file to the machine that doesn't work I get the same error (and/or the dict file).
1) Why would the version of Python make a difference in reading this one particular function when the other functions work fine (it's not the wrapper)?
2) How can Python fail to use the function called ReadPipeBytes when the other functions work and when I'm using the same files that I do on my other machines? | Error with accessing one function in pywin32 or win32com from python 3.3.5 | 1.2 | 0 | 0 | 162 |
26,409,544 | 2014-10-16T16:36:00.000 | 0 | 1 | 0 | 0 | python,python-3.x,xml,python-generateds | 26,551,274 | 1 | false | 0 | 0 | You could run generateDS to get your Python file then run, e.g.,
"2to3 -w your_python_file.py" to generate a Python 3 version of your generateDS file.
I went through the same process and had luck with this. It seems to work just fine. | 1 | 0 | 0 | I have created Python classes from an XML schema file using generateDS 2.12a. I am using these classes to create XML files. My module works well in a Python 2.7 environment.
Now, for some reason, my environment has changed to Python 3.0.0, and when I try to export the XML object it throws the following error:
Function : export(self, outfile, level, namespace_='', name_='rootTag', namespacedef_='', pretty_print=True)
Error : s1 = (isinstance(inStr, basestring) and inStr or
NameError: global name 'basestring' is not defined
Is there a change I need to make to export XML in Python 3.0.0, or is there a newer version of generateDS to be used for Python 3.0.0? | What version of generateDS is to be used for Python 3.0.0? | 0 | 0 | 1 | 866 |
26,411,303 | 2014-10-16T18:21:00.000 | 1 | 0 | 1 | 0 | python,xcode,git,github | 26,445,518 | 1 | false | 0 | 0 | IanAuld's answer sent me in the right direction and I figured out what I was doing wrong. I had been assuming that the Xcode project should be inside the directory with the git project, but that was causing problems because then git tries to track the Xcode project.
Here's what I am now doing, which seems to work:
Create a new Xcode project somewhere that is not managed by git. Make sure that "Create Git repository on ..." is not checked.
Clone the github project to a directory that does not include the Xcode project.
In Xcode, File | Add Files to "ProjectName"..., and select the folder with the git project.
Now, if I edit any of those files in the context of the project, it uses the indentation style I set for the project (though if I edit the file on its own it uses my global indentation style), and I can control git through the Source Control menu.
Problem solved. | 1 | 2 | 0 | I am working with an existing GitHub repository and I want to create a new Xcode project from it. How can this be done?
I have previously used Xcode just as a Python script editor and never created a project, but I would like to do so in this case so that I can have a special indentation style just for the files in this project (this is Python, so no interest in build targets etc., I just want to edit and use git).
I am using Xcode 6.0.1 on Mavericks. | How to create an Xcode project for an existing GitHub repository | 0.197375 | 0 | 0 | 4,537 |
26,411,571 | 2014-10-16T18:38:00.000 | 0 | 1 | 0 | 0 | python,testing,config,nose | 26,785,951 | 2 | true | 0 | 0 | I didn't have to do any of those. Nosetests by itself executes any test file whose name begins with "test_"; just make sure you use "--exe" if the files are executable (if not, you can skip that option). The nosetests help page on the wiki really helps. | 1 | 0 | 0 | I have a Python script which takes a config file on the command line and gives an output.
I am trying to see how I can use nosetests to run all these files.
I read through the nosetests info on Google but I could not follow how to run them with the config file.
Any ideas on where I could get started? | running nose tests with a regular python script | 1.2 | 0 | 0 | 1,036 |
26,413,216 | 2014-10-16T20:21:00.000 | 3 | 0 | 0 | 0 | python,pdfminer | 43,130,993 | 2 | false | 1 | 0 | Perhaps you could use pdfminer.six.
Its description:
fork of PDFMiner using six for Python 2+3 compatibility
After installing it using pip:
pip install pdfminer.six
The usage is just like pdfminer, at least in my code.
Hope this saves your day :) | 1 | 6 | 0 | Since I want to move from Python 2 to 3, I tried to work with pdfminer3k in Python 3.4. It seems like they have changed everything. Their change logs do not reflect the changes they have made, and I had no success in parsing PDFs with pdfminer3k. For example:
They have moved PDFDocument into pdfparser (sorry if I spell it incorrectly). PDFPage used to have a create_pages method, which is gone now. All I can see inside PDFPage are internal methods. Does anybody have a working example of pdfminer3k? It seems like there is no new documentation to reflect any of the changes. | pdfminer3k has no method named create_pages in PDFPage | 0.291313 | 0 | 0 | 8,846 |
26,413,483 | 2014-10-16T20:37:00.000 | 2 | 1 | 0 | 0 | python,curl,network-programming,linkedin | 35,821,074 | 2 | false | 0 | 0 | Thanks @Anzel, but the related-profile-views API allows reading past profile views, not triggering a new view (and therefore notifying the user that I visited their profile programmatically). Unless I pop up a new window and load their profile, but then I'll need a browser to do it. Ideally I wanted to achieve it via the backend, cURL and cookies, but it's not as simple as it sounds. | 1 | 0 | 0 | A LinkedIn friend's full profile is not viewable without logging in to our LinkedIn account. Is it possible to use a cookie or any other alternative way without a browser to do that?
Any tips are welcome! | How to use Python/Curl to access LinkedIn and view a friend's full profile? | 0.197375 | 0 | 1 | 686 |
26,413,505 | 2014-10-16T20:39:00.000 | 0 | 1 | 0 | 0 | python,c++,linux,signals,paramiko | 26,413,679 | 1 | false | 0 | 0 | The best way I can think of to do this is to run both of them in a web server. Use something like Windows Web Services for C++ or a native CGI implementation and use that to signal the python script.
If that's not a possibility, you can use COM to create COM objects on both sides, one in Python, and one in C++ to handle your IPC, but that gets messy with all the marshalling of types and such. | 1 | 1 | 0 | I'm working on developing a test automation framework. I need to start a process(a C++ application) on a remote linux host from a python script. I use the python module "paramiko" for this. However my c++ application takes sometime to run and complete the task assigned to it. So till the application completes processing, I cannot close the connection to the paramiko client. I wan thinking if I could do something like "the c++ application executing a callback(or some kind of signalling mechanism) and informing the script on completion of the task" Is there a way I can achieve this ?
I'm new to python, so any help would be much appreciated.
thanks!
Update: Is it not possible to have an event.wait() and event.set() mechanism between the C++ application and the Python script? If yes, can somebody explain how it can be achieved?
Thanks in advance! | Can a "C++ application signal python script on completion"? | 0 | 0 | 0 | 331 |
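A hedged sketch of a simpler alternative to a callback: block until the remote C++ application exits and read its exit status. The host, user, and command are assumptions; authentication via SSH keys or an agent is assumed too.

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('remote-host', username='user')
stdin, stdout, stderr = client.exec_command('./my_cpp_app')
status = stdout.channel.recv_exit_status()   # returns when the app finishes
print('remote app completed with exit status', status)
client.close()
```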
26,413,572 | 2014-10-16T20:43:00.000 | 1 | 0 | 1 | 1 | python,io,subprocess | 26,413,875 | 2 | true | 0 | 0 | I think you'll be fine (carefully) ignoring the warnings and using Popen.stdin, etc. yourself. Just be sure to process the streams line by line and iterate through them on a fair schedule so as not to fill up any buffers. A relatively simple (and inefficient) way of doing this in Python is using separate threads for the three streams. That's how Popen.communicate does it internally. Check out its source code to see how.
I want to read Python's stdin as quickly as possible, filter and translate it, and then write it promptly to the child program's stdin.
At the same time, I want to be reading as quickly as possible from the child program's stdout and, after a bit of massaging, writing it promptly to Python's stdout.
The Python subprocess module is full of warnings to use communicate() to avoid deadlocks. However, communicate() doesn't give me access to the child program's stdout until the child has terminated. | Live reading / writing to a subprocess stdin/stdout | 1.2 | 0 | 0 | 439 |
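A hedged sketch of the thread-per-stream approach from the accepted answer; child_prog, translate(), and massage() are stand-ins for the real child command and the filtering steps.

```python
import subprocess
import sys
import threading

def translate(line):
    return line                       # placeholder for the input filtering

def massage(line):
    return line                       # placeholder for the output massaging

proc = subprocess.Popen(['child_prog'], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, universal_newlines=True)

def pump_in():
    for line in sys.stdin:
        proc.stdin.write(translate(line))
        proc.stdin.flush()            # keep the child fed promptly
    proc.stdin.close()

def pump_out():
    for line in proc.stdout:
        sys.stdout.write(massage(line))
        sys.stdout.flush()            # pass output on promptly

threads = [threading.Thread(target=pump_in), threading.Thread(target=pump_out)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```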
26,413,572 | 2014-10-16T20:43:00.000 | 1 | 0 | 1 | 1 | python,io,subprocess | 26,413,852 | 2 | false | 0 | 0 | Disclaimer: This solution likely requires that you have access to the source code of the process you are trying to call, but may be worth trying anyways. It depends on the called process periodically flushing its stdout buffer which is not standard.
Say you have a process proc created by subprocess.Popen. proc has attributes stdin and stdout. These attributes are simply file-like objects. So, in order to send information through stdin you would call proc.stdin.write(). To retrieve information from proc.stdout you would call proc.stdout.readline() to read an individual line.
A couple of caveats:
When writing to proc.stdin via write() you will need to end the input with a newline character. Without a newline character, your subprocess will hang until a newline is passed.
In order to read information from proc.stdout you will need to make sure that the command called by subprocess appropriately flushes its stdout buffer after each print statement and that each line ends with a newline. If the stdout buffer does not flush at appropriate times, your call to proc.stdout.readline() will hang. | 2 | 2 | 0 | I want to make a Python wrapper for another command-line program.
I want to read Python's stdin as quickly as possible, filter and translate it, and then write it promptly to the child program's stdin.
At the same time, I want to be reading as quickly as possible from the child program's stdout and, after a bit of massaging, writing it promptly to Python's stdout.
The Python subprocess module is full of warnings to use communicate() to avoid deadlocks. However, communicate() doesn't give me access to the child program's stdout until the child has terminated. | Live reading / writing to a subprocess stdin/stdout | 0.099668 | 0 | 0 | 439 |
26,414,117 | 2014-10-16T21:22:00.000 | 4 | 0 | 1 | 0 | python,python-3.x,python-idle | 26,599,682 | 1 | true | 0 | 0 | To close the loop on this, based on responses received:
The IDLE debugger does not support object inspection in the pop-up window.
Thanks to Terry Jan Reedy for putting it on the list of potential future improvements. IDLE is a big help to new Python programmers. | 1 | 6 | 0 | I am new to Python. I am using the Python IDLE Debugger on Windows. Is there a way to inspect object attributes in the debugger? The debugger only shows object address/type.
A solution I tried was to create global variables and assign object attributes to them. The debugger then shows global variables. This works for mutable types such as list, but for immutable types such as int, it shows the value at assignment only. Is there a way to bind a global name to a global object's int attribute? | Object inspections in Python 3.4 IDLE debugger on Windows | 1.2 | 0 | 0 | 1,101 |
26,418,454 | 2014-10-17T05:31:00.000 | 1 | 0 | 0 | 0 | python,teradata,greenplum | 26,425,426 | 1 | false | 0 | 0 | I'm guessing the data volumes are at least moderate in size - 10's of millions or greater.
1. FastExport or a Teradata Parallel Transporter export of the Teradata data to a flat file or named pipe.
2. Ingest using Greenplum's preferred method for bulk loading data from a flat file or named pipe.
Other options may include invoking the Teradata FastExport API via JDBC using Python, but then you still have to figure out how to efficiently ingest the data into Greenplum. | 1 | 0 | 0 | I am using Python to establish a connection to Greenplum and run code automatically. For that I am using these drivers: psycopg2, psycopg2.extensions & psycopg2.extras. I also have to establish a connection to Teradata, run some code, and transfer tables from Teradata to Greenplum. Can someone please suggest some drivers or a method to do this? I heard that arrays or Alteryx can be used in Python to do so, but I couldn't find anything. | How to transfer data from Teradata to Greenplum using Python? | 0.197375 | 1 | 0 | 694 |
26,425,120 | 2014-10-17T12:33:00.000 | 2 | 0 | 0 | 0 | python,openerp | 26,478,950 | 2 | false | 1 | 0 | You can set access rights from the menu:
Settings --> Groups --> Access Rights tab | 2 | 0 | 0 | How to set full access to a group in my custom module in OpenERP? Now some users don't have access everywhere because they are filtered by domain_filters. | Openerp full access for a group | 0.197375 | 0 | 0 | 59 |
26,425,120 | 2014-10-17T12:33:00.000 | 2 | 0 | 0 | 0 | python,openerp | 26,483,435 | 2 | true | 1 | 0 | You can create an ir.model.access.csv file, and give access rights to all the objects you want for the particular user group. | 2 | 0 | 0 | How to set full access to a group in my custom module in OpenERP? Now some users don't have access everywhere because they are filtered by domain_filters. | Openerp full access for a group | 1.2 | 0 | 0 | 59 |
26,426,780 | 2014-10-17T14:02:00.000 | 1 | 0 | 0 | 0 | python,heroku | 26,463,868 | 1 | true | 1 | 0 | Via Heroku support:
Finding the app's name is not something currently possible to do, sorry. The only thing you can potentially do is to analyse the hostname requests are coming from and deduce the app's name from it. | 1 | 0 | 0 | How can I find the current app's name from Python on Heroku? I want to know because I'm using the Heroku button to start an instance automatically. Can it be made part of os.environ somehow? | How can I find the current app's name from python on heroku? | 1.2 | 0 | 0 | 82 |
26,428,639 | 2014-10-17T15:41:00.000 | 1 | 1 | 0 | 0 | python,functional-programming,chemistry | 29,393,699 | 2 | false | 0 | 0 | It might be helpful if you will look for a free or if possible with you a commercial software(written in python) which solves the same or a problem close to it, learn its functionality, problem solving approach and if possible obtain its source code. I find this to be helpful in many ways. | 1 | 5 | 0 | I am trying to learn python by making a simple program which generates a typical type of practice problem, organic chemistry students usually face on exams: the retro-synthesis question.
For those unfamiliar with this type of question: the student is given the initial and final species of a series of chemical reactions, then is asked to determine which reagents/reactions were performed to the initial reactant to obtain the final product.
Sometimes you are only given the final product and asked to list the reactions necessary to synthesize given some parameters (start only with a compound that has 5 carbons or less, only use alcohol, etc.)
So far, I've done some research, and I think RDkit w/Python is a good place to start. My plan is to use the SMILE format for reading molecules (since I can manipulate it as I would a string), then define functions for each reaction, finally I'll need a database of chemical species which the program can randomly select species from (for the inital and final species in the problem). The program then selects a random species from the database, applies a bunch of reactions to it (3-5, specified by the user) then displays the final product. The user then solves the question himself, and the program then shows the path it took (using images of the intermediates and printing the reagents used to obtain them). Simple. In principle.
But once I started actually coding the functions I ran in to some problems, first of all it is very tedious to write a function for every single reaction, second while SMILE can handle virtually all molecular complications thrown at it (stereo-chemistry, geometry, etc.) it has multiple forms for certain molecules and I'm having trouble keeping the reactions specific. Third, I'm using the "replace" method to manipulate the SMILE strings and this gets me into trouble when I have regiospecific reactions that I want to make universal
For example: Sn2 reactions react well with primary alkyl halides, but not all with tertiary ones (steric hinderance), how would I create a function for this reaction?
Another problem, I want the reactions to be tagged by their respective reagents, thus I've taken to naming the functions by the reagents used. But, this becomes problematic when there are reagents which can take many different forms (Gringard reagents for example).
I feel like there is a better, less repetitive and tedious way to tackle this thing. Looking for a nudge in the right direction | How to make a python organic chemistry retro-synthesis generator? | 0.099668 | 0 | 0 | 2,635 |