Dataset schema (column: dtype, value range):
Q_Id: int64 (337 to 49.3M)
CreationDate: stringlengths (23 to 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: stringlengths (6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: stringlengths (6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: stringlengths (15 to 29k)
Title: stringlengths (11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
24,086,346
2014-06-06T16:12:00.000
1
1
1
0
python
24,086,656
1
true
0
0
You can retrieve the loaded modules at runtime by reading sys.modules. This is a dictionary whose keys are module names and whose values are module objects, and from a module object you can get the path to the loaded file. But be aware: somewhere in your project, modules may be loaded dynamically at runtime. In that case, try to find usages of the __import__(...) function and importlib.import_module(...).
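As a small illustration (plain standard library, nothing project-specific), this dumps each currently loaded module together with the file it was imported from:

```python
import sys

# Dump every currently loaded module and, where available, the file it came from.
for name, module in sorted(sys.modules.items()):
    path = getattr(module, "__file__", None)
    if path is not None:          # built-in modules have no __file__
        print(f"{name}: {path}")
```

Cross-referencing this output against the files on disk gives a first approximation of which modules are actually reachable from the entry point.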
1
3
0
I'm working on a fairly large Python project with a large number of modules in the main repo. The problem is that over time we have stopped using many of these modules without deleting or moving the .py file. So the result is we have far more files in the directory than we are actually using, and I would like to remove them before porting the project to a new platform to avoid porting code we don't need. Is there a tool that lets me supply the initial module (that starts the application) and a directory, and tells me which files are imported and which aren't? I've seen many similar tools that look at imports and tell me if they are used or not, but none that check files to see if they were imported to begin with.
Finding unused python modules in a directory
1.2
0
0
277
24,091,411
2014-06-06T22:15:00.000
1
0
0
0
python-2.7,scipy,integrate
24,771,340
1
true
0
0
In scipy.integrate.quad there's a lower-level call to a difference function that is iterated over, up to divmax times. In your case, the default is divmax=20. Some functions let you override this default: for example, scipy.integrate.quadrature.romberg accepts divmax (default 10) as a keyword. The warning is thrown when the tolerances aren't met by the difference function. Setting divmax to a higher value will run longer and hopefully meet the tolerance requirements.
1
3
1
I am currently trying to compute out an integral with scipy.integrate.quad, and for certain values I get the following error: /Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/site-packages/scipy/integrate/quadrature.py:616: AccuracyWarning: divmax (20) exceeded. Latest difference = 2.005732e-02 AccuracyWarning) Looking at the documentation, it isn't entirely clear to me why this warning is raised, what it means, and why it only occurs for certain values.
SciPy Quad Integration: Accuracy Warning
1.2
0
0
2,346
24,093,235
2014-06-07T03:02:00.000
1
0
1
0
python,regex
24,093,407
3
false
0
0
Let's look at each of your expressions, then we'll add one interesting twist that may resolve any outstanding confusion. [ab]* is equivalent to (?:a|b)*, in other words match a or b any number of times, for instance abbbaab. [ab] is equivalent to (?:a|b), in other words match a or b once, for instance a. a* means match a any number of times (for instance, aaaa). b* means match b any number of times (for instance, bb). You say that ([ab])\1 can only match aa or bb. That is correct, because ([ab])\1 means match a or b once, capturing it to Group 1, then match Group 1 again, i.e. a if we had a, or b if we had b. One more variation (Perl, PCRE): ([ab])(?1) means match a or b once, capturing it to Group 1, then match the expression specified inside Group 1, i.e. match [ab] again. This would match aa, ab, ba or bb. Therefore, ([ab])(?1)* can match the same as [ab]+, and ([ab]*)(?1)* can match the same as [ab]*.
2
0
0
What makes me wonder is why [ab]* doesn't repeat the matched part, but repeats [ab]. In other words, why is it not the same as either a* or b*? And why does ([ab])\1 repeat the matched part rather than [ab]? In other words, why can it only match aa and bb, but not ab and ba? Is it because the priority of () is lower than [], while the priority of * is higher than []? I wonder if thinking of these as operators might not be appropriate. Thanks.
Compare two regexs [ab]* and ([ab])\1
0.066568
0
0
60
24,093,235
2014-06-07T03:02:00.000
1
0
1
0
python,regex
24,093,261
3
true
0
0
The two are entirely different. [ab]* matches either a or b, zero or more times. So it will match "", "a", "b", and any combination of a and b. But in ([ab])\1, either a or b is matched and then captured. \1 is called a backreference; it refers to the group already captured in the regex, in our case ([ab]). So if a was captured, it will match only a again; if b was captured, it will match only b again. It can match only aa and bb.
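A short sketch with Python's re module makes the difference concrete (fullmatch anchors the pattern to the whole string):

```python
import re

# [ab]* re-matches the character class at every repetition, so mixed strings are fine.
assert re.fullmatch(r"[ab]*", "abba")
assert re.fullmatch(r"[ab]*", "")       # zero repetitions also match

# ([ab])\1 captures one character, then \1 must repeat that exact capture.
assert re.fullmatch(r"([ab])\1", "aa")
assert re.fullmatch(r"([ab])\1", "bb")
assert re.fullmatch(r"([ab])\1", "ab") is None
assert re.fullmatch(r"([ab])\1", "ba") is None
```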
2
0
0
What makes me wonder is that why [ab]* doesn't repeat the matched part, but repeat [ab]. In other words, why it is not the same as either a*, or b*? why ([ab])\1 repeat the matched part, but not repeat [ab]. In other words, why it can only match aa and bb, but not ab and ba? Is it because the priority of () is lower than [], while the priority of * is higher than []? I wonder ift thinking of these as operators might not be appropriate. Thanks.
Compare two regexs [ab]* and ([ab])\1
1.2
0
0
60
24,093,913
2014-06-07T05:12:00.000
1
0
0
0
php,python,mysql,google-cloud-messaging
24,094,254
2
false
1
0
From what I understand, if you want to store unstructured data and retrieve it really fast, you should be looking at the NoSQL segment for storage and do a POC with a few of the solutions available on the market. I would suggest trying the Aerospike NoSQL DB, which has a track record of easily doing 1 million TPS on a single machine.
1
2
0
I need to design a mobile application which requires a lot of database queries; a lot means a peak value can be 1 million in a second. I don't know which database or backend to use. On the client side I will be using PhoneGap for Android and iOS, and I will be needing a web interface for PC as well. My doubts are: I am planning to host the system online and use Google Cloud Messaging to push data to users. Can online hostings handle this much traffic? I am planning to use PHP as the backend. Or Python? The software need not do a lot of calculation, but it does a lot of queries. And which database system should I use: MySQL or Google Cloud SQL? Also tell me about using Hadoop or other technologies like load balancers. I may be totally wrong about the question itself. Thank you very much in advance.
very high number of queries in a second.
0.099668
0
0
90
24,096,036
2014-06-07T10:01:00.000
0
0
1
0
macos,python-2.7,neural-network,pybrain
24,102,842
1
true
0
0
I used pillow, and downloaded some fonts, so I successfully generated many files. Thank you, community, for help.
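A minimal sketch of that idea with Pillow. The 16x16 size matches the question; the font here is Pillow's built-in default rather than a downloaded one, and render_digit and the file names are made up for the example:

```python
from PIL import Image, ImageDraw, ImageFont

def render_digit(digit, size=(16, 16)):
    """Render one digit on a white 16x16 bitmap (render_digit is a made-up helper)."""
    img = Image.new("1", size, color=1)      # 1-bit image, white background
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()          # swap in ImageFont.truetype(path, 12) per font
    draw.text((4, 2), str(digit), font=font, fill=0)
    return img

# One file per digit; loop over several fonts to grow the training set.
for d in range(10):
    render_digit(d).save(f"digit_{d}.bmp")
```

Looping the outer loop over a list of ImageFont.truetype(...) fonts produces the 20-40 variants per digit the question asks for.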
1
0
0
I want to train my PyBrain network to recognise digits. They should be written in 16x16 bmp files. Since manually drawing 10 digits is not enough for the machine to learn, how is it possible to generate a lot of single-digit files (20-40 each) using many fonts (preferably ones already installed on the computer)?
Creating a set of letters written in different fonts
1.2
0
0
22
24,100,963
2014-06-07T19:46:00.000
2
0
1
0
python,pycharm
60,985,796
4
false
0
0
To run in debug mode, press the 'bug' button (or Shift+F9). Step over: F8. Step into: F7. Step out: Shift+F8. Resume to the next breakpoint (or the end): F9.
1
16
0
I am using PyCharm (community version) for my Python IDE. I want the program to debug in a line-by-line fashion. So I don't want to set every line as a break point... Is there a way I could do this?
PyCharm: debugging line by line?
0.099668
0
0
17,147
24,103,468
2014-06-08T06:06:00.000
0
0
1
0
python-3.x,pycharm,couchdb-python
24,103,871
2
false
0
0
Python 3 support for couchdb-python was recently committed, but not yet included in any release. You can either wait for the next release (0.10 I guess) or install package from sources, not from PyPI as PyCharm does.
2
0
0
I am using the IDE PyCharm 3.4 and Python 3.3 (I tried Python 3.4 with the same problem). I want to install the CouchDB package through the IDE. Step by step: 1) Install the "Pygments" package (because it's a required package). 2) Trying to install the "CouchDB" package prints 'Pygments not installed, syntax highlighting disabled'. When I use Python 2.7, I don't have this problem.
Python 3 can't use with couch db
0
0
0
415
24,103,468
2014-06-08T06:06:00.000
0
0
1
0
python-3.x,pycharm,couchdb-python
24,103,890
2
false
0
0
I tried installing "Pygments" and "CouchDB" with PyCharm 3.4 and Python 3.4.1. I received the same results that you did; however, I looked further and noticed a second CouchDB package for Python called "pycouchdb". I installed it with no problem. I hope this helps.
2
0
0
I am using the IDE PyCharm 3.4 and Python 3.3 (I tried Python 3.4 with the same problem). I want to install the CouchDB package through the IDE. Step by step: 1) Install the "Pygments" package (because it's a required package). 2) Trying to install the "CouchDB" package prints 'Pygments not installed, syntax highlighting disabled'. When I use Python 2.7, I don't have this problem.
Python 3 can't use with couch db
0
0
0
415
24,108,241
2014-06-08T16:27:00.000
0
0
0
1
python,google-app-engine
24,108,384
1
false
1
0
App Engine does not support Python 3.x. Do you still have 2.x installed? Go to your Google App Engine Launcher > Preferences, and make sure you have the proper Python Path to your v2.x. It should be something like "/usr/bin/python2.7". From Terminal, type whereis python to help find it. If you know you were using version 2.7, try: whereis python2.7
1
0
0
I installed Python 3.4 on my Mac (OS 10.9.3) and my command for running Google App Engine from the terminal via /usr/local/dev_appengine stopped working. I then (stupidly) did some rather arbitrary things from online help forums and now my Google App Engine itself stopped working as well. When I open it, it says: Sorry, pieces of GoogleAppEngineLauncher.app appear missing or corrupted, or I can't run python2.5 properly. Output was: I have tried to delete the application and all related files and reinstall, but nothing has worked for me. It now fails to make the command symlinks as well so when I try to run from terminal I get /usr/local/bin/dev_appserver.py: No such file or directory.
Google App Engine SDK Fatal Error
0
0
0
194
24,110,915
2014-06-08T21:16:00.000
2
0
0
0
python,networking
24,110,983
1
false
0
0
There are a few possible ways: 1) UPnP / NAT-PMP / PCP: protocols implemented by some (most?) routers, mostly on local networks, that allow applications behind a firewall to interact. In this case you send packets (from both clients) to their respective routers using one of the protocols mentioned above, asking for a port to be opened, then communicate regularly using sockets. 2) In some cases NAT traversal is possible: read about STUN servers and the ICE protocol. This is most common for UDP communication, though sometimes TCP traffic can also be traversed this way; the most common technique is UDP hole punching. 3) If none of these apply (say, a symmetric NAT on a huge-scale network), the only way is a TURN approach, where you relay all data through your publicly accessible server. P2P and NAT traversal are common in SIP, VoIP and torrents, hence free libraries like Vuze (an open-source torrent lib) can be a good place to start digging... :)
1
0
0
How do you have two programs communicate when there is a firewall in the way? I would like something sort of like sockets, but plain sockets don't work through firewalls. It is okay if you have to use a 3rd-party resource. I am doing this in Python.
Have programs communicate when there is a firewall?
0.379949
0
1
73
24,112,027
2014-06-08T23:45:00.000
26
0
1
0
python,broken-pipe
24,112,053
1
true
0
0
A pipe connects two processes. One of these processes holds the read end of the pipe, and the other holds the write end. When the pipe is written to, data is stored in a buffer, waiting for the other process to retrieve it. What happens if a process is writing to a pipe, but the process on the other side suddenly exits or closes the pipe? Or, the other way round, a process is reading just as the writer finishes or closes? This input/output error is called a broken pipe.
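A minimal demonstration using os.pipe: closing the read end and then writing raises BrokenPipeError, because CPython ignores SIGPIPE and surfaces the kernel's EPIPE as an exception (POSIX behaviour assumed):

```python
import os

# Create a pipe, close the read end, then try to write: the kernel reports EPIPE,
# which Python raises as BrokenPipeError.
read_fd, write_fd = os.pipe()
os.close(read_fd)
try:
    os.write(write_fd, b"hello")
except BrokenPipeError as exc:
    print("caught:", exc)
finally:
    os.close(write_fd)
```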
1
13
0
Running code in python, I discovered a "Broken Pipe Error." Can someone please explain to me what this is simply? Thanks.
What is a broken pipe error?
1.2
0
0
18,837
24,112,422
2014-06-09T01:02:00.000
0
0
0
0
python,mysql,ruby-on-rails
24,112,483
1
true
0
0
Use ssh to log in to your home computer; set up authorized keys for it and disable password login. Set up iptables on your Linux machine if you don't have a firewall on your router, and disable traffic on all ports except 22 and 80 (SSH and HTTP). That should get you started.
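A rough iptables sketch of the "only 22 and 80" idea (rule order, interface names and any pre-existing rules are assumptions; review and adapt before running this as root):

```
# Default-deny inbound traffic, then reset the INPUT chain
iptables -P INPUT DROP
iptables -F INPUT
# Allow loopback and already-established connections
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow SSH (22) and HTTP (80)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
```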
1
0
0
I have read a few posts on how to enable remote login to MySQL. My question is: is this a safe way to access data remotely? I have a MySQL DB located at home (on Ubuntu 14.04) that I use for research purposes. I would like to run Python scripts from my MacBook at work. I was able to log in remotely from my old Windows OS using a Workbench connection (DNS IP). However, the OS change has got me thinking: what is the best/most secure way to accomplish this task?
Best way to access data from mysql db on other non-local machines
1.2
1
0
49
24,113,496
2014-06-09T04:01:00.000
9
0
1
0
python,utf-8,decode
24,113,534
1
true
0
0
The byte boundaries are easily determined from the bit patterns. In your case, \xc6 starts with the bits 1100, and \xe2 starts with 1110. In UTF-8 (and I'm pretty sure this is not an accident), you can determine the number of bytes in the whole character by looking only at the first byte and counting the number of 1 bits at the start before the first 0. So your first character has 2 bytes and the second one has 3 bytes. If a byte starts with 0, it is a regular ASCII character. If a byte starts with 10, it is a continuation byte of a UTF-8 sequence (not the first byte).
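A small sketch of that counting rule (utf8_seq_length is a made-up helper, not a stdlib function), applied to the exact bytes from the question:

```python
def utf8_seq_length(first_byte: int) -> int:
    """Number of bytes in the UTF-8 sequence that starts with this byte."""
    if first_byte < 0x80:            # 0xxxxxxx: plain ASCII, one byte
        return 1
    length = 0
    while first_byte & 0x80:         # count leading 1 bits before the first 0
        length += 1
        first_byte = (first_byte << 1) & 0xFF
    return length

data = b"\xc6\xb4\xe2\x98\x82"
i = 0
while i < len(data):
    n = utf8_seq_length(data[i])
    print(data[i:i + n].decode("utf-8"), "uses", n, "bytes")
    i += n
```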
1
8
0
I've been doing a bunch of reading on unicode encodings, especially with regards to Python. I think I have a pretty strong understanding of it now, but there's still one small detail I'm a little unsure about. How does the decoding know the byte boundaries? For example, say I have a unicode string with two unicode characters with byte representations of \xc6\xb4 and \xe2\x98\x82, respectively. I then write this unicode string to a file, so the file now contains the bytes \xc6\xb4\xe2\x98\x82. Now I decide to open and read the file (and Python defaults to decoding the file as utf-8), which leads me to my main question. How does the decoding know to interpret the bytes \xc6\xb4 and not \xc6\xb4\xe2?
How does decoding in UTF-8 know the byte boundaries?
1.2
0
0
919
24,113,602
2014-06-09T04:15:00.000
0
1
0
1
python,amazon-web-services,amazon-ec2,flask,scalability
24,115,986
1
false
1
0
Autoscaling is tailor-made for situations like these. You could run an initial diagnostic to see what the CPU usage usually is when a single server is running its maximum allowable tasks (let's say it's above X%). You can then set up an autoscaling rule to spin up more instances once this threshold is crossed. Your rule could ensure a new instance is created every time one instance crosses X%. Further, you can also add a rule to scale down (setting the minimum instances to 1) based on a similar usage threshold.
1
0
0
There is a long running task (20m to 50m) which is invoked from a HTTP call to a Webserver. Now, since this task is compute intensive, the webserver cannot take up more than 4-5 tasks in parallel (on m3.medium). How can this be scaled? Can the auto-scaling feature of EC2 be used in this scenario? Are there any other frameworks available which can help in scaling up and down, preferably on AWS EC2?
Long running task scalablity EC2
0
0
0
63
24,116,662
2014-06-09T08:47:00.000
4
1
1
0
python,x86,arm,raspberry-pi,raspbian
24,117,029
1
true
0
0
If you write code in Python, it will work perfectly fine directly on both your desktop and the Raspberry Pi. C you'll have to recompile, but that's about it. There might also be some issues if you start writing data structures to files directly and then using the same files across the different platforms; you'll typically want a portable data format where the data is stored as strings (JSON, XML, or similar).
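A tiny sketch of the portable-format point: round-tripping a record through JSON gives the same data back regardless of the architecture that wrote it (the record and file name here are made up):

```python
import json

# A raw struct dump ties the file to one platform's endianness and type sizes;
# a text format like JSON reads back identically on x86 and ARM.
record = {"sensor": "temp0", "reading": 21.5, "samples": [1, 2, 3]}

with open("record.json", "w") as f:
    json.dump(record, f)

with open("record.json") as f:
    loaded = json.load(f)

assert loaded == record   # same data on either platform
```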
1
3
0
I recently purchased a Raspberry Pi. I wish to start writing code for it using either C or Python. I know the differences between ARM vs x86 architecture, viz. RISC vs CISC, but what I don't know is that are there any special considerations on the actual code that I would need to write. If I write my code on my desktop and compile it there, and then take the same code and compile on my Raspberry Pi, will it compile the same or would it break?
Differences when writing code for ARM vs x86?
1.2
0
0
3,679
24,117,449
2014-06-09T09:36:00.000
7
0
1
0
python,conda
24,172,920
3
false
0
0
There's no simple command to do this, but one thing that may help is to make a metapackage using the conda metapackage command that depends on the packages you want, so that you can install just it. Something like conda metapackage mypackage 1.0 --dependencies package1 package2 package3 .... Otherwise, you probably just need to make clever use of xargs.
1
9
0
I'm using conda to run tests against different versions of python, numpy, etc., but there are some common dependencies for all combinations of python/numpy. Is there a way to install such packages into all conda environments, or do I have to specify each by hand?
Install a package into multiple/all conda environments?
1
0
0
5,578
24,118,765
2014-06-09T10:53:00.000
1
0
0
0
python,sqlite,tkinter
24,123,191
1
true
0
1
I think you would use .grid() in a Frame or multiple Listboxes with as many scrollbars as you like.
1
0
0
I have a database table (created using the SQLite3 library) and would like to be able to open a new window (in a program that I am writing) which will simply display the contents of the table (*.db) with a scroll bar to the user (perhaps on a grid like you would find in a spreadsheet). Are there any easy ways of achieving this?
Is there a tkinter widget to easily display a table (SQLite) in a window (canvas)?
1.2
0
0
1,409
24,121,385
2014-06-09T13:31:00.000
1
0
1
0
python,numpy
24,121,981
1
false
0
0
Anaconda installs python just like it already is on your system, only to a different location. It allows you to choose what packages you want to install. If you want to replace one you can go into the site-packages folder (Anaconda/lib/site-packages) and do so. In my experience, Anaconda was well worth the switch.
1
0
0
I'm pondering switching over to Anaconda from my vanilla Python in OSX. I know Anaconda brings its own NumPy. I was wondering if it was possible to make the GitHub version of NumPy the default version, or if Anaconda only works with its own version.
Installing Numpy developmental over anaconda
0.197375
0
0
52
24,121,692
2014-06-09T13:49:00.000
3
0
1
0
python,excel,xlwings
24,122,113
1
true
0
0
In "C:\Users\xxxxx\AppData\Local\Continuum\Anaconda3\lib\site-packages\xlwings\__init__.py", try changing "from xlwings import Workbook, Range, Chart, __version__" to "from xlwings.xlwings import Workbook, Range, Chart, __version__".
1
4
0
I'm trying to use the new Excel integration module xlwings. It works like a charm under Anaconda 2.0 for Python 2.7, but I'm getting this error under Anaconda 2.0 for Python 3.4. The xlwings file does contain class Workbook, so I don't understand why it can't import it; when I simply use the xlwings file directly in my project for 3.4 it works just fine. File "C:\Users\xxxxx\AppData\Local\Continuum\Anaconda3\lib\site-packages\xlwings\__init__.py", line 1, in from xlwings import Workbook, Range, Chart, __version__ ImportError: cannot import name 'Workbook'
importing xlwings module into python 3.4
1.2
1
0
2,145
24,121,883
2014-06-09T13:59:00.000
-3
0
0
0
python,matplotlib,histogram
24,121,929
2
false
0
0
Do you want to set the number of elements in each bin? I think that's a bar plot rather than a histogram. So simply use ax.bar instead. EDIT Good point by tcaswell, for multidimensional images the equivalent is imshow rather than bar.
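This is not the bar/imshow point above verbatim, but one common NumPy way to compute "bin content defined by another array" is to histogram twice over the same bins, once weighted by z and once unweighted, and divide; the data here is synthetic, and the resulting grid can then be displayed with imshow:

```python
import numpy as np

# Synthetic data standing in for the question's x, y, z arrays.
rng = np.random.default_rng(0)
x, y = rng.random(1000), rng.random(1000)
z = x + y                                # the value to average inside each (x, y) bin

# Histogram twice over identical bins: once weighted by z, once for the counts.
sums, xedges, yedges = np.histogram2d(x, y, bins=10, weights=z)
counts, _, _ = np.histogram2d(x, y, bins=10)

# Per-bin mean of z, leaving empty bins at zero.
means = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
# `means` can now be shown with e.g. plt.imshow(means.T, origin="lower")
```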
1
1
1
I am trying to plot values of z binned in (x,y). This would look like a hist2d in matplotlib but with the bin content being defined by another array instead of representing the number of counts. Is there any way to set the bin content in hist2d?
How to set bin content in a hist2d (matplotlib)
-0.291313
0
0
1,859
24,122,850
2014-06-09T14:50:00.000
1
0
0
0
python,numpy,pandas
27,997,033
6
false
0
0
pip uninstall numpy uninstalls the old version of numpy; pip install numpy then finds and installs the latest version.
1
16
1
I took a new clean install of OSX 10.9.3 and installed pip, and then did pip install pandas pip install numpy Both installs seemed to be perfectly happy, and ran without any errors (though there were a zillion warnings). When I tried to run a python script with import pandas, I got the following error: numpy.dtype has the wrong size, try recompiling Traceback (most recent call last): File "./moen.py", line 7, in import pandas File "/Library/Python/2.7/site-packages/pandas/__init__.py", line 6, in from . import hashtable, tslib, lib File "numpy.pxd", line 157, in init pandas.hashtable (pandas/hashtable.c:22331) ValueError: numpy.dtype has the wrong size, try recompiling How do I fix this error and get pandas to load properly?
pandas ValueError: numpy.dtype has the wrong size, try recompiling
0.033321
0
0
27,508
24,123,128
2014-06-09T15:05:00.000
0
1
0
1
python,git
24,123,328
2
false
0
0
At the top of your Python file, add #!/usr/bin/python. Then you can rename it (mv myScript.py myScript) and run chmod 755 myScript. This makes it possible to run the file with ./myScript. Look into adding the file's directory to your PATH, or linking the file into the PATH, if you want to be able to run it from anywhere.
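A minimal sketch of those steps on a Unix-like shell (the script name myScript and its one-line contents are placeholders; in Git Bash on Windows this assumes python3 is on the PATH):

```shell
# "myScript" and its contents stand in for your real Python script.
printf '#!/usr/bin/env python3\nprint("hello from git bash")\n' > myScript
chmod 755 myScript      # mark it executable
./myScript              # the shebang picks the interpreter, no "python myScript" needed
```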
1
3
0
I'm writing git commands through a Python script (on Windows) When I double click on myScript.py, commands are launched in the Windows Command Prompt. I would like to execute them in Git Bash. Any idea how to do that without opening Git Bash and write python myScript.py?
Launch Python script in Git Bash
0
0
0
28,190
24,123,389
2014-06-09T15:19:00.000
5
0
0
1
python,web-services,dns,flask,tornado
24,125,918
1
true
1
0
I managed to solve it myself, but I'll add this as an answer since evidently someone thought it was a worthwhile question. It turns out that I simply did not understand how DNS works and what the difference between DNS and domain forwarding is. At most domain hosts you can configure "domain forwarding", which sounds like precisely what you need but is NOT. Rather, for the simple use case above, I went into the DNS Zone Records in the options and created a DNS zone record of type A that pointed xyz.com to a.b.c.d. The change does not seem to have propagated entirely yet, but on some devices I can already see it working exactly how I want it to, so I will consider this issue resolved.
1
6
0
I have a small home-server running Flask set up at IP a.b.c.d. I also have a domain name xyz.com. Now I would like it so that when going to xyz.com, the user is served the content from a.b.c.d, with xyz.com still showing in the address bar. Similarly, when going to xyz.com/foo the content from a.b.c.d/foo should be shown, with xyz.com/foo showing in the address bar. I have path forwarding activated at my domain name provider, so xyz.com/foo is correctly forwarded to a.b.c.d/foo, but when going there a.b.c.d/foo is shown in the address bar. I'm currently running tornado, but I can switch to another server if it is necessary. Is it possible to set up this kind of solution? Or is my only option to buy some kind of hosting?
Custom domain routing to Flask server with custom domain always showing in address bar
1.2
0
0
2,809
24,126,783
2014-06-09T18:44:00.000
0
0
1
0
python,html,regex,bash,sed
24,127,504
1
false
1
0
I will write an outline for the code in pseudocode, since I don't remember Python well enough to quickly write it out. First find what type the link is (if it contains N=0 then type 3, if it contains "+" then type 2, else type 1) and get a list of strings containing "N=..." by splitting on the "+" sign. The first loop is over links. The second loop is over each N= number. The third loop looks in the map file and finds the replacement value. Load the map file's data into a variable before all the loops: file reading is one of the slowest operations in programming. You replace the value in the third loop, then join the list of new strings into a new link when returning to the first loop. If you have several files with links, you need another loop over the files. When dealing with repeated codes, you need a while loop until a spare number is found, and you need to save the numbers that are already used in a list.
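A hedged Python sketch of that outline (value_map and replace_n_values are made-up names; the real mapping would be parsed from MAP-FILE.txt). Rewriting the whole N= list in a single re.sub pass sidesteps the cascading-replacement problem the question describes, since each token is rewritten at most once and no CHANGED flags are needed:

```python
import re

# Hypothetical old -> new mapping; the real one would be parsed from MAP-FILE.txt.
value_map = {"214867": "214868", "214868": "218633"}

def replace_n_values(url):
    """Rewrite every mapped value inside an N=...+... list, in one pass."""
    def repl(match):
        tokens = match.group(1).split("+")
        return "N=" + "+".join(value_map.get(t, t) for t in tokens)
    # Matches "N=" followed by a +-delimited list of numbers.
    return re.sub(r"N=(\d+(?:\+\d+)*)", repl, url)

url = "/jsp/search/results.jsp?Ntx=mode+matchallpartial&Ntk=gensearch&N=0+450+214867+214868"
print(replace_n_values(url))
# -> /jsp/search/results.jsp?Ntx=mode+matchallpartial&Ntk=gensearch&N=0+450+214868+218633
```

Note that 214867 becomes 214868 without then cascading to 218633, because the substitution callback sees each original token exactly once.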
1
1
0
I need to create a BASH script, ideally using SED, to find and replace value lists in HREF URL link constructs within HTML site files, looking up values in a map (old to new values), for links that have a given URL construct. There are around 25K site files to look through, and the map has around 6,000 entries that I have to search through. All old and new values have 6 digits. The URL construct is: One value: HREF=".*jsp\?.*N=[0-9]{1,}.*" List of values: HREF=".*\.jsp\?.*N=[0-9]{1,}+N=[0-9]{1,}+N=[0-9]{1,}...*" The list of values is delimited by the + PLUS symbol, and the list can be 1 to n values in length. I want to ignore a construct such as this: HREF=".*\.jsp\?.*N=0.*" i.e. where the list is only N=0. Effectively I'm only interested in URLs that include one or more values that are in the file map and that are not prepended with CHANGED, i.e. the list requires updating. PLEASE NOTE: in the above construct examples, .* means any character that isn't a digit; I'm just interested in any 6 digit values in the list of values after N=; so I'm trying to isolate the N= list from the rest of the URL construct, and it should be noted that this N= list can appear anywhere within the URL construct. Initially, I want to create a script that will produce a report of all links that fulfil the above criteria and that have a 6 digit OLD value that's in the map file, with its file path, to get an understanding of the links impacted. EG: Filename link filea.jsp /jsp/search/results.jsp?N=204200+731&Ntx=mode+matchallpartial&Ntk=gensearch&Ntt= filea.jsp /jsp/search/BROWSE.jsp?Ntx=mode+matchallpartial&N=213890+217867+731& fileb.jsp /jsp/search/results.jsp?N=0+450+207827+213767&Ntx=mode+matchallpartial&Ntk=gensearch&Ntt= Lastly, I'd like to find and replace all 6 digit numbers within the URL construct lists, as outlined above, as efficiently as possible (I'd like it to be reasonably fast, as there could be around 25K files, with 6K values to look up, and potentially multiple values in each list).
**PLEASE NOTE:** An additional issue I have when finding and replacing is that an old value could have been assigned a new value that's already been used, and that may also have to be replaced. E.G. If the map file is as below: MAP-FILE.txt OLD NEW 214865 218494 214866 217854 214867 214868 214868 218633 ... ... and there is an HREF link such as: /jsp/search/results.jsp?Ntx=mode+matchallpartial&Ntk=gensearch&N=0+450+214867+214868 214867 changes to 214868 - this would need to be prepended to flag that this value has been changed and should not be replaced again, otherwise what was 214867 would become 218633, as all 214868 values would be changed to 218633. Hope this makes sense - I would then need to run through the file and remove the prepended flags from all 6 digit numbers that had been marked, such that the link would become: /jsp/search/results.jsp?Ntx=mode+matchallpartial&Ntk=gensearch&N=0+450+214868CHANGED+218633CHANGED Unless there's a better way to manage these in-file changes. Could someone please help me with this? I'm not an expert with these kinds of changes, so help would be massively appreciated. Many thanks in advance, Alex
How to find and replace 6 digit numbers within HREF links from map of values across site files, ideally using SED/Python
0
0
1
95
24,127,601
2014-06-09T19:35:00.000
-1
0
0
0
python,uwsgi
67,135,000
3
false
0
0
It worked for me by commenting out master = true and putting lazy-apps = true in the uwsgi.ini file.
2
52
0
Trying to set the timeout for requests in uWSGI, I'm not sure of the correct setting. There seem to be multiple timeout options (socket, interface, etc.) and it's not readily evident which setting to configure or where to set it. The behavior I'm looking for is to extend the time a request to the resource layer of a REST application can take.
uWSGI request timeout in Python
-0.066568
0
1
51,844
24,127,601
2014-06-09T19:35:00.000
22
0
0
0
python,uwsgi
28,458,462
3
false
0
0
Setting http-timeout worked for me. I have http = :8080, so I assume that if you use a file-system socket, you have to use socket-timeout.
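For reference, a sketch of how those options might look in uwsgi.ini (the values in seconds are examples, not recommendations):

```
[uwsgi]
http = :8080
http-timeout = 300      ; when uWSGI serves HTTP itself

; socket = /tmp/app.sock
; socket-timeout = 300  ; when a front-end proxy talks to a uWSGI socket
```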
2
52
0
Trying to set the timeout for requests in uWSGI, I'm not sure of the correct setting. There seem to be multiple timeout options (socket, interface, etc.) and it's not readily evident which setting to configure or where to set it. The behavior I'm looking for is to extend the time a request to the resource layer of a REST application can take.
uWSGI request timeout in Python
1
0
1
51,844
24,128,433
2014-06-09T20:26:00.000
6
0
0
0
python,django,email,nginx,django-allauth
24,129,038
2
true
1
0
Django gets the hostname and port from the HTTP headers. Add proxy_set_header Host $http_host; to your nginx configuration, before the proxy_pass option.
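A minimal nginx sketch of that fix (the upstream address 127.0.0.1:8001 matches the question's setup; the surrounding location block is an assumption):

```
location / {
    proxy_set_header Host $http_host;   # before proxy_pass, so Django sees the public host
    proxy_pass http://127.0.0.1:8001;
}
```

With the Host header forwarded, django-allauth builds confirmation links against www.example.com instead of the internal hostname and port.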
1
8
0
I'm running django on port 8001, while nginx is handling webserver duties on port 80. nginx proxies views and some REST api calls to Django. I'm using django-allauth for user registration/authentication. When a new user registers, django-allauth sends the user an email with a link to click. Because django is running on port 8001, the link looks like http://machine-hostname:8001/accounts/confirm-email/xxxxxxxxxxxxxx How can I make the url look like http://www.example.com/accounts/confirm-email/xxxxxxxx ? Thanks!
django-allauth: how to modify email confirmation url?
1.2
0
0
2,367
24,135,896
2014-06-10T08:06:00.000
0
1
0
0
python,selenium-webdriver,selenium-firefoxdriver
25,492,157
1
false
0
0
Selenium is not the right tool to use for performance testing. JMeter is a great tool for this; it lets you see the response time of each request.
1
3
0
Is it possible to calculate the performance testing through selenium with python? If it is possible, how should I do it?
Is it possible to calculate the performance testing through selenium with python?
0
0
1
304
24,135,908
2014-06-10T08:07:00.000
1
0
0
1
google-app-engine,python-2.7,cron
49,346,121
1
true
1
0
The tip from @Greg above solved that problem for me. Note the full sequence of events: A past version of the application included a cron.yaml file that ran every hour In a successive version, I removed the cron.yaml file, thinking that was enough, but then I discovered that the cron jobs were still running! Uploading an EMPTY cron.yaml file didn't change things either Uploading a cron.yaml file with only "cron:" in it did it -- the cron jobs just stopped. From the above I reckon that the way things work is this: when a cron.yaml file is found it is parsed, and if the syntax is correct, its cron jobs are loaded in the app server. And apparently, just removing the cron.yaml file from a successive version, or loading an empty one (i.e. un-parseable, bad syntax) doesn't remove the cron jobs. The only way to remove the cron jobs is to load a new, PARSEABLE cron.yaml file, i.e. with the "cron:" line, but with no actual jobs after that. And the proof of the pudding is that after you do that, you can now remove cron.yaml from successive versions, and those old cron jobs will not come back any more.
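For reference, the "parseable but empty" file described above boils down to this (the comment line is added for this sketch):

```
# cron.yaml: valid syntax but no jobs, which clears previously loaded cron jobs
cron:
```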
1
1
0
I start receiving errors from the CRON service even if I have not a single cron.yaml file defined. The cron task runs every 4 hours. I really don't know where to look at in order to correct such behaviour. Please tell me what kind of information is needed to correct the error. Cron jobs First Cron error Cron Job : /admin/push/feedbackservice/process - Query APNS Feedback service and remove inactive devices Schedule/Last Run/Last Status (All times are UTC) : every 4 hours (UTC) 2014/06/10 07:00:23 on time Failed Second Cron error Cron job: /admin/push/notifications/cleanup - Remove no longer needed records of processed notifications Schedule/Last Run/Last Status (All times are UTC) : every day 04:45 (America/New_York) - 2014/06/09 04:45:01 on time Failed Console log 2014-06-10 09:00:24.064 /admin/push/feedbackservice/process 404 626ms 0kb AppEngine-Google; (+http://code.google.com/appengine) module=default version=1 0.1.0.1 - - [10/Jun/2014:00:00:24 -0700] "GET /admin/push/feedbackservice/process HTTP/1.1" 404 113 - "AppEngine-Google; (+http://code.google.com/appengine)" "xxx-dev.appspot.com" ms=627 cpu_ms=353 cpm_usd=0.000013 queue_name=__cron task_name=471b6c0016980883f8225c35b96 loading_request=1 app_engine_release=1.9.5 instance=00c61b17c3c8be02ef95578ba43 I 2014-06-10 09:00:24.063 This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application.
Google Appengine runs Cron tasks even if they are no cron.yaml file defined
1.2
0
0
645
24,153,503
2014-06-11T01:54:00.000
1
1
0
0
python,c,matlab,mex
24,154,260
2
true
0
1
If you can build your program as a shared library, then you can use the ctypes foreign-function interface to call your functions. This is often less work (and less complex) than wrapping the functions with Cython or writing your own C-API extension, but it is also more limited in what you can do. Therefore, I recommend starting with ctypes, and moving up to Cython if you find that ctypes doesn't suit your needs. However, for simple libraries, ctypes will do just fine (I use it a lot).
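A minimal ctypes sketch of the idea (assuming a Unix-like system; the C math library stands in here for your own shared library built from the MEX/C++ sources with `extern "C"` exports, and `cos` stands in for whatever function that library exposes):

```python
import ctypes
import ctypes.util

# Locate and load the shared library. For your own code this would be
# something like ctypes.CDLL("./libmyport.so").
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path)

# Declare the C signature: double cos(double). Without this, ctypes
# would assume an int return type and give garbage results.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

Note that ctypes can only call C-linkage functions; C++ symbols must be exported through an `extern "C"` wrapper, which is the main limitation compared to Cython.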
1
0
0
I am currently working on a project to port a MATLAB program to Python for integration into ImageJ as a plugin. The program contains MEX files whose source code was written in C++. Is there a way to call the C++ functions without having to rewrite them in Python? Thanks!!!
Call C/C++ code from Python
1.2
0
0
1,339
24,155,105
2014-06-11T05:14:00.000
0
0
1
1
python,path,pip
24,155,174
1
true
0
0
Try using easy_install: easy_install -U pip. This will remove pip. If you don't have easy_install installed, do that first. After uninstalling pip, simply reinstall it using the get-pip.py file.
1
2
0
I was able to install pip by using distribute_setup.py and get-pip.py. However, I had to add pip to path by adding C:\Python27\Scripts. This worked for a little while but I had to do a system restore for other reasons and now when I type pip in the command prompt it returns: 'pip' is not recognized as an internal or external command. I tried adding C:\Python27\Scripts to path again both to the user variable and system variable but to no avail. I also tried re-installing pip but it just said I had the latest version installed already. Pip can be imported without any error within python so I am at a loss here. Can anyone help? Thanks in advance.
Trouble with adding pip's system environment variable
1.2
0
0
1,824
24,156,992
2014-06-11T07:25:00.000
0
0
0
0
sql,node.js,python-2.7
24,157,502
3
false
1
0
Yes, it's possible. Two applications written in different languages using one database is almost exactly the same as one application using several connections to it, so you are probably already doing it. All the possible problems are exactly the same. The database won't even know whether the connections are made from one application or the other.
2
1
0
The question is pretty much in the header, but here are the specifics. For my senior design project we are going to be writing software to control some hardware and display diagnostics info on a web front. To accomplish this, I'm planning to use a combination of Python and nodejs. Theoretically, a python service script will communicate with the hardware via bacnet IP and log diagnostic info in an SQL database. The nodejs JavaScript server code will respond to webfront requests by querying the database and returning the relevant info. I'm pretty new to SQL, so my primary question is.. Is this possible? My second and more abstract questions would be... is this wise? And, is there any obvious advantage to using the same language on both ends, whether that language is Python, Java, or something else?
Can two programs, written in different languages, connect to the same SQL database?
0
1
0
1,076
24,156,992
2014-06-11T07:25:00.000
1
0
0
0
sql,node.js,python-2.7
24,158,115
3
true
1
0
tl;dr You can use any programming language that provides a client for the database server of your choice. To the database server, as long as the client is communicating as per the server's requirements (that is, it is using the server's library, protocol, etc.), there is no difference in which programming language or system is being used. The database drivers provide a common abstraction layer, guaranteeing that the database server and the client are speaking the same language. The programming language's interface to the database driver takes care of the language specifics, for example providing syntax that conforms to the language; on the opposite side, the driver ensures that all commands are sent in the protocol that the server expects. Since drivers are such a core requirement, there are usually multiple drivers available for each database; and because good database access is a core requirement for programmers, each language strives to have a "standard" API for all databases. For example, Java has JDBC, Python has the DB-API, and .NET has ODBC (and ADO I believe, but I am not a .NET expert). These are what the database drivers conform to, so it doesn't matter which database server you are using: you have one standard way to connect, one standard way to execute queries and one standard way to fetch results, in effect making your life as a programmer easier. In most cases, there is a reference driver (and API/library) provided by the database vendor. It is usually in C, and it is also what the "native" client to the database uses. For example, the mysql client for the MySQL database server uses the MySQL C driver to connect, and it is the same driver that is used by the Python MySQLdb driver, which conforms to the Python DB-API.
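The "one standard way" that the DB-API gives you can be seen with the standard library's sqlite3 module; it stands in here for any DB-API-conformant driver (with MySQLdb or psycopg2, essentially only the connect() call and the parameter-marker style would change):

```python
import sqlite3

# connect() is the standard DB-API entry point; an in-memory database
# keeps this sketch self-contained.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
# Parameter markers vary by driver: sqlite3 uses ?, MySQLdb uses %s.
cur.execute("INSERT INTO readings VALUES (?, ?)", ("temp", 21.5))
conn.commit()

cur.execute("SELECT value FROM readings WHERE sensor = ?", ("temp",))
row = cur.fetchone()
print(row[0])  # 21.5
conn.close()
```

A Python service and a Node.js service talking to the same server would each go through their own driver like this, and the server treats both as ordinary clients.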
2
1
0
The question is pretty much in the header, but here are the specifics. For my senior design project we are going to be writing software to control some hardware and display diagnostics info on a web front. To accomplish this, I'm planning to use a combination of Python and nodejs. Theoretically, a python service script will communicate with the hardware via bacnet IP and log diagnostic info in an SQL database. The nodejs JavaScript server code will respond to webfront requests by querying the database and returning the relevant info. I'm pretty new to SQL, so my primary question is.. Is this possible? My second and more abstract questions would be... is this wise? And, is there any obvious advantage to using the same language on both ends, whether that language is Python, Java, or something else?
Can two programs, written in different languages, connect to the same SQL database?
1.2
1
0
1,076
24,158,704
2014-06-11T09:00:00.000
2
1
0
0
python,django,heroku,imagemagick,pillow
24,158,927
1
false
1
0
I have had a similar experience (alas, in Java) which might help you make a decision. Calling the ImageMagick library binding from Java (using JNI) seemed like a good idea, but turned out to leak memory by the ton. We ended up moving to an external command-line invocation of ImageMagick, which worked a lot better, for the reason you mentioned: it guarantees the release of memory when the convert process terminates.
1
3
0
Our Heroku-hosted Django app does some simple image processing on images our users upload to Amazon S3—mostly resizing to the sizes we will display on the site. For this we use Pillow (the fork of the Python Imaging Library), running in a Celery task. We have seen the time for this operation change from a fraction of a second to half a minute or more. My best guess for why is that we are now often getting memory-quota (R14) conditions (just because the application is bigger), which I would naïvely expect to make resizing particularly slow. So I am considering refactoring the tasks to use an external ImageMagick process to do the processing rather than in-memory PIL. The thinking is that this will at least guarantee that memory used during resizing is released when the convert process terminates. So my question is, is this going to help? Is ImageMagick’s convert going to have a smaller memory footprint than Pillow?
Which has the better memory footprint, ImageMagick or Pillow (PIL)?
0.379949
0
0
1,421
24,159,713
2014-06-11T09:47:00.000
-1
0
1
0
python,python-2.7,dictionary,set,python-internals
24,159,900
2
false
0
0
These two use the same data structure in the backend. E.g. in sets you cannot store duplicate values, while in a dict you can store multiple identical values (as values under distinct keys), and you can turn a dict into a set-like structure by ignoring its values.
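Using a dict to achieve set functionality can be sketched like this (a sketch of the equivalence, not of how CPython implements set internally):

```python
# Emulate a set with a dict: keys are the members, values are ignored.
pseudo_set = {}
for item in ["a", "b", "a", "c", "b"]:
    pseudo_set[item] = True  # duplicate keys overwrite, so members stay unique

print(sorted(pseudo_set))                      # ['a', 'b', 'c']
print(sorted(set(["a", "b", "a", "c", "b"])))  # ['a', 'b', 'c'] -- same members
```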
1
4
0
Can anybody tell me how the internal implementation of set and dict is different in python? Do they use the same data structure in the background? ++ In theory, one can use dict to achieve set functionality.
difference between python set and dict "internally"
-0.099668
0
0
1,691
24,160,264
2014-06-11T10:11:00.000
0
0
0
0
python,django,elasticsearch,full-text-search,django-haystack
24,213,412
1
true
1
0
The problem is that there was not enough memory. Upgrading from 2GB to 4GB in my VM fixed the problem.
1
0
0
I was wondering what's the maximum value for an indexable CharField in Django Haystack with Elasticsearch? I am asking this because I am getting a timeout when I try to index a specific CharField that has a size of at least 736166. This seems pretty big to me, but is there a way for me to avoid that timeout, or am I not supposed to be using fields this big? Thanks in advance.
Django Haystack Elastic Search maximum value for indexable CharField
1.2
0
0
169
24,160,530
2014-06-11T10:24:00.000
1
0
0
0
python,shell,pydoc
26,183,554
1
false
0
0
"--More--" means there is more output to show. Press Enter to show one more line (you can keep doing that), or press q to quit the pager.
1
0
0
I started learning Python recently. I'm using Zed Shaw's "Learning Python The Hard Way" and running my exercises mostly in Power Shell. OS is Windows 7 and I have Python 2.7. Chapter 12 mentions pydoc. So here's the problem I'm running into with pydoc. Normally, I type in python -m pydoc whatever, it gives me the definition, and then returns to C:\Users\User. But occasionally, instead of that, it gives me "--More--" in the last line and then I can't type anything. Messages appear on the screen if I press certain keys (I will post them if required) but I can't actually type. I have to restart power shell to use it again. Anyone know anything about it?
Power Shell issue with pydoc: "--More--" in last line and then can't type
0.197375
0
0
75
24,164,407
2014-06-11T13:37:00.000
0
0
1
0
python,windows
24,164,716
1
true
0
0
I've had exactly the same thing happen with Sublime. In my case I had moved / renamed the file after closing sublime. I reopened sublime some time later which reopened the old version of the file because I had left the file open when I last closed down sublime. Have you tried closing the file and then opening it again from windows i.e. right-click => open with Sublime Text, to make sure you're editing the correct version?
1
0
0
I'm currently using PyCharm and Sublime Text in Window to develop some Python script, but this morning something quite strange happens. I am changing my code and testing it by running my code over test input. It is supposed to change the output text quite a lot. The output text at the end did not change at all. Then I try commenting out my output function, simply pass through the reading script. I thought "Now it should print nothing". But it prints the same output as yesterday, as if I never modified it today. Any suggestion? It turns out it's because my colleague has pushed an unexpected change up to repo last night...I always pulled before I end the day. The text output was forced by his logger:) Mystery solved! Thanks for reading guys.
Unable to modify python script in Windows
1.2
0
0
160
24,164,787
2014-06-11T13:54:00.000
1
1
1
0
python,c++,c,unix,operating-system
24,164,847
1
false
0
0
On a line basis you could use diff. Regarding your edit: xor_hash(new) - xor_hash(old)
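If reading the contents is acceptable (it does not satisfy the "without reading the file" edit), the standard library can already quantify how similar two versions are: difflib.SequenceMatcher.ratio() yields a 0..1 similarity score, shown here on the exact strings from the question:

```python
import difflib

original = "aaaaaaaaaaa"
big_edit = "aaabbbbbbb"    # should register as a larger change
small_edit = "aaaaaaaaba"  # should register as a smaller change

# ratio() is 1.0 for identical strings and approaches 0.0 as they diverge.
r_big = difflib.SequenceMatcher(None, original, big_edit).ratio()
r_small = difflib.SequenceMatcher(None, original, small_edit).ratio()

# The heavier edit yields the lower similarity, i.e. the bigger change.
print(r_big < r_small)  # True
```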
1
0
0
I'm looking for a way to quantify the changes made to a file. That is, if I have a file with something written on, and I edit it and save it, is there a way to know (using Python or C/C++) how much the files has changed? For example, if my file is "aaaaaaaaaaa" and I change it to "aaabbbbbbb", that quantification method should yield a greater result (assuming thats quantifiable) than if I had changed it to "aaaaaaaaaba". Hope I made myself clear, Thanks in advance Edit: It is supposed to be done without actually reading the file.
Quantify file changes
0.197375
0
0
54
24,168,152
2014-06-11T16:37:00.000
2
0
0
0
python,django,django-views,subprocess,popen
24,169,550
1
true
1
0
You don't say what "command" you want to call. call_command is for calling Django management commands only, e.g. syncdb, runserver, or any custom management commands. It calls the command directly, without shelling out to the system. popen and the various functions in subprocess etc. are for shelling out to call any other executable file.
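A sketch of the difference (the call_command lines are shown as comments because they need a configured Django project; the subprocess part is plain Python, and sys.executable is used so we shell out to the same interpreter):

```python
import subprocess
import sys

# Option 1: in-process, Django management commands only (needs Django set up):
#   from django.core.management import call_command
#   call_command("syncdb")   # same effect as `manage.py syncdb`, no new process
#
# Option 2: shell out to any executable via subprocess (what Popen is for).
# A child Python process stands in here for an arbitrary external command.
out = subprocess.check_output([sys.executable, "-c", "print('hello from child')"])
print(out)  # b'hello from child\n'
```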
1
0
0
I want to run a command from a Django view, either through django.core.management or through Popen, but I don't know which one to use. Can someone explain to me the properties and differences of both? Thanks in advance.
Django call_command and popen. When to use what?
1.2
0
0
437
24,168,239
2014-06-11T16:43:00.000
1
0
0
0
mongodb,python-2.7,pymongo
24,200,782
1
true
0
0
After trying a combination of skip() + limit() I ended up with one of the easiest solutions; it's just hard to find anything written about it. To retrieve a chunk in MongoDB, for pagination for example, use: cursor = db.collection.find()[startItemPosition:stopItemPosition]. It gives me a cursor with stopItemPosition - startItemPosition elements.
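The same start/stop arithmetic, sketched on a plain list instead of a live MongoDB cursor (pymongo may not be available here; with pymongo the slice goes on the cursor exactly as in the answer, which the driver implements via skip() and limit() under the hood):

```python
def chunk_bounds(page, chunk_size):
    """Start/stop item positions for the given zero-based page."""
    start = page * chunk_size
    return start, start + chunk_size

# Stand-in for db.collection.find(); slicing a cursor works the same way.
documents = list(range(10))

start, stop = chunk_bounds(1, 3)
print(documents[start:stop])  # [3, 4, 5]
```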
1
1
0
I'm trying to create a module that give me X documents of a collection each time I call it. I want to iterate through all my collection and get the next X elements each time I ask it. I've tried to change the batch_size() of a cursor but I couldn't make it worked. I tried to set the limit() but it wasn't what I wanted. I managed to get one element each time but I lose too much time and I had to keep the cursor alive. Is there a function that do what I want ? Or can you give me hints ?
Read collection by chunk
1.2
0
0
1,666
24,169,037
2014-06-11T17:27:00.000
2
0
0
0
python,json,authentication,flask,werkzeug
24,169,289
1
false
1
0
It goes as follows: your client POSTs JSON data without authentication; the server receives the request (not necessarily in one long chunk, it might come in parts); the server evaluates the request, finds it is not providing credentials, and so decides to stop processing the request and replies 401. With a short POST the server consumes it all and does not have time to break the request off in the middle. With growing POST size the chances of interrupting an unauthorized POST request are higher. Your client has two options: either start sending credentials right away, or try/catch the broken pipe and react to it by forming a proper Digest-based request. The first feeling is that something is broken, but it is actually a reasonable approach: imagine someone could post a huge POST request and consume resources on your server while not being authorized to do so. The reaction of the server seems reasonable in this context.
1
0
0
I've developed a fairly simple web service using Flask (Python 2.7, current Flask and dependencies), where clients POST a hunk of JSON to the server and get a response. This is working 100% of the time when no authentication is enabled; straight up POST to my service works great. Adding HTTP Digest authentication to the endpoint results in the client producing a 'Broken Pipe' error after the 401 - Authentication Required response is sent back... but only if the JSON hunk is more than about 22k. If the JSON hunk being transmitted in the POST is under ~22k, the client gets its 401 response and cheerfully replies with authentication data as expected. I'm not sure exactly what the size cut-off is... the largest I've tested successfully with is 21766 bytes, and the smallest that's failed is 43846 bytes. You'll note that 32k is right in that range, and 32k might be a nice default size for a buffer... and this smells like a buffer size problem. The problem has been observed using a Python client (built with the 'requests' module) and a C++ client (using one of Qt's HTTP client classes). The problem is also observed both when running the Flask app "stand-alone" (that is, via app.run()) and when running behind Apache via mod_wsgi. No SSL is enabled in either case.
http POSTs over a certain size failing when authentication is enabled
0.379949
0
0
119
24,169,303
2014-06-11T17:44:00.000
2
1
0
0
python,logging
24,169,956
1
true
0
0
Syslog handler is a service, not a file. You seem to be confused by trying to specify a logfile for syslog. Quoting Wikipedia: Syslog is a standard for computer message logging. It permits separation of the software that generates messages from the system that stores them and the software that reports and analyzes them. As syslog is a service, it decides what to do with log records on its own. That is why you can only give the address (like localhost on the default port) of the syslog service, but have no way to control more from your (separate) application. On the syslog side, there is configuration controlling where each log entry should end up, but this is out of the control of your log handler. If you omit the address, it would by default talk to localhost on the default syslog port. In such a case it is very likely you will find your log records in the /var/log/syslog file. Forwarding log records from another log to syslog: If you have another log in some file and want to send it to syslog, you must: parse the log file and find the log records, then send the log records to syslog. However, this might create issues with the timestamps of LogRecords, as the created time is usually derived automatically at the moment the log record is created. But this can possibly be resolved. Conclusions: Do not expect your logger in Python to decide the file where syslog writes log records. This must be configured at the syslog level, not in your app. If you have to forward logs from another source, it is best to arrange for them to go directly there. If you cannot do that, you have to parse the log file and resend the log records to the syslog handler. Here you will have to resolve details about timestamps in log records.
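A minimal sketch of pointing Python's logging at the syslog service (this assumes a syslog daemon listening on UDP port 514 of localhost; since UDP is connectionless, nothing fails if no daemon is there, the records are simply dropped):

```python
import logging
import logging.handlers

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)

# The address is the syslog *service*, not a file. Which file the record
# lands in (e.g. /var/log/syslog) is decided by the daemon's configuration.
handler = logging.handlers.SysLogHandler(address=("localhost", 514))
logger.addHandler(handler)

logger.info("forwarded log record")
handler.close()
```

On many Unix systems `address="/dev/log"` reaches the local daemon directly instead of going over UDP.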
1
0
0
I'm trying to find a Python script that uses logger to write file data to syslog. There's an application that outputs reports as log files, but I need these log files to be sent to syslog. I have this code, but I cannot determine how to target the log files and send it to syslog in Python. This code has me on the right track, but I do not know where to specify the file I need to transfer to syslog. Can someone help steer me in the right direction? Or, perhaps provide the documentation I need?
How to use Python script for logger to write file data to syslog?
1.2
0
0
1,589
24,169,491
2014-06-11T17:56:00.000
2
0
1
0
python,ipython-notebook
26,172,404
2
false
0
0
One way would be to make your cell a function. In another cell, just call time.sleep() in an appropriate loop. You could also have it check for the existence of some external resource to decide when to quit, if you don't want to loop an a priori known finite number of times.
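A minimal sketch of that loop (the interval and iteration count are placeholders; in a notebook you would call your analysis function instead of tick() and use a much longer sleep):

```python
import time

def tick(i):
    # Stand-in for the live data-analysis cell body.
    return i * i

results = []
for i in range(3):        # or `while True:` for an open-ended schedule
    results.append(tick(i))
    time.sleep(0.01)      # e.g. 60 for "once a minute"

print(results)  # [0, 1, 4]
```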
1
4
0
I have a script in a notebook browser that does some live data analysis, which I would like to run periodically by itself once every few seconds/minutes, rather than having to press Ctrl+Enter all the time. Is there any way to achieve this? Do I need any additional plug-ins, and which ones?
Python Notebook run automatically periodically
0.197375
0
0
4,298
24,169,539
2014-06-11T17:59:00.000
1
1
0
1
python,database,web-applications,ipc
24,171,852
2
false
0
0
Python has had, since its early stages, a very comfortable PyZMQ binding for ZeroMQ. MATLAB can have the same: a direct ZeroMQ at work for your many-to-many communications. Let me take a slightly broader view, from a few KEY PRINCIPAL POINTS that are not so common in other software-engineering "products" & "stacks" we meet today: [1] ZeroMQ is first of all a very powerful concept rather than a code or a DIY kit. [2] ZeroMQ's biggest plus for any professional-grade project sits in using the genuine Scalable Formal Communication Patterns end-to-end, not in the ability to code pieces or to "trick/mod" the published internals. [3] The ZeroMQ team has done a terrific job, saves users from re-inventing wheels ("inside") and allows them to stay on the most productive side by re-using the knowledge (elaborated, polished & tested by the ZeroMQ gurus, supporters & team members) from behind the ZMQ abstraction horizon. Having said these few principles, my recommendation would be to spend some time on the concepts in the published book from Pieter Hintjens on ZeroMQ (also available as a PDF). This is a worthwhile place to start from, to get the bigger picture. Then it would be a question of literally a few SLOCs to put these, the world's most powerful patterns, to work (and believe me, that sounds bold only at first sight, as there are not many real alternatives to compare ZeroMQ with; well, ZeroMQ co-architect Martin Sustrik's [nanomsg] is one such case, to mention at least one, if you need to go even higher in speed / lower in latency, but the above key principal points hold & remain the same even there). Having used a ZeroMQ-orchestrated Python & MQL4 & AI/ML system in a FOREX high-speed-trading infrastructure environment is just a small example, where microseconds matter and nanoseconds make a difference in the queue ...
Presented in the hope that your interest in the ZeroMQ library will only grow & that you will benefit, as many other users of this brilliant piece of art have, from whichever of the PUB/SUB, PAIR/PAIR, REQ/REP formal patterns best matches the very communication need of your MATLAB / Python / * heterogeneous multi-party / multi-host project.
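pyzmq itself may not be installed everywhere, but the REQ/REP idea it formalizes can be sketched with nothing but the standard library's socket module (a toy single-request server; ZeroMQ adds message framing, reconnection, and the scalability patterns on top of this):

```python
import socket
import threading

def reply_once(server_sock):
    # REP side: wait for one request, send one reply.
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(request.upper())

server = socket.socket()
server.bind(("127.0.0.1", 0))   # ephemeral port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=reply_once, args=(server,)).start()

# REQ side: send a request, block for the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"status?")
    reply = client.recv(1024)
server.close()
print(reply)  # b'STATUS?'
```

With ZeroMQ the same exchange is a few lines of `zmq.REQ`/`zmq.REP` socket calls, and MATLAB, Python, or a LAMP process can each sit on either end.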
1
0
0
At a high level, what I need to do is have a python script that does a few things based on the commands it receives from various applications. At this stage, it's not clear what the application may be. It could be another python program, a MATLAB application, or a LAMP configuration. The commands will be sent rarely, something like a few times every hour. The problem is - What is the best way for my python script to receive these commands, and indicate to these applications that it has received them? Right now, what I'm trying to do is have a simple .txt file. The application(s) will write commands to the file. The python script will read it, do its thing, and remove the command from the file. I didn't like this approach for 2 reasons- 1) What happens if the file is being written/read by python and a new command is sent by an application? 2) This is a complicated approach which does not lead to anything robust and significant.
How do I communicate and share data between python and other applications?
0.099668
0
0
2,813
24,176,266
2014-06-12T04:04:00.000
0
0
0
0
python,flash,flask,client,directory
24,183,349
1
false
1
0
Flask doesn't produce directory views; that's something generic file-serving HTTP servers like Apache and nginx can do (if so configured) but it is not functionality that Flask offers out of the box. You'd have to code this yourself.
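A framework-free sketch of the listing logic you would have to write yourself (the Flask view wiring is omitted; os.listdir and os.path do the actual work, and the demo file names are made up):

```python
import os
import tempfile

def directory_listing(path):
    """Return sorted entry names in `path`, marking subdirectories with a trailing /."""
    entries = []
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        entries.append(name + "/" if os.path.isdir(full) else name)
    return entries

# Demo on a throwaway directory with one file and one subdirectory.
root = tempfile.mkdtemp()
open(os.path.join(root, "something1.swf"), "w").close()
os.mkdir(os.path.join(root, "sub"))

print(directory_listing(root))  # ['something1.swf', 'sub/']
```

In a Flask app, a view for `/static/game/` would call something like this and render the result; without such a view, the bare directory URL has no handler, hence the 404.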
1
0
0
Directory Structure: static game something1.swf something2.swf something3.swf templates something.html main.py I have a small web game application that interacts with Flash .SWF files to show the images and everything back to the website. If I call ./static/game/something1.swf, it loads. It loads any other file that is being called specifically, however my application needs to call the whole directory in general for whatever reason. If I call ./static/game/ or ./static/game, for some reason, Flask returns a 404 error. It's like the directory does not exist at all to Flask. My hypothesis is that Flask sees game as a file, when it isn't.
Flask can't find directory.. returns a 404?
0
0
0
212
24,185,302
2014-06-12T13:11:00.000
0
0
0
0
django,python-2.7,pandas
35,639,956
1
false
0
0
This is over a year old, but this is the only SO thread I found on this issue so thought I'd comment on what we did to fix it. It turns out there are issues with pd.read_csv(FileObject, engine="C") on an embedded wsgi process. We ended up solving this issue by upgrading to pandas 0.17.0. Another working solution was to run mod_wsgi in daemon mode as this issue seems to relate to some conflict in how MPM is running read_csv with the C-engine while in embedded mode. We still aren't quite sure what the exact issue is however...
1
0
1
Pandas read_csv causes a timeout on my production server with python 2.7, django 1.6.5, apache and nginx. This happens only when using a string buffer like StringIO.StringIO or io.BytesIO. When supplying a filename as argument to read_csv everything works fine. Debugging does not help because on my development server this problem does not occur. Any ideas?
timeout pandas read_csv stringio timeout
0
0
0
1,792
24,185,800
2014-06-12T13:33:00.000
0
0
0
0
python,django,rest,extjs
24,186,117
3
false
1
0
In settings.py, set DEBUG=True to make Django show the full error information in the web response. With DEBUG=False you have to print the error to the console manually using print, or export it to a log file with some logging tool.
1
1
0
I'm developing a REST application using ExtJS4 and using Django 1.6.5 to create a simple mock API for this application, that for now I want to only save some data in the SQLite db and output some other to the console. While testing the GET methods is fairly simple, when I have a problem with POST, PUT and DELETE I can't see the returning error from Django in the browser. Is there any way to make these errors show up in the Django's developer server console instead?
Show Django errors on console rather than Web Browser
0
0
0
2,805
24,193,690
2014-06-12T20:51:00.000
-3
0
1
0
python,performance,comparison,equals
24,193,976
3
false
0
0
My experience with some Cortex M3 assembly (or at least my professors say that's what it is) is that when you check equality or inequality, there is one compare instruction that sets three flag bits, and one conditional branch (if you can call it that) that looks at a particular bit. Essentially you compare A and B, and the three bits are Smaller, Equal and Bigger; any check then either tests whether the result is bigger or smaller (2 checks, so 2 cycles) for non-equality, or applies a NOT to the equality flag, which depending on the architecture may be 2 different actions or a single cycle. Thus I speculate that it depends on compilers, assemblers and the CPU's architecture. This means you can make two programs, each making an immense number of such checks, and time their execution, where "immense" would run into the tens of thousands (judging by execution times in C/C++). In my humble opinion this is a fairly feasible task, which can be timed by hand so you don't mess with timers; timers have some peculiarities as to their precision in many languages, and might not even catch the execution time of single statements. Or you can time the immense loops and see what the computer says. Keep in mind however that if you get 1.1x the time in the not-equal loop, this does not mean that the inequality check takes 1.1x the time of the equality check, as the loop itself has a far bigger cycle; it will probably be closer to 2x. With more tests and more checks per loop it should be very easy to separate the time spent on the loop from the time spent on the checks. Hope this helps.
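The timed-loop experiment the answer proposes can be done without hand timing using the standard timeit module (the list size and repeat count are arbitrary placeholders; absolute numbers vary by machine, so only compare the two results against each other):

```python
import timeit

# Two equal 1000-element lists, so both comparisons scan every element.
setup = "a = list(range(1000)); b = list(range(1000))"

t_eq = timeit.timeit("a == b", setup=setup, number=10000)
t_ne = timeit.timeit("a != b", setup=setup, number=10000)

# Compare the two durations rather than trusting either absolute value.
print(t_eq > 0 and t_ne > 0)  # True
```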
1
5
0
When comparing any variable, there is the choice of comparing equality versus comparing inequality. For variables with one element, not a string/list/tuple/etc..., the difference is probably either non-existent or negligible. The Question: When comparing two multi-element variables, is checking whether they are equal slower or faster than comparing whether they are not equal. My gut is telling me comparing whether they are not equal should be faster. I'm curious if anybody can tell me if this is true, and for which types of multi-element types this is so. Note: I have looked, and haven't found any posts here that answers my question. It might just be obvious, but I'd like to have more opinions than just my own.
Comparing Performance in Python: equal versus non equal
-0.197375
0
0
2,494
24,194,217
2014-06-12T21:28:00.000
0
0
0
1
python,google-app-engine,file-transfer,google-compute-engine
24,194,583
4
false
1
0
The most straightforward approach seems to be: a user submits a form on the App Engine instance; the App Engine instance makes a POST call to a handler on the GCE instance with the new data; the GCE instance updates its own file and processes it.
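Steps 2 and 3 can be sketched with only the standard library (the /update path, the port, and the target file name are made up for this illustration; on the real GCE instance you would run the handler behind a proper web server, and the App Engine side would issue the POST with its HTTP client):

```python
import os
import tempfile
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical hard-coded target path on the GCE instance.
TARGET_FILE = os.path.join(tempfile.gettempdir(), "gce_target.txt")

class UpdateHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Step 3: replace the file's contents with the POSTed body.
        length = int(self.headers["Content-Length"])
        with open(TARGET_FILE, "wb") as f:
            f.write(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), UpdateHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Step 2: the App Engine side POSTs the user's new text to the GCE handler.
url = "http://127.0.0.1:%d/update" % server.server_address[1]
urllib.request.urlopen(url, data=b"user text").read()
server.shutdown()

print(open(TARGET_FILE).read())  # user text
```

Step 4 then amounts to the handler kicking off the processing script right after writing the file, so no separate change detection is needed.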
2
0
0
I am working on a project that involves using an Google App Engine (GAE) server to control one or more Google Compute Engine (GCE) instances. The rest of the project is working well, but I am having a problem with one specific aspect: file management. I want my GAE to edit a file on my GCE instance, and after days of research I have come up blank on how to do that. The most straightforward example of this is: Step 1) User enters text into a GAE form. Step 2) User clicks a button to indicate they would like to "submit" the text to GCE Step 3) GAE replaces the contents of a particular (hard-coded path) text file on the GCE with the user's new content. Step 4) (bonus step) GCE notices that the file has changed (either by detecting a change or by way of GAE alerting it when the new content is pushed) and runs a script to process the new file. I understand that this is easy to do using SCP or other terminal commands. I have already done that, and that works fine. What I need is a way for GAE to send that content directly, without my intervention. I have full access to all instances of GAE and GCE involved in this project, and can set up whatever code is needed on either of the platforms. Thank you in advance for your help!
Using Google App Engine to update files on Google Compute Engine
0
0
0
273
24,194,217
2014-06-12T21:28:00.000
0
0
0
1
python,google-app-engine,file-transfer,google-compute-engine
24,215,374
4
false
1
0
You can set an action URL in your form to point to the GCE instance (it can be load-balanced if you have more than one). Then all data will be uploaded directly to the GCE instance, and you don't have to worry about transferring data from your App Engine instance to GCE instance.
2
0
0
I am working on a project that involves using an Google App Engine (GAE) server to control one or more Google Compute Engine (GCE) instances. The rest of the project is working well, but I am having a problem with one specific aspect: file management. I want my GAE to edit a file on my GCE instance, and after days of research I have come up blank on how to do that. The most straightforward example of this is: Step 1) User enters text into a GAE form. Step 2) User clicks a button to indicate they would like to "submit" the text to GCE Step 3) GAE replaces the contents of a particular (hard-coded path) text file on the GCE with the user's new content. Step 4) (bonus step) GCE notices that the file has changed (either by detecting a change or by way of GAE alerting it when the new content is pushed) and runs a script to process the new file. I understand that this is easy to do using SCP or other terminal commands. I have already done that, and that works fine. What I need is a way for GAE to send that content directly, without my intervention. I have full access to all instances of GAE and GCE involved in this project, and can set up whatever code is needed on either of the platforms. Thank you in advance for your help!
Using Google App Engine to update files on Google Compute Engine
0
0
0
273
24,201,497
2014-06-13T09:01:00.000
1
1
0
0
python,sqlalchemy
24,201,579
1
false
1
0
You will have to set the timeout in the HTTP server's (Apache, for example) configuration. The default should be more than 120 seconds, if I remember correctly.
1
0
0
I have a Python CGI that I use along with SQLAlchemy, to get data from a database, process it and return the result in Json format to my webpage. The problem is that this process takes about 2mins to complete, and the browsers return a time out after 20 or 30 seconds of script execution. Is there a way in Python (maybe a library?) or an idea of design that can help me let the script execute completely ? Thanks!
Avoid Python CGI browser timeout
0.197375
0
0
678
24,202,386
2014-06-13T09:45:00.000
2
0
0
0
python
24,202,505
1
false
0
0
First of all: the question is not really related to Python at all. It's a general problem. And the answer is simple: keep track of what your script does (in a file or directly in the db). If it crashes, continue from the last step.
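A minimal sketch of such checkpointing (the file name and step structure are assumptions for illustration, not from the question):

```python
import os

STATE_FILE = "progress.txt"  # hypothetical checkpoint file

def last_completed_step():
    """Return the index of the last finished step, or -1 on a fresh run."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return int(f.read().strip())
    return -1

def mark_done(step):
    """Record the step index so a restart can resume after it."""
    with open(STATE_FILE, "w") as f:
        f.write(str(step))

def run(steps):
    """Execute the remaining steps, persisting progress after each one."""
    start = last_completed_step() + 1
    for i in range(start, len(steps)):
        steps[i]()    # the real work: query, classify, update means, ...
        mark_done(i)  # if the server dies here, we resume at step i + 1
```

On restart, run() skips everything already marked done, so a crash only costs the step that was in flight.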
1
0
0
I have a python script running on my server which accessed a database, executes a fetch query and runs a learning algorithm to classify and updates certain values and means depending on the query. I want to know if for some reason my server shuts down in between then my python script would shut down and my query lost. How do i get to know where to continue from once I re-run the script and i want to carry on the updated means from the previous queries that have happened.
Python re establishment after Server shuts down
0.379949
1
0
22
24,205,175
2014-06-13T12:21:00.000
1
0
1
0
python,crash,startup,portability,portable-python
24,205,317
1
false
0
0
First I would start it after a fresh reboot and look at which services run directly after reboot and after the first, second... program start. Then looking at the RAM can be interesting. Maybe you have a RAM leak/overflow. Then I would compare these results with a program start after you did your typical stuff (office, games, internet, videos). Do you maybe run the x86 version on a 64-bit computer? Is your RAM generally OK (memtest)? On which core does it run (multicore processor)? Which version do you run (is it the newest)? Do you get the same results on a different computer? Perhaps some of this helps, yours JungerIslaender
1
0
0
When I run Python portable, sometimes it crashes, sometimes it does not. I am not sure if it is clashing with another program or if it is something else. I can never reproduce this situation. Can anyone help?
When I run python portable, sometimes it crashes, sometimes it doesnt
0.197375
0
0
120
24,207,823
2014-06-13T14:33:00.000
0
0
0
0
python,snmp
24,229,459
1
false
0
0
If .1.3.6.1.4.1.8072.2.255 is the last OID, then End of MIB is something expected for such a WALK operation. If you want to in fact query .1.3.6.1.4.1.8072.2.255, you should use GET operation (snmpget), or WALK from another OID, such as .1.3.6.1.4.1.8072.2.
1
0
0
I've to implement something like a monitoring for specific value returned by a script on the server. I've done the script, in python... now I've tried to link it to SNMP via pass_persist: pass_persist .1.3.6.1.4.1.8072.2.255 /root/isnmp/myscript.py when I call snmpwalk -On -c public -v 1 localhost .1.3.6.1.4.1.8072.2.255 I got End of MIB.
END of MIB when calling pass_persist script SNMP
0
0
0
733
24,209,812
2014-06-13T16:20:00.000
0
0
1
0
python,mean
24,209,906
1
false
0
0
Use a dictionary with the year as the key and the temp and a counter as the value (as a list). If the year isn't found, add an entry with the mean temp and the counter at 1. If the year is already there, add the temperature to the existing temp and increment the counter. The rest should be easy. Note this gives you the average of the monthly mean temperatures for each year. If you want the true mean it will be a bit more involved.
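A sketch of that dictionary approach on tab-separated rows like the ones in the question (column positions are assumed from the sample; only the first three fields are used):

```python
def yearly_means(lines):
    """Average the monthly MeanTemp values per year.

    Each line: Year, Month, MeanTemp, ... separated by tabs.
    Returns {year: average of monthly means}, not a true daily mean.
    """
    totals = {}  # year -> [running sum of MeanTemp, month count]
    for line in lines:
        fields = line.split("\t")
        year, mean_temp = fields[0], float(fields[2])
        if year not in totals:
            totals[year] = [mean_temp, 1]       # first entry for this year
        else:
            totals[year][0] += mean_temp        # accumulate
            totals[year][1] += 1                # count months seen
    return {year: s / n for year, (s, n) in totals.items()}
```

Because the dictionary keys on the year, the rows can arrive in any order.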
1
0
1
Year Month MeanTemp Max Temp Min Temp Total Rain(mm) Total Snow(cm) 2003 12 -0.1 9 -10.8 45 19.2 1974 1 -5.9 8.9 -20 34.3 35.6 2007 8 22.4 34.8 9.7 20.8 0 1993 7 21.7 32.5 11 87.7 0 1982 6 15.2 25.4 4 112.5 0 1940 10 7.4 22.8 -6.1 45.5 0 My data list is a tab-separated file, resembles this and goes on from 1938-2012. My issue is finding the yearly mean temperatures when all the months and years are out of order. Any sort of beginning steps would be helpful, thanks
Finding the yearly mean temperatures from scrambled months python
0
0
0
554
24,210,666
2014-06-13T17:16:00.000
0
0
0
0
python,google-maps,python-2.7,python-imaging-library,kml
24,211,949
1
false
0
0
You could use the width and height of the polygon/rectangle in longitude and latitude coordinates. Then use the ratio of that rectangle to the image size and it should fit. Edit: I should note that depending on where your images are going to show up you might need some special math for the dateline (-180 to 180) or prime-meridian (0 to 360).
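A rough sketch of that ratio computation (the function and parameter names are invented; it also ignores projection distortion at high latitudes and the dateline/prime-meridian caveats mentioned above):

```python
def overlay_pixel_size(poly_bounds, img_bounds, img_width_px, img_height_px):
    """Estimate how many pixels of the image the polygon spans.

    poly_bounds / img_bounds: (min_lon, min_lat, max_lon, max_lat),
    both using the same lon/lat convention.
    """
    poly_w = poly_bounds[2] - poly_bounds[0]
    poly_h = poly_bounds[3] - poly_bounds[1]
    img_w = img_bounds[2] - img_bounds[0]
    img_h = img_bounds[3] - img_bounds[1]
    # scale the pixel dimensions by the degree-extent ratio
    return (round(img_width_px * poly_w / img_w),
            round(img_height_px * poly_h / img_h))
```

The returned size could then be fed to PIL's resize() for each associated image.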
1
0
0
OS X 10.7.5, Python 2.7, GE 7.1.2.2041 I have some .kml data that includes a moderately large number of polygons. Each polygon has an image associated with it. I want to use each image in a <GroundOverlay> mode with its associated polygon. The raw images are all a bit larger than the polygons. I can easily resize the images with Python's Image Library (PIL), but the amount of missizing is not consistent across the entire set. Some are as good as only ~5% larger, and some go up to ~20% larger. What I would like to do is either find (or calculate) the approximate sizes of the polygons in pixels so that I can automate the resizing of their associated images with that data. Any suggestions?
How to calculate Google Earth polygon size in pixels
0
0
0
245
24,212,724
2014-06-13T19:37:00.000
5
1
0
1
php,python
34,741,305
1
false
0
0
I know this post i old but for future reference, as I assume you have moved on... I can inform that after I read your question I contacted One.com (12 jan 2016) and they said that they do not support Python and are not planning to do so in the near future.
1
0
0
I am trying to run a python script on one.com after a user completes an action on my website. If I run it using a shell file (every couple of minutes) when I run it in the background and end the ssh session it ends the script. I have tried running if from php using shell_exec and system but these are blocked by one.com. I was wondering if anyone had any success with this?
Running Python from PHP on one.com
0.761594
0
0
3,118
24,213,788
2014-06-13T21:04:00.000
2
0
1
0
ipython,spyder
25,560,508
1
false
0
0
The development version of Spyder has a Pandas Dataframe editor (and an numpy array editor, up to 3d). You can run this from source or wait for the next release, 2.3.1. This is probably more adequate to edit or visualize dataframe than using the embedded qtconsole.
1
2
1
The new anaconda spyder has an integrated iPython console. I really dislike the way dataframes are displayed in this console. There are some border graphics around the dataframe and resizing that occurs with window resizing that make it difficult to examine the contents. In addition, often for large dataframes if one just types the variable name, we can only see a XX rows x YY columns display, not a display of the top corner of the dataframe. How can I reset the dataframe display so that it displays the way a standard iPython (or iPython QT) console would display a dataframe, i.e. with no graphics? Thanks.
Anaconda Spyder integrated IPython display for dataframes
0.379949
0
0
1,871
24,214,364
2014-06-13T22:00:00.000
0
0
0
0
python,macos,numpy
24,214,946
1
true
0
0
From our conversation in chat and adding path to .bashrc not working: Putting: /usr/local/bin first in /etc/paths will resolve the issue
1
1
1
When I run 'pip freeze' it shows that numpy==1.8.1; however, when I start Python and import numpy and then check the version number via numpy.version.version I get 1.6. This is confusing to me and it's also creating a problem for scipy and matplotlib. For example, when I attempt to do the following import 'from matplotlib import pyplot' I get an error saying 'RuntimeError: module compiled against API version 9 but this version of numpy is 6'. This I'm guessing has something to do with the numpy versions being wrong. Any suggestions?
Mac OS Mavericks numpy Version Issue
1.2
0
0
469
24,218,114
2014-06-14T08:34:00.000
0
0
0
0
python,sockets,tcp,timer
24,218,375
1
false
0
0
How about using a token that is switched when the client connects? Put it in a while loop, and if the token is ever the same non-switched value twice in a row, kill the loop and stop listen().
1
0
0
I'll explain you better my problem. I've code a simple python server who listening for web client connection. The server is running but i must add a function and i don't know how resolve this.. I must set up a timer, if the client don't connect every N seconds, I've to log it. I already looked for set up a timeout but in the lib socket, the timeout doesn't do what i want ... I tried to set up a timer with timestamps and compare values, but the socket.listen() method don't stop operation until a client connects. And i wanna stop the listen() method if the time is exceeded.
Python set up a timer for client connection
0
0
1
458
24,224,539
2014-06-14T21:38:00.000
1
0
0
1
python,ios,tornado
24,232,201
1
true
1
0
You can send your response with either self.write() or self.finish() (the main difference is that with write() you can assemble your response in several pieces, while finish() can only be called once. You also have to call finish() once if you're using asynchronous functions that are not coroutines, but in most cases it is done automatically). As for what to send, it doesn't really matter if it's a non-browser application that only looks at the status code, but I generally send an empty json dictionary for cases like this so there is well-defined space for future expansion.
1
1
0
So far I have a pretty basic server (I haven't built in any security features yet, like cookie authentication). What I've got so far is an iOS app where you enter a username and password and those arguments are plugged into a URL and passed to a server. The server checks to see if the username is in the database and then sends a confirmation to the app. Pretty basic but what I can't figure out is what the confirmation should look like? The server is a Python Tornado server with a MySQL dbms.. What I'm unsure of is what Tornado should/can send in response? Do I use self.write or self.response or self.render? I don't think it's self.render because I'm not rendering an HTML file, I'm just sending the native iOS app a confirmation response which, once received by the app, will prompt it to load the next View Controller. After a lot of googling I can't seem to find the answer (probably because I don't know how to word the question correctly). I'm new to servers so I appreciate your patience.
What's the proper Tornado response for a log in success?
1.2
0
0
246
24,227,510
2014-06-15T07:34:00.000
1
0
0
1
python,google-app-engine,google-cloud-datastore
24,227,785
2
false
1
0
You can't, that's not what it's for at all. It's only for very broad-grained statistics about the number of each types in the datastore. It'll give you a rough estimate of how many Person objects there are in total, that's all.
1
0
0
How can I use Google Cloud Datastore stats object (in Python ) to get the number of entities of one kind (i.e. Person) in my database satisfying a given constraint (i.e. age>20)?
How to use Google Cloud Datastore Statistics
0.099668
1
0
68
24,227,779
2014-06-15T08:17:00.000
0
0
0
0
php,python,mysql,wamp
24,228,304
1
false
0
0
I'd comment, but I don't have enough rep. yet. You should be able to start the Apache and PHP services separately from the WAMP tray icon. If you can't, try this: You should be able to use the WAMP menu on the tray icon to open WAMP's my.ini file. In there, just change the port number from 3306 to something else (might as well use 3307) and save the file, then restart WAMP. In the future, completely ignore WAMP's MySQL instance and just use the one that Python is connecting to. If that still doesn't work, you might have Skype running. By default, Skype listens on Port 80 for some bizarre reason. Open Skype's settings, go to the "Connection" section and untick "Use port 80 as an alternative for incoming connections." Then try restarting WAMP again.
1
1
0
I have a script in Python that retrieves data from a remote server into a MySQL database located on my computer (the one that runs the Python script). This script is executed daily to retrieve fresh data into the MySQL database. I am using Workbench 6.0 for Windows 64. I want to add a web GUI to the system that will present some of the data on a web page. I would like to write it in PHP and for my PHP program to use the same MySQL database that my Python script uses. I would like to make it a web server later on so users can log in to the sever and use this web GUI. Can the PHP and the Python scripts use the same DB? In the past I have worked with WAMP sever for PHP only. If I install WAMP on the sever, will it be able to use the same DB or can it cause a collision?
Python and PHP use the same mySQL database
0
1
0
1,052
24,229,203
2014-06-15T11:37:00.000
1
0
1
1
python,google-app-engine
45,396,993
7
false
1
0
YES! Google App Engine supports Python 3; you need to set up the flexible environment. I got a chance to deploy my application on App Engine and it's using the Python 3.6 runtime and works smoothly... :)
1
50
0
I started learning Python 3.4 and would like to start using libraries as well as Google App Engine, but the majority of Python libraries only support Python 2.7 and the same with Google App Engine. Should I learn 2.7 instead or is there an easier way? (Is it possible to have 2 Python versions on my machine at the same time?)
Does Google App Engine support Python 3?
0.028564
0
0
20,010
24,230,147
2014-06-15T13:38:00.000
2
0
0
0
python,django,caching,varnish
24,230,332
1
true
1
0
Varnish is a caching HTTP reverse proxy. It always sits in front of your server. Redis, however, is a key-value store, so the two operate at different levels. Personally, I use Redis to store built objects and results of DB queries, and Varnish for static pages. (Don't cache your dynamic content with Varnish; that will cause a lot of trouble.)
1
0
0
I've read up on some documentation and had a few questions. I am aware we can use redis as a cache backend for Django. We can then use the decorators in Django's cache framework to cache certain views. I understand to this point but I've learned about a HTTP accelerator called Varnish. How does Varnish work if used with redis + django cache? What is the difference between Varnish and Django + redis cache using in the in-built cache framework? Can these two things work side by side because having a web accelerator sounds really good actually?
Caching in Django: Redis + Django & Varnish
1.2
0
0
739
24,231,504
2014-06-15T16:05:00.000
4
0
0
0
python,django,python-2.7,networkx
24,231,580
1
true
1
0
You could simply install it as a global variable. Call the function that loads it in a module-level context and import that module when you need it (or use a singleton pattern that loads it on first access, but it's basically the same thing). You should never use a global variable in a webapp if you expect to alter the contents on the fly, but for static content there's nothing wrong with them. Just be aware that if you put the import inside a function, then that import will run for the first time when that function is run, which means that the first time someone accesses a specific server after reboot they'll have to wait for the data to load. If you instead put the import in a module-level context so that it's loaded on app start, then your app will take four seconds (or whatever) longer to start in the first place. You'll have to pick one of those two performance hits -- the latter is probably kinder to users.
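A minimal sketch of that module-level singleton (the loader below is a placeholder standing in for the real ~4-second networkx parse):

```python
# network_store.py -- import this module wherever the graph is needed
_network = None  # module-level cache, loaded at most once per process

def _load_network():
    """Stand-in for the slow networkx parse of the 120,000-edge file."""
    return {"edges": 120_000}  # placeholder for the real graph object

def get_network():
    """Return the shared, read-only network, loading it on first access."""
    global _network
    if _network is None:        # first call in this process pays the cost
        _network = _load_network()
    return _network
```

Because the object is static and never mutated, sharing it between requests is safe; each worker process pays the load cost exactly once, on either startup or first access depending on where the call sits.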
1
2
0
The scenario: I have a Networkx network with around 120.000 edges which I need to query each time a user requests a page or clicks something on the page, so a lot of calls. I could load and parse the network each call, but that would be a waste of time as that would take around 4 seconds each time (excluding the querying). I was hoping I could store this network object (which is static) somewhere globally and just query it when needed, but I can't find an easy way to do so. Putting all the edges in a DB is not an option as it doesn't eliminate the time needed for parsing.
Django large variable storage
1.2
0
1
118
24,232,701
2014-06-15T18:23:00.000
6
0
0
0
python,python-2.7,pandas,offset
24,232,729
1
true
0
0
You can shift A column first: df['A'].shift(-1) - df['B']
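As an illustration of the same idea without pandas, on plain lists of start/end times (hypothetical numeric data). Note that in pandas the shift(-1) leaves a NaN in the last row, which this list version simply drops:

```python
def idle_gaps(starts, ends):
    """Gap between the end of each activity and the start of the next.

    Equivalent to df['A'].shift(-1) - df['B'], minus the trailing NaN.
    """
    # pair activity i's end with activity i+1's start
    return [nxt - end for nxt, end in zip(starts[1:], ends)]
```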
1
5
1
I have a data frame in which column A is the start time of an activity and column B is the finish time of that activity, and each row represents an activity (rows are arranged chronologically). I want to compute the difference in time between the end of one activity and the start of the next activity, i.e. df[i+1][A] - df[i][B]. Is there a Pandas function to do this (the only thing I can find is diff(), but that only appears to work on a single column).
Pandas diff() functionality on two columns in a dataframe
1.2
0
0
5,331
24,233,264
2014-06-15T19:25:00.000
1
1
0
1
python,linux,bash,raspberry-pi,cp
24,233,353
3
false
0
0
you can use rsync, which can be used pretty much like cp but offers a progress indicator option. It is sent to standard out, and you ought to be able to intercept it for your own purposes.
1
0
0
Is there a way to poll the cp command to get its current progress? I understand there's a modified/Advanced copy utility that adds a small little ASCII progress bar, but I want to build my own progress bar using led lights and whatnot, and need to be able to see the current percentage of the file activity in order to determine how many LEDs to light up on the progress bar.
Poll the linux cp command to GET progress
0.066568
0
0
606
24,235,268
2014-06-16T00:23:00.000
1
0
1
0
python,list,max
24,235,302
9
false
0
0
Sounds like homework to me, but perhaps some logic can help you out: when you iterate over the list, the point is that you want to know whether the current element is larger than the maximum of all the elements seen so far. This can be solved by keeping a running maximum up to the current element (say M*) and doing a simple comparison between M* and your current element, then updating M* according to the result of that comparison. At the end of the iteration, M* is your max.
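That running-maximum idea as a short Python function:

```python
def my_max(values):
    """Return the largest element without using the built-in max()."""
    if not values:
        raise ValueError("my_max() arg is an empty list")
    best = values[0]          # M*: best candidate seen so far
    for v in values[1:]:
        if v > best:          # current element beats the running max
            best = v
    return best
```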
1
0
0
I need to find the max value of a list but I can't use the max() method. How would I go about doing this? I'm just a beginner so I'm sure this is pretty easy to find a workaround, however I'm stumped. Edit: Forgot to mention that it's python I'm working in.
max value of list without max() method
0.022219
0
0
35,265
24,236,079
2014-06-16T02:53:00.000
5
1
1
0
python,numpy
24,236,471
2
true
0
0
You can use Cython to compile the bottlenecks to C. This is very effective for numerical code where you have tight loops. Python loops add quite a lot of overhead that is non-existent if you can translate things to pure C. In general, you can get very good performance for any statically typed code (that is, your types do not change, and you can annotate them in the source). You can also write the core parts of your algorithm in C (or take an already written library) and wrap it. You can still do it by writing a lot of boilerplate code with Cython or SWIG, but now there are tools like XDress that can do this for you. If you are a FORTRAN person, f2py is your tool. Modern CPUs have many cores, so you should be able to take advantage of them using Python's multiprocessing. The guys at joblib have provided a very nice and simplified interface for it. Some problems are also suitable for GPU computing, where you can use PyCUDA. Theano is a library that is a bridge between Numpy, Cython, Sympy, and PyCUDA. It can evaluate and compile expressions and generate GPU kernels. Lastly, there is the future, with Numba and Blaze. Numba is a JIT compiler based on LLVM. Its development is not complete, as some syntax is missing and bugs are quite common. I don't believe it is ready for production code, unless you are sure your codebase is fully supported and you have very good test coverage. Blaze is a next-generation Numpy, with support for out-of-core storage and more flexible arrays, designed to use Numba as a backend to speed up execution. It is at a quite early stage of development. Regarding your options: Psyco: the author considered the project done and decided to collaborate with Pypy; most of its features are in there now. Pyrex: abandoned project that Cython was forked from; Cython has all its features and much more. Pypy: not a real option for general scientific code because the interfacing with C is too slow and not complete. Numpy is only partially supported, and there is little hope Scipy ever will be (mainly because of the FORTRAN dependencies). This may change in the future, but probably not any time soon. Not being able to fully use C extensions very much limits the possibilities for using external code. I must add I have used it successfully with Networkx (a pure-Python networks library), so there are use cases where it could be of use.
1
3
0
I have written a small scientific experiment in Python, and I now need to consider optimizing this code. After profiling, what tools should I consider in order to improve performance. From my understanding, the following wouldn't work: Psyco: out of date (doesn't support Python 2.7) Pyrex: last update was in 2010 Pypy: has issues with NumPy What options remain now apart from writing C modules and then somehow interfacing them with Python (for example, by using Cython)?
As of June 2014, what tools should one consider for improving Python code performance?
1.2
0
0
165
24,237,335
2014-06-16T05:54:00.000
1
0
1
0
c#,python,.net,share,shared-objects
24,281,523
2
false
0
1
Since Python runs as another process, there is no way for Python to access an object in C# directly, due to process isolation. A way of marshalling and un-marshalling must be included to communicate between processes. There are many ways to communicate between processes: shared memory, files, TCP and so on.
2
3
0
I have created a windows app which runs a python script. I'm able to capture the output of the script in textbox. Now i need to pass a shared object to python script as an argument from my app. what type of shared object should i create so that python script can accept it and run it or in simple words how do i create shared object which can be used by python script. thanks
shared object in C# to be used in python script
0.099668
0
0
912
24,237,335
2014-06-16T05:54:00.000
1
0
1
0
c#,python,.net,share,shared-objects
32,651,105
2
false
0
1
5.4. Extending Embedded Python will help you to access the application object. In this case both the application and Python run in a single process.
2
3
0
I have created a windows app which runs a python script. I'm able to capture the output of the script in textbox. Now i need to pass a shared object to python script as an argument from my app. what type of shared object should i create so that python script can accept it and run it or in simple words how do i create shared object which can be used by python script. thanks
shared object in C# to be used in python script
0.099668
0
0
912
24,238,274
2014-06-16T07:09:00.000
0
0
1
0
python,shapefile,gdal
24,384,358
1
true
0
0
Problem solved while using the pyshp module, along with copyfile for projections and list to input and output each set of coordinates, fields and records...
1
0
0
I will detail my problem here.. I have a polyline-shapefile, which I have successfully read into python using pyshp module. My problem is to save the 2nd node of every line feature into a separate, single point file. I have successfully been able to retrieve the 2nd node, using for i in range(len(shapRecs)): print shapRecs[i].shape.points[p-1] Now I need to save these to a single separate file. How to write a new shapefile from the results of this loop...
Copy queried attributes of a shapefile to a new shapefile
1.2
0
0
250
24,239,875
2014-06-16T08:54:00.000
0
0
1
0
python,timer,continue
24,242,953
1
true
0
0
If I've correctly understood your problem, you want something to count how long the application has been open, pausing whenever the application is minimized. For that, you do not need any sleep at all. You simply react to events, noting the time each event occurs and accumulating the open time. In pseudo code, it gives: App starts in open mode: cumul = 0, openBeg = now, state = open; app starts in minimized mode: cumul = 0, state = min; app is minimized: cumul += now - openBeg, state = min; app is (re-)opened: openBeg = now, state = open; app is closed: if state is open: cumul += now - openBeg. For Python, now is time.time() and state should be managed as a boolean: opened = True when state is open, opened = False when state is min. It is simple if the application calls a similar API; it will be much trickier if you have to hook into the Windows application state changes.
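The pseudo code above, translated into a small Python class. The injectable clock is an assumption added for testability; in the real app the default time.time is used and no sleep is ever needed:

```python
import time

class UsageTimer:
    """Accumulates time while the app is 'open'; pauses while minimized."""

    def __init__(self, clock=time.time):
        self._clock = clock     # injectable so tests need no real waiting
        self.cumul = 0.0
        self._open_beg = None   # None means currently minimized/closed

    def opened(self):
        """App (re-)opened: start a new open interval."""
        if self._open_beg is None:
            self._open_beg = self._clock()

    def minimized(self):
        """App minimized: flush the current open interval into cumul."""
        if self._open_beg is not None:
            self.cumul += self._clock() - self._open_beg
            self._open_beg = None

    def closed(self):
        """App closed: flush any open interval and return total open time."""
        self.minimized()
        return self.cumul
```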
1
0
0
How to create Python timer with Stop, Pause and Continue functions without using "time" module, especially time.sleep() because my programm freezes after this function? Thank you To be more specific what i am trying to acchieve , I want to create an inapplication script that counts for how long was the programm opened. If application was minimized - timer has to pause. In this step, using time.sleep() freezes my application If application is closed - timer has to stop
Python timer with Stop Pause and Continue without using time module
1.2
0
0
1,053
24,243,891
2014-06-16T12:35:00.000
0
0
0
1
python,google-app-engine,python-2.7
24,250,751
2
false
0
0
If you are running your app from the terminal using dev_appserver.py, try using the --skip_sdk_update_check switch; it may be the SDK update check that is failing.
1
0
0
I trying to run a hello world app on my device environment using ga and Python, however even not doing any explicit url request, the urllib2 is having some problems with my proxy server. I tried adding the localhost to the list of exclusion and it didn't work. If I disable the proxy on machine it works perfectly. How can I make it work without disabling the proxy for all programs?
Need to ignore proxy without disabling it on control panel
0
0
1
27
24,247,829
2014-06-16T16:03:00.000
0
0
0
0
python,angularjs,flask
24,302,497
1
true
1
0
You should give the user just a link to download this data, and convert it to CSV on your backend. Don't do the conversion on the frontend; post-processing like this belongs on the server. Make a URL for your query view with a postfix like '/csv' and write a view for it.
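The backend conversion itself is simple with the standard library (the record layout here is invented for illustration):

```python
import csv
import io

def json_rows_to_csv(rows):
    """Serialize a list of dicts (parsed JSON records) into one CSV string."""
    if not rows:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

In Flask, the '/csv' view would return this string with a text/csv mimetype and a Content-Disposition: attachment header so the browser prompts the user to download it.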
1
0
0
I have a Flask application performs an API query to a database, which returns a typically large JSON file (around 20-50MB). My intentions are to convert this JSON response into csv data, and return this back to the frontend, which should then prompt the user to download the file. The conversion part I have handled, but what is the best way to transfer this CSV file to the client for download? Should I stream it to avoid any memory overload on the browser? Any advice would be appreciated.
Flask send large csv to angularJS frontend to download
1.2
0
0
540
24,249,626
2014-06-16T18:07:00.000
0
0
0
0
android,python,django,twitter-bootstrap,titanium
24,252,267
1
false
1
0
Titanium is a platform with which you can create mobile apps using JavaScript as your main language. However, it doesn't use HTML and CSS to render the UI of your app. You create different view objects by calling Ti.UI.create* methods, which Titanium translates to native objects. This way the look & feel of your app will be as close as possible to native apps. If you really want to use Bootstrap to style your mobile app, you should take a look at PhoneGap, or just create a responsive layout that is accessible through a mobile browser.
1
0
0
I am working on building a web interface, Android and iOS applications for the same application. I have my web interface in Django. There is a multi agent system built using Python in the backend, to which I send and receive messages using zeromq from the django application. The Django application uses bootstrap for the frontend. I am looking at Titanium to build the mobile applications. Is there a way to use Titanium for the front end, and use Django as my server for the mobile applications as well? Also, I would like to know if I can use the same bootstrap theme I use for the web interface, in my Titanium project as well? I am new to Titanium, and I am just reading documents now just to get an idea of how it works. This could be a naive question, but I am total newbie and would like to get this information from this forum.
Using Django as server (backend) for Titanium mobile application?
0
0
0
222
24,249,796
2014-06-16T18:20:00.000
0
0
1
0
python,exception,exception-handling
24,249,840
4
false
0
0
how does the caller of something know if that something would throw an exception or not? By reading the documentation for that something.
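In practice that documentation often lives in the function's docstring, and the caller then handles the documented exceptions with try/except (EAFP style). A small illustrative sketch with made-up names:

```python
def parse_port(text):
    """Parse a TCP port number.

    Raises:
        ValueError: if text is not an integer in 1..65535.
    """
    port = int(text)  # int() itself raises ValueError on bad input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def safe_parse(text, default=8080):
    """Caller side: handle the documented failure mode, fall back to default."""
    try:
        return parse_port(text)
    except ValueError:  # exactly what the docstring promised could happen
        return default
```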
2
0
0
In the Java world, we know that the exceptions are classified into checked vs runtime and whenever something throws a checked exception, the caller of that something will be forced to handle that exception, one way or another. Thus the caller would be well aware of the fact that there is an exception and be prepared/coded to handle that. But coming to Python, given there is no concept of checked exceptions (I hope that is correct), how does the caller of something know if that something would throw an exception or not? Given this "lack of knowledge that an exception could be thrown", how does the caller ever know that it could have handled an exception until it is too late?
In python how does the caller of something know if that something would throw an exception or not?
0
0
0
75
24,249,796
2014-06-16T18:20:00.000
0
0
1
0
python,exception,exception-handling
24,249,818
4
false
0
0
As far as I know (I've worked with Python for 6 years), there isn't anything similar to Java's throws keyword in Python.
2
0
0
In the Java world, we know that the exceptions are classified into checked vs runtime and whenever something throws a checked exception, the caller of that something will be forced to handle that exception, one way or another. Thus the caller would be well aware of the fact that there is an exception and be prepared/coded to handle that. But coming to Python, given there is no concept of checked exceptions (I hope that is correct), how does the caller of something know if that something would throw an exception or not? Given this "lack of knowledge that an exception could be thrown", how does the caller ever know that it could have handled an exception until it is too late?
In python how does the caller of something know if that something would throw an exception or not?
0
0
0
75
24,250,152
2014-06-16T18:44:00.000
2
0
1
0
python,metadata,categories,pelican
24,419,741
1
false
0
0
You might try adding a key/value dictionary to your Pelican settings file that contains your category metadata, and then access that information from within your theme's index.html template.
1
4
0
I want to organize a blog into multiple categories. For each category index page I want to add some metadata like title, description, meta etc. specific to the category. Pelican uses folders to split posts into categories but how do I define the metadata for each category? Is it possible to use a metadata file to put into each category folder?
pelican blog define metadata for category index page
0.379949
0
0
590
24,254,300
2014-06-17T01:00:00.000
0
0
0
1
python,django,deployment
24,258,210
2
false
1
0
Some of these tips may be obvious, but you never know: Did you update all your settings in settings.py (paths to static files, path to the project...)? Which server are you using: the Django dev server, Apache, nginx? Do you have permissions over all the files in the project? You should check that the owner of the files is your user, not root; if the owner is root you'll have this permission problem in every file owned by root. Are you using uwsgi? Have you installed all the apps you had in your VM, and the SAME versions? When I move a project from a VM to a real server I go over these steps: review settings.py and update paths; check permissions in the folders the web server will use; I keep a list of the packages and versions in a txt file, let's call it packages.txt, and install them all using pip install -r packages.txt; I always use Apache/nginx, so I have to update the virtualhost to the new paths; if I'm using uwsgi, update the uwsgi settings. To downgrade some pip packages you may need to delete the egg files: even if you uninstall a package and reinstall it with pip install package==VERSION, if a copy is already downloaded pip may install that one, even if the VERSION is different. To check the actual versions of installed packages, use pip freeze. To export all pip packages to a file so you can import them elsewhere: pip freeze > packages.txt, and to install packages from that file: pip install -r packages.txt
1
0
0
I hope you can help me. I have been building this webshop for the company I work for with Django and Lightning Fast Shop. It's basically finished now and I have been running it off a virtual ubuntu machine on my PC. Since it got annoying leaving my PC on the entire time so others could access the site, I wanted to deploy it on a root server. So I got a JiffyBox and installed ubuntu on it. I managed to get Gnome working on it and to connect to it with VNC. I then uploaded my finished project via FTP to the server. Now I thought I would only need to download Django-LFS, create a new project and replace the project files with my finished ones. This worked when I tested it on my virtual machine. To my disappointment it did not work on the root server. When I tried running "bin/django runserver" I got an error message saying "bash: bin/django: Permission denied", and when I try it with 'sudo' I get "sudo: bin/django: command not found". I then realized that I had downloaded a newer version of Django-LFS and tried it with the same version, to no avail. I am starting to get really frustrated and would appreciate it very much if somebody could help me with my problem. Greetings, Krytos.
Copying Django project to root server is not working
0
0
0
103
24,256,518
2014-06-17T05:45:00.000
1
0
1
0
python,trace32
26,437,728
1
false
0
0
If I understand correctly, what you intend is to read register values from the target under test in TRACE32 using Python. If so, what you intend can be achieved. The help document api_remote.pdf can be found in the installation folder of TRACE32. This document mentions the hlinknet.c and hremote.c files, which provide the TRACE32 APIs for controlling TRACE32 from an external application developed in Python, LabVIEW or any other programming language. You can create suitable commands in Python that call the TRACE32 APIs and build your own application in Python to control TRACE32. I am not familiar with Python development myself, but the above reference will get you what you want to achieve using Python, if I understood your question correctly.
1
1
0
I want to run a .cmm file that reads the register values in the kernel in TRACE32, and I want to initiate that using Python so that data analysis becomes easy. Kindly help me with the procedure and commands to do so.
Running TRACE32 using python
0.197375
0
0
2,360
24,257,803
2014-06-17T07:16:00.000
12
0
0
1
python,google-app-engine,installation,pip,distutils
37,644,135
8
false
1
0
Another solution* for Homebrew users is simply to use a virtualenv. Of course, that may remove the need for the target directory anyway - but even if it doesn't, I've found --target works by default (as in, without creating/modifying a config file) when in a virtual environment. *I say solution; perhaps it's just another motivation to meticulously use venvs...
3
147
0
I've been usually installed python packages through pip. For Google App Engine, I need to install packages to another target directory. I've tried: pip install -I flask-restful --target ./lib but it fails with: must supply either home or prefix/exec-prefix -- not both How can I get this to work?
DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both
1
0
0
56,894
24,257,803
2014-06-17T07:16:00.000
2
0
0
1
python,google-app-engine,installation,pip,distutils
48,954,780
8
false
1
0
If you're using virtualenv*, it might be a good idea to double check which pip you're using. If you see something like /usr/local/bin/pip you've broken out of your environment. Reactivating your virtualenv will fix this: VirtualEnv: $ source bin/activate VirtualFish: $ vf activate [environ] *: I use virtualfish, but I assume this tip is relevant to both.
3
147
0
I've been usually installed python packages through pip. For Google App Engine, I need to install packages to another target directory. I've tried: pip install -I flask-restful --target ./lib but it fails with: must supply either home or prefix/exec-prefix -- not both How can I get this to work?
DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both
0.049958
0
0
56,894
24,257,803
2014-06-17T07:16:00.000
24
0
0
1
python,google-app-engine,installation,pip,distutils
45,668,067
8
false
1
0
On OSX(mac), assuming a project folder called /var/myproject cd /var/myproject Create a file called setup.cfg and add [install] prefix= Run pip install <packagename> -t .
3
147
0
I've been usually installed python packages through pip. For Google App Engine, I need to install packages to another target directory. I've tried: pip install -I flask-restful --target ./lib but it fails with: must supply either home or prefix/exec-prefix -- not both How can I get this to work?
DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both
1
0
0
56,894
24,258,400
2014-06-17T07:47:00.000
0
0
0
0
android,python,sl4a,ase
24,258,557
2
true
1
0
The ADT makes transferring between PC and android easy. But you could use wifi keyboard or some similar app, and write on android directly. Also, debugging on PC is a pain; emulated android is slow and error prone for third parties like SL4A. The best/fastest way is to remote control the device but run the script on android. I use an ssh server on android (you can find at least 3 free ones on Play) and Vim from the PC, and it is the fastest and most comfortable way.
1
0
0
Is there a way to write Python scripts for SL4A (ASE) on a PC and then deploy them on the hardware? I'm new to this and I could write the basic scripts on the device, but writing long scripts on the phone is tedious. Is there support for that? Thanks in advance!
SL4A Scripting on P.C
1.2
0
0
621
24,262,946
2014-06-17T11:45:00.000
0
0
0
0
django,python-2.7,django-forms,django-views
24,264,757
1
true
1
0
There is no "right" here. Forms can be defined in either views.py or forms.py (calling it "forms.py" is merely a naming convention that has been widely adopted). It really comes down to personal preference. Larger projects will likely benefit by keeping forms separate (if only to keep things less cluttered).
1
0
0
Is it right to write Django forms inside views.py, or should I keep two separate files, views.py and forms.py? Which is the right convention? I have seen different projects following each of these two conventions.
Writing Django forms and views
1.2
0
0
74
24,263,981
2014-06-17T12:35:00.000
1
0
1
0
python,multithreading,list
24,264,391
2
false
0
0
So long as each thread operates on different items in the list, then there will be no data races. However, if you can partition the data off so cleanly, wouldn't it make more sense for each thread to operate on its own private data? Why have a list, visible outside the worker threads, containing items that are private to each thread. It would be cleaner, and easier to reason about the code if the data were private to the threads. On the other hand your question edit suggests that you do have multiple threads accessing the same item of the list simultaneously. In which case there could well be a data race. Ultimately, without more detail nobody can say for sure whether or not your code is safe. In principle, so long as data is not shared between threads, then code will be threadsafe. But it's not possible to be sure that is the case with the problem description as stated.
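The partitioning described in this answer can be sketched in a few lines: four threads, each assigned exclusively to one index of a shared list. This is an illustrative sketch, not the asker's actual code — because no two threads ever touch the same element, the writes need no lock.

```python
import threading

# Shared list; each thread is assigned exactly one slot.
items = [0, 0, 0, 0]

def worker(index):
    # This thread only ever modifies items[index], so there is no
    # data race with the other workers.
    for _ in range(10000):
        items[index] += 1

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(items)  # each slot was incremented only by its own thread
```

Note that reading the whole list from the main thread (e.g. to send it over HTTP every second) may still observe a mix of old and new values, which may or may not matter for the application.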
1
0
0
If I have a list of say 4 items in python and each thread accesses only 1 item in the list, for example thread 1 assigned to the first item, thread 2 assigned to the 2nd item etc... will I have race conditions? do I need to use mutexes? edit: I meant modify sorry, each thread will modify 1 item in the list. I added I was using python just because. edit: @David Heffernan The list is a global because I need to send it somewhere every 1 second, so I have 4 threads doing the modification needed on each single item and every 1 second I send it somewhere over HTTP in the main thread.
If I have 4 threads each accessing only 1 element of a list will I run into race conditions?
0.099668
0
0
112
24,268,634
2014-06-17T16:11:00.000
2
0
1
0
python
24,268,970
1
true
0
1
A segmentation fault is a system error which occurs when your program tries to access a wrong memory address. It is an internal program problem and the process cannot continue. Suppressing the message might be possible but, believe me, you should not; you should fix the underlying problem instead. To fix it you would have to dig into the Qt internals, but I'd first suggest you install a fresh version of the library and make sure you have the latest packages.
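Since a segfault happens in native code and cannot be caught in-process, one hedged workaround (a sketch, not a fix, and not part of the answer above) is to run the crash-prone crawling code in a child process and discard the child's stderr so the "Segmentation Fault" text never reaches the console:

```python
import os
import subprocess
import sys

def run_quietly(script_args):
    """Run a Python child process with its stderr discarded.

    A negative return code means the child died from a signal
    (e.g. -11 is SIGSEGV on Linux); 0 means a clean exit.
    """
    with open(os.devnull, "w") as devnull:
        proc = subprocess.Popen([sys.executable] + script_args,
                                stderr=devnull)
        proc.wait()
    return proc.returncode

# Hypothetical usage: the crawling logic would live in the child script.
code = run_quietly(["-c", "print('page crawled')"])
print("child exited with", code)
```

This also isolates the crash: the parent can detect the bad return code and retry or skip the offending page instead of dying with the child.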
1
1
0
Pretty self-explanatory from the title. I'm using PyQt4 to crawl pages and I occasionally get a SegFault error. The overall program seems to still be working regardless though, so all I want to do is prevent that "Segmentation Fault (core dumped)" error from being shown on the console. How is this done?
"Segmentation Fault" being thrown by third party package. How to prevent it from displaying on console?
1.2
0
0
205
24,269,245
2014-06-17T16:46:00.000
0
0
0
0
python,web-scraping,scrapy
24,275,158
2
false
1
0
It sure can; there are multiple ways: Add an SgmlLinkExtractor to follow that next link. Or build the Request in your callback function, like yield Request(url). In your case url = erthdata.---.com/full_record.do?product=UA&search_mode=GeneralSearch&qid=21&SID=d89sduisd&excludeEventConfig=ExcludeIfFromFullRecPage&page=1&doc=4&cacheurlFromRightClick=no
1
0
0
Can scrapy save a web page, follow the next button and save the next web page etc., in a series of search results? It always needs to follow the "next" button and nothing else. This is the url (obfuscated) that links to the next page: erthdata.---.com/full_record.do?product=UA&search_mode=GeneralSearch&qid=21&SID=d89sduisd&excludeEventConfig=ExcludeIfFromFullRecPage&page=1&doc=4&cacheurlFromRightClick=no Thanks John
Can scrapy step through a website by following "next" button
0
0
0
124
24,269,552
2014-06-17T17:04:00.000
4
0
1
0
python,windows,vim
24,270,897
1
true
0
0
You can try renaming / symlinking python34.dll to python32.dll; if they are fully binary-compatible, this might work. But it is expected (and safer) to recompile Vim whenever your Python (or Perl, or any other language integration for that matter) version changes (which shouldn't be too often).
1
1
0
I'm beginning my journey into Vim and have run into a snag in attempts to turning it into my primary Python IDE. I'm using Vim v7.4 (latest) in addition to the python-mode plugin. Issue is that while this version of Vim is compiled with +python/dyn and +python3/dyn, it's pointing to python32.dll instead of python34.dll. I'm seeing all of this using the :version command. Is there a way to target Vim to a newer version of Python without having to recompile it each time there is an update?
Point Vim to updated Python version instead of recompiling
1.2
0
0
305
24,269,715
2014-06-17T17:16:00.000
2
0
1
0
python,python-3.x,python-2.x
24,269,948
2
false
0
0
No, type and class mean the same thing. Though builtin types and C-extension types (interpreter extensions in general, like Java classes in Jython) may be treated a little differently; for example, you may not be able to monkey-patch them. In old versions of CPython there was also an issue with subclassing builtins, but that was so long ago that I have only read about it and haven't seen it "in action". Also, there is a difference between old- and new-style classes in Python 2, but it's not really important in this context. In the end there are many "types of types", but the names "class" and "type" are equivalent. Still, "type" is used more often for builtins and "class" less so; for non-builtins both names are equally common.
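The equivalence described above is easy to check interactively (this sketch assumes Python 3 or Python 2 new-style classes):

```python
# For ordinary new-style objects, type(x) and x.__class__ are the
# very same class object, not merely equal.
class Spam(object):
    pass

x = Spam()
assert type(x) is x.__class__      # identical objects
assert type(42) is int             # builtins report their class the same way
assert isinstance(Spam, type)      # classes themselves are instances of type
print(type(x).__name__)            # -> Spam
```

Old-style classes in Python 2 were the historical exception: there, `type(x)` returned `instance` while `x.__class__` named the actual class.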
1
5
0
I'm reading a Python book and it always speaks about "given X its type is Y", so I'm getting confused. Is there any difference between asking the type of an object and the class of membership of an object in python2 and python3? I mean, are type and class two different concepts? This question comes from the fact that in pythons previous to the 2.1 there was a difference between calling x.__class__ and type(x).
Is there any difference between type and class?
0.197375
0
0
131
24,271,006
2014-06-17T18:36:00.000
0
1
0
0
python,django,caching,django-rest-framework
24,293,403
1
false
1
0
One technique is to key the URLs on the content of the media they are referring to. For example, if you're hosting images then use the sha hash of the image file in the URL: /images/<sha>. You can then set far-future cache expiry headers on those URLs. If the image changes then you also update the URL referring to it, and a request is made for an image that is no longer cached. You can use this technique for regular database models as well as images and other media, so long as you recompute the hash of the object whenever any of its fields change.
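A minimal sketch of the content-addressed URL idea, using only the standard library — the helper name and `/images/` prefix are illustrative, not from any framework:

```python
import hashlib

def content_url(data, prefix="/images/"):
    """Build a URL keyed on the content bytes themselves."""
    digest = hashlib.sha1(data).hexdigest()
    return prefix + digest

old = content_url(b"original image bytes")
new = content_url(b"updated image bytes")

# Changing the content changes the URL, so far-future Cache-Control
# headers on the old URL can never serve stale bytes for the new one.
assert old != new
print(old)
```

The same trick applies to JSON renderings of model instances: hash the serialized fields, and the cache key (or ETag) invalidates itself whenever any field changes.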
1
1
0
Using Python's Django (w/ Rest framework) to create an application similar to Twitter or Instagram. What's the best way to deal with caching content (JSON data, images, etc.) considering the constantly changing nature of a social network? How to still show updated state a user creates a new post, likes/comments on a post, or deletes a post while still caching the content for speedy performance? If the cache is to be flushed/recreated each time a user takes an action, then it's not worth having a cache because the frequency of updates will be too rapid to make the cache useful. What are some techniques of dealing with this problem. Please feel free to share your approach and some wisdom you learned while implementing your solution. Any suggestions would be greatly appreciated. :)
Handling Cache with Constant Change of Social Network
0
0
0
83
24,279,321
2014-06-18T07:19:00.000
1
0
1
0
python,virtualenv
24,279,522
1
true
0
0
Delete or rename the file /virtualenv_root/lib/python3.4/no-global-site-packages.txt OR Add a symlink between /virtualenv_root/lib/python3.4/site-packages/ and /path/to/desired/site-packages/ Here virtualenv_root is the name of your virtual environment.
1
0
0
How can a virtualenv be modified after it is created so as to achieve the same effect as creating it with virtualenv --system-site-packages? In other words, how to enable accessing any systemwide installed packages in a virtualenv which was originally created with that access disabled?
How to modify virtualenv to achieve the same effect as --system-site-packages?
1.2
0
0
76
24,279,336
2014-06-18T07:19:00.000
1
0
0
0
python,django,mongoengine,flask-mongoengine,django-mongodb-engine
24,280,473
3
false
1
0
Django-MongoEngine's aim is to provide better integration with Django - however, currently (June 2014) it's not stable and the readme says DO NOT CLONE UNTIL STABLE So beware!
2
1
0
What are the differences between the Mongoengine, flask-MongoEngine and Django-MongoEngine projects? I am using Mongoengine in my Django project. Will I get any benefits if I use Django-MongoEngine instead?
Difference among Mongoengine, flask-MongoEngine and Django-MongoEngine?
0.066568
0
0
1,114
24,279,336
2014-06-18T07:19:00.000
0
0
0
0
python,django,mongoengine,flask-mongoengine,django-mongodb-engine
58,571,661
3
false
1
0
The Django framework provides a unified interface to connect to a database backend, which is usually an SQL-based database such as SQLite or PostgreSQL. That means that the developer does not have to worry about writing code specific to the database technology used, but defines Models and performs transactions and runs all kinds of queries using the Django database interface. Flask does the same. Django does not support MongoDB from the get-go. To interact with MongoDB databases, collections and documents using Python, one would use the PyMongo package, which has a different syntax and different paradigms than Django Models or Flask's. MongoEngine wraps PyMongo in a way that provides a Django-like database interface for MongoDB. Django-MongoEngine tries to allow Django web-app developers to use a Mongo database as the web-app backend, and to provide the Django Admin, Users, Authentication and other database-related features that are usually available in Django with an SQL backend. Flask-MongoEngine tries to allow Flask web-app developers to use a Mongo database as the web-app backend. Personally, I prefer using a structured SQL database for the web-app essentials, and PyMongo or MongoEngine to interface with any further Mongo databases where unstructured big data might reside...
2
1
0
What are the differences between the Mongoengine, flask-MongoEngine and Django-MongoEngine projects? I am using Mongoengine in my Django project. Will I get any benefits if I use Django-MongoEngine instead?
Difference among Mongoengine, flask-MongoEngine and Django-MongoEngine?
0
0
0
1,114
24,282,955
2014-06-18T10:16:00.000
0
0
0
0
python,firefox,selenium,amazon-web-services,amazon-ec2
26,747,525
1
false
0
0
I was facing the same problem. Running the scripts as root solved it. Also, making the user running the tests a sudoer worked.
1
1
0
I am trying to run a python script in which I am opening a webpage and clicking on some element. But the script is running very slowly and giving random exceptions. Mostly it halts at the line driver = webdriver.Firefox() Message - selenium.common.exceptions.WebDriverException: Message: 'Can\'t load the profile. Profile ?Dir: /tmp/tmp4liaEq Firefox output: Xlib: extension "RANDR" missing on display ":1733".\n1403086712970\taddons.xpi\tDEBUG\tstartup\n1403086713204\taddons.xpi\tDEBUG\tcheckForChanges\n1403086713568\taddons.xpi\tDEBUG\tNo changes found\n' Sometimes driver.find_element_by_xpath("//a[@id='some_id']") returns an error that the element is not visible so it can't be clicked. The same script runs smoothly on my system, which has 4GB RAM. (EC2 system specs ~ 600mb memory) I tried looking into the system and the "top" command returned - 604332k total, 577412k used, 26920k free, 6616k buffers I have installed firefox and also xvfb since I am running firefox headlessly
Unable to run python selenium webdriver scripts over Amazon EC2
0
0
1
1,073
24,284,390
2014-06-18T11:30:00.000
3
0
0
0
python,numpy,scipy,sympy
24,289,392
2
false
0
0
Although it would be nice if there were an existing routine for calculating the spherical Hankel functions (like there is for the ordinary Hankel functions), they are just a (complex) linear combination of the spherical Bessel functions of the first and second kind, so they can easily be calculated from existing routines. Since the Hankel functions are complex, and depending on your application of them, it can be advantageous to rewrite your expression in terms of the Bessel functions of the first and second kind, i.e. entirely real quantities, particularly if your final result is real.
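As a sketch of that linear combination, h1_n(x) = j_n(x) + i·y_n(x). The explicit low-order formulas below avoid any SciPy dependency (in recent SciPy versions, `scipy.special.spherical_jn` / `spherical_yn` could replace them for general n):

```python
import cmath
import math

def sph_jn(n, x):
    """Spherical Bessel function of the first kind, orders 0 and 1 only."""
    if n == 0:
        return math.sin(x) / x
    if n == 1:
        return math.sin(x) / x**2 - math.cos(x) / x
    raise NotImplementedError("sketch covers n = 0, 1 only")

def sph_yn(n, x):
    """Spherical Bessel function of the second kind, orders 0 and 1 only."""
    if n == 0:
        return -math.cos(x) / x
    if n == 1:
        return -math.cos(x) / x**2 - math.sin(x) / x
    raise NotImplementedError("sketch covers n = 0, 1 only")

def sph_hankel1(n, x):
    """Spherical Hankel function of the first kind: j_n(x) + i*y_n(x)."""
    return sph_jn(n, x) + 1j * sph_yn(n, x)

# Sanity check against the known closed form h1_0(x) = -i * exp(i*x) / x
x = 2.0
assert abs(sph_hankel1(0, x) - (-1j * cmath.exp(1j * x) / x)) < 1e-12
```

The conjugate combination j_n(x) − i·y_n(x) gives the spherical Hankel function of the second kind in the same way.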
1
3
1
I know that there is no builtin sph_hankel1 in scipy, so I want to know how to implement it the right way. Additional: just show me one correct implementation of sph_hankel1, using either Scipy or Sympy.
How can i implement spherical hankel function of the first kind by scipy/numpy or sympy?
0.291313
0
0
997