Column summary (name, type; numeric value range, or string length range):

Q_Id (int64): 2.93k to 49.7M
CreationDate (string): length 23 to 23
Users Score (int64): -10 to 437
Other (int64): 0 to 1
Python Basics and Environment (int64): 0 to 1
System Administration and DevOps (int64): 0 to 1
DISCREPANCY (int64): 0 to 1
Tags (string): length 6 to 90
ERRORS (int64): 0 to 1
A_Id (int64): 2.98k to 72.5M
API_CHANGE (int64): 0 to 1
AnswerCount (int64): 1 to 42
REVIEW (int64): 0 to 1
is_accepted (bool): 2 classes
Web Development (int64): 0 to 1
GUI and Desktop Applications (int64): 0 to 1
Answer (string): length 15 to 5.1k
Available Count (int64): 1 to 17
Q_Score (int64): 0 to 3.67k
Data Science and Machine Learning (int64): 0 to 1
DOCUMENTATION (int64): 0 to 1
Question (string): length 25 to 6.53k
Title (string): length 11 to 148
CONCEPTUAL (int64): 0 to 1
Score (float64): -1 to 1.2
API_USAGE (int64): 1 to 1
Database and SQL (int64): 0 to 1
Networking and APIs (int64): 0 to 1
ViewCount (int64): 15 to 3.72M

The records that follow list these 29 fields in this order, one value per line.
22,996,507
2014-04-10T18:48:00.000
4
0
0
0
0
python,arrays,numpy,arcgis,arcpy
0
22,996,581
0
2
0
true
0
0
If I understand your description right, you should just be able to do B[A].
1
2
1
0
I have two raster files which I have converted into NumPy arrays (arcpy.RasterToNumpyArray) to work with the values in the raster cells with Python. One of the raster has two values True and False. The other raster has different values in the range between 0 to 1000. Both rasters have exactly the same extent, so both NumPy arrays are build up identically (columns and rows), except the values. My aim is to identify all positions in NumPy array A which have the value True. These positions shall be used for getting the value at these positions from NumPy array B. Do you have any idea how I can implement this?
How to search in one NumPy array for positions for getting at these position the value from a second NumPy array?
0
1.2
1
0
0
143
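The accepted answer above suggests B[A] for the raster question. A minimal sketch of that boolean-mask indexing, with made-up array contents:

```python
import numpy as np

# A: the True/False raster, B: the value raster; both have the same shape
A = np.array([[True, False],
              [False, True]])
B = np.array([[10, 250],
              [37, 999]])

# Boolean indexing returns the values of B at every position where A is True
print(B[A])  # -> [ 10 999]
```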
23,001,932
2014-04-11T01:26:00.000
4
0
0
0
0
python,algorithm,count
0
23,001,960
0
1
0
true
0
0
Since all 1's come before the 0's, you can find the index of the first 0 using Binary search algorithm (which is log N) and you just have to do this for all the N rows. So the total complexity is NlogN.
1
1
1
0
Assuming that in each row of the array, all 1's come before the 0's, how would I be able to come up with an (O)nlogn algorithm to count the 1's in the array. I think first I would have to make a counter, search each row for 1's (n), and add that to the counter. Where does the "log n part" come into play? I read that a recursive algorithm to do this has nlogn complexity, but Im not too sure how I would do this. I know how to do this in O(n^2) with for loops. Pseudo code or hints would be helpful! Thank you
Counting 1's in a n x n array of 0's and 1's
0
1.2
1
0
0
100
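A sketch of the binary-search counting the answer above describes: find the index of the first 0 in each row (log N per row), then sum over the N rows. The sample matrix is made up:

```python
def count_ones(row):
    """Binary search for the first 0 in a row where all 1's come before all 0's."""
    lo, hi = 0, len(row)          # hi is one past the last index
    while lo < hi:
        mid = (lo + hi) // 2
        if row[mid] == 1:         # the first 0 must be to the right of mid
            lo = mid + 1
        else:                     # row[mid] == 0, so the first 0 is at mid or earlier
            hi = mid
    return lo                     # index of the first 0 == number of 1's in the row

matrix = [[1, 1, 1, 0],
          [1, 0, 0, 0],
          [1, 1, 1, 1]]
total = sum(count_ones(r) for r in matrix)   # O(N log N) overall
print(total)  # -> 8
```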
23,014,432
2014-04-11T13:46:00.000
0
0
1
0
0
python,multithreading,performance,multiprocessing
0
23,014,920
0
3
0
false
0
0
In Python you want multiprocessing over multithreading. Threads don't do well in Python because of the GIL.
1
0
0
0
I've written a script that pulls data from my school's website and I'm having some trouble with execution time. There are over 20 campuses, each with data for three semesters. The script looks up those school names, then the semesters available for each school, then the subjects/departments that are offering classes each semester. Then the script searches for the classes per department and then I do things with that data. I timed the execution of the script on just one campus, and it ran for over three minutes. When I ran it for all 24 campuses it took over an hour. I'm using the "requests" library, which runs each HTTP request in synchronously. I'm using the "requests" library, primarily because it handles sessions nicely. I'm looking for ways to bring down the time the script takes to run, by making the various requests for each semester run in parallel. I suspect that if I run three semesters asynchronously, then each school should take a minute, instead of three. Then, I can run all schools in parallel and achieve the same minute for all of the data. A minute is a lot less than an hour and a quarter! Am I wrong in my guess that multithreading/processing will bring down the execution time so drastically? What Python libraries should I be using for threads or processes? Once I've got each school being processed on a thread, how do I consolidate the data from all the schools into one place? I know that it's considered poor practice for threads to alter global state, but what's the alternative here?
Performance Improvements with Processes or Threads
1
0
1
0
1
67
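The answer above recommends multiprocessing; a minimal sketch of fanning the page fetches out to a process pool. The URLs, fetch logic and pool size are placeholders, and the requests library is assumed to be installed:

```python
import multiprocessing
import requests  # assumed to be installed

CAMPUS_URLS = [                      # hypothetical list of campus pages
    "http://example.edu/campus/1",
    "http://example.edu/campus/2",
]

def fetch(url):
    """Download one campus page and return (url, body length)."""
    response = requests.get(url)
    return url, len(response.text)

if __name__ == "__main__":
    # The pool collects the results into one list, which avoids having
    # workers mutate shared global state.
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(fetch, CAMPUS_URLS)
    print(results)
```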
23,028,941
2014-04-12T10:07:00.000
3
0
0
1
0
python,sockets,web,tornado
0
23,031,157
0
1
0
true
0
0
You can start multiple servers that share an IOLoop within the same process. Your HTTPServer could listen on one port, and the TCPServer could listen on another.
1
2
0
0
I know the httpserver module in tornado is implemented based on the tcpserver module, so I can write a socket server based on tornado. But how can I write a server that is both a socket server and a web server? For example, if I want to implement a chat app. A user can either login through a browser or a client program. The browser user can send msg to the client user through the back-end server. So the back-end server is a web and socket server.
How to use tornado as both a socket server and web server?
0
1.2
1
0
1
1,209
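A rough sketch of the answer above, assuming a Tornado 3.x-style API: one process, one shared event loop, an HTTP server on one port and a TCP server on another. The handler bodies and port numbers are placeholders:

```python
import tornado.ioloop
import tornado.web
from tornado.tcpserver import TCPServer

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello from the web side")

class ChatTCPServer(TCPServer):
    def handle_stream(self, stream, address):
        # Called once for every new raw TCP client connection
        print("client connected from", address)

if __name__ == "__main__":
    web_app = tornado.web.Application([(r"/", MainHandler)])
    web_app.listen(8888)          # HTTP server on one port

    tcp_server = ChatTCPServer()
    tcp_server.listen(8889)       # socket server on another port

    tornado.ioloop.IOLoop.instance().start()   # one shared IOLoop for both
```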
23,031,149
2014-04-12T13:42:00.000
0
1
1
0
1
python,python-3.x,raspbian,pycrypto
0
23,031,224
0
1
0
false
0
0
Having looked into it, there does not seem to be a pycrypto version for Python 3 at the moment. I think your options are to look for an alternative package or to convert your code to Python 2. There are tools available which can do this automatically; for example, 3to2 is available in pip.
1
0
0
0
I'm trying to install pycrypto for Python 3.x on a Raspberry Pi, but when I run python setup.py install from the command line it is installed to Python 2.7.x by default. I have installed python-dev, still with no luck. I have read that using pip might help, but unfortunately I don't know how to use it. All my code is written for Python 3.3.x and it would take me a very long time to rewrite it all for 2.7, so how can I fix this without rewriting my code?
how to install python package in Raspbian?
0
0
1
0
0
419
23,038,209
2014-04-13T01:48:00.000
0
0
1
1
0
python,python-2.7,enthought,leap-motion
1
24,131,195
0
2
0
false
0
0
Try this: Put the four files into one folder. Right click on the Sample.py until it says "Open with" and gives some choices. Select Python Launcher.app (2.7.6) # This version of Python Launcher must match the Mac built in Python Version. If your version of LeapPython.so is constructed correctly, it should run.
1
0
0
0
I am trying to install the leap motion sdk into Enthought Canapy. The page called Hello World on leap motion mentions i need to put these four files: Sample.py, Leap.py, LeapPython.so and libLeap.dylib into my "current directory". I don't know how to find my current directory. I have tried several things including typing into terminal "python Sample.py" which tells me: /Users/myname/Library/Enthought/Canopy_64bit/User/Resources/Python.app/Contents/MacOS/Python: can't open file 'Sample.py': [Errno 2] No such file or directory I've tried to put the 4 files in the MacOS file, but it still gives me this error. Any suggestions would be greatly appreciated.
Installing Leap Motion sdk into Enthought SDK
0
0
1
0
0
671
23,070,922
2014-04-14T21:37:00.000
0
1
1
1
1
python,chipmunk,pymunk
0
23,200,199
0
1
0
true
0
0
First go to the folder where setup.py is located, and then run python setup.py install. As you have noticed, it assumes that you run it from the same folder as where it's located.
1
1
0
0
I have downloaded pymunk module on my computer. When I typed in "python setup.py install" in terminal, it says "no such file or directory", then I typed in the complete path of setup.py instead of setup.py, and it still could not run since the links to other files in the code of setup.py are not complete paths. (Like README.txt, terminal said "no such file or directory". Sorry I'm a python newbie. Someone tell me how can I fix it? Thanks!!!!
Compile pymunk on mac OS X
0
1.2
1
0
0
118
23,088,338
2014-04-15T15:36:00.000
3
0
1
1
0
python
0
51,430,363
0
4
0
false
0
0
You can start Python with py -<version>. To run a script on an interpreter with a specific version, start it with that flag, for example py -3.4 yourscript.py.
1
4
0
0
I'm learning python now using a mac which pre-installed python 2.7.5. But I have also installed the latest 3.4. I know how to choose which interpreter to use in command line mode, ie python vs python 3 will bring up the respective interpreter. But if I just write a python script with this header in it "#!/usr/bin/python" and make it executable, how can I force it to use 3.4 instead of 2.7.5? As it stands, print sys.version says: 2.7.5 (default, Aug 25 2013, 00:04:04) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]
how to specify version of python to run a script?
1
0.148885
1
0
0
28,056
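The answer above describes the Windows py launcher. On OS X (which the question is about) a common alternative is to point the script's shebang at the Python 3 interpreter; a small sketch:

```python
#!/usr/bin/env python3
# With this shebang (and the file made executable with `chmod +x`),
# running ./myscript.py uses whichever python3 is first on PATH
# instead of the system's /usr/bin/python 2.7.
import sys
print(sys.version)
```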
23,096,631
2014-04-15T23:59:00.000
0
0
0
0
0
python,bittorrent
0
25,778,457
0
1
0
false
1
0
It's trivial to "break down" files as you put it. You'll need an algorithm to disassemble them, and then to reassemble them later, presumably by a browser since you mentioned HTML and CSS. BitTorrent implements this, and additionally the ability to upload and download from a distributed "swarm" of peers also interested in the same data. Without reinventing the wheel by creating your own version of BitTorrent, and again assuming you want to use this data in a browser, you'll want to create a torrent of all the HTML, CSS and other files relevant to your web application, and seed that using BitTorrent. Next you'll want to create a bootstrap "page" that makes use of one of the several JavaScript BitTorrent clients now available to download the torrent, and then load the desired pages and resources when the client completes the download.
1
0
0
0
I am trying to make a program that breaks down files like HTML or CSS into chunks like that of a torrent. I am completely unsure how to do this. They need to be broken down, than later reassembled in order. anybody know how to do this? It doesn't have to be in Python, that was just my starting point.
How do I break down files in a similar way to torrents?
0
0
1
0
0
350
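The answer above talks about disassembling files into chunks and reassembling them in order; a minimal local sketch of that idea (the chunk size and function names are made up, and the BitTorrent/browser parts are left out):

```python
CHUNK_SIZE = 64 * 1024  # 64 KiB pieces, an arbitrary size for illustration

def split_file(path, chunk_size=CHUNK_SIZE):
    """Break a file into an ordered list of byte chunks."""
    chunks = []
    with open(path, "rb") as f:
        while True:
            piece = f.read(chunk_size)
            if not piece:
                break
            chunks.append(piece)
    return chunks

def reassemble(chunks, out_path):
    """Write the chunks back in order to rebuild the original file."""
    with open(out_path, "wb") as f:
        for piece in chunks:
            f.write(piece)
```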
23,097,604
2014-04-16T01:51:00.000
0
1
0
0
0
python,multithreading,buffer,raspberry-pi
0
23,097,644
0
1
1
false
0
1
I don't think that RasPi would work that well running multithreaded programs. Try the first method, though it would be interesting to see the results of a multithreaded program.
1
0
0
0
I apologize in advance for this being a bit vague, but I'm trying to figure out what the best way is to write my program from a high-level perspective. Here's an overview of what I'm trying to accomplish: RasPi takes input from altitude sensor on serial port at 115000 baud. Does some hex -> dec math and updates state variables (pitch, roll, heading, etc) Uses pygame library to do some image manipulation based on the state variables on a simulated heads up display Outputs the image to a projector at 30 fps. Note that there's no user input (for now). The issue I'm running into is the framerate. The framerate MUST be constant. I'd rather skip a data packet than drop a frame. There's two ways I could see structuring this: Write one function that, when called, grabs data from the serial bus and spits out the state variables as the output. Then write a pygame loop that calls this function from inside it. My concern with this is that if the serial port starts being read at the end of an attitude message, it'll have to pause and wait for the message to start again (fractions of a second, but could result in a dropped frame) Write two separate modules, both to be running simultaneously. One continuously reads data from the serial port and updates the state variables as fast as possible. The other just does the image manipulation, and grabs the latest state variables when it needs them. However, I'm not actually sure how to write a multithreaded program like this, and I don't know how well the RasPi will handle such a program.
How to structure my Python code?
0
0
1
0
0
73
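A bare-bones sketch of the second design described in the question: one thread keeps the state variables fresh while the fixed-rate render loop just samples the latest values. The serial-port and pygame parts are replaced with placeholders:

```python
import threading
import time

state = {"pitch": 0.0, "roll": 0.0, "heading": 0.0}
state_lock = threading.Lock()

def sensor_reader():
    """Continuously update shared state (stands in for the serial-port reader)."""
    while True:
        with state_lock:
            state["pitch"] += 0.1   # fake data instead of real serial input
        time.sleep(0.01)

def render_loop(frames=3):
    """Fixed-rate loop that grabs the latest state each frame (stands in for pygame)."""
    for _ in range(frames):
        with state_lock:
            snapshot = dict(state)
        print("rendering frame with", snapshot)
        time.sleep(1.0 / 30)        # hold 30 fps regardless of sensor timing

reader = threading.Thread(target=sensor_reader, daemon=True)
reader.start()
render_loop()
```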
23,098,583
2014-04-16T03:42:00.000
0
0
1
0
1
firefox,ipython,ipython-notebook
1
23,773,427
0
1
0
false
0
0
I found that the problem occurs when changing the cookie preference "Keep until:" " they expire" to "ask me every time" (in Preferences->Privacy->History). As soon as I switch to "they expire" or "I close Firefox" and reload the page with my notebook, it renders as expected and the notebook is shown as running. Creating new notebooks works also correctly. There is an issue open for this: github.com/ipython/ipython/issues/5703
1
0
0
0
I upgraded from Ipython 1.2.1 to Ipython 2.0. When I try to open an existing notebook or create a new notebook in Firefox, I only get a blank screen. There is no error message in the terminal window that I used to start the notebook server. This happens on CentOs 6.5 with Python 2.7.5 and Firefox 24.4 as well as on Mac OS 10.8.5 with Python 2.7.6 and Firefox 28. Starting Firefox in safe-mode did not make any difference. If I use Safari instead of Firefox, the notebooks display as expected. Any ideas what could be wrong or how to debug this?
Opening notebook with Ipython 2.0 in Firefox yields only a blank screen
0
0
1
0
0
247
23,104,754
2014-04-16T09:05:00.000
1
0
1
0
1
c#,visual-studio-2013,ironpython,vsix
0
23,131,781
0
1
0
true
0
0
I would presume that there is a way to include them in the VSIX file and also know where they are on disk - at least, you could use AppDomain.CurrentDomain.GetAssemblies() to find the IronPython assembly and Assembly.Location to find where it is, and hope the VSIX puts the Lib directory near that. (My only experience with VSIX was a while ago and I hated it, so I can't provide much advice in that department.) Assuming you're embedding IronPython, once you have the location you can just use ScriptEngine.SetSearchPaths to tell IronPython where the Lib directory is. If you're shelling out to ipy.exe then set the IRONPYTHONPATH environment variable before starting it.
1
0
0
0
I'm currently writing a Visual Studio extension, which provides scripting capabilities. I'm using IronPython (the newest one), but I have some problems with Python's standard libraries. As I understand, all necessary files reside in the <IronPython folder>\Lib folder. I cannot rely on my users installing IronPython, so I have to provide these files in other way. If it is possible, I'd simply embed the whole Lib folder in my assembly and allow IronPython access to it from the code, but I'm not sure, if this is possible. I can try to add the Lib folder to extension's package and extract it to wherever Visual Studio will copy my extension's files, but I'm unsure, how to get access to them during extension's runtime. Also, I'd like to set appropriate paths transparently to the user and I'm also unsure, whether this can be done. How can I solve this problem?
Embedding IronPython's stdlib in VS extension
1
1.2
1
0
0
110
23,110,542
2014-04-16T13:13:00.000
3
0
1
0
0
python,math
0
23,110,634
0
3
0
true
0
0
With just a single point (and nothing else) you cannot solve such a problem, there are infinitely many lines going through a single point. If you know the angle to x axis then simply m=tan(angle) (you do not need any points to do that, point is only required to figure out c value, which should now be simple). To convert angle from the y-axis to the x-axis simply compute pi/2 - angle
3
0
0
0
I understand that the equation for a straight line is: y = (m * x) + c where m is the slope of the line which would be (ydelta/xdelta) but I dont know how to get this value when I only know a single point and an angle rather than two points. Any help is appreciated. Thanks in advance.
How do I find the slope (m) for a line given a point (x,y) on the line and the line's angle from the y axis in python?
0
1.2
1
0
0
928
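A small sketch of the answer above: convert the y-axis angle to an x-axis angle with pi/2 - angle, take the tangent for m, then recover c from the known point:

```python
import math

def slope_from_y_axis_angle(angle_from_y_axis, point):
    """Slope m and intercept c of the line through `point`
    making `angle_from_y_axis` (radians) with the y axis."""
    angle_from_x_axis = math.pi / 2 - angle_from_y_axis
    m = math.tan(angle_from_x_axis)
    x, y = point
    c = y - m * x
    return m, c

# Example: a line through (1, 2) at 45 degrees from the y axis has slope ~1
print(slope_from_y_axis_angle(math.radians(45), (1, 2)))  # -> (about 1.0, about 1.0)
```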
23,110,542
2014-04-16T13:13:00.000
-1
0
1
0
0
python,math
0
23,110,747
0
3
0
false
0
0
Okay, let's say your point is (x,y)=(1,2) Then you want to solve 2 = m + c. Obviously there is no way you can do this.
3
0
0
0
I understand that the equation for a straight line is: y = (m * x) + c where m is the slope of the line which would be (ydelta/xdelta) but I dont know how to get this value when I only know a single point and an angle rather than two points. Any help is appreciated. Thanks in advance.
How do I find the slope (m) for a line given a point (x,y) on the line and the line's angle from the y axis in python?
0
-0.066568
1
0
0
928
23,110,542
2014-04-16T13:13:00.000
0
0
1
0
0
python,math
0
23,111,191
0
3
0
false
0
0
The equation of a line is y = mx + c. You are given a point on this line, and the angle of this line from the y-axis. The gradient m will be 1 / math.tan(angle_in_radians), i.e. the cotangent of that angle (Python's math module has no cot function). The x and y values will be the same as your given point. To find c, simply evaluate y - mx.
3
0
0
0
I understand that the equation for a straight line is: y = (m * x) + c where m is the slope of the line which would be (ydelta/xdelta) but I dont know how to get this value when I only know a single point and an angle rather than two points. Any help is appreciated. Thanks in advance.
How do I find the slope (m) for a line given a point (x,y) on the line and the line's angle from the y axis in python?
0
0
1
0
0
928
23,117,242
2014-04-16T18:21:00.000
0
0
1
0
1
python-2.7,opencv,opensuse,undefined-symbol
0
25,503,548
0
1
0
false
0
0
Not exactly a prompt answer (nor a direct one). I had the same issue and (re)installing various dependencies didn't help either. Ultimately, I cloned (from git) and compiled opencv (which includes the cv2.so library) from scratch, replaced the old cv2.so library and got it to work. Here is the git repo: https://github.com/Itseez/opencv.git
1
0
1
0
I'm using OpenSUSE 13.1 64-bit on an Lenovo ThinkPad Edge E145. I tryed to play a bit around with Python(2.7) and Python-OpenCV(2.4). Both is installed by using YAST. When i start the Python-Interactive-Mode (by typing "python") and try to "import cv" there are 2 things that happen: case 1: "import cv" --> End's up with: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib64/python2.7/site-packages/cv.py", line 1, in <module> from cv2.cv import * ImportError: /usr/lib64/python2.7/site-packages/cv2.so: undefined symbol: _ZN2cv23adaptiveBilateralFilterERKNS_11_InputArrayERKNS_12_OutputArrayENS_5Size_IiEEddNS_6Point_IiEEi case 2: "import cv2" --> End's up with: MemoryAccessError and the interactive mode shutdown and i'm back at the normal commandline. Have anyone any idea how can i solve this problem? Greetings
Python OpenCV "ImportError: undefined Symbol" or Memory Access Error
0
0
1
0
0
892
23,128,964
2014-04-17T09:07:00.000
0
0
0
0
0
python,django,python-2.7,google-analytics-api,http-status-code-403
0
29,837,598
0
3
0
false
1
0
You should use the View ID, not the account ID. To find the View ID you can go to: Admin -> Select Site -> under "View" -> View Settings. If that doesn't work you can go to: Admin -> Profiles -> Profile Settings.
2
3
0
0
I am trying to access data from google-analytics. I am following the guide and is able to gauthorize my user and get the code from oauth. When I try to access data from GA I only get 403 Insufficient Permission back. Do I somehow have to connect my project in Google API console to my analytics project? How would I do this? Or is there some other reason why I get 403 Insufficient Permission back? I am doing this in Python with Django and I have Analytics API turned on i my API console!
Google Analytics reports API - Insufficient Permission 403
1
0
1
0
1
5,740
23,128,964
2014-04-17T09:07:00.000
9
0
0
0
0
python,django,python-2.7,google-analytics-api,http-status-code-403
0
24,274,077
0
3
0
true
1
0
Had the same problem, but it is now solved. Use the View ID, not the account ID; the View ID can be found in the Admin -> Profiles -> Profile Settings tab. UPDATE: now, if you have more than one account, you must go: Admin -> Select account -> under View -> click on View Settings.
2
3
0
0
I am trying to access data from google-analytics. I am following the guide and is able to gauthorize my user and get the code from oauth. When I try to access data from GA I only get 403 Insufficient Permission back. Do I somehow have to connect my project in Google API console to my analytics project? How would I do this? Or is there some other reason why I get 403 Insufficient Permission back? I am doing this in Python with Django and I have Analytics API turned on i my API console!
Google Analytics reports API - Insufficient Permission 403
1
1.2
1
0
1
5,740
23,147,008
2014-04-18T03:41:00.000
0
0
1
0
1
python,tree,family-tree
0
34,161,376
0
2
0
true
0
0
After searching a lot, I found that the Graph ADT suits the above problem better. Since a family has relations over a wide span in all directions, using a graph ADT would be conventional. Each node can store details about a person. Node can consist of parent node links, and some functionalities to find relation between two nodes etc.. To find relationships, assume the parent nodes as the Parents, and the parent of the parent nodes as grandparents etc.. Traverse to the parent node, find if there is any other child nodes, mark them as siblings etc.. The idea is this, I think it will help to solve this problem!
1
1
0
0
I ve recently started with python and am working on building a Family tree using python. My idea is that the tree should grow in both sides, i.e) both the older generations as well as younger generations can be added to the same tree. I tried implementing with Binary tree ADT and N-ary tree ADT, but that doesn't work well. Can anyone suggest me an ADT that is best for building that family tree, and guide me how to implement that?
Creating Family Trees Using Python
0
1.2
1
0
0
5,025
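A minimal sketch of the graph-style node the answer above describes, with parent/child links and a relationship lookup; the class and method names are made up:

```python
class Person(object):
    """Graph-style node: each person links to parents and children."""
    def __init__(self, name):
        self.name = name
        self.parents = []    # edges "up" the tree
        self.children = []   # edges "down" the tree

    def add_parent(self, parent):
        self.parents.append(parent)
        parent.children.append(self)

    def siblings(self):
        """Anyone who shares at least one parent with this person."""
        result = set()
        for parent in self.parents:
            for child in parent.children:
                if child is not self:
                    result.add(child.name)
        return result

dad = Person("Dad")
alice = Person("Alice")
bob = Person("Bob")
alice.add_parent(dad)
bob.add_parent(dad)
print(alice.siblings())  # -> {'Bob'}
```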
23,154,120
2014-04-18T12:26:00.000
0
0
0
0
0
python,facebook,google-app-engine,authentication,google-cloud-endpoints
0
23,223,929
0
1
0
true
1
0
For request details, add 'HttpServletRequest' (java) to your API function parameter. For Google authentication, add 'User' (java) to your API function parameter and integrate with Google login on client. For twitter integration, use Google app-engine OpenID. For facebook/loginForm, its all on you to develop a custom auth.
1
1
0
1
I'm trying to implement a secure google cloud endpoint in python for multi-clients (js / ios / android) I want my users to be able to log by three ways loginForm / Google / Facebook. I read a lot of docummentation about that but I didn't realy understood how I have to handle connection flow and session (or something else) to keep my users logged. I'm also looking for a way to debug my endpoint by displaying objects like Request for exemple. If someone know a good tutorial talking about that, it will be verry helpfull. thank you
google endpoint custom auth python
0
1.2
1
0
1
95
23,166,158
2014-04-19T05:16:00.000
17
1
0
1
0
python,amazon-web-services,ssh,amazon-ec2
0
23,166,196
0
2
0
false
1
0
You can run the program using the nohup command, so that even when the SSH session closes your program continues running. Eg: nohup python yourscriptname.py & For more info you can check the man page for it using man nohup.
1
13
0
0
I have a python script that basically runs forever and checks a webpage every second and notifies me if any value changes. I placed it on an AWS EC2 instance and ran it through ssh. The script was running fine when I checked after half an hour or so after I started it. The problem is that after a few hours when I checked again, the ssh had closed. When I logged back in, there was no program running. I checked all running processes and nothing was running. Can anyone teach me how to make it run forever (or until I stop it) on AWS EC2 instances? Thanks a lot. Edit: I used the Java SSH Client provided by AWS to run the script
Make python script to run forever on Amazon EC2
0
1
1
0
0
8,060
23,166,386
2014-04-19T05:45:00.000
0
0
1
1
0
python
0
23,376,739
0
1
0
false
0
0
I simply replaced the executable link in my IDE from "/usr/bin/python" to "/Library/Frameworks/Python.framework/Versions/3.4/bin".
1
1
0
0
I have Python 2.7.5 running on OS X 10.9.2. I downloaded the Python installer "python-3.4.0-macosx10.6.dmg" from python.org. After the installation, I still get 2.7.5 when querying python -V. I am not sure what I need to do to replace 2.7.5 with 3.4 besides installing python-3.4.0-macosx10.6.dmg.
Replacing Python 2.7.5 with Python 3.4 on OS X 10.9.2
0
0
1
0
0
312
23,173,427
2014-04-19T17:49:00.000
1
0
0
0
0
python,networkx
1
23,183,710
0
1
0
false
0
0
To generate trees with more nodes you only need to increase the "number of tries" (the tries parameter of random_powerlaw_tree). 100 tries is not enough even for a tree with 11 nodes (it gives an error). For example, with 1000 tries I managed to generate trees with 100 nodes, using NetworkX 1.8.1 and Python 3.4.0.
1
1
1
0
I am trying to use one of the random graph-generators of NetworkX (version 1.8.1): random_powerlaw_tree(n, gamma=3, seed=None, tries=100) However, I always get this error File "/Library/Python/2.7/site-packages/networkx/generators/random_graphs.py", line 840, in random_powerlaw_tree "Exceeded max (%d) attempts for a valid tree sequence."%tries) networkx.exception.NetworkXError: Exceeded max (100) attempts for a valid tree sequence. for any n > 10, that is starting with G = nx.random_powerlaw_tree(11) I would like to generate trees with hundreds of nodes. Does anyone know how to correctly set these parameters in order to make it run correctly?
Parameters to let random_powerlaw_tree() generate trees with more than 10 nodes
0
0.197375
1
0
0
451
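The fix from the answer above in code form (assuming NetworkX is installed):

```python
import networkx as nx

# Raising `tries` above the default 100 gives the generator more attempts
# to find a valid power-law degree sequence for larger trees.
G = nx.random_powerlaw_tree(100, gamma=3, tries=1000)
print(G.number_of_nodes())   # -> 100
```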
23,184,702
2014-04-20T16:17:00.000
0
0
0
1
0
python,google-app-engine,app.yaml,xhtml2pdf
1
23,335,617
0
1
0
false
1
0
I got it now! Don't use XHTML2PDF - use ReportLab on its own instead.
1
0
0
0
I am new to GAE, web dev and python, but am working my way up. I have been trying to get xhtml2pdf working on GAE for some time now but have had no luck. I have downloaded various packages but keep getting errors of missing modules. These errors vary depending on what versions of these packages and dependencies I use. I have even tried using the xhtml2pdf "required dependency" versions. I know xhtml2pdf used to be hosted on GAE according to a stackoverflow post from 2010, but I don't know if this is the case anymore. Have they replaced it with something else that the GAE team think is better? I have also considered that the app.yaml is preventing my app from running. As soon as I try importing the pisca module, my app stops. Could anyone please give me some direction on how to get this working? In the sense of how to install these packages with dependencies and where they should be placed in my project folder (note that I am using Windows). And what settings I would need to add to my app.yaml file.
How do I get xhtml2pdf working on GAE?
0
0
1
0
0
113
23,190,913
2014-04-21T04:42:00.000
0
1
1
0
0
python,variables,cgi,text-files
0
23,190,978
0
1
0
true
0
0
Traditionally this is done using cookies or hidden form fields.
1
0
0
0
I want to pass a variable from one Python CGI script to another CGI script. How can I do this, as in PHP, using the URL or something similar? I saved the variable in a text file and read the saved variable back when the other page loads. Is this method good?
How to pass variable in one .py cgi to other python cgi script
0
1.2
1
0
0
241
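A minimal sketch of the hidden-form-field approach mentioned in the answer above; the script names and field name are made up:

```python
# first.py -- renders a form that carries the value to the next script
import cgi

print("Content-Type: text/html\n")
print('<form action="second.py" method="post">')
print('  <input type="hidden" name="user_choice" value="42">')
print('  <input type="submit" value="Next step">')
print('</form>')
```

```python
# second.py -- reads the value back out of the submitted form
import cgi

form = cgi.FieldStorage()
value = form.getfirst("user_choice", "")   # the variable passed from first.py
print("Content-Type: text/html\n")
print("<p>Received: %s</p>" % value)
```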
23,191,241
2014-04-21T05:16:00.000
1
0
0
1
0
python,cygwin,bottle
0
23,191,551
0
1
0
true
0
0
Since you get a connection refused error, the best I can think of is that this is a browser issue. Try editing the LAN settings on your Chrome browser to bypass proxy server for local address.
1
1
0
0
I am running python 2.7 + bottle on cygwin and I wanted to access a sample webpage from chrome. I am unable to access the website running on http://localhost:8080/hello but when I do a curl within cygwin I am able to access it. Error Message when accessing through Chrome Connection refused Description: Connection refused Please let me know how I can access my local bottle website running inside Cygwin from windows browser.
Accessing localhost from windows browser
0
1.2
1
0
1
1,043
23,237,444
2014-04-23T07:17:00.000
4
0
1
0
0
python,mysql,database,class,oop
0
23,237,519
0
1
0
false
0
0
Would a Class be better for this? Probably not. Classes are useful when you have multiple, stateful instances that have shared methods. Nothing in your problem description matches those criteria. There's nothing wrong with having a script with a handful of functions to perform simple data transfers (extract, transform, store).
1
3
0
0
I searched around and couldn't really find any information on this. Basically i have a database "A" and a database "B". What i want to do is create a python script (that will likely run as a cron job) that will collect data from database "A" via sql, perform an action on it, and then input that data into database "B". I have written it using functions something along the lines of: Function 1 gets the date the script was last run Function 2 Gets the data from Database "A" based on function 1 Function 3-5 Perform the needed actions Function 6 Inserts data into Database "B" My question is, it was mentioned to me that i should use a Class to do this rather than just functions. The only problem is, I am honestly a bit hazy on Classes and when to use them. Would a Class be better for this? Or is writing this out as functions that feed into each other better? If i would use a Class, could you tell me how it would look?
Collecting Data from Database, functions vs classes
0
0.664037
1
1
0
156
23,246,013
2014-04-23T13:34:00.000
1
0
0
0
1
python,numpy,fits,pyfits
0
23,254,015
0
1
0
false
0
0
The expression data.field[('zquality' > 2) & ('pgal'==3)] is asking for fields where the string 'zquality' is greater than 2 (always true) and where the string 'pgal' is equal to 3 (always false). Actually chances are you're getting an exception because data.field is a method on the NumPy recarray objects that PyFITS returns tables in. You want something like data[(data['zquality'] > 2) & (data['pgal'] == 3)]. This expression means "give me the rows of the 'zquality' column of data containing values greater than 2. Then give me the rows of the 'pgal' column of data with values equal to three. Now give me the full rows of data selected from the logical 'and' of the two row masks."
1
0
1
0
I have opened a FITS file in pyfits. The HEADER file reads XTENSION='BINTABLE' with DIMENSION= 52989R x 36C with 36 column tags like, 'ZBEST', 'ZQUALITY', 'M_B', 'UB', 'PGAL' etc. Now, I have to choose objects from the data with 'ZQUALITY' greater than 2 & 'PGAL' equals to 3. Then I have to make a histogram for the 'ZBEST' of the corresponding objects obeying the above conditions. Also I have to plot 'M_B' vs 'UB' for those objects. At last I want to slice the 'ZBEST' into three slices (zbest < 0.5), (0.5 < zbest < 1.0), (zbest > 1.0) and want to plot histogram and 'M_B' vs 'UB' diagram of them separately. I am stuck at choosing the data obeying the two conditions. Can anyone please tell me how can I choose the objects from the data satisfying both the conditions ('ZQUALITY' > 2 & 'PGAL' == 3 )? I am using like: data.field[('zquality' > 2) & ('pgal'==3)] but it's not working.
Condtionally selecting values from a Numpy array returned from PyFITS
0
0.197375
1
0
0
229
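A sketch of the corrected selection from the answer above, using the column names from the question (the file name is a placeholder; newer code would use astropy.io.fits, which offers the same interface):

```python
import pyfits   # older releases; astropy.io.fits in current code

hdulist = pyfits.open("catalog.fits")   # hypothetical FITS file
data = hdulist[1].data                  # the BINTABLE extension

# Build the row mask from the column values, not from the column-name strings
mask = (data['ZQUALITY'] > 2) & (data['PGAL'] == 3)
selected = data[mask]

zbest = selected['ZBEST']               # e.g. for the histogram
print(len(selected))
```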
23,258,176
2014-04-24T01:28:00.000
0
1
1
0
0
java,python,c++,compilation,translation
0
23,258,361
0
1
0
false
0
0
All of the translation process is done when you compile a Java program. This is no different than compiling a C++ program or any other compiled language. The biggest difference is that this translation is targeted to the Java Byte Code language rather than assembly or machine language. The Byte Code undergoes its own translation process (including many of the same stages) when the program is run.
1
4
0
0
So my question today is about the translation process of Java. I understand the general translation process itself but I am not too sure how it applies to Java. Where does the lexical analysis take place? When is symbol table created? When is the syntax analysis and how is the syntax tree created? From what I have already research and able to understand is that the Java source code is then translated into a independent byte-code through a JVM or Java Virtual Machine. Is this when it undergoes a lexical analysis? I also know that after it is translated into byte-code it is translated into machine code but I don't know how it progress after that. Last but not least, is the Translation process of Java and different from C++ or Python?
What is the Translation Process of Java?
0
0
1
0
0
954
23,262,767
2014-04-24T07:41:00.000
0
1
0
0
0
python,ldap
0
23,263,099
0
1
0
false
0
0
You are right, there is an ongoing communication between your workstation and the Active Directory server, which can use LDAP protocol. Since I don't know what you tried so far, I suggest that you look into the python module python-ldap. I have used it in the past to connect, query and modify information on Active-Directory servers.
1
0
0
1
When I logon to my company's computer with the AD username/password, I find that my Outlook will launch successfully. That means the AD authentication has passed. In my opinion, outlook retrieves the AD user information, then sends it to an LDAP server to verify. But I don't know how it retrieves the information, or by some other methods?
How does auto-login Outlook successfully when in AD environment?
0
0
1
0
1
104
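A minimal python-ldap sketch of the kind of directory query the answer refers to; the server address, bind credentials, search base and filter are all placeholders:

```python
import ldap   # the python-ldap package

conn = ldap.initialize("ldap://ad.example.com")        # hypothetical AD server
conn.simple_bind_s("DOMAIN\\username", "password")      # authenticate against AD

# Look the user up in the directory
results = conn.search_s(
    "dc=example,dc=com",            # search base (placeholder)
    ldap.SCOPE_SUBTREE,
    "(sAMAccountName=username)",    # AD account-name filter (placeholder)
    ["mail", "displayName"],        # attributes to return
)
print(results)
conn.unbind_s()
```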
23,268,179
2014-04-24T11:55:00.000
1
0
0
0
0
python,sql,qt,pyqt
0
23,281,662
0
2
0
false
0
1
This question is a bit broad, but I'll try answering it anyway. Qt does come with some models that can be connected to a database. Specifically classes like QSqlTableModel. If you connect such a model to your database and set it as the model for a QTableView it should give you most of the behavior you want. Unfortunately I don't think I can be any more specific than that. Once you have written some code, feel free to ask a new question about a specific issue (remember to include example code!)
1
1
0
0
I have made a database file using SQL commands in Python. I have used quite a lot of foreign keys as well, but I am not sure how to display this data in Qt with Python. Any ideas? I would also like the user to be able to add/edit/delete data.
How to display data from a database file onto pyqt so that the user can add/delete/edit the data?
0
0.099668
1
1
0
4,532
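A small sketch of the QSqlTableModel + QTableView combination the answer above points to (PyQt4 is assumed; the database path and table name are placeholders):

```python
import sys
from PyQt4 import QtGui
from PyQt4.QtSql import QSqlDatabase, QSqlTableModel

app = QtGui.QApplication(sys.argv)

# Open the existing SQLite file (path and table name are made up)
db = QSqlDatabase.addDatabase("QSQLITE")
db.setDatabaseName("mydata.db")
db.open()

model = QSqlTableModel()
model.setTable("customers")
model.setEditStrategy(QSqlTableModel.OnFieldChange)  # edits written back directly
model.select()

view = QtGui.QTableView()
view.setModel(model)    # the user can now view and edit rows in the table widget
view.show()

sys.exit(app.exec_())
```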
23,288,911
2014-04-25T09:14:00.000
1
0
0
0
0
python,html,django
0
23,297,917
0
1
0
true
1
0
Part of your page that contains the paragraph tags is a piece of JavaScript that contains a timer. Every once in a while it does an Ajax request to get the data with regard to "what's going on now in the system". If you use the Ajax facilites of JQuery, which is probably the easiest, you can pass a JavaScript callback function that will be called if the request is answered. This callback function receives the data served by Django as response to the asynchroneous request. In the body of this callback you put the code to fill your paragraph. Django doesn't have to "know" about Ajax, it just serves the required info from a different URL than your original page containing the paragraph tags. That URL is part the your Ajax request from the client. So it's the client that takes the initiative. Ain't no such thing as server push (fortunately).
1
0
0
0
I am developing a project on Python using Django. The project is doing lot of work in the background so i want to notify users what's going on now in the system. For this i have declared a p tag in HTML and i want to send data to it. I know i can do this by templates but i am little confused as 5 functions need to pass the status to the p tag and if i use render_to_response() it refreshes the page every time a status is passed from the function Anyone please tell me how to do this in the correct way
Pass Data From Python To Html Tag
1
1.2
1
0
0
117
23,306,361
2014-04-26T03:40:00.000
1
0
0
0
0
python,proxy
0
23,442,148
0
1
0
false
0
0
In mitmproxy 0.10, a flow object is passed to the response handler function. You can access both flow.request and flow.response.
1
0
0
0
I am trying to write my own proxy extensions. Both, burp suite as well as mitmproxy allows us to write extensions. Till now, I am successful with intercepting the request and response headers, and write it to my own output file. The problem is, I get frequent requests and responses at anonymous time and at the same time, the output is getting written in the file. How should I identify that which response belongs to which particular request ?? If we see in burp suite, when we click on particulat URL in target, we see two different tabs- "Request" and "Response". How is burp suite identifying this ? Similar is the case with mitmproxy. I am new to proxy extensions, so any help would be great. ----EDIT---- If any additional information is required then pls let me know.
how to identify http response belongs to which particular request through python?
1
0.197375
1
0
1
175
23,311,233
2014-04-26T13:00:00.000
1
0
1
1
0
python,service
0
23,311,785
0
1
0
true
0
0
Communication with daemons is usually done via signals. You can use user-defined signals, or SIGSTOP and SIGCONT to pause and continue your daemon.
1
0
0
0
I am writing a python 'sensor'. The sensor spawns two children, one that reads in data and the other processes and outputs the data in db format. I need to run it in the background with the ability to start, stop pretty much as a service/daemon. I've looked at various options: daemonizing, init scripts etc. The problem is I need more than just start, stop, restart and status. I also want to add a 'pause' option'. I am thinking that an init script would be the best option adding start, stop, restart, status, pause cases but how would I implement this the pause functionality? Thanks
Python pseudo service
0
1.2
1
0
0
48
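A short sketch of the SIGSTOP/SIGCONT pausing the answer above describes, driving a hypothetical sensor.py child process:

```python
import os
import signal
import subprocess
import time

# Start the (hypothetical) sensor daemon as a child process
proc = subprocess.Popen(["python", "sensor.py"])

time.sleep(5)
os.kill(proc.pid, signal.SIGSTOP)   # "pause": the kernel stops scheduling the process
time.sleep(5)
os.kill(proc.pid, signal.SIGCONT)   # "resume" exactly where it left off
```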
23,319,138
2014-04-27T03:43:00.000
0
1
0
1
0
python,hash,routing
0
69,093,482
0
2
0
false
0
0
Typical algorithms split the traffic into semi-even groups of N pkts, where N is the number of ECMP links. So if the pkt sizes differ, or if some "streams" have more pkts than others, the overall traffic rates will not be even. Some algorithms factor for this. Breaking up or moving strean is bad (for many reasons). ECMP can be tiered --at layers1,2,3, and above; or at different physical pts. Typically, the src & dst ip-addr & protocol/port are used to define each stream. Sometimes it is configurable. Publishing the details can create "DoS/"IP"(Intellectual Property) vulnerabilities. Using the same algorithm at different "tiers" with certain numbers of links at each tier can lead to "polarization" (some links getting no traffic). To address this, a configurable or random input can be added to the algorithm. BGP ECMP requires IGP cost to be the same, else routing loops can happen(link/info @ cisco). Multicast adds more issues(link/info @ cisco). There are 3 basic types (link/info @ cisco). This is a deep subject.
1
0
0
0
I would like to know , how an ECMP and hash mapping are used in load balancing or routing of a tcp packet .Any help with links,examples or papers would be really useful. Sorry for the inconvinience , as I am completely new to this type of scenario. Thanks for your time and consideration.
Hash Mapping and ECMP
0
0
1
0
0
460
23,326,430
2014-04-27T17:13:00.000
1
0
0
0
0
python,r,algorithm
1
23,326,609
0
1
0
false
0
0
Sort the points, group them by value, and try all <=2n+1 thresholds that classify differently (<=n+1 gaps between distinct data values including the sentinels +-infinity and <=n distinct data values). The latter step is linear-time if you try thresholds lesser to greater and keep track of how many points are misclassified in each way.
1
0
1
0
I have a set of {(v_i, c_i), i=1,..., n}, where v_i in R and c_i in {-1, 0, 1} are the discrimination value and label of the i-th training example. I would like to learn a threshold t so that the training error is the minimum when I declare the i-th example has label -1 if v_i < t, 0 if v_i=t, and 1 if v_i>t. How can I learn the threshold t from {(v_i, c_i), i=1,..., n}, and what is an efficient algorithm for that? I am implementing that in Python, although I also hope to know how to implement that in R efficiently. Thanks! Btw, why SO doesn't support LaTeX for math expressions? (I changed them to be code instead).
learn a threshold from labels and discrimination values?
1
0.197375
1
0
0
72
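A simplified sketch of the sort-and-sweep idea from the answer above, restricted to the -1/+1 labels (the 0 label and tied values are ignored for brevity). After sorting once, each candidate threshold updates the error count in constant time:

```python
def best_threshold(points):
    """points: list of (value, label) with labels -1 or +1.
    Returns the threshold t minimising training error when we
    predict -1 for v < t and +1 for v > t.  O(n log n) overall."""
    points = sorted(points)                       # sort once by value
    n = len(points)
    # Candidate thresholds: below everything, between neighbours, above everything
    candidates = [points[0][0] - 1.0]
    candidates += [(points[i][0] + points[i + 1][0]) / 2.0 for i in range(n - 1)]
    candidates += [points[-1][0] + 1.0]

    # Error count for the smallest candidate: every point is predicted +1
    errors = sum(1 for _, label in points if label == -1)
    best_t, best_err = candidates[0], errors
    for i, (value, label) in enumerate(points):
        # Moving the threshold just past points[i] flips its prediction to -1
        errors += 1 if label == 1 else -1
        if errors < best_err:
            best_t, best_err = candidates[i + 1], errors
    return best_t, best_err

print(best_threshold([(0.1, -1), (0.4, -1), (0.5, 1), (0.9, 1)]))  # -> (0.45, 0)
```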
23,327,609
2014-04-27T18:58:00.000
1
0
0
0
0
javascript,jquery,python,ajax
0
23,328,225
0
1
0
true
1
0
You can use jQuery, which gives you a very simple way to do that: $.post( "yourpage.html", $('form').serialize() + "&ajax=true", function(response) { $('#results').html(response); }); Server side, detect if ajax is true and then return only the query results instead of the whole page. They will be saved in the element of id="results". Replacing the whole page is generally not a good idea.
1
0
0
0
I have a web page with a form each time a form is submitted same page loads but with different data relevant to the query. On the back-end i am using python for finding data relevant to query. I want to process all this with ajax as back-end process needs more time so i need to show status to the user i -e whats going on now in the system Also the data returned is the same html file but with some other data. so how can i display it on the current page. It should not be appended to current html file. it is standalone Anyone please give me a solution to this problem
Refresh same page with ajax with different data
1
1.2
1
0
0
540
23,329,034
2014-04-27T21:10:00.000
0
0
1
1
0
python,macos,python-2.7,twisted
0
40,758,241
0
4
0
false
0
0
I too was getting an ImportError: No module named xxx even though I did a pip install xxx and pip2 install xxx. pip2.7 install xxx worked for me. This installed it in the Python 2.7 directory.
1
10
0
0
Hello I'm trying to run twisted along with python but python cannot find twisted. I did run $pip install twisted successfully but it is still not available. ImportError: No module named twisted.internet.protocol It seems that most people have $which python at /usr/local/bin/python but I get /Library/Frameworks/Python.framework/Versions/2.7/bin/python May this be the issue? If so, how can I change the PATH env?
Python OSX $ which Python gives /Library/Frameworks/Python.framework/Versions/2.7/bin/python
0
0
1
0
0
35,517
23,366,047
2014-04-29T13:27:00.000
0
0
1
0
0
python,read-write
0
23,366,357
0
2
0
false
0
0
You can use SQLite or Pickle module instead, to allow easier data retrieval/manipulation from multiple programs/scripts.
1
2
0
0
I have a program that imports a .py file that contains lists and dictionaries and uses them in the program. I am making another program that's purpose is to change the lists and dictionaries in this database .py file (either adding or removing parts of the lists/dictionaries). How would I go about doing this? Do i need to read in the .py file line by line, modify the lists, and overwrite the document? Is there a better way? Any ideas would be much appreciated. If overwriting the file is the best plan, how do you do that?
Modifying a .py file within Python
0
0
1
0
0
3,423
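A minimal sketch of the pickle option the answer above suggests, moving the lists/dictionaries out of a .py module and into a data file (the file name and contents are made up):

```python
import pickle

data = {"colors": ["red", "green"], "limits": [0, 10]}

# Program A saves the structures to a data file instead of a .py module
with open("database.pkl", "wb") as f:
    pickle.dump(data, f)

# Program B loads, edits, and saves them back
with open("database.pkl", "rb") as f:
    loaded = pickle.load(f)
loaded["colors"].append("blue")
with open("database.pkl", "wb") as f:
    pickle.dump(loaded, f)
```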
23,374,854
2014-04-29T20:42:00.000
0
1
1
0
1
c#,python,ipc
0
62,980,335
0
2
1
false
0
0
Based on what you have said, you can connect to the Python process and capture its standard output text. Easy, fast and reliable!
1
5
0
0
I have some C# code that needs to call a Python script several thousand times, each time passing a string, and then expecting a float back. The python script can be run using ANY version of Python, so I cannot use Iron python. It's been recommended that I use IPC named pipes. I have no experience with this, and am having trouble figuring out how to do this between C# and Python. Is this a simple process, or am I looking at a decent amount of work? Is this the best way to solve my problem?
Simplest way to communicate between Python and C# using IPC?
0
0
1
0
0
9,440
23,388,647
2014-04-30T12:53:00.000
1
0
0
0
1
python,sqlite,pygtk
0
23,389,055
0
1
0
true
0
1
AFAIK, WinXP supports setlocale just fine. If you want to do locale-aware conversions, try using locale.atof('2,34') instead of float('2,34').
1
0
0
0
I'm trying to use the data collected by a form I to a sqlite query. In this form I've made a spin button which gets any numeric input (ie. either2,34 or 2.34) and sends it in the form of 2,34 which python sees as str. I've already tried to float() the value but it doesn't work. It seems to be a locale problem but somehow locale.setlocale(locale.LC_ALL, '') is unsupported (says WinXP). All these happen even though I haven't set anything to greek (language, locale, etc) but somehow Windows does its magic. Can someone help? PS: Of course my script starts with # -*- coding: utf-8 -*- so as to have anything in greek (even comments) in the code.
pygtk spinbutton "greek" floating point
0
1.2
1
1
0
48
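The locale-aware conversion from the answer above in code form (the result depends on the active locale using ',' as the decimal mark):

```python
import locale

# Use the user's default locale (e.g. Greek), which treats ',' as the decimal mark
locale.setlocale(locale.LC_ALL, '')

value = locale.atof('2,34')   # -> 2.34 where the locale's decimal separator is ','
print(value)
```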
23,393,456
2014-04-30T16:31:00.000
0
0
0
0
0
python,machine-learning,statistics,categorization
0
23,421,883
0
1
0
true
0
0
The principled way to do this is to assign probabilities to different model types and to different parameters within a model type. Look for "Bayesian model estimation".
1
0
1
0
My problem is as follows: I am given a number of chi-squared values for the same collection of data sets, fitted with different models. (so, for example, for 5 collections of points, fitted with either a single binomial distribution, or both binomial and normal distributions, I would have 10 chi-squared values). I would like to use machine learning categorization to categorize the data sets into "models": e.g. data sets (1,2,5 and 7) are best fitted using only binomial distributions, whereas sets (3,4,6,8,9,10) - using normal distribution as well. Notably, the number of degrees of freedom is likely to be different for both chi-squared distributions and is always known, as is the number of models. My (probably) naive guess for a solution would be as follows: Randomly distribute the points (10 chi-squared values in this case) into the number of categories (2). Fit each of the categories using the particular chi-squared distributions (in this case with different numbers of degrees of freedom) Move outlying points from one distribution to the next. Repeat steps 2 and 3 until happy with result. However I don't know how I would select the outlying points, or, for that matter, if there already is an algorithm that does it. I am extremely new to machine learning and fairly new to statistics, so any relevant keywords would be appreciated too.
Categorizing points using known distributions
0
1.2
1
0
0
52
23,396,807
2014-04-30T19:47:00.000
1
0
0
0
0
python,nlp,nltk
0
23,396,854
0
1
0
false
0
0
In short: "you cannot". This task is far beyond the simple text processing provided by NLTK. Sentiment analysis of relations between objects like this could be the topic of a research paper, not something solvable with a simple approach. One possible method would be to perform a grammar analysis, extract the conceptual relation between the objects and then run independent sentiment analysis on the words involved, but as I said before, it is rather a research topic.
1
0
1
0
I'm using NLTK to extract named entities and I'm wondering how it would be possible to determine the sentiment between entities in the same sentence. So for example for "Jon loves Paris." i would get two entities Jon and Paris. How would I be able to determine the sentiment between these two entities? In this case should be something like Jon -> Paris = positive
How to determine the "sentiment" between two named entities with Python/NLTK?
0
0.197375
1
0
0
301
23,399,888
2014-04-30T23:38:00.000
1
0
1
0
1
python,path,console,ipython
1
23,588,025
1
2
0
true
0
0
I am pulling the answer out of the comments. Point your PATH system variable to the correct version of Python. This is accomplished (on Windows) by going to System Properties -> Advanced -> Environment Variables. If you already have a Python directory in there, modify it to the correct, new path. Otherwise, append it to the end of the existing string. Do not remove what is already present.
1
0
0
0
I've been using ipython notebook with console2 for a while now and recently installed a different version of python and now my console is giving me an error saying "No module named IPython". I think the path has been changed or something, but I don't know how to fix it. Any help is greatly appreciated!
ipython console2 no module named Ipython
0
1.2
1
0
0
1,050
23,426,916
2014-05-02T11:20:00.000
1
0
0
0
0
python,django,mezzanine,cartridge
0
23,427,146
0
1
0
true
1
0
The products most likely aren't published, but can be previewed by an authenticated administrator. Check the "status" and "published from" fields for each product.
1
0
0
0
I am trying to develop a small project to learn how mezzanine and cartridge work. I have the problem that items in the shop are listed only if I am logged in, while I'd like to be able to show them to unauthorized users. Is there a setting that has to be toggled?
Products are not shown if the user is not logged in
0
1.2
1
0
0
56
23,440,391
2014-05-03T04:49:00.000
0
0
0
1
0
python,sublimetext2
0
23,440,406
0
2
0
false
0
0
Just add raw_input("Press ENTER to exit") and it will "pause" until you press a key. You should be able to add this line anywhere and as often as needed.
1
0
0
0
I was learning Python using Sublime Text 2 dev. When I write "hello world" and build it, the "cmd" window appears and disappears in a moment. I want to make the output stay visible, but I don't know how. Help me, thank you.
how to prevent the window from self-close when building program in sublime
0
0
1
0
0
302
23,454,521
2014-05-04T09:16:00.000
0
1
0
0
0
python,linux,web,flask,raspberry-pi
0
23,454,864
0
2
0
false
1
0
Best practice is to never do this kind of thing. If you are giving sudo access to your pi from internet and then executing user input you are giving everyone in the internet the possibility of executing arbitrary commands in your system. I understand that this is probably your pet project, but still imagine someone getting access to your computer and turning camera when you don't really expect it.
1
0
0
0
I have created a web-app using Python Flask framework on Raspberry Pi running Raspbian. I want to control the hardware and trigger some sudo tasks on the Pi through web. The Flask based server runs in non-sudo mode listening to port 8080. When a web client sends request through HTTP, I want to start a subprocess with sudo privileges. (for ex. trigger changes on gpio pins, turn on camera etc.). What is the best practice for implementing this kind of behavior? The webserver can ask for sudo password to the client, which can be used to raise the privileges. I want some pointers on how to achieve this.
How to start a privileged process through web on the server?
0
0
1
0
0
166
23,468,132
2014-05-05T08:33:00.000
0
0
0
0
0
python,django,mercurial
0
23,479,318
0
1
0
false
1
0
This is something you should fix at the web application level, not at the Mercurial level. If you're fine with having people wait you set up a distributed locking scheme where the web worker thread tries to acquire a repository-specific lock from shared memory/storage before taking any actions. If it can't acquire the lock you respond with either a status-code 503 with a retry-after header or you have the web-worker thread retry until it can get the lock or times out.
1
0
0
0
Our Django project provides interfaces to users to create repository create new repo add new changes to existing repo Any user can access any repo to make changes directly via an HTTP POST containing changes. Its totally fine if the traffic is less. But if the traffic increases up to the point that multiple users want to add changes to same repo at exactly same time, how to handle it? We currently use Hg (Mercurial) for repos
Django: Atomic operations on a directory in media storage
0
0
1
0
0
37
23,513,981
2014-05-07T09:31:00.000
0
0
0
1
0
python,github
0
23,514,046
0
3
0
false
0
0
.sh is a shell script, you can just execute it. ./setup.sh
2
0
0
0
I have downloaded and unzipped cabot (a Python tool) on my Linux system, but I don't know how to install it. In the cabot folder there is a setup.sh file, but when I try build or install it is not working. So what should I do?
How to install cabot in linux
0
0
1
0
0
1,740
23,513,981
2014-05-07T09:31:00.000
0
0
0
1
0
python,github
0
23,514,107
0
3
0
false
0
0
It's an ".sh" file, right? Then to run it: 1) open a terminal, 2) change directory to the file's location, 3) run the following command: sh setup.sh
2
0
0
0
I have downloaded and unzipped cabot (a Python tool) on my Linux system, but I don't know how to install it. In the cabot folder there is a setup.sh file, but when I try build or install it is not working. So what should I do?
How to install cabot in linux
0
0
1
0
0
1,740
23,515,224
2014-05-07T10:26:00.000
1
1
0
1
0
python,linux
0
23,515,529
0
1
0
true
0
0
If you are using logrotate for log rotation then it has options to remove old files, if not you could run something as simple as this once a day in your cron: find /path/to/log/folder -mtime +5 -type f -exec rm {} \; Or more specific match a pattern in the filename find . -mtime +5 -type f -name *.log -exec ls -l {} \; Why not set up logrotate for syslog to rotate daily then use its options to remove anything older than 5 days. Other options involve parsing log file and keeping certain aspect etc removing other bits etc which involved writing to another file and back etc and when it comes to live log files this can end up causing other issues such as a requirement to restart service to relog back into files. so best option would be logrotate for the syslog
1
0
0
0
I have two questions about using a crontab file: 1. I am using a service. When it runs, a new log file is created every day in a log directory. I want to delete all files older than 5 days in that log directory. 2. I want to delete all the information older than 5 days in a log file (/var/log/syslog). I don't know how to do that with crontab in Linux. Please help me! Thanks in advance!
How to delete some file with crontab in linux
0
1.2
1
0
0
374
23,526,579
2014-05-07T19:25:00.000
1
0
0
0
0
python,scrapy,scrapyd
0
24,659,705
0
1
0
false
1
0
Maybe you should do a cron job that executes every three hours and performs a curl call to Scrapyd to schedule the job.
1
0
0
0
I want to make my spider start every three hours. I have a scrapyd configuration file located in the c:/scrapyd folder. I changed the poll_interval to 100; the spider works, but it didn't repeat every 100 seconds. How can I do that, please?
scrapyd poll_interval to schedule a spider
0
0.197375
1
0
1
215
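The answer above suggests a cron job that calls Scrapyd's schedule endpoint with curl; the same call made from Python using the requests library (project and spider names are placeholders):

```python
# schedule_spider.py -- run from cron every three hours, e.g.:
#   0 */3 * * * /usr/bin/python /path/to/schedule_spider.py
import requests   # assumed to be installed

response = requests.post(
    "http://localhost:6800/schedule.json",                  # Scrapyd's default schedule endpoint
    data={"project": "myproject", "spider": "myspider"},    # placeholder names
)
print(response.json())
```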
23,531,555
2014-05-08T02:09:00.000
0
1
0
0
1
python,emacs,virtualenv,org-mode
1
23,557,258
0
1
0
false
1
0
Reads like a bug, please consider reporting it at [email protected] As a workaround try setting the virtualenv at the Python-side, i.e. give PYTHONPATH as argument. Alternatively, mark the source-block as region and execute it the common way, surpassing org
1
1
0
0
I'm running into a few issues on my Emacs + Org mode + Python setup. I thought I'd put this out there to see if the community had any suggestions. Virtualenv: I'm trying to execute a python script within a SRC block using a virtual environment instead of my system's python implementation. I have a number of libraries in this virtual environment that I don't have on my system's python (e.g. Matplotlib). Now, I set python-shell-virtualenv-path to my virtualenv's root directory. When I run M-x run-python the shell runs from my virtual environment. That is, I can import Matplotlib with no problems. But when I import Matplotlib within a SRC block I get an import error. How can I have it so the SRC block uses the python in my virtual environment and not my system's python? Is there any way I can set the path to a given virtual environment automatically when I load an org file? HTML5 Export: I'm trying to export my org-files in 'html5', as opposed to the default 'xhtml-strict'. The manual says to set org-html-html5-fancy to t. I tried searching for org-html-html5-fancy in M-x org-customize but I couldn't find it. I tried adding (setq org-html-html5-fancy t) to my init.el, but nothing happened. I'm not at all proficient in emacs-lisp so my syntax may be wrong. The manual also says I can set html5-fancy in an options line. I'm not really sure how to do this. I tried #+OPTIONS html5-fancy: t but it didn't do anything. How can I export to 'html5' instead of 'xhtml-strict' in org version 7.9.3f and Emacs version 24.3.1? Is there any way I can view and customize the back-end that parses the org file to produce the html? I appreciate any help you can offer.
Run python from virtualenv in org file & HTML5 export in org v.7.9.3
0
0
1
0
0
510
23,551,808
2014-05-08T20:24:00.000
3
0
0
1
0
python,django,architecture,celery
0
23,846,005
0
2
0
false
1
0
Celery actually makes this pretty simple, since you're already putting the tasks on a queue. All that changes with more workers is that each worker takes whatever's next on the queue - so multiple workers can process at once, each on their own machine. There's three parts to this, and you've already got one of them. Shared storage, so that all machines can access the same files A broker that can hand out tasks to multiple workers - redis is fine for that Workers on multiple machines Here's how you set it up: User uploads file to front-end server, which stores in your shared storage (e.g. S3, Samba, NFS, whatever), and stores the reference in the database Front-end server kicks off a celery task to process the file e.g. def my_view(request): # ... deal with storing the file file_in_db = store_file(request) my_process_file_task.delay(file_in_db.id) # Use PK of DB record # do rest of view logic... On each processing machine, run celery-worker: python manage.py celery worker --loglevel=INFO -Q default -E Then as you add more machines, you'll have more workers and the work will be split between them. Key things to ensure: You must have shared storage, or this gets much more complicated Every worker machine must have the right Django/Celery settings to be able to find the redis broker and the shared storage (e.g. S3 bucket, keys etc)
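Laid out as separate files, the snippets above might look roughly like this; store_file and my_process_file_task are the placeholder names used in the answer, not a real API:

    # tasks.py
    from celery import shared_task

    @shared_task
    def my_process_file_task(file_id):
        # look up the DB record, fetch the file from shared storage, convert it
        pass

    # views.py  (store_file is whatever saves the upload and returns the DB record)
    def my_view(request):
        file_in_db = store_file(request)
        my_process_file_task.delay(file_in_db.id)   # pass the PK, not the file itself
        # ... rest of the view logic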
2
1
0
0
Currently we have everything setup on single cloud server, that includes: Database server Apache Celery redis to serve as a broker for celery and for some other tasks etc Now we are thinking to break apart the main components to separate servers e.g. separate database server, separate storage for media files, web servers behind load balancers. The reason is to not to pay for one heavy server and use load balancers to create servers on demand to reduce cost and improve overall speed. I am really confused about celery only, have anyone ever used celery on multiple production servers behind load balancers? Any guidance would be appreciated. Consider one small use case which is currently how it is been done on single server (confusion is that how that can be done when we use multiple servers): User uploads a abc.pptx file->reference is stored in database->stored on server disk A task (convert document to pdf) is created and goes in redis (broker) queue celery which is running on same server picks the task from queue Read the file, convert it to pdf using software called docsplit create a folder on server disk (which will be used as static content later on) puts pdf file and its thumbnail and plain text and the original file Considering the above use case, how can you setup up multiple web servers which can perform the same functionality?
django-celery infrastructure over multiple servers, broker is redis
0
0.291313
1
0
0
2,804
23,551,808
2014-05-08T20:24:00.000
4
0
0
1
0
python,django,architecture,celery
0
23,552,055
0
2
0
false
1
0
What will strongly simplify your processing is some shared storage, accessible from all cooperating servers. With such design, you may distribute the work among more servers without worrying on which server will be next processing step done. Using AWS S3 (or similar) cloud storage If you can use some cloud storage, like AWS S3, use that. In case you have your servers running at AWS too, you do not pay for traffic within the same region, and transfers are quite fast. Main advantage is, your data are available from all the servers under the same bucket/key name, so you do not have to bother about who is processing which file, as all have shared storage on S3. note: If you need to get rid of old files, you may even set up some policy file on give bucket, e.g. to delete files older than 1 day or 1 week. Using other types of shared storage There are more options Samba central file server FTP Google storage (very similar to AWS S3) Swift (from OpenStack) etc. For small files you could even use Redis, but such solutions are for good reasons rather rare.
2
1
0
0
Currently we have everything setup on single cloud server, that includes: Database server Apache Celery redis to serve as a broker for celery and for some other tasks etc Now we are thinking to break apart the main components to separate servers e.g. separate database server, separate storage for media files, web servers behind load balancers. The reason is to not to pay for one heavy server and use load balancers to create servers on demand to reduce cost and improve overall speed. I am really confused about celery only, have anyone ever used celery on multiple production servers behind load balancers? Any guidance would be appreciated. Consider one small use case which is currently how it is been done on single server (confusion is that how that can be done when we use multiple servers): User uploads a abc.pptx file->reference is stored in database->stored on server disk A task (convert document to pdf) is created and goes in redis (broker) queue celery which is running on same server picks the task from queue Read the file, convert it to pdf using software called docsplit create a folder on server disk (which will be used as static content later on) puts pdf file and its thumbnail and plain text and the original file Considering the above use case, how can you setup up multiple web servers which can perform the same functionality?
django-celery infrastructure over multiple servers, broker is redis
0
0.379949
1
0
0
2,804
23,567,726
2014-05-09T14:53:00.000
2
0
0
0
1
python,qt,pyqt,pyqtgraph
0
23,572,444
0
1
0
false
0
1
It's likely you have items in the scene that accept their own mouse input, but it's difficult to say without seeing code. In particular, be wary of complex plot lines that are made clickable--it is very expensive to compute the intersection of the mouse cursor with such complex shapes. The best (some would say only) way to solve performance issues is to profile your application: run python -m cProfile -s cumulative your_script.py once without moving the mouse, and again with mouse movement (be sure to spend plenty of time moving the mouse), and then compare the outputs to see where the interpreter is spending all of its time.
1
0
0
0
I have a multi-threaded (via pyqt) application which plots realtime data (data is processed in the second thread and passed to the gui thread to plot via a pyqt-signal). If I place the mouse over the application it continues to run at full speed (as measured by the time difference between calls to app.processEvents()). As soon as I begin moving the mouse, the update rate slows to a crawl, increasing again when I stop moving the mouse. Does anyone know how I can resolve this/debug the issue? The code is quite lengthy and complex so I'd rather not post it here. Thanks!
PYQTGraph application slows down when mouse moves over application
0
0.379949
1
0
0
742
23,572,471
2014-05-09T19:27:00.000
1
0
1
1
0
python,nltk,python-idle
1
23,572,642
0
2
0
false
0
0
Supplementing the answer above, when you install python packages they will install under the default version of python you are using. Since the module imports in python 2.7.6 make sure that you aren't using the Python 3 version of IDLE.
1
1
0
0
Programming noob here. I'm on Mac OS 10.5.8. I have Python 2.7.6 and have installed NLTK. If I run Python from Terminal, I can "import nltk" with no problem. But if I open IDLE (either from Terminal or by double-clicking on the application) and try the same thing there, I get an error message, "ImportError: No module named nltk". I assume this is a path problem, but what exactly should I do? The directory where I installed NLTK is "My Documents/Python/nltk-2.0.4". But within this there are various other directories called build, dist, etc. Which of these is the exact directory that IDLE needs to be able to find? And how do I add that directory to IDLE's path?
How do I get IDLE to find NLTK?
0
0.099668
1
0
0
2,634
23,577,208
2014-05-10T04:49:00.000
0
0
1
0
0
javascript,ipython,ipython-notebook
0
61,441,244
0
3
0
false
0
0
As of notebook version 5.7.4, the shortcut to create a cell above is Esc + A.
1
1
0
0
I upgraded to IPython 2.0 today. A lot of the changes seem good, but the button to insert a new cell above/below seems to be gone. The option is still in the menu, and I believe the keyboard shortcut works, but the button is gone. I'm sure there is a way to turn it back on, but the documentation for the new version doesn't seem complete. Possibly to turn it back on I need to adjust something in the config, which is just a Python script. Maybe even tell it to insert a new element and bind some JavaScript to it.
IPython 2.0 Notebook: where has the "Add Cell Above" button gone, and how do I get it back?
1
0
1
0
0
826
23,583,017
2014-05-10T15:40:00.000
0
0
1
0
0
python,debugging,exception,web2py,pycharm
1
23,634,516
0
1
0
true
1
0
I figured this out by looking through the web2py source code. Apparently, web2py is set up to do what I want to do for a particular debugger, seemingly called Wing Db. There's a constant env var ref in the code named WINGDB_ACTIVE that, if set, redirects exceptions to an external debugger. All I had do to was define this env var, WINGDB_ACTIVE, as 1 in my PyCharm execution configuration, and voila, exceptions are now passed through to my debugger!
1
0
0
0
I'm just starting to build a web app with web2py for the first time. It's great how well PyCharm integrates with web2py. One thing I'd like to do, however, is avoid the web2py ticketing system and just allow exceptions to be caught in the normal way in PyCharm. Currently, any attempt to catch exceptions, even via an "All Exceptions" breakpoint, never results in anything getting caught by Pycharm. Can someone tell me if this is possible, and if so, how to do it?
Can web2py be made to allow exceptions to be caught in my PyCharm debugger?
1
1.2
1
0
0
95
23,587,568
2014-05-10T23:57:00.000
5
0
1
0
0
python,algorithm,list,frequency
0
23,587,628
0
5
0
false
0
0
Why not have two lists, one for questions not yet picked and one for questions that have been picked. Initially, the not-yet-picked list will be full, and you will pick elements from it which will be removed and added to the picked list. Once the not-yet-picked list is empty, repeat the same process as above, this time using the full picked list as the not-yet-picked list and vice versa.
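A minimal sketch of the two-list idea, assuming question IDs are plain integers:

    import random

    unpicked = [101, 102, 103, 104]   # questions not yet seen in this pass
    picked = []

    def next_question():
        global unpicked, picked
        if not unpicked:                       # every question seen once: start over
            unpicked, picked = picked, unpicked
        q = random.choice(unpicked)
        unpicked.remove(q)
        picked.append(q)
        return q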
2
4
0
0
I am building a quiz application which pulls questions randomly from a pool of questions. However, there is a requirement that the pool of questions be limited to questions that the user has not already seen. If, however, the user has seen all the questions, then the algorithm should "reset" and only show questions the user has seen once. That is, always show the user questions that they have never seen or, if they have seen all of them, always show them questions they have seen less frequently before showing questions they have seen more frequently. The list (L) is created in a such a way that the following is true: any value in the list (I), may exist once or be repeated in the list multiple times. Let's define another value in the list, J, such that it's not the same value as I. Then 0 <= abs(frequency(I) - frequency(J)) <= 1 will always be true. To put it another way: if a value is repeated in the list 5 times, and 5 times is the maximum number of times any value has been repeated in the list, then all values in the list will be repeated either 4 or 5 times. The algorithm should return all values in the list with frequency == 4 before it returns any with frequency == 5. Sorry this is so verbose, I'm struggling to define this problem succinctly. Please feel free to leave comments with questions and I will further qualify if needed. Thanks in advance for any help you can provide. Clarification Thank you for the proposed answers so far. I don't think any of them are there yet. Let me further explain. I'm not interacting with the user and asking them questions. I'm assigning the question ids to an exam record so that when the user begins an exam, the list of questions they have access to is determined. Therefore, I have two data structures to work with: List of possible question ids that the user has access to List of all question ids this user has ever been assigned previously. This is the list L described above. So, unless I am mistaken, the algorithm/solution to this problem will need to involve list &/or set based operations using the two lists described above. The result will be a list of question ids I can associate with the exam record and then insert into the database.
Efficient algorithm for determining values not as frequent in a list
1
0.197375
1
0
0
314
23,587,568
2014-05-10T23:57:00.000
0
0
1
0
0
python,algorithm,list,frequency
0
23,587,653
0
5
0
false
0
0
Let's define a "pivot" that separates a list into two sections. The pivot partitions the array such that all numbers before the pivot has been picked one more than the numbers after the pivot (or more generally, all numbers before pivot are ineligible for picking, while all numbers after the pivot are eligible for picking). You simply need to pick a random item from the list of numbers after the pivot, swap it with the number on the pivot, and then increment the pivot. When the pivot reaches the end of the list, you can reset it back to the start. Alternatively, you can also use two lists, which is much easier to implement, but is slightly less efficient as it needs to expand/shrink the list. Most of the time, the ease of implementation would trump the inefficiency so the two-list would usually be my first choice.
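A small sketch of the pivot idea on a plain Python list (the IDs are made up):

    import random

    pool = [101, 102, 103, 104]
    pivot = 0          # everything before the pivot is ineligible this pass

    def pick():
        global pivot
        if pivot == len(pool):     # whole pool used up: reset the pivot
            pivot = 0
        i = random.randrange(pivot, len(pool))
        pool[pivot], pool[i] = pool[i], pool[pivot]   # swap the chosen item onto the pivot
        chosen = pool[pivot]
        pivot += 1
        return chosen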
2
4
0
0
I am building a quiz application which pulls questions randomly from a pool of questions. However, there is a requirement that the pool of questions be limited to questions that the user has not already seen. If, however, the user has seen all the questions, then the algorithm should "reset" and only show questions the user has seen once. That is, always show the user questions that they have never seen or, if they have seen all of them, always show them questions they have seen less frequently before showing questions they have seen more frequently. The list (L) is created in a such a way that the following is true: any value in the list (I), may exist once or be repeated in the list multiple times. Let's define another value in the list, J, such that it's not the same value as I. Then 0 <= abs(frequency(I) - frequency(J)) <= 1 will always be true. To put it another way: if a value is repeated in the list 5 times, and 5 times is the maximum number of times any value has been repeated in the list, then all values in the list will be repeated either 4 or 5 times. The algorithm should return all values in the list with frequency == 4 before it returns any with frequency == 5. Sorry this is so verbose, I'm struggling to define this problem succinctly. Please feel free to leave comments with questions and I will further qualify if needed. Thanks in advance for any help you can provide. Clarification Thank you for the proposed answers so far. I don't think any of them are there yet. Let me further explain. I'm not interacting with the user and asking them questions. I'm assigning the question ids to an exam record so that when the user begins an exam, the list of questions they have access to is determined. Therefore, I have two data structures to work with: List of possible question ids that the user has access to List of all question ids this user has ever been assigned previously. This is the list L described above. So, unless I am mistaken, the algorithm/solution to this problem will need to involve list &/or set based operations using the two lists described above. The result will be a list of question ids I can associate with the exam record and then insert into the database.
Efficient algorithm for determining values not as frequent in a list
1
0
1
0
0
314
23,595,346
2014-05-11T16:53:00.000
1
0
1
0
0
python
0
23,595,617
0
2
0
false
0
0
Both Python 2.7 and Python 3 coexist happily on one machine. If you name the scripts .py for those you would like to run with Python 2.7 and .py3 for those you would like to run with Python 3, then you can just invoke the scripts by typing their names or by double clicking. These associations are set up by default by the installer. You can force the Python version on the command line, assuming both are on the path, by typing python or python3 before any script file regardless of the extension. It is also worth looking at virtualenv for your testing. N.B. For installing from PyPI you can use pip or pip3 and the install will be done for the appropriate version.
1
0
0
0
I have two versions of Python installed in my computer, Python 3.4 and Python 2.7, and I use both of these installations. When I run a script, how do I choose which versions I want to use? May I rename the names of the executables for that (Python.exe -> Python27.exe)? Thanks.
Multiple Python versions in one computer (windows 7)
0
0.099668
1
0
0
327
23,610,748
2014-05-12T13:44:00.000
4
0
0
1
0
python,google-app-engine
0
23,612,408
0
1
0
true
1
0
There's no 100% sure way to assess the number of frontend instance hours. An instance can serve more than one request at a time. In addition, the algorithm of the scheduler (the system that starts the instances) is not documented by Google. Depending on how demanding your code is, I think you can expect a standard F1 instance to hold up to 5 requests in parallel, that's a maximum. 2 is a safer bet. My recommendation, if possible, would be to simulate standard interaction on your website with limited number of users, and see how the number of instances grow, then extrapolate. For example, let's say you simulate 100 requests per minute during 2 hours, and you see that GAE spawns 5 instances for that, then you can extrapolate that a continuous load of 3000 requests per minute would require 150 instances during the same 2 hours. Then I would double this number for safety, and end up with an estimate of 300 instances.
1
0
0
1
We are developing a Python server on Google App Engine that should be capable of handling incoming HTTP POST requests (around 1,000 to 3,000 per minute in total). Each of the requests will trigger some datastore writing operations. In addition we will write a web-client as a human-usable interface for displaying and analyse stored data. First we are trying to estimate usage for GAE to have at least an approximation about the costs we would have to cover in future based on the number of requests. As for datastore write operations and data storage size it is fairly easy to come up with an approximate number, though it is not so obvious for the frontend and backend instance hours. As far as I understood each time a request is coming in, an instance is being started which then is running for 15 minutes. If a request is coming in within these 15 minutes, the same instance would have been used. And now it is getting a bit tricky I think: if two requests are coming in at the very same time (which is not so odd with 3,000 requests per minute), is Google firing up another instance, hence Google would count an addition of (at least) 0.15 instance hours? Also I am not quite sure how a web-client that is constantly performing read operations on the datastore in order to display and analyse data would increase the instance hours. Does anyone know a reliable way of counting instance hours and creating meaningful estimations? We would use that information to know how expensive it would be to run an application on GAE in comparison to just ordering a web server.
GAE: how to quantify Frontend Instance Hours usage?
0
1.2
1
0
0
1,362
23,613,260
2014-05-12T15:37:00.000
1
0
0
0
1
javascript,python,css,selenium
1
23,649,576
0
1
0
true
0
0
Overflow takes only one of five values (overflow: auto | hidden | scroll | visible | inherit). Use visible
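One thing worth trying from Python Selenium is patching the styles with injected JavaScript before the click; driver is assumed to be an existing webdriver instance and the selectors come from the question's markup:

    button = driver.find_element_by_id("button-aaa")
    driver.execute_script("document.body.style.overflow = 'visible';")
    driver.execute_script("arguments[0].setAttribute('aria-hidden', 'false');", button)
    button.click()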
1
1
0
0
I am trying to use Python Selenium to click a button on a webpage, but Selenium is giving exception "Element is not currently visible and so may not be interacted with". The DOM structure is quite simple: <body style="overflow: hidden;"> ... <div aria-hidden="false" style="display: block; ..."> ... <button id="button-aaa" aria-hidden="true" style="..."> ... </div> ... </body> I have searched Google and Stackoverflow. Some users say Selenium cannot click a web element which is under a parent node with overflow: hidden. But surprisingly, I found that Selenium is able to click some other buttons which are also under a parent node with overflow: hidden. Anyway, I have tried to use driver.execute_script to change the <body> style to overflow: none, but Selenium is still unable to click this button. I have also tried to change the button's aria-hidden="true" to aria-hidden="false", but Selenium still failed to click. I have also tried to add "display: block;" to the button's style, and tried all different combination of style changes, but Selenium still failed to click. I have used this commands to check the button: buttonelement.is_displayed(). It always returns False no matter what style I change in the DOM. The button is clearly visually visible in the Firefox browser, and it is clickable and functioning. By using the Chrome console, I am able to select the button using ID. May I know how can I check what is causing a web element to be invisible to Python Selenium?
How to check what is causing a web element to be invisible to Python Selenium?
0
1.2
1
0
1
1,022
23,623,717
2014-05-13T05:54:00.000
0
0
0
0
0
python,python-2.7,ubuntu,graphviz
1
34,070,637
0
2
0
false
0
0
This is a bug in latest ubuntu xdot package, please use xdot in pip repository: sudo apt-get remove xdot sudo pip install xdot
1
0
1
0
Lately I have observed that xdot utility which is implemented in python to view dot graphs is giving me following error when I am trying to open any dot file. File "/usr/bin/xdot", line 4, in xdot.main() File "/usr/lib/python2.7/dist-packages/xdot.py", line 1947, in main win.open_file(args[0]) File "/usr/lib/python2.7/dist-packages/xdot.py", line 1881, in open_file self.set_dotcode(fp.read(), filename) File "/usr/lib/python2.7/dist-packages/xdot.py", line 1863, in set_dotcode if self.widget.set_dotcode(dotcode, filename): File "/usr/lib/python2.7/dist-packages/xdot.py", line 1477, in set_dotcode self.set_xdotcode(xdotcode) File "/usr/lib/python2.7/dist-packages/xdot.py", line 1497, in set_xdotcode self.graph = parser.parse() File "/usr/lib/python2.7/dist-packages/xdot.py", line 1167, in parse DotParser.parse(self) File "/usr/lib/python2.7/dist-packages/xdot.py", line 977, in parse self.parse_graph() File "/usr/lib/python2.7/dist-packages/xdot.py", line 986, in parse_graph self.parse_stmt() File "/usr/lib/python2.7/dist-packages/xdot.py", line 1032, in parse_stmt self.handle_node(id, attrs) File "/usr/lib/python2.7/dist-packages/xdot.py", line 1142, in handle_node shapes.extend(parser.parse()) File "/usr/lib/python2.7/dist-packages/xdot.py", line 612, in parse w = s.read_number() File "/usr/lib/python2.7/dist-packages/xdot.py", line 494, in read_number return int(self.read_code()) ValueError: invalid literal for int() with base 10: '206.05' I have observed few things; The same utility works fine for me on previous ubuntu versions(12.04, 13.04). The problem is when this is run on ubuntu 14.04. I am not sure if it is an ubuntu problem. As per the trace log above the int() function has encounterd some float value which is causing the exception at the end of log.But the contents of my dot files does not contain any float value, so how come the trace shows ValueError: invalid literal for int() with base 10: '206.05'? Any clue will be helpful.
Graphviz xdot utility fails to parse graphs
0
0
1
0
0
1,384
23,645,572
2014-05-14T04:19:00.000
5
0
0
1
0
python,django,google-app-engine
0
23,646,875
0
1
0
true
1
0
In simple words, these are two versions of the datastore API, db being the older one and ndb the newer one. The difference is in the models; in the datastore itself the stored data is the same thing. NDB provides advantages like handling caching (memcache) itself, and ndb is faster than db, so you should definitely go with ndb. To use the ndb datastore, just subclass ndb.Model when defining your models.
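A minimal ndb model, just to illustrate the switch (the kind and property names are made up):

    from google.appengine.ext import ndb

    class Score(ndb.Model):
        player = ndb.StringProperty()
        points = ndb.IntegerProperty()
        created = ndb.DateTimeProperty(auto_now_add=True)

    key = Score(player='alice', points=10).put()
    score = key.get()   # ndb transparently uses its context cache / memcache here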
1
2
0
0
I have been going through the Google App Engine documentation (Python) now and found two different types of storage. NDB Datastore DB Datastore Both quota limits (free) seem to be same, and their database design too. However NDB automatically cache data in Memcache! I am actually wondering when to use which storage? What are the general practices regarding this? Can I completely rely on NDB and ignore DB? How should it be done? I have been using Django for a while and read that in Django-nonrel the JOIN operations can be somehow done in NDB! and rest of the storage is used in DB! Why is that? Both storages are schemaless and pretty well use same design.. How is that someone can tweak JOIN in NDB and not in DB?
App Engine: Difference between NDB and Datastore
0
1.2
1
1
0
568
23,646,485
2014-05-14T05:42:00.000
1
1
0
0
1
python,pyramid,mako,waitress
0
23,654,584
1
2
0
false
1
1
Oh my, I found the thing... I had <%block cached="True" cache_key="${self.filename}+body"> and the file inclusion was inside of that block. Cheerious:)
1
0
0
0
I've a strange issue. pserve --reload has stopped reloading the templates. It is reloading if some .py-file is changing, but won't notice .mak-file changes anymore. I tried to fix it by: Checking the filepermissions Creating the new virtualenv, which didn't help. Installing different version of mako without any effect. Checking that the python is used from virtualenv playing with the development.ini. It has the flag: pyramid.reload_templates = true Any idea how to start debugging the system? Versions: Python 2.7 pyramid 1.5 pyramid_mako 1.02 mako 0.9.1 Yours Heikki
Pyramid Mako pserver --reload not reloading in Mac
0
0.099668
1
0
0
216
23,680,786
2014-05-15T14:12:00.000
2
0
1
0
0
python,ipython
0
30,603,070
0
1
0
false
0
0
If you are connected to the kernel via the IPython console client and are on a Unix-like OS, you can detach using Ctrl-\
1
4
0
0
Can someone please tell me how I can detach from an IPython kernel without terminating it? I see in the documentation of quit() that there is a parameter keep_kernel, but unfortunately quit(keep_kernel=True) won't work.
Detach from IPython kernel without terminating it
0
0.379949
1
0
0
682
23,691,819
2014-05-16T02:43:00.000
0
0
0
0
0
python,python-3.x,tkinter
0
23,704,276
0
1
1
false
0
1
I cannot answer this specifically for 'unlock' event (if there even is one in the GUI; I can't find one by cursory search.). But I think the real answer is to change the question. Having a program simply take focus when user unlocks the display is very un-Windows-ish behavior. The Windows user expects to see the desktop just as s/he left it before the display was locked -- why does s/he want to see your program on top when unlocking, regardless of why Windows might have locked the display? Maybe you want to recast your program as something that runs in the background and pops up a notification (in the notification area on the right side of toolbar) when it wants user to do something?
1
0
0
0
I have a tkinter program written in python 3.3.3. I see myself in the need of making the the program get focus when the user unlocks the computer. I don't really know how to go ahead and start with this, users have a .exe version of the program that I created with cxfreeze. I f i need to modify the .py and create another .exe, that wouldn't be a problem. After some research I found that one can use the ctypes module to lock the computer, but it's not very helpful because i need to know if it is locked or unlocked. I also saw commands from win32com, but i can't seem to be able to find a way to trigger a command when it gets unlocked. What is the best way to get focus on my program after the computer is unlocked?? Any help is greatly appreciated.
Get focus on tkinter program after pc is unlocked
0
0
1
0
0
100
23,695,305
2014-05-16T12:45:00.000
0
0
0
0
0
python,scipy,openshift
0
33,723,529
0
1
0
false
0
0
Either add scipy to your setup.py file, or log in to your OpenShift gear (rhc ssh yourapp) and install it manually: pip install scipy
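For the setup.py route, a minimal sketch (the name and version are placeholders):

    from setuptools import setup

    setup(
        name='myapp',
        version='1.0',
        install_requires=['scipy'],
    )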
1
0
0
0
I would like to install scipy on openshift but I don't know how to do it. I'm an absolute beginner with python and openshift. Therefore it would be great if somebody could provide a step by step explanation on how to proceed.
install scipy on openshift
0
0
1
0
0
232
23,716,904
2014-05-17T22:58:00.000
1
0
0
0
1
python,numpy,scipy,signal-processing,fft
0
23,717,381
0
1
0
true
0
0
If your IFFT's length is different from that of the FFT, and the length of the IFFT isn't composed of only very small prime factors (2,3,etc.), then the efficiency can drop off significantly. Thus, this method of resampling is only efficient if the two sample rates are different by ratios with small prime factors, such as 2, 3 and 7 (hint).
1
0
1
0
I'm trying to resample a 1-D signal using an FFT method (basically, the one from scipy.signal). However, the code is taking forever to run, even though my input signal is a power of two in length. After looking at profiling, I found the root of the problem. Basically, this method takes an FFT, then removes part of the fourier spectrum, then takes an IFFT to bring it back to the time domain at a lower sampling rate. The problem is that that the IFFT is taking far longer to run than the FFT: ncalls tottime percall cumtime percall filename:lineno(function) 1 6263.996 6263.996 6263.996 6263.996 basic.py:272(ifft) 1 1.076 1.076 1.076 1.076 basic.py:169(fft) I assume that this has something to do with the amount of fourier points remaining after the cutoff. That said, this is an incredible slowdown so I want to make sure that: A. This behavior is semi-reasonable and isn't definitely a bug. B. What could I do to avoid this problem and still downsample effectively. Right now I can pad my input signal to a power of two in order to make the FFT run really quickly, but not sure how to do the same kind of thing for the reverse operation. I didn't even realize that this was an issue for IFFTs :P
IFFT taking orders of magnitude more than FFT
1
1.2
1
0
0
285
23,729,919
2014-05-19T04:53:00.000
2
0
0
0
0
python,theano,summarization,deep-learning
0
23,765,727
0
2
0
false
0
0
I think you need to be a little more specific. When you say "I am unable to figure to how exactly the summary is generated for each document", do you mean that you don't know how to interpret the learned features, or don't you understand the algorithm? Also, "deep learning techniques" covers a very broad range of models - which one are you actually trying to use? In the general case, deep learning models do not learn features that are humanly intepretable (albeit, you can of course try to look for correlations between the given inputs and the corresponding activations in the model). So, if that's what you're asking, there really is no good answer. If you're having difficulties understanding the model you're using, I can probably help you :-) Let me know.
1
3
1
1
I am trying to summarize text documents that belong to legal domain. I am referring to the site deeplearning.net on how to implement the deep learning architectures. I have read quite a few research papers on document summarization (both single document and multidocument) but I am unable to figure to how exactly the summary is generated for each document. Once the training is done, the network stabilizes during testing phase. So even if I know the set of features (which I have figured out) that are learnt during the training phase, it would be difficult to find out the importance of each feature (because the weight vector of the network is stabilized) during the testing phase where I will be trying to generate summary for each document. I tried to figure this out for a long time but it's in vain. If anybody has worked on it or have any idea regarding the same, please give me some pointers. I really appreciate your help. Thank you.
Text summarization using deep learning techniques
0
0.197375
1
0
0
2,167
23,744,128
2014-05-19T17:52:00.000
-1
0
0
0
0
python,sqlite,unit-testing
0
23,744,831
0
1
0
false
0
0
I don't understand your problem. Why do you care that it's serverless? My standard technique for this is: use SQLAlchemy in tests, configure it with sqlite:/// or sqlite:///:memory:
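A minimal sketch of that configuration in a test setup (Base stands in for your app's declarative base):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    engine = create_engine('sqlite:///:memory:')   # fresh throwaway DB per test run
    # Base.metadata.create_all(engine)             # create the app's tables here
    Session = sessionmaker(bind=engine)
    session = Session()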
1
0
0
0
Hi I am trying to write python functional tests for our application. It involves several external components and we are mocking them all out.. We have got a better framework for mocking a service, but not for mocking a database yet. sqlite is very lite and thought of using them but its a serverless, is there a way I can write some python wrapper to make it a server or I should look at other options like HSQL DB?
how to do database mocking or make sqlite run on localhost?
1
-0.197375
1
1
0
535
23,754,108
2014-05-20T07:59:00.000
0
0
0
0
0
python,database,sockets,pyside,qtsql
0
23,754,331
0
1
0
false
0
1
I'm not familiar with PySide, but the idea is that you need to build a function that synchronizes your local database with the online database whenever an internet connection is available. On the server side you need to build a script that can handle requests (POST/GET) to receive the scores and write them to the database; I suggest MySQL. Hope that helps.
1
0
0
0
I am working on my Python project using PySide as my Ui language. My projet is a game which require an internet connection to update the users'score and store in the database. My problem is how can I store my database in the internet. I mean that all users can access this information when they are connected to an internet (when they are playing my game) and the information/database must be updated all the time. I am not sure which database is the most appropriate, how to store this information/database in the internet, how to access this information. I am using Python and PySide. For the database, I currently use PySide.QtSql . Thank you for answer(s) or suggestion(s).
Using Database with Pyside and Socket
0
0
1
1
0
193
23,763,365
2014-05-20T14:59:00.000
2
0
0
0
0
python,django
0
23,763,477
0
2
0
false
1
0
This has nothing to do with the Django template, but with how you define the variable in the first place. Backslashes are only "interpreted" when you specify them as literals in your Python code. So given your Python code above, you can either use the double backslash or use a raw string. If you were loading the string "fred\xbf" from your database and outputting it in a template, it would not be "escaped".
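For example, both literals below hold a real backslash character, and a Django template renders it untouched:

    result = "fred\\xbf"   # escaped backslash
    result = r"fred\xbf"   # raw string literal, same value
    # in the template, {{ result }} then displays: fred\xbf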
1
2
0
0
I'm using Python 2.7 and Django 1.4 If I have a string variable result = "fred\xbf", how do I tell the Django template to display "fred\xbf" rather than process the backslash and display some strange character? I know I can escape the backslash: "fred\\xbf" , but can I get the Django template to understand I want the backslash not to be processed?
How do I tell Python not to interpret backslashes in strings?
0
0.197375
1
0
0
2,703
23,764,710
2014-05-20T15:56:00.000
2
1
1
0
0
python,open-source
0
23,764,809
0
1
0
true
0
0
Not sure if this is an appropriate question for SO - you might get voted down. But ... Whenever I have seen this question, the answer is almost always: find a project you like / you're interested in find something in that project that you feel you can fix / enhance (have a look through their bug tracker) fork the project (github makes this easy) make the change, find out what is appropriate for that project (documentation, unit tests, ...) submit the change back to the project (github has "request pull") Good luck!
1
0
0
0
I know Python and want to contribute to open source projects that feature Python. Can anyone help me with where to contribute and how? I already googled it and found GitHub and code.google as good places to contribute, but I don't know how to start. Suggest how to get started.
how to contribute on open source project featuring python
0
1.2
1
0
0
435
23,781,823
2014-05-21T11:24:00.000
1
0
1
0
0
python,file,binary
0
23,781,948
0
2
0
true
0
0
Under Python 2, the only difference binary mode makes is how newlines are translated when writing; \n would be translated to the platform-dependent line separator. In other words, just write your ASCII byte strings directly to your binary file; there is no difference between your ASCII data and the binary data as far as Python is concerned.
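So mixing the two can be as simple as this (the field layout is invented):

    import struct

    with open('out.bin', 'wb') as f:
        f.write(b'HEADER v1\n')                 # ASCII part, written as-is
        f.write(struct.pack('<IH', 1234, 42))   # binary part, forced little-endian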
1
0
0
0
I have this wired protocol I am implementing and I have to write binary and ASCII data into the same file. How can I do this, or at least end up with a file containing mixed ASCII and binary data? I know that open("myfile", "rb") opens myfile in binary mode, but I can't find a solution for how to go about this!
How to write binary and asci data into a file in python?
0
1.2
1
0
0
782
23,793,628
2014-05-21T20:56:00.000
1
0
1
0
0
python,nlp,nltk
0
23,816,393
0
3
0
false
0
0
I do not think your "algo" is even doing entity recognition... however, stretching the problem you presented quite a bit, what you want to do looks like coreference resolution in coordinated structures containing ellipsis. Not easy at all: start by googling for some relevant literature in linguistics and computational linguistics. I use the standard terminology from the field below. In practical terms, you could start by assigning the nearest antecedent (the most frequently used approach in English). Using your examples: first extract all the "entities" in a sentence from the entity list, identify antecedent candidates ("litigation", etc.). This is a very difficult task, involving many different problems... you might avoid it if you know in advance the "entities" that will be interesting for you. finally, you assign (resolve) each anaphora/cataphora to the nearest antecedent.
1
0
1
0
First: Any recs on how to modify the title? I am using my own named entity recognition algorithm to parse data from plain text. Specifically, I am trying to extract lawyer practice areas. A common sentence structure that I see is: 1) Neil focuses his practice on employment, tax, and copyright litigation. or 2) Neil focuses his practice on general corporate matters including securities, business organizations, contract preparation, and intellectual property protection. My entity extraction is doing a good job of finding the key words, for example, my output from sentence one might look like this: Neil focuses his practice on (employment), (tax), and (copyright litigation). However, that doesn't really help me. What would be more helpful is if i got an output that looked more like this: Neil focuses his practice on (employment - litigation), (tax - litigation), and (copyright litigation). Is there a way to accomplish this goal using an existing python framework such as nltk (after my algo extracts the practice areas) can I use ntlk to extract the other words that my "practice areas" modify in order to get a more complete picture?
How to extract meaning from sentences after running named entity recognition?
0
0.066568
1
0
0
1,842
23,801,986
2014-05-22T08:49:00.000
0
0
1
0
0
python,win32com
0
23,802,237
0
2
0
false
0
0
You can have multi dimensional arrays or objects, your choice :) arr = []; arr.append([1,2]); print arr; would output [[1,2]]
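Since the question asks about win32com specifically, here is a rough sketch; a Range's Value already comes back as a tuple of row tuples, which works as a 2D array (the file path and range are placeholders):

    import win32com.client

    excel = win32com.client.Dispatch('Excel.Application')
    wb = excel.Workbooks.Open(r'C:\data\book.xlsx')
    ws = wb.Worksheets(1)
    data = ws.Range('A1:C10').Value   # tuple of row tuples, e.g. data[0][1]
    wb.Close(False)
    excel.Quit()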
1
1
0
0
I need to read values from an Excel worksheet into a 2D array. Can anyone tell me how to do this using Python win32com?
Two dimensional array in python
0
0
1
0
0
213
23,807,459
2014-05-22T12:55:00.000
1
0
1
1
0
python,vm-implementation
0
30,846,061
0
2
0
false
0
0
It may depend on Python implementation such as Pypy, Jython. In CPython, you have to use a separate process if you want an independent interpreter otherwise at the very least GIL is shared. multiprocessing, concurrent.futures modules allow you to run arbitrary Python code in separate processes and to communicate with the parent easily.
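A small sketch with concurrent.futures; the code string being executed is just an example:

    from concurrent.futures import ProcessPoolExecutor

    def run_isolated(source):
        # runs in a child process, so it has its own interpreter state
        namespace = {}
        exec(source, namespace)
        return namespace.get('result')

    if __name__ == '__main__':
        with ProcessPoolExecutor() as pool:
            print(pool.submit(run_isolated, 'result = 2 ** 10').result())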
1
3
0
1
Does anyone know how to launch a new python virtual machine from inside a python script, and then interact with it to execute code in a completely separate object space? In addition to code execution, I'd like to be able to access the objects and namespace on this virtual machine, look at exception information, etc. I'm looking for something similar to python's InteractiveInterpreter (in the code module), but as far as I've been able to see, even if you provide a separate namespace for the interpreter to run in (through the locals parameter), it still shares the same object space with the script that launched it. For instance, if I change an attribute of the sys module from inside InteractiveInterpreter, the change takes affect in the script as well. I want to completely isolate the two, just like if I was running two different instances of the python interpreter to run two different scripts on the same machine. I know I can use subprocess to actually launch python in a separate process, but I haven't found any good way to interact with it the way I want. I imagine I could probably invoke it with '-i' and push code to it through it's stdin stream, but I don't think I can get access to its objects at all.
Programmatically launch and interact with python virtual machine
0
0.099668
1
0
0
923
23,823,519
2014-05-23T07:35:00.000
2
0
1
0
0
python,setuptools,distribute
0
24,061,938
0
1
0
true
0
0
The situation is legitimately confusing as there are too many installers available for Python and the landscape has changed recently. Distribute was a fork of setuptools which itself is an extension to distutils. They merged back with setuptools in 2013. Your book is most likely out of date. The documentation of setuptools and distribute has been a confusing mess since it assumes you already have intimate knowledge of distutils. Distutils2 was an abandoned effort to get a more capable distutils into the Py3.3 standard lib. Since distutils still lacks key features like generating executable wrapper scripts you would be best off working with a recent version of setuptools. Read through the distutils documentation first as setuptools is a superset of its functionality. You can't depend on your users having setuptools installed so it is helpful to include the ez_setup.py bootstrapping script with your code. This will let your setup.py install setuptools if needed.
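The bootstrap usually amounts to a couple of extra lines at the top of setup.py (the package metadata is a placeholder):

    from ez_setup import use_setuptools
    use_setuptools()                    # fetches/installs setuptools if it is missing

    from setuptools import setup

    setup(name='mypackage', version='0.1', packages=['mypackage'])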
1
0
0
0
I am following a Python book which says to install Distribute. However I am confused: should I install Distribute or Setuptools, as both of them have merged now? Is there still a difference between the two? Since I have installed pip, and that automatically installs setuptools, I want to know how I can check whether Distribute or Setuptools is installed.
What should I install Distribute or Setuptools
0
1.2
1
0
0
55
23,831,422
2014-05-23T14:08:00.000
1
0
0
0
0
python,binary,endianness
0
23,831,750
0
3
0
false
0
0
Note: I assume Python 3. Endianness is not a concern when writing ASCII or byte strings. The order of the bytes is already set by the order in which those bytes occur in the ASCII/byte string. Endianness is a property of encodings that maps some value (e.g. a 16 bit integer or a Unicode code point) to several bytes. By the time you have a byte string, the endianness has already been decided and applied (by the source of the byte string). If you were to write unicode strings to file not opened with b mode, the question depends on how those strings are encoded (they are necessarily encoded, because the file system only accept bytes). The encoding in turn depends on the file, and possibly on the locale or environment variables (e.g. for the default sys.stdout). When this causes problems, the problems extend beyond just endianness. However, your file is binary, so you can't write unicode directly anyway, you have to explicitly encode and decode. Do this with any fixed encoding and there won't be endianness issues, as an encoding's endianness is fixed and part of the definition of the encoding.
1
4
0
0
When using file.write() with the 'wb' flag, does Python use big or little endian, or the sys.byteorder value? How can I be sure that the endianness is not random? I am asking because I am mixing ASCII and binary data in the same file, and for the binary data I use struct.pack() and force it to little endian, but I am not sure what happens to the ASCII data! Edit 1: since the downvote, I'll explain my question more! I am writing a file with ASCII and binary data on an x86 PC; the file will be sent over the network to another computer which is not x86, a PowerPC, which is big-endian. How can I be sure that the data will be the same when parsed on the PowerPC? Edit 2: still using Python 2.7
What endianness does Python use to write into files?
1
0.066568
1
0
0
8,145
23,846,899
2014-05-24T15:45:00.000
3
0
1
0
0
python,windows-7,installation,lighttable
0
29,813,451
0
1
0
false
0
1
It's pretty easy to do. Unzip it wherever you like. Move the LightTable directory (from inside LightTableWin) to your Program Files (x86) directory. 2.1 If you are using Windows Explorer, you'll need to start Windows Explorer as an administrator (found by right clicking the program icon). Open your Program Files (x86)\LightTable directory, right click and drag the LightTable.exe file into the same directory and select Create Shortcut. Left click your shortcut and drag it to your start menu. When it asks if you want to pin it to your start menu, select yes. Click on Start, and then click on the Light Table shortcut. Use Light Table.
1
2
0
0
I am trying to download/install Light Table. I want it to show up in the start menu. When downloading light table, it shows up as a Zip folder in the TEMP file. I've extracted the files and am unable to get it to show up in the start menu. Normally the programs I download have an installer that does this automatically. Light Table doesn't seem to have this. I'm sure I can use it from the TEMP folder, but would really like it in the start menu, program files folder or C drive. I've only done basic use of PCs (gaming, web browsing, MS Office).
Install Light Table editor on Windows 7
0
0.53705
1
0
0
2,589
23,859,613
2014-05-25T20:22:00.000
4
0
0
0
0
python,pyqt,pyqt4
0
41,981,238
0
6
0
false
0
1
You may simply try this: os.startfile(whatever_valid_filename) This starts the default OS application for whatever_valid_filename, meaning Explorer for a folder name, default notepad for a .txt file, etc.
1
4
0
0
I have searched a lot and I know how to open a directory dialog window. But what I am looking for is the method to open a directory folder under windows OS, just like you right click one of your local folder and select open. Any suggestions?
PyQt - How to open a directory folder?
0
0.132549
1
0
0
21,145
23,869,132
2014-05-26T11:31:00.000
1
0
1
0
1
python,excel,vba
1
23,869,285
0
1
0
false
0
0
It is xlCalculationAutomatic or you could use the number -4105 in Python.
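From Python via win32com that would look roughly like this; the workbook path is a placeholder, and a workbook generally has to be open before Calculation can be set:

    import win32com.client

    excel = win32com.client.Dispatch('Excel.Application')
    wb = excel.Workbooks.Open(r'C:\data\book.xlsx')
    excel.Calculation = -4105   # xlCalculationAutomatic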
1
1
0
0
How do I set the Excel workbook calculation mode to automatic in both VBA and a Python script? I tried Application.Calculation = xlCalculateAutomatic but it is not working. It throws me the error below: global name 'xlCalculateAutomatic' is not defined.
workbook calculation to automatic in both vba and python
0
0.197375
1
0
0
93
23,870,365
2014-05-26T12:38:00.000
7
0
1
0
1
python,django,pycharm
0
32,492,931
0
6
0
false
1
0
I ran into the same problem today. In the end I solved it by: creating the project on the command line, creating the app on the command line, and then just opening the existing files and coding the Django project in PyCharm. The benefits of this approach: you don't need the Professional edition of PyCharm, and you can still code your Django project in PyCharm.
1
51
0
1
I'm new in this area so I have a question. Recently, I started working with Python and Django. I installed PyCharm Community edition as my IDE, but I'm unable to create a Django project. I looked for some tutorials, and there is an option to select "project type", but in the latest version this option is missing. Can someone tell me how to do this?
How to set up a Django project in PyCharm
0
1
1
0
0
89,573
23,884,156
2014-05-27T08:08:00.000
1
0
0
0
0
python,gstreamer
0
23,921,536
0
1
0
false
0
1
I'd suggest to file a bug and ideally make your test files available. If you want to track this down yourself take a look at the GST_DEBUG="*:3" ./your-app output to see which element is emitting the warning.
1
0
0
0
I'm writing a mediaplayer-gui fitting some needs of a medialibrary containing classical music only. Language is python3/tkinter. One backend is gstreamer1.0, playbin (seems to be the only one, playing gapless). When playbin gets the uri of a file with 5.0 channels (FRONT_LEFT,FRONT_RIGHT,FRONT_CENTER,REAR_LEFT,REAR_RIGHT) it gives following warning: ** (python3:13745): WARNING **: Unpositioned audio channel position flag set but channel positions present and plays the file downmixed to stereo. 5.0 is most common in classical-music media(LFE is mostly unwanted). Which gstreamer-object is the one, i can tell about channel-layout and what signal do i have to connect to, to get that object? Additional info: 5.1 gives the same warning, but plays without downmixing; 5.0 using gstplay-1.0 from commandline gives warning & downmixing; using gst123 based on gstreamer0.1 plays everything right
how to make playbin of gstreamer1.0 playing multichannel-audio 5.0 playing without downmixing to stereo
0
0.197375
1
0
0
193
23,891,195
2014-05-27T13:46:00.000
-1
0
1
0
0
python
0
23,891,448
0
3
0
true
0
0
I'll answer my own question since I got an idea while writing it, and maybe someone will need this. I added a symlink from that folder to my site-packages folder like this: ln -s /home/me/python/pyutils /path/to/site-packages/pyutils Then, since the PYTHONPATH contains the /path/to/site-packages folder, and I have a pyutils folder in it with an __init__.py, I can just import like: from pyutils import mymodule And the rest of /home/me/python is not on the PYTHONPATH
1
1
0
0
I have a case for needing to add a path to a python package to sys.path (instead of its parent directory), but then refer to the package normally by name. Maybe that's weird, but let me exemplify what I need and maybe you guys know how to achieve that. I have all kind of experimental folders, modules, etc inside a path like /home/me/python. Now I don't want to add that folder to my sys.path (PYTHONPATH) since there are experimental modules which names could clash with something useful. But, inside /home/me/python I want to have a folder like pyutils. So I want to add /home/me/python/pyutils to PYTHONPATH, but, be able to refer to the package by its name pyutils...like I would have added /home/me/python to the path.
Add path to python package to sys.path
0
1.2
1
0
0
2,392
23,897,254
2014-05-27T19:16:00.000
3
0
1
0
0
python,multiprocessing,lsf
0
23,901,931
0
1
0
true
0
0
One (very simplified) way to think of LSF is as a system that launches a process and lets the process know how many cores (potentially on different hosts) have been allocated to it. LSF can't prevent your program from doing something stupid (like for example, if multiple instances of it run at the same time, and one instance overwrites the other's output). Some common ways of using LSF. Run 6 sequential jobs that process one file each. These 6 can run in parallel. Have a dependant seventh job that runs after the previous 6 finish, which will combine the output of the previous 6 into a single output. Run a parallel job that is assigned 6 cores on a single host. Seems that the python multiprocessing module would fit in well here. The env variable $LSB_MCPU_HOSTS will tell you how many cores are assigned to the job, so you know how big to make the pool. Run a parallel jobs that is assigned 6 cores, and could run on multiple hosts. Again, your process must be able to start itself on these other hosts. (or use blaunch to help out) I'm not sure which of these 3 ways best fits you needs. But I hope that the explanation helps you decide.
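A rough sketch of sizing a multiprocessing pool from that variable; its value looks like 'hostA 4 hostB 2', so the even-numbered fields are slot counts (the file names and the parse body are placeholders):

    import os
    from multiprocessing import Pool

    def parse(path):
        # parse one ASCII file and return a dict (placeholder body)
        return {path: None}

    if __name__ == '__main__':
        fields = os.environ.get('LSB_MCPU_HOSTS', '').split()
        ncores = sum(int(n) for n in fields[1::2]) or 1   # fall back to 1 outside LSF
        pool = Pool(ncores)
        results = pool.map(parse, ['a.txt', 'b.txt', 'c.txt', 'd.txt', 'e.txt', 'f.txt'])
        pool.close()
        pool.join()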
1
3
0
0
I have a single task to complete X number of times in Python and I will be using LSF to speed that up. Is it better to submit a job containing several Python scripts which can be run separately in parallel or one Python script that utilizes the multiprocessor module? My issue is I don't trust LSF to know how to split up the Python code into several processes (I'm not sure how LSF does this). However, I also don't want several Python scripts floating around as that seems inefficient and disorganized. The task at hand involves parsing six very large ASCII files and saving the output in a Python dict for later use. I want to parse the six files in parallel (they take about 3 minutes each). Does LSF allow Python to tell it something like "Hey, here's one script, but you're going to split it into these six processes"? Does LSF need Python to tell it that or does it already know how to do that? Let me know if you need more info. I have trouble balancing between "just enough" and "too much" background.
LSF: Submit one Python script that uses multiprocessor module *or* submit several scripts at once that are "pre-split"?
0
1.2
1
0
0
1,380
23,921,986
2014-05-28T21:13:00.000
2
0
0
0
0
python,web-scraping,beautifulsoup,web-crawler
0
23,922,228
0
2
0
false
1
0
You're basically asking "how do I write a search engine." This is... not trivial. The right way to do this is to just use Google's (or Bing's, or Yahoo!'s, or...) search API and show the top n results. But if you're just working on a personal project to teach yourself some concepts (not sure which ones those would be exactly though), then here are a few suggestions: search the text content of the appropriate tags (<p>, <div>, and so forth) for relevant keywords (duh) use the relevant keywords to check for the presence of tags that might contain what you're looking for. For example, if you're looking for a list of things, then a page containing <ul> or <ol> or even <table> might be a good candidate build a synonym dictionary and search each page for synonyms of your keywords too. Limiting yourself to "US" might mean an artificially low ranking for a page containing just "America" keep a list of words which are not in your set of keywords and give a higher ranking to pages which contain the most of them. These pages are (arguably) more likely to contain the answer you're looking for good luck (you'll need it)!
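A toy version of the keyword-scoring idea with BeautifulSoup; the weights are arbitrary:

    from bs4 import BeautifulSoup

    def relevance(html, keywords):
        soup = BeautifulSoup(html)
        text = soup.get_text().lower()
        score = sum(text.count(k.lower()) for k in keywords)
        if soup.find(['ul', 'ol', 'table']):   # page contains a list-like structure
            score += 5
        return score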
1
8
0
0
I'm trying to teach myself a concept by writing a script. Basically, I'm trying to write a Python script that, given a few keywords, will crawl web pages until it finds the data I need. For example, say I want to find a list of venemous snakes that live in the US. I might run my script with the keywords list,venemous,snakes,US, and I want to be able to trust with at least 80% certainty that it will return a list of snakes in the US. I already know how to implement the web spider part, I just want to learn how I can determine a web page's relevancy without knowing a single thing about the page's structure. I have researched web scraping techniques but they all seem to assume knowledge of the page's html tag structure. Is there a certain algorithm out there that would allow me to pull data from the page and determine its relevancy? Any pointers would be greatly appreciated. I am using Python with urllib and BeautifulSoup.
Web scraping without knowledge of page structure
0
0.197375
1
0
1
3,298
23,944,242
2014-05-29T22:46:00.000
7
1
1
0
1
python,c,numpy,gmp,gmpy
0
23,946,348
0
1
0
true
0
0
numpy and GMPY2 have different purposes. numpy has fast numerical libraries but to achieve high performance, numpy is effectively restricted to working with vectors or arrays of low-level types - 16, 32, or 64 bit integers, or 32 or 64 bit floating point values. For example, numpy access highly optimized routines written in C (or Fortran) for performing matrix multiplication. GMPY2 uses the GMP, MPFR, and MPC libraries for multiple-precision calculations. It isn't targeted towards vector or matrix operations. The Python interpreter adds overhead to each call to an external library. Whether or not the slowdown is significant depends on the how much time is spend by the external library. If the running time of the external library is very short, say 10e-8 seconds, then Python's overhead is significant. If the running time of the external library is relatively long, several seconds or longer, then Python's overhead is probably insignificant. Since you haven't said what you are trying to accomplish, I can't give a better answer. Disclaimer: I maintain GMPY2.
1
2
1
0
I understand that GMPY2 supports the GMP library and numpy has fast numerical libraries. I want to know how the speed compares to actually writing C (or C++) code with GMP. Since Python is a scripting language, I don't think it will ever be as fast as a compiled language, however I have been wrong about these generalizations before. I can't get GMP to work on my computer, so I can't run any tests. If I could, just general math like addition and maybe some trig functions. I'll figure out GMP later.
How do numpy and GMPY2 compare with GMP in terms of speed?
0
1.2
1
0
0
2,014
23,985,795
2014-06-02T00:19:00.000
1
0
0
0
0
python,amazon-web-services,amazon-s3,flask
0
23,986,820
0
1
0
false
1
0
Make the request to your Flask application, which will authenticate the user and then issue a redirect to the S3 object. The trick is that the redirect should be to a signed temporary URL that expires in a minute or so, so it can't be saved and used later or by others. You can use the generate_url method of boto.s3.key.Key in your Flask app to create the temporary URL.
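A minimal sketch of that flow is below, assuming boto (v2) and Flask; the bucket name, URL rule, and the ownership check are placeholders you would replace with your own.

```python
import boto
from flask import Flask, abort, redirect

app = Flask(__name__)

def current_user_owns(key_name):
    # Placeholder: replace with a real check against your session / user records.
    return False

@app.route("/images/<path:key_name>")
def serve_image(key_name):
    if not current_user_owns(key_name):
        abort(403)

    conn = boto.connect_s3()                        # AWS credentials from env/config
    bucket = conn.get_bucket("my-private-bucket")   # hypothetical bucket name
    key = bucket.get_key(key_name)
    if key is None:
        abort(404)

    # Signed URL valid for 60 seconds; it can't usefully be shared or reused later.
    url = key.generate_url(expires_in=60)
    return redirect(url)
```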
1
0
0
0
I am trying to serve files securely (images in this case) to my users. I would like to do this using Flask and preferably Amazon S3; however, I would be open to another cloud storage solution if required. I have managed to get my Flask static files like CSS and such onto S3, but this is all non-secure: everyone who has the link can open the static files. This is obviously not what I want for secure content. I can't seem to figure out how I can make a file available only to the authenticated user that 'owns' the file. For example: when I log into my Dropbox account and copy a random file's download link, then go over to another computer and use this link, it will deny me access, even though I am still logged in and the download link is available to the user on the latter PC.
Secure access of webassets with Flask and AWS S3
1
0.197375
1
0
0
890
23,987,050
2014-06-02T03:41:00.000
1
0
0
0
1
python,django,orm
0
23,987,194
0
2
0
false
1
0
I chose option 1 when I set up my environment, which does much of the same stuff. I have a JSON interface that's used to pass data back to the server. Since I'm on a well-protected VLAN, this works great. The biggest benefit, like you say, is the Django ORM. A simple address call with proper data is all that's needed. I also think this is the simplest method. The "blocking on the DB" issue should be non-existent. I suppose that it would depend on the DB backend, but really, that's one of the benefits of a DB. For example, a single-threaded file-based sqlite instance may not work. I keep things in Django as much as I can. This could also help with DB security/integrity, since it's only ever accessed in one place. If your client accesses the DB directly, you'll need to ship username/password with the Client. My recommendation is to go with 1. It will make your life easier, with fewer lines of code. Besides, as long as you code Client properly, it should be easy to modify DB access later on down the road.
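To make option 1 concrete, here is a hedged sketch of a Django view the clients could POST their JSON reports to; the app name, the Report model, and its fields are placeholders for whatever models you defined.

```python
import json

from django.http import HttpResponse, HttpResponseBadRequest
from django.views.decorators.csrf import csrf_exempt

from monitoring.models import Report  # hypothetical app/model


@csrf_exempt  # clients authenticate some other way (shared key, protected VLAN, etc.)
def report(request):
    if request.method != "POST":
        return HttpResponseBadRequest("POST only")
    try:
        data = json.loads(request.body)
    except ValueError:
        return HttpResponseBadRequest("invalid JSON")

    # The ORM does the writing, so all DB access stays inside Django.
    Report.objects.create(
        hostname=data["hostname"],
        cpu=data["cpu"],
        ram=data["ram"],
    )
    return HttpResponse("ok")
```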
1
2
0
0
So in my spare time, I've been developing a piece of network monitoring software that essentially can be installed on a bunch of clients, and the clients report data back to the server (RAM/CPU/storage/network usage, and the like). For the administrative console as well as reporting, I've decided to use Django, which has been a learning experience in itself. The clients report to the server asynchronously, with whatever data they happen to have (as of right now, it's just received and dumped, not stored in a DB). I need to access this data in Django. I have already created the models to match my needs. However, I don't know how to go about getting the actual data into the Django DB safely. What is the best way to go about doing this? I thought of a few options, but they all had some drawbacks: Give the Django app a reference to the Server, and just start a thread that continuously checks for new data and writes it to the DB. Have the Server access the Django DB directly, and write its data there. The problem with 1 is that I'm even more tightly coupling the server with the Django app, but the upside is that I can use the ORM to write the data nicely. The problem with 2 is that I can't use the ORM to write data, and I'm not sure if it could cause blocking on the DB in some cases. Is there some obvious good option I'm missing? I'm sorry if this question is vague. I'm very new to Django, and I don't want to write myself into a corner.
How to access Django DB and ORM outside of Django
0
0.099668
1
1
0
849
24,036,291
2014-06-04T11:33:00.000
11
0
0
1
0
c#,python,protocol-buffers,protobuf-net
0
24,038,019
0
1
0
true
0
0
DateTime is spoofed via a multi-field message that is not trivial, but not impossible to understand. In hindsight, I wish I had done it a different way, but it is what it is. The definition is available in bcl.proto in the protobuf-net project. However! If you are targeting multiple platforms, I strongly recommend you simply use a long (or similar) in your DTO model, representing some time granularity since some epoch (seconds or milliseconds since 1970, for example).
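For the Python side of that "long since the epoch" approach, a small sketch of the conversion might look like this (naive UTC datetimes assumed; the C# side would hold a matching int64 field in its DTO):

```python
from datetime import datetime, timedelta

_EPOCH = datetime(1970, 1, 1)

def to_epoch_millis(dt):
    """Convert a naive UTC datetime to milliseconds since 1970-01-01."""
    return int((dt - _EPOCH).total_seconds() * 1000)

def from_epoch_millis(millis):
    """Convert milliseconds since 1970-01-01 back to a naive UTC datetime."""
    return _EPOCH + timedelta(milliseconds=millis)

# Example round trip
now = datetime.utcnow()
assert abs((from_epoch_millis(to_epoch_millis(now)) - now).total_seconds()) < 0.001
```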
1
8
0
0
I'm working on a project consisting of a client and a server. The client is written in Python (it will run on Linux) and the server in C#. I'm communicating through standard sockets and I'm using protobuf-net for the protocol definition. However, I'm wondering how protobuf-net would handle DateTime serialization. Unix datetimes differ from the standard .NET DateTime, so how should I handle this situation? Thanks
How protobuf-net serialize DateTime?
0
1.2
1
0
0
3,765
24,062,830
2014-06-05T14:26:00.000
1
0
1
0
0
python,c,distutils
0
24,066,500
0
3
0
false
0
0
I'd consider building the Python module as a subproject of a normal shared-library build. So, use automake, autoconf, or something like that to build the shared library, and have a python_bindings directory with a setup.py and your Python module.
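A minimal sketch of what that python_bindings/setup.py could look like is below; "mylib" is a placeholder package name, and it assumes the parent Makefile/autotools build has already produced libmylib.so and copied it into the package directory so it can ship as package data.

```python
# python_bindings/setup.py -- illustrative only; names are placeholders.
from distutils.core import setup

setup(
    name="mylib",
    version="0.1",
    packages=["mylib"],
    # ship the prebuilt shared library alongside the ctypes wrapper
    package_data={"mylib": ["libmylib.so"]},
)
```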
1
16
0
0
I have written a library whose main functionality is implemented in C (speed is critical), with a thin Python layer around it to deal with the ctypes nastiness. I'm coming to package it and I'm wondering how I might best go about this. The code it must interface with is a shared library. I have a Makefile which builds the C code and creates the .so file, but I don't know how I compile this via distutils. Should I just call out to make with subprocess by overriding the install command (if so, is install the place for this, or is build more appropriate?) Update: I want to note that this is not a Python extension. That is, the C library contains no code to itself interact with the Python runtime. Python is making foreign function calls to a straight C shared library.
Best way to package a Python library that includes a C shared library?
1
0.066568
1
0
0
4,767
24,069,711
2014-06-05T20:31:00.000
1
0
0
0
1
python,rdf,freebase
0
25,625,683
0
1
0
false
1
0
The Freebase dump is in RDF format. The easiest way to query it is to dump it (or a subset of it) into an RDF store. It'll be quicker to query, but you'll need to pay the database load time up front first.
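For a small slice of the dump, a quick sketch with rdflib (installed separately) might look like this; "subset.nt" is a placeholder for a pre-filtered N-Triples file, since loading the full dump this way is impractical and really does call for a proper triple store.

```python
from rdflib import Graph

g = Graph()
# "subset.nt" is a placeholder: a small N-Triples slice of the dump,
# e.g. produced by grepping out only the predicates you care about first.
g.parse("subset.nt", format="nt")

results = g.query("""
    SELECT ?s ?o WHERE {
        ?s <http://rdf.freebase.com/ns/type.object.name> ?o .
    } LIMIT 10
""")
for row in results:
    print(row)
```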
1
3
0
1
I downloaded the freebase data dump and I want to use it to get information about a query just like how I do it using the web API. How exactly do I do it? I tried using a simple zgrep but the result was a mess and takes too much time. Any graceful way to do it (preferably something that plays nicely with python)?
How to search freebase data dump
0
0.197375
1
0
1
964
24,077,041
2014-06-06T08:05:00.000
1
0
0
1
0
python,google-app-engine
0
24,088,125
0
1
0
true
1
0
Unfortunately there is not currently a well-supported way to do this. However, with the disclaimer that this is likely to break at some point in the future, as it depends on internal implementation details, you can fetch the relevant _AE_Backup_Information and _AE_DatastoreAdmin_Operation entities from your datastore and inspect them for information regarding the backup. In particular, _AE_DatastoreAdmin_Operation has the fields active_jobs, completed_jobs, and status.
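As a heavily hedged illustration (it queries undocumented internal kinds, so it may stop working with any SDK change), something along these lines could list those operations with the low-level datastore API:

```python
# Unofficial: inspects internal kinds, which can break whenever the SDK changes.
from google.appengine.api import datastore

def backup_operations():
    query = datastore.Query('_AE_DatastoreAdmin_Operation')
    for op in query.Run():
        # Inspect status and the job counters to decide success vs. failure.
        print(op.get('status'), op.get('active_jobs'), op.get('completed_jobs'))
```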
1
0
0
0
I am taking a backup of the datastore using task queues. I want to check whether the backup has completed successfully or not. I can detect the end of the backup job by checking the task queue, but how can I check whether the backup was successful or whether it failed due to some errors?
Google App Engine Check Success of backup programmatically
0
1.2
1
0
0
69
24,112,422
2014-06-09T01:02:00.000
0
0
0
0
0
python,mysql,ruby-on-rails
0
24,112,483
0
1
0
true
0
0
Use ssh to log in to your home computer, set up authorized keys for it, and disable password login. Set up iptables on your Linux machine if you don't have a firewall on your router, and disable traffic on all ports except 80 and 22 (web and ssh). That should get you started.
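Once the SSH login is locked down, the Python scripts on the MacBook can tunnel through it so MySQL itself is never exposed to the internet. A sketch, assuming the sshtunnel and MySQLdb packages are installed; hostnames, usernames, key path, and credentials are placeholders.

```python
from sshtunnel import SSHTunnelForwarder
import MySQLdb

with SSHTunnelForwarder(
    ("home.example.org", 22),                 # placeholder home host
    ssh_username="me",
    ssh_pkey="/Users/me/.ssh/id_rsa",
    remote_bind_address=("127.0.0.1", 3306),  # MySQL only listens locally at home
) as tunnel:
    conn = MySQLdb.connect(
        host="127.0.0.1",
        port=tunnel.local_bind_port,
        user="research",
        passwd="secret",
        db="research_db",
    )
    # ... run queries ...
    conn.close()
```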
1
0
0
0
I have read a few posts on how to enable remote login to MySQL. My question is: is this a safe way to access data remotely? I have a MySQL DB located at home (on Ubuntu 14.04) that I use for research purposes. I would like to run Python scripts against it from my MacBook at work. I was able to log in remotely from my old Windows OS using a Workbench connection (DNS/IP). However, the OS change has got me thinking: what is the best/most secure way to accomplish this task?
Best way to access data from mysql db on other non-local machines
1
1.2
1
1
0
49
24,123,128
2014-06-09T15:05:00.000
0
1
0
1
0
python,git
0
24,123,328
0
2
0
false
0
0
At the top of your Python file add #!/usr/bin/python. Then you can rename it (mv myScript.py myScript) and run chmod 755 myScript. This will let you run the file with ./myScript. Look into adding the file's directory to your PATH, or linking the script into a directory on your PATH, if you want to be able to run it from anywhere.
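For illustration, the top of the script might end up looking like this after the change (the git command shown is just a placeholder for whatever the script actually runs):

```python
#!/usr/bin/python
# myScript -- after chmod 755, this can be run directly as ./myScript
import subprocess

def main():
    # placeholder for the real git commands the script drives
    subprocess.call(["git", "status"])

if __name__ == "__main__":
    main()
```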
1
3
0
0
I'm writing git commands through a Python script (on Windows). When I double-click on myScript.py, the commands are launched in the Windows Command Prompt. I would like to execute them in Git Bash. Any idea how to do that without opening Git Bash and typing python myScript.py?
Launch Python script in Git Bash
0
0
1
0
0
28,190
24,128,433
2014-06-09T20:26:00.000
6
0
0
0
0
python,django,email,nginx,django-allauth
0
24,129,038
0
2
0
true
1
0
Django gets the hostname and port from the HTTP headers. Add proxy_set_header Host $http_host; to your nginx configuration before the proxy_pass directive.
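To see why forwarding the Host header fixes the link, here is a small sketch (not the actual allauth code) of how such a URL gets built from the incoming request; with the nginx line above, request.get_host() returns "www.example.com" instead of "machine-hostname:8001".

```python
# Illustrative only: allauth builds its confirmation links from the request,
# so the URL reflects whatever Host header nginx forwards to Django.
def confirmation_url(request, key):
    path = "/accounts/confirm-email/%s/" % key
    return request.build_absolute_uri(path)
```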
1
8
0
0
I'm running django on port 8001, while nginx is handling webserver duties on port 80. nginx proxies views and some REST api calls to Django. I'm using django-allauth for user registration/authentication. When a new user registers, django-allauth sends the user an email with a link to click. Because django is running on port 8001, the link looks like http://machine-hostname:8001/accounts/confirm-email/xxxxxxxxxxxxxx How can I make the url look like http://www.example.com/accounts/confirm-email/xxxxxxxx ? Thanks!
django-allauth: how to modify email confirmation url?
0
1.2
1
0
0
2,367