Dataset columns (type, value or string-length range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
34,984,015
2016-01-25T02:00:00.000
0
0
1
0
python,heap
34,984,079
2
false
0
0
I noticed that given a list, if I create a heap using heapq.heapify(), the elements are in a different order than what I obtain if I iterate over the list and do heapq.heappush(). Could someone help me understand why? There's no reason they should be the same: there is more than one valid way to store the data in a heap. Also, given the iterable, is one way better than the other for creating a heap, and why? Make a list and hand it to heapify. That is what heapify is for, it is simpler, and it also has better asymptotic performance (O(n) rather than O(n log n)), should that ever matter.
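To illustrate the point above, a minimal sketch (sample data made up): both constructions contain the same elements and both satisfy the heap invariant, but the underlying list order can differ.

import heapq

data = [9, 5, 8, 1, 3, 7]

# Build a heap in place in O(n)
a = list(data)
heapq.heapify(a)

# Build a heap by pushing one element at a time, O(n log n) overall
b = []
for x in data:
    heapq.heappush(b, x)

print(a)                       # [1, 3, 7, 5, 9, 8]
print(b)                       # [1, 3, 7, 9, 5, 8]  (a different but equally valid heap)
print(sorted(a) == sorted(b))  # True: same elements in both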
1
2
0
I noticed that given a list, if I create a heap using heapq.heapify(), the elements are in a different order than what I obtain if I iterate over the list and do heapq.heappush(). Could someone help me understand why? Also, given the iterable, is one way better than the other for creating a heap, and why?
why is heap created using heapq.heapify different from heap created by iterative heapq.heappush
0
0
0
210
34,985,134
2016-01-25T04:30:00.000
5
0
1
0
python,matplotlib,py2exe
34,985,405
2
false
0
0
Never mind! I managed to solve it by copying the required DLL from inside numpy/core into the dist folder that py2exe creates, not outside of it.
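A minimal sketch of that copy step (the numpy path and dist location are assumptions that may differ per installation):

import os
import shutil
import numpy.core

# Locate the DLL that ships inside numpy/core and copy it into py2exe's dist folder,
# next to the generated executable.
src = os.path.join(os.path.dirname(numpy.core.__file__), "mkl_intel_thread.dll")
shutil.copy(src, os.path.join("dist", "mkl_intel_thread.dll"))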
1
4
1
I'm trying to compile a python program in py2exe. It is returning a bunch of missing modules, and when I run the executable, it says: "MKL FATAL ERROR: Cannot load mkl_intel_thread.dll" All my 'non-plotting' scripts work perfectly, just scripts utilizing 'matplotlib', and 'pyqtgraph' don't work. I've even found the file in Numpy/Core/mkl_intel_thread.dll, and placed it into the folder with the .exe, and it still doesn't work. Does anyone have any idea how this can be solved? I'm using Anaconda Python 3.4, and matplotlib 1.5.1
py2exe: MKL FATAL ERROR: Cannot load mkl_intel_thread.dll
0.462117
0
0
2,857
34,985,747
2016-01-25T05:33:00.000
0
0
1
0
python,methods
34,985,872
1
true
0
0
Because the creation of that abstract object is only an intermediate step. In the quote you gave, the end result is: "a new argument list is constructed from the instance object and the argument list, and the function object is called with this new argument list." In other words, in the end, the function object that "is" the method is called with the instance as the first argument and the rest of the arguments passed along. That is also what happens when you call the method yourself with the instance as the first argument.
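A short sketch of the equivalence being described (hypothetical class and method; the last line is Python 3 specific):

class Greeter:
    def hello(self, name):
        return "hello " + name

g = Greeter()

# Attribute lookup builds a bound method object; calling it passes g implicitly.
print(g.hello("world"))             # hello world

# Looking the function up on the class and passing the instance explicitly
# calls the same function object with the same final argument list.
print(Greeter.hello(g, "world"))    # hello world

print(Greeter.hello is g.hello.__func__)   # True: same underlying function object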
1
0
0
In various tutorials I've seen it claimed that calling a method using instance.method() is syntactically equivalent to Class.method(instance). I was also researching what 'method binding' is and the official python documentation states: "When an instance attribute is referenced that isn’t a data attribute, its class is searched. If the name denotes a valid class attribute that is a function object, a method object is created by packing (pointers to) the instance object and the function object just found together in an abstract object: this is the method object. When the method object is called with an argument list, a new argument list is constructed from the instance object and the argument list, and the function object is called with this new argument list." Based on this description, it appears only the instance.method() syntax would result in an 'abstract method object' being created because the 'instance attribute is being referenced'. If that is the case, how are the two expressions syntactically equivalent?
Difference between calling a method via Class.method(instance) and instance.method()
1.2
0
0
63
34,988,109
2016-01-25T08:33:00.000
0
0
0
1
python,windows,admin,administration,administrator
34,989,256
1
false
0
0
For me the easiest solution is an administrator terminal instance. Press the Start/Windows button, enter the search field, type in cmd and wait until cmd.exe is found under Programs, right-click on that program and click on the option to run it as an administrator. Now your terminal has administrator rights. When you start a Python script inside that terminal, the Python interpreter also has admin rights.
1
1
0
I am on a project in which I need to have full access to each directory and file in the Windows file system, and I am using Python for it. But I can't modify or access some files, and the C:/ drive is totally inaccessible with Python, showing "permission denied". I want to know whether there is any way to get full access as administrator using Python. Please suggest and help.
How to fully access windows machine as an administrator using python
0
0
0
963
34,988,678
2016-01-25T09:08:00.000
1
0
0
1
python,linux,sockets,subprocess
34,989,073
2
false
0
0
It seems like a permission issue. The subprocess is probably running as another user and therefore you will not have access to the process. Use sudo ps xauw | grep [processname] to figure out what user the daemon process is running as.
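Related to the question's idea of closing inherited descriptors (a sketch, not part of the answer above; the daemon path is hypothetical): subprocess can be told not to pass the parent's file descriptors to the child at all.

import subprocess

# close_fds=True closes every descriptor except stdin/stdout/stderr in the child,
# so a restarted daemon cannot keep holding the parent's listening socket.
# (This is the default on Python 3.2+; on Python 2 it must be passed explicitly.)
subprocess.Popen(["/etc/init.d/mydaemon", "restart"], close_fds=True)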
1
0
0
In my program, a serve-forever daemon is restarted in a subprocess. The program itself is a web service, using port 5000 by default. I don't know the details of the start script of that daemon, but it seems to inherit the socket listening on port 5000. So if I restart my program, I find that the port is already occupied by the daemon process. Now I am considering fine-tuning the subprocess call to close the inherited socket FD, but I don't know how to get the FD in the first place.
How to get a socket FD according to the port occupied in Python?
0.099668
0
1
429
34,990,561
2016-01-25T10:40:00.000
8
0
0
0
python,azure,azure-machine-learning-studio
35,020,997
1
true
0
0
First, I am assuming you are doing your timing test on the published AML endpoint. When a call is made to AML, the first call must warm up the container. By default a web service has 20 containers. Each container is cold, and a cold container can cause a large (30 sec) delay. In the string returned by the AML endpoint, only count requests that have the isWarm flag set to true. By smashing the service with MANY requests (relative to how many containers you have running) you can get all your containers warmed. If you are sending out dozens of requests at once, the endpoint might be getting throttled. You can adjust the number of calls your endpoint can accept by going to manage.windowsazure.com, opening the Azure ML section from the left bar, selecting your workspace, going to the web services tab, selecting your web service from the list, and adjusting the number of calls with the slider. By enabling debugging on your endpoint you can get logs about the execution time for each of your modules to complete. You can use this to determine whether a module is not running as you intended, which may add to the time. Overall, there is an overhead when using the Execute Python module, but I'd expect this request to complete in under 3 secs.
1
8
1
I have made an Azure Machine Learning Experiment which takes a small dataset (12x3 array) and some parameters and does some calculations using a few Python modules (a linear regression calculation and some more). This all works fine. I have deployed the experiment and now want to throw data at it from the front-end of my application. The API-call goes in and comes back with correct results, but it takes up to 30 seconds to calculate a simple linear regression. Sometimes it is 20 seconds, sometimes only 1 second. I even got it down to 100 ms one time (which is what I'd like), but 90% of the time the request takes more than 20 seconds to complete, which is unacceptable. I guess it has something to do with it still being an experiment, or it is still in a development slot, but I can't find the settings to get it to run on a faster machine. Is there a way to speed up my execution? Edit: To clarify: The varying timings are obtained with the same test data, simply by sending the same request multiple times. This made me conclude it must have something to do with my request being put in a queue, there is some start-up latency or I'm throttled in some other way.
Azure Machine Learning Request Response latency
1.2
0
0
1,088
34,990,774
2016-01-25T10:51:00.000
0
0
0
0
python,opengl,pyopengl
34,991,608
1
false
0
1
If you are using a perspective matrix, adjust your field of view: the smaller the fov, the bigger the object appears.
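A minimal sketch of that idea with PyOpenGL (zoom factor, base fov and clipping planes are assumptions):

from OpenGL.GL import glMatrixMode, glLoadIdentity, GL_PROJECTION
from OpenGL.GLU import gluPerspective

def set_projection(zoom, aspect, base_fov=60.0, near=0.1, far=100.0):
    # Dividing the field of view by the zoom factor magnifies the scene:
    # zoom=2.0 means a 30 degree fov, so the object fills more of the screen.
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluPerspective(base_fov / zoom, aspect, near, far)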
1
1
0
I am rendering a virtual object on a video feed, and I want to try zooming in on it. Right now, I: rotate and translate the camera by where it actually is with respect to the object, make a 2D texture of the latest frame filling up the entire screen, and draw the object at the origin after multiplying the vertices by a rotation matrix. The position is a 3D-vector, and the orientations of the object and camera are quaternions. How can I zoom in on the object while simultaneously magnifying the texture?
Zooming onto an object and background in OpenGL
0
0
0
206
34,991,430
2016-01-25T11:25:00.000
1
0
0
0
python,search,scipy,triangulation,delaunay
34,992,250
1
true
0
0
You can try a point-location test, especially the Kirkpatrick algorithm/data structure. Basically you subdivide the mesh along both axes and re-triangulate it. A better and simpler solution is to give each triangle a colour, draw the triangulation into a bitmap, and then check the colour of the bitmap pixel under the query point.
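For reference, scipy's find_simplex already accepts whole arrays of query points, which is often fast enough before reaching for a custom point-location structure (a sketch with made-up points):

import numpy as np
from scipy.spatial import Delaunay

pts = np.random.rand(1000, 2)           # planar (x, y) points; altitudes kept separately
tri = Delaunay(pts)

queries = np.random.rand(50000, 2)      # many query points in one call
simplices = tri.find_simplex(queries)   # index of the containing triangle, -1 if outside

# tri.simplices[simplices[i]] gives the vertex indices of the triangle containing
# queries[i], which can then be used for barycentric interpolation of the altitude.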
1
0
1
I have a set of points in a plane where each point has an associated altitude. I'm thinking of using the scipy.spatial library to compute the Delaunay triangulation of the point set and then use the result to interpolate for the points in between. The library implements a nice function that, given a point, finds the triangle it lies in. This would be particularly useful when calculating the depth map from the mesh. I assume though (please do correct me if I'm wrong) that the search function searches from the same starting point every time it is called. Since the points I will be looking for will tend to lie either on the triangle the previous one lied on or on an adjacent one, I figure that's unneccessary, but can't seem to find a way to optimize the search, other than to implement it myself. Is there a way to set the initial triangle for the search, or to optimize the depth map calculation otherwise?
Optimizing scipy.spatial.Delaunay.find_simplex
1.2
0
0
651
34,991,696
2016-01-25T11:39:00.000
0
0
1
0
python,playback,music-notation
34,992,041
3
false
0
0
Use the multiprocessing and winsound modules to do so.
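A minimal sketch of that suggestion (Windows only, since winsound is a Windows module; whether the two notes actually overlap audibly depends on the sound driver):

import winsound
from multiprocessing import Process

def play(frequency_hz, duration_ms):
    winsound.Beep(frequency_hz, duration_ms)   # blocking, hence one process per note

if __name__ == "__main__":
    notes = [Process(target=play, args=(440, 1000)),    # A4
             Process(target=play, args=(554, 1000))]    # roughly C#5
    for p in notes:
        p.start()
    for p in notes:
        p.join()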
1
2
0
I plan to build a music notation tool which allows the user to place notes wherever they want and then play the notes they placed. If the user puts notes at the same time position, so that two or more notes have to play together, is there any solution to that problem?
How can I make a play two musical notes simultaneously in Python?
0
0
0
342
34,992,439
2016-01-25T12:17:00.000
0
0
0
1
python,amazon-ec2,ipython,spyder
34,993,859
1
false
0
0
I often have the same problem. The easiest and fastest fix I have for you at the moment is running the code in a new dedicated Python console every time. This can easily be done by: 1) click the run settings icon (the wrench with the green play/run button at the top of your screen), 2) select the second option (execute in a new dedicated Python console), 3) press OK. This will automatically run the code in a new console the next time you press the run file button (F5), and should prevent the error message.
1
1
0
I'm running Spyder on Windows, connecting remotely to an Amazon EC2 IPython kernel. Whenever I run some operation that takes more than a few seconds, I get the repeated message "It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console." But my kernel is all fine and dandy. Sometimes I have to press Enter repeatedly to make it snap out of it, other times I have to restart the Spyder console and connect to my still-alive kernel. Any tips? Is there a way to disable the kernel-death check, or to increase the timeout? Thanks! :)
Spyder Python IDE thinks remote kernel has died when it hasn't; any ways to prevent that?
0
0
0
635
34,992,626
2016-01-25T12:28:00.000
1
1
1
0
python-2.7,wolfram-mathematica
34,994,799
2
false
0
0
How big are the matrices? If they are not too large, the JSON format will work well. I have used this, it is easy to work with both in Python and Mathematica. If they are large, I would try HDF5. I have no experience with writing this from Python, but I know that it can store multiple datasets, thus it can store multiple matrices of different sizes.
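A minimal sketch of the JSON route on the Python side (the file name is a placeholder); Mathematica can then read the nested lists back with its JSON import.

import json

# results: a list of matrices, each matrix a list of rows of numbers
results = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[5.0, 6.0, 7.0]],
]

with open("results.json", "w") as fh:
    json.dump(results, fh)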
1
3
0
I am running a simulation in Python. The simulation's results are summarized in a list of number matrices. Is there a nice export format I can use to write this list, so that later I can read the file in Mathematica easily, and Mathematica will recognize it as a list of matrices automatically?
Save list of table of numbers from Python into format easily readable by Mathematica?
0.099668
0
0
633
34,992,856
2016-01-25T12:39:00.000
0
0
0
0
python,django,django-rest-framework,offlineapps
34,996,085
1
false
0
0
I'm confused as to how you're approaching this. My understanding is that when the app is offline you want to "queue up" any API requests that are sent. Your process seems fine; however, without knowing the terms around the app being "offline", it's hard to know whether this is best. Assuming you mean the server(s) holding the application are offline, you're correct: you want a process in the Android app that will store the request until the application becomes online. However, this can be dangerous for end users. They should be receiving a message that the application is offline and to "try again later", as it were. The fear is that they submit a request for x new contacts to be queued and then re-submit, not realizing the application was offline. I would suggest you have the Android app built to either notify the user of the app being down or provide some very visible notification that requests are queued locally on their phone until the application becomes available, and let them view/modify/delete said locally cached requests until then. When the API becomes available, a notification can be sent for users to release the queue on their device.
1
3
0
I have multiple api which we have provided to android developers. Like : 1) Creating Business card API 2) Creating Contacts API So these api working fine when app is online. So our requirement is to handle to create business card and contacts when app is offline. We are following steps but not sure:- 1) Android developer store the business card when app offline and send this data to server using separate offline business card api when app comes online. 2) Same we do for creating contacts offline using offline contact api. My problem is I want do in one api call to send all data to server and do operation. Is this approach will right?? Also please suggest what is the best approach to handle offline data. Also how to handle syncing data when app would come online?? Please let me know if I could provide more information.
how to design rest api which handle offline data
0
0
1
1,524
34,993,615
2016-01-25T13:17:00.000
0
0
0
0
python,postgresql,utf-8
34,993,660
1
true
0
0
Your encoding and the database connection encoding don't match. The database connection is in UTF8 and you're probably trying to send data in Latin1 encoding. When opening the connection, send SET client_encoding TO 'Latin1'; after that PostgreSQL will assume all strings are in Latin1 encoding regardless of the database encoding. Alternatively you can use conn.set_client_encoding('Latin1').
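A minimal sketch of the second suggestion (connection parameters are placeholders; the string below contains the degree sign, which is byte 0xb0 in Latin-1):

import psycopg2

conn = psycopg2.connect(host="localhost", dbname="mydb",
                        user="me", password="secret")
conn.set_client_encoding("LATIN1")   # declare what encoding the client sends/expects

cur = conn.cursor()
cur.execute("SELECT %s::text", (u"5008001#60\u00b0V4#FR.tif",))
print(cur.fetchone()[0])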
1
0
0
I am trying to call a postgres database procedure using psycopg2 in my python class. lCursor.callproc('dbpackage.proc',[In_parameter1,In_parameter2,out_parameter]). In_parameter values is 5008001#60°V4#FR.tif But I am getting the below error. DataError: invalid byte sequence for encoding "UTF8": 0xb0 I have tried mostly solutions given on net, but no luck.
DataError: invalid byte sequence for encoding "UTF8": 0xb0 while calling the database procedure
1.2
1
0
2,227
34,995,645
2016-01-25T15:01:00.000
0
0
0
0
python,matplotlib
35,028,696
1
false
0
0
I'm wishing to do something similar for small hexbins, thinking to: (1) get the hexbin centres: hexobj_cen = hexobj.get_offsets(); lon_hex = hexobj_cen[:,0] (hexbin lon centre); lat_hex = hexobj_cen[:,1] (hexbin lat centre); (2) run a for loop (for each hexbin centre) to find the Cartesian distance (N.hypot) between that centre and all points put into an array, then ask, for each hexbin centre, whether the point-hexbin distance is greater than some maximum distance (half the distance between two opposing vertices). Great if there is a standard way (within pylab.hexbin) to do this, but I also couldn't yet find it.
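A minimal sketch of assigning every point to its nearest hexbin centre along those lines (the data are made up; get_offsets() on the object returned by hexbin gives the bin centres):

import numpy as np
import matplotlib.pyplot as plt

lon = np.random.uniform(-10, 10, 5000)
lat = np.random.uniform(40, 60, 5000)

hexobj = plt.hexbin(lon, lat, gridsize=30)
centres = hexobj.get_offsets()            # (n_bins, 2) array of bin centres

# Nearest-centre assignment: one bin index per input point.
d = np.hypot(lon[:, None] - centres[:, 0], lat[:, None] - centres[:, 1])
bin_index = d.argmin(axis=1)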
1
1
1
Is there a way to get the borders of a matplotlib.pyplot.hexbin plot? Say, i have a pd.DataFrame with spatial latitude and longitude values, which i plot in a hexbin plot. Afterwards i want to assign the corresponding bin of the hexbin grid to each instance of my DataFrame, by checking if the latitude and longitude values of an instance fall in one of the hexbin bins. Can i assign names or indices to the different bins? I have already looked in the documentation for the hexbin plot, all i can find are line properties, which describe the lines that are drawn in the plot.
Matplotlib hexbin - get bin borders
0
0
0
399
34,998,280
2016-01-25T17:10:00.000
1
0
0
0
python,apache-spark,python-3.4,pyspark
35,013,791
2
false
0
0
This is not a problem of PySpark; it is a limit of the Spark implementation. Spark uses a Scala array to store the broadcast elements, and since the maximum Integer in Scala is 2*10^9, the total string size is limited to 2 * 2*10^9 bytes = 4 GB; you can view the Spark code.
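One possible workaround, offered as an assumption rather than something stated in the answer: split the array into pieces that each serialize to well under the limit and broadcast the pieces separately.

import numpy as np
from pyspark import SparkContext

sc = SparkContext()
big = np.zeros((10000, 1000))                 # stand-in for the real 8 GB array

chunks = np.array_split(big, 16, axis=0)      # each piece stays far below 4 GiB
broadcasts = [sc.broadcast(c) for c in chunks]

def row_count(_):
    # workers can index into the pieces, or reassemble them only if truly needed
    return sum(b.value.shape[0] for b in broadcasts)

print(sc.parallelize([0]).map(row_count).collect())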
1
1
1
In Pyspark, I am trying to broadcast a large numpy array of size around 8GB. But it fails with the error "OverflowError: cannot serialize a string larger than 4GiB". I have 15g in executor memory and 25g driver memory. I have tried using default and kyro serializer. Both didnot work and show same error. Can anyone suggest how to get rid of this error and the most efficient way to tackle large broadcast variables?
Broadcast large array in pyspark (~ 8GB)
0.099668
0
0
3,217
34,999,194
2016-01-25T18:03:00.000
2
1
0
0
python,api,email,gmail,send
35,799,866
2
false
0
0
Try deleting the generated storage.json file and then try again afresh. You might have run this script earlier with different scopes, so storage.json might be holding credentials with the wrong (insufficient) scope.
2
1
0
I'm trying to send an email using the Gmail API in python. I think I followed the relevant documentation and youtube vids. I'm running into this error: googleapiclient.errors.HttpError: HttpError 403 when requesting https://www.googleapis.com/gmail/v1/users/me/messages/send?alt=json returned "Insufficient Permission" Here is my script: #!/usr/bin/env python from googleapiclient.discovery import build from httplib2 import Http from oauth2client import file, client, tools from email.mime.text import MIMEText import base64 import errors SCOPES = 'https://mail.google.com/' CLIENT_SECRET = 'client_secret.json' store = file.Storage('storage.json') credz = store.get() if not credz or credz.invalid: flags = tools.argparser.parse_args(args=[]) flow = client.flow_from_clientsecrets(CLIENT_SECRET, SCOPES) credz = tools.run_flow(flow, store, flags) GMAIL = build('gmail', 'v1', http=credz.authorize(Http())) def CreateMessage(sender, to, subject, message_text): """Create a message for an email. Args: sender: Email address of the sender. to: Email address of the receiver. subject: The subject of the email message. message_text: The text of the email message. Returns: An object containing a base64url encoded email object. """ message = MIMEText(message_text) message['to'] = to message['from'] = sender message['subject'] = subject return {'raw': base64.urlsafe_b64encode(message.as_string())} def SendMessage(service, user_id, message): """Send an email message. Args: service: Authorized Gmail API service instance. user_id: User's email address. The special value "me" can be used to indicate the authenticated user. message: Message to be sent. Returns: Sent Message. """ try: message = (service.users().messages().send(userId=user_id, body=message) .execute()) print 'Message Id: %s' % message['id'] return message except errors.HttpError, error: print 'An error occurred: %s' % error message = CreateMessage('[email protected]', '[email protected]', 'test_subject', 'foo') print message SendMessage(GMAIL, 'me', message) I tried adding scopes, trying different emails, etc. I have authenticated by logging into my browser as well. (The [email protected] is a dummy email btw)
403 error sending email with gmail API (python)
0.197375
0
1
2,231
34,999,194
2016-01-25T18:03:00.000
1
1
0
0
python,api,email,gmail,send
46,799,877
2
false
0
0
I had the same problem. I solved it by re-running the quickstart.py that Google provides, with SCOPES changed so that Google grants all the permissions you want. After that you don't need SCOPES or CLIENT_SECRET in your new code to send a message, just the get_credentials(), CreateMessage() and SendMessage() methods.
2
1
0
I'm trying to send an email using the Gmail API in python. I think I followed the relevant documentation and youtube vids. I'm running into this error: googleapiclient.errors.HttpError: HttpError 403 when requesting https://www.googleapis.com/gmail/v1/users/me/messages/send?alt=json returned "Insufficient Permission" Here is my script: #!/usr/bin/env python from googleapiclient.discovery import build from httplib2 import Http from oauth2client import file, client, tools from email.mime.text import MIMEText import base64 import errors SCOPES = 'https://mail.google.com/' CLIENT_SECRET = 'client_secret.json' store = file.Storage('storage.json') credz = store.get() if not credz or credz.invalid: flags = tools.argparser.parse_args(args=[]) flow = client.flow_from_clientsecrets(CLIENT_SECRET, SCOPES) credz = tools.run_flow(flow, store, flags) GMAIL = build('gmail', 'v1', http=credz.authorize(Http())) def CreateMessage(sender, to, subject, message_text): """Create a message for an email. Args: sender: Email address of the sender. to: Email address of the receiver. subject: The subject of the email message. message_text: The text of the email message. Returns: An object containing a base64url encoded email object. """ message = MIMEText(message_text) message['to'] = to message['from'] = sender message['subject'] = subject return {'raw': base64.urlsafe_b64encode(message.as_string())} def SendMessage(service, user_id, message): """Send an email message. Args: service: Authorized Gmail API service instance. user_id: User's email address. The special value "me" can be used to indicate the authenticated user. message: Message to be sent. Returns: Sent Message. """ try: message = (service.users().messages().send(userId=user_id, body=message) .execute()) print 'Message Id: %s' % message['id'] return message except errors.HttpError, error: print 'An error occurred: %s' % error message = CreateMessage('[email protected]', '[email protected]', 'test_subject', 'foo') print message SendMessage(GMAIL, 'me', message) I tried adding scopes, trying different emails, etc. I have authenticated by logging into my browser as well. (The [email protected] is a dummy email btw)
403 error sending email with gmail API (python)
0.099668
0
1
2,231
35,001,306
2016-01-25T20:04:00.000
0
0
0
0
python,csv
35,015,910
3
false
0
0
Apart from extracting the data, the first thing you need to do is rearrange your data. As it is now, 191 columns are added every day. To do that, the whole file needs to be parsed (probably in memory, with the data growing every day), data gets added to the end of each row, and everything has to be fully written to disk again. Usually, to add data to a csv, rows are added at the end of the file; there is no need to parse and rewrite the whole file each time. On top of that, most software that reads csv files starts having problems when the number of columns gets high. So it would be a lot better to add the daily data as rows at the end of the csv file. While we're at it: assuming the 253 x 191 block is some sort of grid, or at least every cell has the same data type, this would be a great candidate for binary storage (not sure how/if Python can handle that). All data could be stored in its binary form, resulting in a fixed-length field/cell. To access a field, its position could simply be calculated and there would be no need to parse and convert all the data each time. Retrieving data would be almost instant.
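A minimal sketch of the extraction itself under the layout described in the question (253 rows, 191 new columns per day, so day d's block starts at column d*191; file names and the row/column of interest are assumptions):

import csv

row_of_interest = 20      # 0-based row index within the 253-row grid
col_of_interest = 40      # 0-based column index within each day's 191-column block
cols_per_day = 191
days = 365

with open("precipitation.csv") as src:
    grid = list(csv.reader(src))              # 253 rows x (191 * days) columns

with open("extracted.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["day", "value"])
    for d in range(days):
        writer.writerow([d + 1, grid[row_of_interest][d * cols_per_day + col_of_interest]])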
1
2
1
I have a csv file contain of daily precipitation with (253 rows and 191 column daily) so for one year I have 191 * 365 column. I want to extract data for certain row and column that are my area of interest example row 20 and column 40 for the first day and the 2,3,4 ... 365 days has the same distance between the column. I'm new in python, is there any way that I can extract the data and store it in a new csv for a certain row and column for one year? Thanks
Cut selected data from daily precipitation CSV files
0
0
0
91
35,002,061
2016-01-25T20:48:00.000
3
0
0
0
python,django,rest,django-views,django-rest-framework
35,009,997
2
true
1
0
You don't have to "fix" deprecation warnings; they are, well, only warnings, and things still work. However, if you decide to upgrade, the deprecated calls might break your app. So it's usually a good idea to rewrite the parts producing warnings against the new interfaces hinted at in those warnings, if the code is yours. If the warnings come from some third-party library you use, you might want to wait and see whether the library author updates it in the next release. Regarding your particular warnings: unless you decide to upgrade to Django 1.10, your code should keep working.
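If the warnings come from a third-party library you cannot change yet, a hedged sketch for quieting them (assumes Django 1.9, where this warning class lives in django.utils.deprecation), e.g. in settings.py:

import warnings
from django.utils.deprecation import RemovedInDjango110Warning

# Silence deprecation noise from dependencies until they catch up with Django 1.10.
warnings.filterwarnings("ignore", category=RemovedInDjango110Warning)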
1
4
0
I am a new user of the Django Framework. I am currently building a REST API with the django_rest_framework. When starting my server I am getting deprecation warnings that I have no idea how to fix. RemovedInDjango110Warning: 'get_all_related_objects is an unofficial API that has been deprecated. You may be able to replace it with 'get_fields()' for relation in opts.get_all_related_objects() The above is the first of these. Does anyone know how to fix this issue. All I have in my API at the minute is standard rest calls using the built in ModelViewSet and I have also overwritten the default authentication & user system with my own so I have no idea why I'm getting these warnings as I have been using Django 1.9 from the start. I also got this: RemovedInDjango110Warning: render() must be called with a dict, not a RequestContext From my initial research this is related to templates. I am not using any templates so I don't know why this is coming up. Can anyone help me to fix these issues?
How to fix a Deprecation Warning in Django 1.9
1.2
0
0
2,418
35,002,800
2016-01-25T21:32:00.000
1
0
1
0
python,ipython,ipython-notebook,jupyter-notebook
43,137,436
1
false
0
0
I feel your frustration! This will probably depend on where you plotter functionality is within your ipython notebook, but one solution (especially if it is in the last cell) is to create many blank cells below your plotter cell (say 10 to 20) then use Ctrl + Enter to run the plotter cell. This may need some playing around with but I have gotten this to work for me! (I think the idea of Ctrl + Enter is to keep the cell where it is)
1
9
0
In IPython, if I run a cell, the browser scrolls to put the cell near the bottom of the page. If this happens when I plot a chart, then I can't see the chart and have to scroll down manually. How can I stop this happening?
IPython Auto Scroll?
0.197375
0
0
1,284
35,003,509
2016-01-25T22:19:00.000
4
0
1
0
python-3.x
35,003,561
1
false
0
0
False is considered to be 0, and True is considered to be 1, so it should be: (2 + (3 == 4) + 5) == 7. Since False is 0, the 3 == 4 part evaluates to 0. Because of this, the expression is the equivalent of (2 + 0 + 5) == 7, which evaluates to True.
1
0
0
I'm trying to make this statement to evaluate to be true using parentheses. I have tried multiple combinations but it always evaluates out to be false 2 + 3 == 4 + 5 == 7
How to make this statement true using python
0.664037
0
0
79
35,004,466
2016-01-25T23:32:00.000
0
1
0
1
python,centos
35,004,508
2
false
0
0
Do you have pip for python3, too? Try pip3 rather than pip. I assume your regular pip is just installing the modules for Python 2.x.
1
3
0
I'm running CentOS 7 and it comes with Python 2. I installed Python 3; however, when I install modules with pip, Python 3 doesn't use them. I can run Python 3 by typing python3 at the CLI. python (2.x) is located in /usr/bin/python; python3 is located in /usr/local/bin/python3. I tried creating a link to python3 in /usr/bin/ as "python", but as expected, it didn't resolve anything. I renamed the current python to python2.bak. It actually broke some command line functionality (tab to complete), and I had to undo those changes to resolve it. Suggestions welcome. Thanks.
How to properly install python3 on Centos 7
0
0
0
2,685
35,004,619
2016-01-25T23:46:00.000
2
0
0
0
python,deep-learning,tensorflow
35,004,791
1
true
0
0
The amount of pre-fetching depends on your queue capacity. If you use string_input_producer for your filenames and batch for batching, you will have 2 queues - a filename queue, and a prefetching queue created by batch. The queue created by batch has a default capacity of 32, controlled by the batch(..., capacity=) argument, so it can prefetch up to 32 images. If you follow the outline in the official TensorFlow how-tos, processing the examples (everything after batch) happens in the main Python thread, whereas filling up the queue happens in threads created/started by batch/start_queue_runners, so prefetching new data and running already-prefetched data through the network occur concurrently, blocking when the queue gets full or empty.
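A minimal sketch of that input pipeline with the TF 1.x-era queue API referenced above (feature names, image shape and file name are assumptions about the TFRecords file):

import tensorflow as tf

filename_queue = tf.train.string_input_producer(["train.tfrecords"], num_epochs=None)
reader = tf.TFRecordReader()
_, serialized = reader.read(filename_queue)

features = tf.parse_single_example(serialized, features={
    "image": tf.FixedLenFeature([], tf.string),   # assumed feature names/types
    "label": tf.FixedLenFeature([], tf.int64),
})
image = tf.decode_raw(features["image"], tf.uint8)
image = tf.reshape(image, [224, 224, 3])          # assumed image shape

# capacity controls how many examples the prefetch queue holds ahead of the model.
images, labels = tf.train.batch([image, features["label"]],
                                batch_size=32, capacity=256, num_threads=4)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    batch = sess.run(images)                      # pulls one prefetched batch
    coord.request_stop()
    coord.join(threads)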
1
3
1
I am not quite sure about how file-queue works. I am trying to use a large dataset like imagenet as input. So preloading data is not the case, so I am wondering how to use the file-queue. According to the tutorial, we can convert data to TFRecords file as input. Now we have a single big TFRecords file. So when we specify a FIFO queue for the reader, does it mean the program would fetch a batch of data each time and feed the graph instead of loading the whole file of data?
reading a large dataset in tensorflow
1.2
0
0
2,670
35,009,726
2016-01-26T08:36:00.000
2
0
0
0
python,tcp,twisted,ibm-watson
35,411,963
1
false
1
0
That's our fault. We experienced an issue with WebSockets connections being dropped when the service was under heavy load.
1
1
0
I have been using IBM watson speech to text over websockets and since recently there are connection drops in the middle of process or handshake issues. This is the error log and it can't process audio files after 1-2 minutes of handshake: _connectionLost: [Failure instance: Traceback (failure with no frames): : Connection was closed cleanly. ('WebSocket connection closed: connection was closed uncleanly (peer dropped the TCP connection without previous WebSocket closing handshake)', 'code: ', 1006, 'clean: ', False) Can somebody help me understand what is exactly going wrong. I am currently running the process through a virtual machine but the problem persists even with local machine implementation. Is there a problem with Watson server?
Connection drop with IBM Watson Server
0.379949
0
0
223
35,010,499
2016-01-26T09:23:00.000
0
0
1
0
python,vim,python-mode
47,372,217
2
false
0
0
Use :!python3 %, instead of using :PymodeRun or the default keybinding <leader>r. That way you run the code in the shell and can break out of it and use the debugger as expected.
1
4
0
It is my first time editing Python code in vim equipped with the python-mode plugin. After setting breakpoints, I use the "\r" command to run it. Then it keeps still after printing '[pymode]code running...'. I have tried some things but still cannot quit the debugger. It just has no response no matter what I do.
How to stop debugging python in vim
0
0
0
2,197
35,010,894
2016-01-26T09:46:00.000
0
0
1
0
python,gps,pyephem
35,047,679
2
false
0
0
PyEphem, at least, does not have any knowledge of the RINEX format — it can only understand the TLE format for satellite ephemeris files.
1
2
0
I'm looking for a way to read GNSS RINEX files with Python. Most importantly I would just need to get orbital information read from a RINEX navigation file elements so that I could use PyEphem for follow-up processing. Can anybody help?
How to read RINEX files with Python?
0
0
0
3,203
35,017,039
2016-01-26T15:07:00.000
2
0
0
0
python,stripe-payments
35,018,652
1
true
1
0
That would trigger a declined charge, which is a card_error, and it can be simulated with this test card number: 4000000000000002 (the charge will be declined with a card_declined code).
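A minimal sketch of provoking and catching that decline with the Stripe Python library (the test key is a placeholder, and the source token is whatever test token/card you use to trigger a decline, e.g. one created from 4000000000000002):

import stripe

stripe.api_key = "sk_test_..."            # placeholder test key

try:
    stripe.Charge.create(
        amount=1000,                      # amount in cents
        currency="usd",
        source="tok_from_declined_test_card",   # placeholder token
    )
except stripe.error.CardError as exc:
    print("card declined:", exc)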
1
1
0
If I create a Charge object via the Stripe API and the card is valid, but the charge is declined, what error does this cause? It doesn't look to be possible to simulate this error in the test sandbox and I'd like to be able to catch it (and mock it in tests), but the documentation isn't clear on this point.
Catching API response for insufficient funds from Stripe
1.2
0
1
1,120
35,020,609
2016-01-26T17:57:00.000
2
0
1
0
python
35,020,764
3
true
0
0
A quick and dirty approach might be len(str(NUMBER).strip('0')), which will trim off any leading and trailing zeros and count the remaining characters. To discount the decimal point you'd need len(str(NUMBER).replace('.','').strip('0')). However, bear in mind that in many cases converting a Python float to a string can give you some odd behaviour, due to the way floating point numbers are handled.
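A small sketch applying that idea to the two numbers from the question (the float-to-string caveat above still applies):

def significant_digits(x):
    # quick and dirty: drop the decimal point, strip zeros at both ends, count the rest
    return len(str(float(x)).replace(".", "").strip("0"))

a, b = 2.37e+07, 2.38279e+07
print(significant_digits(a))              # 3
print(significant_digits(b))              # 6
print(max(a, b, key=significant_digits))  # 23827900.0, the value with more digits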
1
4
0
For a coding exercise I'm working on, I'm trying to compare two numbers and choose the one that has the larger number of significant digits. For example: compare 2.37e+07 and 2.38279e+07, select 2.38279e+07 because it has more significant digits. I don't know how to implement this in Python. I considered counting the length of each number using len(str(NUMBER)), but this method returns "10" for both of the numbers above because it doesn't differentiate between zero and non-zero digits. How can I compare the number of significant digits in Python?
Compare the number of significant digits in two numbers
1.2
0
0
1,713
35,023,378
2016-01-26T20:31:00.000
0
0
1
1
python,python-2.7,module,lua,centos
35,494,125
1
false
0
0
I am not exactly sure what you mean by "How to create a module for kmos", and you didn't mention which terminal you are using. However, it will definitely be helpful to understand the mechanism behind finding executables and Python imports. If you want to execute the kmos command-line interface (e.g. kmos export ...) you need to make sure that the directory where the kmos shell client lives is in your $PATH variable. When you installed kmos (pip install --user --upgrade kmos) it should have told you where it went; that directory needs to show up when you run echo $PATH. Most likely it is something like ~/.local/bin. If it doesn't show up, you may want to put export PATH=${PATH}:~/.local/bin into your ~/.bashrc, or the corresponding syntax for your shell's configuration file (check echo $SHELL). The other location is where the Python module gets copied to. The pip installation should print this out as well; most likely something like ~/.local/lib/pythonXY/site-packages. When you run python -c "import sys; print(sys.path)" that directory should be included. You can automatically add it via your shell configuration file, e.g. export PYTHONPATH=${PYTHONPATH}:~/.local/lib/pythonXY/site-packages. If you can already import kmos from Python, then python -c "import kmos; print(kmos.__file__)" will tell you where it was found.
1
0
0
I have recently installed kmos, a python package using pip in my user account on my institute cluster. How to create a module for kmos and set the path to the directory such that python accesses the library. Currently, I am giving the path to the kmos binary while running the program. Linux distro: Cent OS Module support: Lua-based Lmod environmental modules system
create module for python package
0
0
0
108
35,024,007
2016-01-26T21:10:00.000
24
0
0
0
python,django,django-models
35,024,190
1
true
1
0
Each tuple results in a discrete UNIQUE clause being added to the CREATE TABLE query. As such, each tuple is independent and an insert will fail if any data integrity constraint is violated.
1
23
0
When I am defining a model and using unique_together in the Meta, I can define more than one tuple. Are these going to be ORed or ANDed? That is lets say I have a model where class MyModel(models.Model): druggie = ForeignKey('druggie', null=True) drunk = ForeignKey('drunk', null=True) quarts = IntegerField(null=True) ounces = IntegerField(null=True) class Meta: unique_together = (('drunk', 'quarts'), ('druggie', 'ounces')) either both druggie and ounces are unique or both drunk and quarts are unique, but not both.
Multiple tuples in unique_together
1.2
1
0
5,860
35,024,118
2016-01-26T21:16:00.000
-2
0
0
0
python,image,python-3.x,tkinter
54,412,011
3
false
0
1
Luke, this is too late, however it may help others. Your image has to be in the same subdirectory as the .py script. You can also type './imageFileName.png'.
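A minimal sketch of loading an image without PIL, using tkinter's built-in PhotoImage (the file name is a placeholder; plain PhotoImage handles GIF/PGM/PPM, and PNG as well on Tk 8.6+):

import tkinter as tk

root = tk.Tk()
img = tk.PhotoImage(file="./imageFileName.gif")
label = tk.Label(root, image=img)
label.image = img          # keep a reference so the image isn't garbage collected
label.pack()
root.mainloop()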
1
6
0
I've spent some time looking online and so far haven't found anything that answers this without using PIL, which I can't get to work. Is there another way to do this simply?
How to load an image into a python 3.4 tkinter window?
-0.132549
0
0
33,691
35,024,898
2016-01-26T22:02:00.000
0
1
0
0
python,colors,scale,blender,particles
35,025,985
2
false
0
0
Should I say beforehand that t min (32.668837340451788) is green and t max (129.20671163699313) is magenta, so that the script knows whether a value is "cold" or "warm"? The first particle, for example, will be almost magenta. Here are the first 4 particles (x, y, z, temperature): 5.28964162682e+14 5.62257206698e+13 -2.9525300544e+14 128.332184907; 5.23680422449e+14 9.33982452199e+13 -2.9525300544e+14 128.336966138; 5.15787732694e+14 1.3010546441e+14 -2.9525300544e+14 128.346633243; 5.05325414399e+14 1.66164504722e+14 -2.9525300544e+14 128.355079501
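A minimal sketch of the green-to-magenta mapping itself (plain linear interpolation in RGB; the Blender-specific step of assigning the colour to a particle or material is left out):

def temperature_to_rgb(t, t_min=32.668837340451788, t_max=129.20671163699313):
    # 0.0 -> green (0, 1, 0), 1.0 -> magenta (1, 0, 1), linear in between
    f = (t - t_min) / (t_max - t_min)
    f = min(1.0, max(0.0, f))
    return (f, 1.0 - f, f)

print(temperature_to_rgb(128.332184907))   # close to magenta, as expected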
1
1
0
I make an astronomical visualization, which illustrates a birth of a planet from a cloud of particles (which i have over 130.000). Each particle, besides xyz-coordinate, has also a temperature value. Is it possible to code it like Temperature minimum is green, Temperature maximum is magenta. Dear script, color my particles in scale between green and magenta? I am working with Python (Blender). Thank you in advance for any help!
color scale to illustrate the temperature, Python script
0
0
0
912
35,027,113
2016-01-27T01:14:00.000
2
0
1
0
python,dictionary,memory,pickle,ram
35,027,180
2
false
0
0
At some moment you will run out of RAM and your code will crash. Before you reach that stage, your system is likely to slow down as it starts swapping. For such a large amount of data you should use something that lets you store the data without keeping it all in memory, one example being a database (sqlite is easy to start with). Another warning: if your source data are in a file of a certain size, expect that Python will need more memory than that to work with it, as it has to create structures (like dictionaries) around it.
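A minimal sketch of the sqlite suggestion as an on-disk key-value store (table and file names are made up), which keeps memory flat no matter how many elements accumulate:

import sqlite3

db = sqlite3.connect("bigdict.sqlite")
db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v REAL)")

def put(key, value):
    db.execute("INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)", (key, value))

def get(key):
    row = db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
    return row[0] if row else None

put("item:123", 0.75)
print(get("item:123"))    # 0.75
db.commit()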
2
2
0
I'm writing Python code right now which gradually constructs a large dictionary (600 million elements) while constantly reading from the dict. I regularly write that object to a file using cPickle, and pick up where I left off when interrupted by reading from the file. By the time it's done, the dictionary will take up approximately 40 GB. My computer has 16 GB of RAM. What behavior should I expect? Memory error? Frozen computer? Extremely slow access to the dictionary? Also, I'm attempting to fix my alternate implementation (a NumPy array instead of a dictionary), expected to take only 5 GB but also about three times longer to run. Am I correct that constant memory access while staying within 16 GB will make the NumPy version actually run faster?
Object Larger Than RAM
0.197375
0
0
486
35,027,113
2016-01-27T01:14:00.000
0
0
1
0
python,dictionary,memory,pickle,ram
35,027,484
2
false
0
0
Can you run out of RAM on a modern operating system? If you have 16 GB, less than that is available because the OS is using some of it. However, virtual memory makes the usable amount much greater, though performance may suffer.
2
2
0
I'm writing Python code right now which gradually constructs a large dictionary (600 million elements) while constantly reading from the dict. I regularly write that object to a file using cPickle, and pick up where I left off when interrupted by reading from the file. By the time it's done, the dictionary will take up approximately 40 GB. My computer has 16 GB of RAM. What behavior should I expect? Memory error? Frozen computer? Extremely slow access to the dictionary? Also, I'm attempting to fix my alternate implementation (a NumPy array instead of a dictionary), expected to take only 5 GB but also about three times longer to run. Am I correct that constant memory access while staying within 16 GB will make the NumPy version actually run faster?
Object Larger Than RAM
0
0
0
486
35,027,646
2016-01-27T02:13:00.000
2
1
0
1
python-2.7,ubuntu,amazon-ec2
35,028,687
1
false
0
0
You could handle things a few ways, but I would simply mount the instance's filesystem locally, and keep a Putty (Windows) terminal open to execute commands remotely. Trying to install a GUI on the EC2 instance is probably more trouble than it's worth, and a waste of resources. In most cases, I build everything inside a local (small) Ubuntu Server VM while I'm working on it, until it's ready for some sort of deployment before moving to an EC2/DO Droplet/What-have-you. The principles are basically the same - having to work with a machine that you don't have immediate full command of - and it's cheaper, to boot.
1
2
0
I have a server instance (Ubuntu) running on AWS EC2. What's the best way to use GUI-based Python editor (e.g., Spyder, Sublimetext, PyCharm) with that server instance?
Using Python GUI Editor on Ubuntu AWS
0.379949
0
0
590
35,028,910
2016-01-27T04:29:00.000
0
0
1
0
python-2.7,beautifulsoup,lxml
35,028,988
2
false
1
0
That's a matter of personal preference, however in most cases the benefits of installing libraries in a virtual environment far outweigh the costs. Setting up virtualenv (and perhaps virtualenvwrapper), creating an environment for your project, and activating it will take 2-10 minutes (depending on your familiarity with the system) before you can start work on your project itself, but it may save you a lot of hassle further down the line. I would recommend that you do so.
2
0
0
I am doing a data scraping project in Python. For that I need to use Beautiful Soup and lxml. Should I install them globally or in a virtual environment?
Do i need Virtual environment for lxml and beautiful soup in linux?
0
0
1
104
35,028,910
2016-01-27T04:29:00.000
3
0
1
0
python-2.7,beautifulsoup,lxml
35,029,148
2
true
1
0
Well, using or not using a virtual environment is up to you, but it is always best practice to use virtualenv and virtualenvwrapper, so that if something unusual happens with your project and its dependencies it won't hamper the Python residing at the system level. It might happen that in the future you need to work on a different version of lxml or beautifulsoup; if you do not use a virtual environment then you need to upgrade or downgrade the libraries system-wide, and now your older project will not run because you have upgraded or downgraded everything in the system-level Python. Therefore it is wise to start using best practices as early as possible, to save time and effort.
2
0
0
I am doing a data scraping project in Python. For that I need to use Beautiful Soup and lxml. Should I install them globally or in a virtual environment?
Do i need Virtual environment for lxml and beautiful soup in linux?
1.2
0
1
104
35,029,353
2016-01-27T05:13:00.000
2
0
1
0
ipython-notebook,jupyter-notebook
35,031,657
1
true
0
0
Since you have ruled out all other possible causes, it is probably a firewall between you and the remote host blocking the port.
1
0
0
This is really weird. I have a notebook server running remotely and can connect to it successfully until yesterday. I still can connect to the notebook server using localhost or [ip] on the remote server. But when trying from a remote PC it is always timeout. I netstat -antop | grep :port and saw jupyter listening on both localhost and any ip. Also I tried to tcpdump what the remote server got on port and can see web request coming in from my remote PC and retried for two times. But the ipython notebook doesn't get any requests (telling from the debug string in --debug mode). Any clue why this happened?
Cannot connect to remote ipython notebook server
1.2
0
1
1,081
35,030,346
2016-01-27T06:36:00.000
1
0
1
0
python,pip,anaconda,easy-install
35,032,666
1
false
0
0
Open a command prompt, go to Python34/Scripts (cd C:\Python34\Scripts), and install your packages using pip: pip install yourPackageName
1
0
0
I have Anaconda3 and Python 3.4. But when I run easy_install, packages get installed in Anaconda3\Lib\site-packages, and I want them to go to C:\Python34\Lib\site-packages. How do I do that?
Installing packages in Python3.4 by easy_install
0.197375
0
0
921
35,032,917
2016-01-27T09:05:00.000
1
0
0
0
android,python,qpython
35,033,063
1
true
0
1
In QPython in Programs I see /storages/emulated/0/com.hipipal/qpyplus/scripts/
1
0
0
I wanted to write python code in my local computer and transfer it to Android device to execute the codes in mobile or Tablet. Is it possible to transfer? If yes, in which location do I need to transfer.
Qpython in Android
1.2
0
0
330
35,032,994
2016-01-27T09:09:00.000
1
1
0
1
python,windows,batch-file,sftp,fabric
35,033,131
2
false
0
0
The built-in FTP command doesn't have a facility for security. You can use WinSCP, a free, open-source SFTP and FTP client for Windows.
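Since the question also mentions doing this from Python, a minimal sketch with paramiko (host, credentials and paths are placeholders) covering upload, rename and delete:

import paramiko

transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="user", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)

sftp.put("local.txt", "/remote/upload.txt")                 # upload
sftp.rename("/remote/upload.txt", "/remote/renamed.txt")    # rename
sftp.remove("/remote/renamed.txt")                          # delete

sftp.close()
transport.close()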
1
1
0
I'm wondering if there's any way to connect SFTP server with Windows' Command Prompt, by only executing batch file. Do I need to install additional software? which software? The purpose is to do pretty basic file operations (upload, delete, rename) on remote SFTP server by executing a batch file. And by the way, I have heard about python's Fabric library, and I wonder whether it's better solution than the batch script for the mentioned basic file operations? Thanks a lot!
Connecting to SFTP server via Windows' Command Prompt
0.099668
0
0
15,990
35,033,997
2016-01-27T09:55:00.000
0
0
0
0
python,python-2.7,psycopg2,psql
35,034,708
2
false
0
0
Change your SQL query to cast the date column and fetch it as a string, e.g. select date_column_name::text from table_name (or use to_char(date_column_name, 'YYYY-MM-DD')).
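A client-side alternative, offered as an assumption rather than something in the answer above: register a typecaster so the driver hands DATE values back as raw strings instead of parsing them (1082 is PostgreSQL's OID for the date type).

import psycopg2
import psycopg2.extensions

# Leave DATE columns as plain strings so a malformed value like '32014-03-03'
# no longer blows up inside the driver's date parser.
DATE_AS_STRING = psycopg2.extensions.new_type(
    (1082,), "DATE_AS_STRING", lambda value, cursor: value)
psycopg2.extensions.register_type(DATE_AS_STRING)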
1
4
0
I have a python code which queries psql and returns a batch of results using cursor.fetchall(). It throws an exception and fails the process if a casting fails, due to bad data in the DB. I get this exception: File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 377, in fetchall return [self._build_row() for _ in xrange(size)] File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 891, in _build_row self._casts[i], val, length, self) File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/typecasts.py", line 71, in typecast return caster.cast(value, cursor, length) File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/typecasts.py", line 39, in cast return self.caster(value, length, cursor) File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/typecasts.py", line 311, in parse_date raise DataError("bad datetime: '%s'" % bytes_to_ascii(value)) DataError: bad datetime: '32014-03-03' Is there a way to tell the caster to ignore this error and parse this as a string instead of failing the entire batch?
psql cast parse error during cursor.fetchall()
0
1
0
280
35,038,543
2016-01-27T13:21:00.000
0
0
0
0
python,mysql,frontend,crud,web2py
35,039,883
1
false
1
0
This double/redundant way of talking to my DB strikes me as odd and web2py does not support python3. Any abstraction you want to use to communicate with your database (whether it be the web2py DAL, the Django ORM, SQLAlchemy, etc.) will have to have some knowledge of the database schema in order to construct queries. Even if you programmatically generated all the SQL statements yourself without use of an ORM/DAL, your code would still have to have some knowledge of the database structure (i.e., somewhere you have to specify names of tables and fields, etc.). For existing databases, we aim to automate this process via introspection of the database schema, which is the purpose of the extract_mysql_models.py script. If that script isn't working, you should report an issue on Github and/or open a thread on the web2py Google Group. Also, note that when creating a new database, web2py helps you avoid redundant specification of the schema by handling migrations (including table creation) for you -- so you specify the schema only in web2py, and the DAL will automatically create the tables in the database (of course, this is optional).
1
0
0
I was asked to port a Access database to MySQL and provide a simple web frontend for the users. The DB consists of 8-10 tables and stores data about clients consulting (client, consultant,topic, hours, ...). I need to provide a webinterface for our consultants to use, where they insert all this information during a session into a predefined mask/form. My initial thought was to port the Access-DB to MySQL, which I have done and then use the web2py framework to build a user interface with login, inserting data, browse/scroll through the cases and pulling reports. web2py with usermanagment and a few samples views & controllers and MySQL-DB is running. I added the DB to the DAL in web2py, but now I noticed, that with web2py it is mandatory to define every table again in web2py for it being able to communicate with the SQL-Server. While struggeling to succesfully run the extract_mysql_models.py script to export the structure of the already existing SQL DB for use in web2py concerns about web2py are accumulating. This double/redundant way of talking to my DB strikes me as odd and web2py does not support python3. Is web2py the correct way to fulfill my task or is there better way? Thank you very much for listening/helping out.
Using web2py for a user frontend crud
0
1
0
670
35,038,706
2016-01-27T13:29:00.000
0
1
1
0
shell,ipython,jmodelica
35,055,447
1
true
0
0
The problem is that Python needs to know the path to where JModelica stores the Python package "pymodelica". If you use the IPython from the JModelica installation, it automatically sets the correct paths. The same goes for the regular Python shell: if you use the link from the JModelica installation, it should work, while if you use the Python shell directly from your Python installation, it will not.
1
1
0
So I just installed JModelica and with this Python 2.7 is included. When I use the IPython-console and try to import the following (it works): from pymodelica import compile_fmu However when I write this in the Python Shell program it says: Traceback (most recent call last): File "", line 1, in from pymodelica import compile_fmu ImportError: No module named pymodelica**** What is the problem here? I want to use the Python Shell since you can write scripts there. Regards, Jasir
JModelica: Python Shell and IPython trouble importing package
1.2
0
0
462
35,042,006
2016-01-27T15:52:00.000
1
1
0
1
python,sockets,udp,raspberry-pi,ipv6
35,063,138
1
true
0
0
You can use host = 'fe80::ba27:ebff:fed4:5691', assuming you only have one link. Link-local addresses (link-local scope) are designed to be used for addressing on a single link for purposes such as automatic address configuration, neighbor discovery, or when no routers are present. Routers must not forward any packets with link-local source or destination addresses to other links. So if you are sending data from a server to a Raspberry Pi (one link), you can use the link-local scope for your IPv6 address. host = 'ff02::1:ffd4:5691' is a link-local multicast address; unless you have a reason to send multicast, there is no need for it.
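A minimal sketch of an IPv6 UDP receiver bound to that link-local address (on Linux a link-local address usually needs a scope/interface suffix such as %eth0; port and interface are placeholders):

import socket

HOST = "fe80::ba27:ebff:fed4:5691%eth0"   # link-local address plus interface scope
PORT = 5005

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
# getaddrinfo resolves the scope id and returns a ready-to-use sockaddr tuple
addrinfo = socket.getaddrinfo(HOST, PORT, socket.AF_INET6, socket.SOCK_DGRAM)
sock.bind(addrinfo[0][4])

data, sender = sock.recvfrom(1024)
print(sender, data)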
1
0
0
I am interested in socket programming. I would like to send and receive IPv6 UDP messages in a server socket program on a Raspberry Pi (connected with an ethernet cable and opened in PuTTY). After surfing a couple of sites I am confused about the IPv6 UDP host address. Which type of host address should I use to send and receive IPv6 UDP messages? Is it the link-local address, for example host = 'fe80::ba27:ebff:fed4:5691' (link-local address to Tx and Rx from the Raspberry Pi), or host = 'ff02::1:ffd4:5691'? Thank you so much. Regards, Mahesh
Ipv6 UDP host address for bind
1.2
0
1
422
35,042,198
2016-01-27T16:02:00.000
8
0
0
1
python,shell,pdb
51,962,231
3
false
0
0
Simply use the "os" module and you will be able to execute any OS command from within pdb. Start with: (Pdb) import os and then: (Pdb) os.system("ls") or even (Pdb) os.system("sh"); the latter simply spawns a subshell, and exiting from it returns to the debugger. Note: the "cd" command will have no effect when used as os.system("cd dir"), since it will not change the cwd of the Python process. Use os.chdir("/path/to/targetdir") for that.
1
10
0
I want to run cd and ls in python debugger. I try to use !ls but I get *** NameError: name 'ls' is not defined
Run shell command in pdb mode
1
0
0
5,045
35,042,342
2016-01-27T16:08:00.000
0
0
1
0
java,python,methods,return
35,042,684
3
true
0
0
A Java function can only execute one return statement per invocation. It doesn't matter how many return statements you write in a function; the first one reached on the traversed path through the function will be executed.
3
1
0
Considering that Python could execute a return statement in a function at most one time, can Java execute the statement more than once?
Can the return statement execute more than once in Java as opposed to Python?
1.2
0
0
143
35,042,342
2016-01-27T16:08:00.000
0
0
1
0
java,python,methods,return
35,042,401
3
false
0
0
Java methods may only return one value per invocation.
3
1
0
Considering that Python could execute a return statement in a function at most one time, can Java execute the statement more than once?
Can the return statement execute more than once in Java as opposed to Python?
0
0
0
143
35,042,342
2016-01-27T16:08:00.000
0
0
1
0
java,python,methods,return
35,042,459
3
false
0
0
When you call a function, a new frame is added to the call stack which contains that function's variables, parameters, etc. When you return from a function in Java (and any other language I know of), the stack frame for that function is removed from the call stack. The function call no longer exists (in that particular call instance, as pointed out above). So, calling return more than once would not make any sense (and it isn't possible). You can have multiple return statements, but only one will ever be executed on any given function call, depending on the path taken through the function.
3
1
0
Considering that Python could execute a return statement in a function at most one time, can Java execute the statement more than once?
Can the return statement execute more than once in Java as opposed to Python?
0
0
0
143
35,044,298
2016-01-27T17:36:00.000
-2
0
0
0
python,api,pygame,pyglet
56,559,014
3
false
0
1
I've tried pyglet and pygame and rate pygame as the best.
2
2
0
There are many questions dealing with pyglet and pygame, but what I want to know is the difference between these two, in simple terms. Not in technical terms, not experimental features and all that. They are both libraries, both APIs, both for creation of games and multimedia apps, right? Just in plain English, for someone like me, a relative beginner, who has finished a course about Python on Codecademy and read the Head First Python book.
Difference between pyglet and pygame, in simple words?
-0.132549
0
0
3,694
35,044,298
2016-01-27T17:36:00.000
1
0
0
0
python,api,pygame,pyglet
35,044,912
3
false
0
1
PyGame is a low-level library. You have to do everything on your own, starting with the mainloop and all the functions called by the mainloop, and you can do it in different ways. (And you can learn something about mainloops in Pyglet, Tkinter, PyQt, wxPython and other GUIs, not only in Python.) Pyglet is a framework: it already has a mainloop, you can't change it and you can't see how it works. You override the functions which the mainloop calls, and you have to follow its rules.
2
2
0
There are many questions dealing with pyglet and pygame, but what I want to know is the difference between these two, in simple terms. Not in technical terms, not experimental features and all that. They are both libraries, both APIs, both for creation of games and multimedia apps, right? Just in plain English, for someone like me, a relative beginner, who has finished a course about Python on Codecademy and read the Head First Python book.
Difference between pyglet and pygame, in simple words?
0.066568
0
0
3,694
35,044,665
2016-01-27T17:55:00.000
0
0
0
0
python,django,performance,security
35,049,943
1
false
1
0
I think the best solution for this is to download the first image with BeautifulSoup (as you are currently doing) and then upload it to a CDN (like Amazon AWS S3, Google Cloud Storage, etc) and save only the link to that image in your model. So the next time you view that link, you will just serve the image from your CDN. This solution is very secure and can scale up!
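A rough sketch of that flow, assuming Amazon S3 via boto3 (the function name, bucket name and key are placeholders, and error handling is omitted):

    import requests
    import boto3
    from io import BytesIO

    def mirror_thumbnail(image_url, bucket_name, key):
        # download the image that BeautifulSoup found on the linked page
        resp = requests.get(image_url, timeout=10)
        resp.raise_for_status()
        # push it to the CDN bucket and keep only the resulting URL in the model
        s3 = boto3.client("s3")
        s3.upload_fileobj(BytesIO(resp.content), bucket_name, key)
        return "https://%s.s3.amazonaws.com/%s" % (bucket_name, key)

You would then save the returned URL on your Django model instead of the image itself.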
1
0
0
I am building a webapp in Django that allows users to post links. When they post a link, I want to display a thumbnail image for the link. Right now, I simply download the first image on the linked page (using BeautifulSoup), store it in my Django model, and then serve it with the model. I am wondering whether this is the best solution, from both a scale and security perspective? Would a better solution be to simply store a link to the original image on the original website, and then have the user's browser simply request that image from the linked website? Would the second solution be faster and safer than downloading all the images onto my server? I am also worried about whether downloading and serving thousands of images will scale, as well as how to protect the app from images on malicious sites.
Download and serve image or store link to image? Scale + Security
0
0
0
53
35,045,038
2016-01-27T18:16:00.000
3
1
1
0
python,virtualenv,pytest
39,231,653
4
false
0
0
In my case I was obliged to leave the venv (deactivate), remove pytest (pip uninstall pytest), enter the venv again (source /my/path/to/venv), and then reinstall pytest (pip install pytest). I don't know exactly why pip refused to install pytest in the venv (it says it is already present). I hope this helps
2
75
0
I installed pytest into a virtual environment (using virtualenv) and am running it from that virtual environment, but it is not using the packages that I installed in that virtual environment. Instead, it is using the main system packages. (Using python -m unittest discover, I can actually run my tests with the right python and packages, but I want to use the py.test framework.) Is it possible that py.test is actually not running the pytest inside the virtual environment and I have to specify which pytest to run? How do I get py.test to use only the python and packages that are in my virtualenv? Also, since I have several versions of Python on my system, how do I tell which Python Pytest is using? Will it automatically use the Python within my virtual environment, or do I have to specify somehow?
How do I use pytest with virtualenv?
0.148885
0
0
32,064
35,045,038
2016-01-27T18:16:00.000
95
1
1
0
python,virtualenv,pytest
54,597,424
4
false
0
0
There is a bit of a dance to get this to work: activate your venv: source venv/bin/activate install pytest: pip install pytest re-activate your venv: deactivate && source venv/bin/activate The reason is that the path to pytest is only set by sourcing the activate file after pytest is actually installed in the venv. You can't set the path to something before it is installed. Re-activating is required for any console entry points installed within your virtual environment.
2
75
0
I installed pytest into a virtual environment (using virtualenv) and am running it from that virtual environment, but it is not using the packages that I installed in that virtual environment. Instead, it is using the main system packages. (Using python -m unittest discover, I can actually run my tests with the right python and packages, but I want to use the py.test framework.) Is it possible that py.test is actually not running the pytest inside the virtual environment and I have to specify which pytest to run? How do I get py.test to use only the python and packages that are in my virtualenv? Also, since I have several versions of Python on my system, how do I tell which Python Pytest is using? Will it automatically use the Python within my virtual environment, or do I have to specify somehow?
How do I use pytest with virtualenv?
1
0
0
32,064
35,045,604
2016-01-27T18:48:00.000
-1
0
0
1
python,azure,azureservicebus
35,061,648
1
false
0
0
Per my experience, I think it's a program flow problem in your embedded application. You can try to add a testing function that pings the service bus host every few seconds until the network is fine, returning a boolean value so you can start a new connection after the device switches the network adaptor. Meanwhile, count the pings until a specified value is reached and then call a shell command like service network restart or ifconfig <eth-id> down && ifconfig <eth-id> up to restart the related network adaptor. It's just an idea. Could you supply some code for providing more useful help?
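A rough sketch of that probing idea (the host, port, retry counts and the restart command are assumptions you would adapt to your device):

    import socket
    import subprocess
    import time

    def wait_for_servicebus(host, port=443, max_tries=30, delay=5):
        # return True once the host is reachable again; restart networking
        # after every 10 failed probes
        for attempt in range(1, max_tries + 1):
            try:
                socket.create_connection((host, port), timeout=5).close()
                return True
            except socket.error:
                time.sleep(delay)
            if attempt % 10 == 0:
                subprocess.call(["service", "network", "restart"])
        return False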
1
1
0
In my application, I send a message to a topic based on a local event. This works quite well until I run into a network issue. On the network side, my device is going through an access point that provides primary/secondary connection to the internet. The primary connection is through an ADSL line but if that fails, it switches over to an LTE network. When the switch-over occurs, the IP address of my device stays unchanged (as that is on the local network and assigned through DHCP). When this switch-over occurs, I find that there is an error with the send command. I get my local event and try to send a message to the service bus. The first send results in a 'ReadTimeout' but a subsequent send is fine. I then get another local event and try another send and the process repeats itself. If I reboot the device then everything works fine. Here is the stack-trace: File "/usr/sbin/srvc/sb.py", line 420, in ReadTimeout: HTTPSConnectionPool(host='****.servicebus.windows.net', port=443): Read timed out. (read timeout=65) Traceback (most recent call last): File "/usr/sbin/srvc/sb.py", line 420, in peek_lock=False, timeout=sb_timeout) File "/usr/local/lib/python2.7/dist-packages/azure/servicebus/servicebusservice.py", line 976, in receive_subscription_message timeout) File "/usr/local/lib/python2.7/dist-packages/azure/servicebus/servicebusservice.py", line 762, in read_delete_subscription_message response = self._perform_request(request) File "/usr/local/lib/python2.7/dist-packages/azure/servicebus/servicebusservice.py", line 1109, in _perform_request resp = self._filter(request) File "/usr/local/lib/python2.7/dist-packages/azure/servicebus/_http/httpclient.py", line 181, in perform_request self.send_request_body(connection, request.body) File "/usr/local/lib/python2.7/dist-packages/azure/servicebus/_http/httpclient.py", line 145, in send_request_body connection.send(None) File "/usr/local/lib/python2.7/dist-packages/azure/servicebus/_http/requestsclient.py", line 81, in send self.response = self.session.request(self.method, self.uri, data=request_body, headers=self.headers, timeout=self.timeout) File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 457, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 569, in send r = adapter.send(request, **kwargs) File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 422, in send raise ReadTimeout(e, request=request) ReadTimeout: HTTPSConnectionPool(host='****.servicebus.windows.net', port=443): Read timed out. (read timeout=65)
Azure Service Bus SDK for Python results in Read Timeout when sending a message to topic
-0.197375
0
0
423
35,047,691
2016-01-27T20:48:00.000
2
0
1
1
python,linux
35,047,953
4
false
0
0
Use update-alternatives --config python and choose python2.7 from the choices. If you need to remove it use update-alternatives --remove python /usr/bin/python2.7.
1
3
0
The version of Linux I am working on has python 2.6 by default, and we installed 2.7 on it in a separate folder. If I want to run a .py script, how do I tell it to use 2.7 instead of the default?
How to select which version of python I am running on Linux?
0.099668
0
0
7,476
35,048,891
2016-01-27T21:59:00.000
4
0
1
1
python,python-3.x,vim
35,048,982
3
false
0
0
It's not clear why you want to do this. To truly run an interactive program, you'll have to create a pseudo-tty and manage it from your python script - not for the faint of heart. If you just want to insert text into an existing file, you can do that directly from python, using the file commands. Or you could invoke a program like sed, the "stream editor", that is intended to do file editing in a scripted fashion. The sed command supports a lot of the ex command set (which is the same base command set that vi uses) so i, c, s, g, a, all work.
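For the simple case in the question (add two lines to a file), the direct Python version is only a few lines; a minimal sketch, where the file name and the lines to add are placeholders:

    # append two lines to the file without ever starting vim
    lines_to_add = ["first new line\n", "second new line\n"]
    with open("program", "a") as f:
        f.writelines(lines_to_add)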
1
0
0
I want to make a python script that: opens a file, executes the command i, then writes 2 lines of code, hits escape, then executes the command ZZ. I was thinking along the lines of os.system("vi program") then os.system("i") and os.system("code"), but that didn't work because you can only execute commands. Thank you!
python script to edit a file in vim
0.26052
0
0
1,493
35,048,996
2016-01-27T22:06:00.000
0
0
1
1
python,terminal
35,049,070
3
false
0
0
When you type "python", your path is searched to run this version. But, if you specify the absolute path of the other python, you run it the way you want it. Here, in my laptop, I have /home/user/python3_4 and /home/user/python2_7. If I type python, the 3.4 version is executed, because this directory is set in my path variable. When I want to test some scripts from the 2.7 version, I type in the command line: /home/user/python2_7/bin/python script.py. (Both directory were chosen by me. It's not the default for python, of course). I hope it can help you.
2
1
0
I have downloaded a python program from git. This program is python 3. On my laptop I have both python 2.7 and python 3.4. Python 2.7 is the default version. When I want to run this program in the terminal it gives some module errors because it used the wrong version. How can I force a name.py file to open in a (non-)default version of python? I have tried to search on google but without any result because of a lack of search tags. I also tried things like ./name.py python3 but with the same result (error).
Run python program from terminal
0
0
0
7,570
35,048,996
2016-01-27T22:06:00.000
0
0
1
1
python,terminal
35,964,107
3
true
0
0
The method from @Tom Dalton and @n1c9 works for me! python3 name.py
2
1
0
I have downloaded a python program from git. This program is python 3. On my laptop I have both python 2.7 and python 3.4. Python 2.7 is the default version. When I want to run this program in the terminal it gives some module errors because it used the wrong version. How can I force a name.py file to open in a (non-)default version of python? I have tried to search on google but without any result because of a lack of search tags. I also tried things like ./name.py python3 but with the same result (error).
Run python program from terminal
1.2
0
0
7,570
35,050,000
2016-01-27T23:16:00.000
1
1
0
0
python,cryptography,rsa,public-key,pycrypto
35,056,033
1
false
0
0
Just like normal signatures: first perform a cryptographic (one-way) hash over the message and blind & sign that instead of the message.
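A minimal sketch of that idea with the old PyCrypto library (the key, blinding factor and message below are stand-ins for whatever your protocol already uses; whether blind() accepts them unchanged depends on your PyCrypto version):

    import hashlib
    from Crypto.PublicKey import RSA

    key = RSA.generate(2048)          # stand-in for your existing key
    r = 65537                          # stand-in blinding factor from your protocol
    big_message = b"some very long payload " * 100000

    digest = hashlib.sha256(big_message).digest()  # fixed 32-byte value, always smaller than N
    blinded = key.blind(digest, r)
    # send `blinded` off for signing, then unblind the returned signature as before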
1
0
0
I tried to blind some big message using Python's RSA from Crypto.PublicKey. The problem is, even if I generate a big key, like 6400 bits, the key.blind() method still crashes with a "message too large" error. I know that my message can't be bigger than N in the key, because every computation is modulo N, but how can big things be blind signed then?
Python Crypto blinding big messages
0.197375
0
0
175
35,050,323
2016-01-27T23:41:00.000
0
0
0
0
python,telnet,raw-input
35,051,893
1
false
0
0
If I understand correctly then no... You will have to come up with a workaround
1
0
0
I want to populate a list of commands that are stored on a server when tab is pressed, and autocomplete can be used only with a local set of words as I read. Is there a way to send ('\t') to the telnet server when the user presses tab on raw_input, which would at least return a list of possible commands along with showing them to the user?
Send data via socket when user hits tab on raw_input in Python
0
0
1
81
35,050,753
2016-01-28T00:21:00.000
81
0
0
0
python,machine-learning,deep-learning
38,405,970
8
false
0
0
Since you have a pretty small dataset (~ 1000 samples), you would probably be safe using a batch size of 32, which is pretty standard. It won't make a huge difference for your problem unless you're training on hundreds of thousands or millions of observations. To answer your questions on Batch Size and Epochs: In general: Larger batch sizes result in faster progress in training, but don't always converge as fast. Smaller batch sizes train slower, but can converge faster. It's definitely problem dependent. In general, the models improve with more epochs of training, to a point. They'll start to plateau in accuracy as they converge. Try something like 50 and plot number of epochs (x axis) vs. accuracy (y axis). You'll see where it levels out. What is the type and/or shape of your data? Are these images, or just tabular data? This is an important detail.
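A hedged sketch of that experiment in Keras (the toy model, layer sizes and the randomly generated binary-classification data are placeholders; substitute your own arrays):

    import numpy as np
    import matplotlib.pyplot as plt
    from keras.models import Sequential
    from keras.layers import Dense

    # placeholder data shaped like the question: 970 train / 243 validation samples
    x_train, y_train = np.random.rand(970, 20), (np.random.rand(970) > 0.5).astype(int)
    x_val, y_val = np.random.rand(243, 20), (np.random.rand(243) > 0.5).astype(int)

    model = Sequential()
    model.add(Dense(64, activation="relu", input_dim=x_train.shape[1]))
    model.add(Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    history = model.fit(x_train, y_train,
                        batch_size=32,   # a common default for ~1000 samples
                        epochs=50,
                        validation_data=(x_val, y_val))

    # plot epochs vs. validation accuracy to see where it plateaus
    # (the history key is "val_acc" in older Keras releases, "val_accuracy" in newer ones)
    key = "val_acc" if "val_acc" in history.history else "val_accuracy"
    plt.plot(history.history[key])
    plt.xlabel("epoch")
    plt.ylabel("validation accuracy")
    plt.show()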
3
90
1
My training set has 970 samples and validation set has 243 samples. How big should batch size and number of epochs be when fitting a model to optimize the val_acc? Is there any sort of rule of thumb to use based on data input size?
How big should batch size and number of epochs be when fitting a model?
1
0
0
132,079
35,050,753
2016-01-28T00:21:00.000
11
0
0
0
python,machine-learning,deep-learning
38,457,655
8
false
0
0
I use Keras to perform non-linear regression on speech data. Each of my speech files gives me features that are 25000 rows in a text file, with each row containing 257 real valued numbers. I use a batch size of 100 and 50 epochs to train a Sequential model in Keras with 1 hidden layer. After 50 epochs of training, it converges quite well to a low val_loss.
3
90
1
My training set has 970 samples and validation set has 243 samples. How big should batch size and number of epochs be when fitting a model to optimize the val_acc? Is there any sort of rule of thumb to use based on data input size?
How big should batch size and number of epochs be when fitting a model?
1
0
0
132,079
35,050,753
2016-01-28T00:21:00.000
7
0
0
0
python,machine-learning,deep-learning
44,901,953
8
false
0
0
I used Keras to perform non-linear regression for market mix modelling. I got the best results with a batch size of 32 and epochs = 100 while training a Sequential model in Keras with 3 hidden layers. Generally a batch size of 32 or 25 is good, with epochs = 100 unless you have a large dataset. In the case of a large dataset you can go with a batch size of 10 and epochs between 50 and 100. Again, the above mentioned figures have worked fine for me.
3
90
1
My training set has 970 samples and validation set has 243 samples. How big should batch size and number of epochs be when fitting a model to optimize the val_acc? Is there any sort of rule of thumb to use based on data input size?
How big should batch size and number of epochs be when fitting a model?
1
0
0
132,079
35,061,489
2016-01-28T12:27:00.000
1
0
1
0
python,pycharm
35,061,997
1
false
0
0
I use PyCharm in my classes. My experience is that all the required code, including the imported modules, is compiled at runtime. If you change anything in that suite you need to start running from scratch for it to take effect. I'm not a professional programmer, so my experience is with small apps. I'd love to hear from an expert.
1
0
0
I am working on a project in PyCharm that involves extensive computations, with long runtimes. I would like to do the following: I come up with a version of my code; then run it, then I edit the code some more; however, the run I started before still only uses the old version of the code (i.e. the snapshot at the point of running). Is this possible in PyCharm? I run my project by selecting the Run 'projectname' option from the Run menu. I understand the run works by pre-compiling the .py files to .pyc files stored in the __pycache__ folder. However, I don't know the following. Will saving the file in PyCharm cause the .pyc files to be replaced by new versions? This is something I want to avoid since I want one run to only use one snapshot of the source tree, not multiple versions at different points of execution. What if some python class is only needed, say, 20 minutes after the run has started. Will the .pyc file be created at the beginning of the run, or on-demand (where the corresponding .py file might already have changed)?
Edit file while executing in PyCharm
0.197375
0
0
504
35,063,946
2016-01-28T14:17:00.000
1
0
0
0
python-2.7,csv,pandas,pandasql
35,064,268
1
true
0
0
Reading the entire index column will still need to read and parse the whole file. If no fields in the file are multiline, you could scan the file backwards to find the first newline (but with a check if there is a newline past the data). The value following that newline will be your last index. Storing the last index in another file would also be a possibility, but you would have to make sure both files stay consistent. Another way would be to reserve some (fixed amount of) bytes at the beginning of the file and write (in place) the last index value there as a comment. But your parser would have to support comments, or be able to skip rows.
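A minimal sketch of the backwards scan, assuming a plain CSV with no multiline fields and the index stored in the first column (the chunk size is an arbitrary choice):

    def last_index(path, chunk=1024):
        # read only the last `chunk` bytes of the file and take the final row
        with open(path, "rb") as f:
            f.seek(0, 2)                      # jump to the end of the file
            size = f.tell()
            f.seek(max(0, size - chunk))
            rows = f.read().splitlines()
        last_row = rows[-1] if rows[-1].strip() else rows[-2]
        return last_row.split(b",")[0]        # the index value of the last row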
1
2
1
I have a .csv file on disk, formatted so that I can read it into a pandas DataFrame easily, to which I periodically write rows. I need this database to have a row index, so every time I write a new row to it I need to know the index of the last row written. There are plenty of ways to do this: I could read the entire file into a DataFrame, append my row, and then print the entire DataFrame to memory again. This might get a bit slow as the database grows. I could read the entire index column into memory, and pick the largest value off, then append my row to the .csv file. This might be a little better, depending on how column-reading is implemented. I am curious if there is a way to just get that one cell directly, without having to read a whole bunch of extra information into memory. Any suggestions?
reading the last index from a csv file using pandas in python2.7
1.2
0
0
511
35,064,212
2016-01-28T14:31:00.000
1
0
0
0
python,python-3.x,tkinter
35,066,487
1
true
0
1
No, it's not possible. If you want a button with rich text you'll have to create your own. Or, create an image that has the look you want, and use the image with a standard button. You can create your own using a text widget that is one character tall and a few characters wide. Then, you can place bindings on the button to handle clicks, and to change the relief to simulate a button. Unfortunately, it won't have the look of the platform-specific buttons.
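A minimal sketch of that Text-widget approach (the widget size, font and click handler are placeholder choices):

    import tkinter as tk

    root = tk.Tk()

    # a one-line Text widget standing in for a button, with one bold word
    fake_button = tk.Text(root, height=1, width=14, relief="raised", cursor="hand2")
    fake_button.tag_configure("bold", font=("TkDefaultFont", 10, "bold"))
    fake_button.insert("1.0", "yes", "bold")
    fake_button.insert("end", ", I agree")
    fake_button.configure(state="disabled")   # stop the user editing the text

    def on_click(event):
        print("clicked")                       # your real handler goes here

    fake_button.bind("<Button-1>", on_click)
    fake_button.pack(padx=10, pady=10)
    root.mainloop()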
1
1
0
I wonder if it is possible to change the weight of one word on a tkinter button? So the result would look something like this: [ yes, I agree ] I've tried using tags but neither tk.Button nor tk.Button['text'] seem to allow it. Thanks!
Modifying a Part of Text on tk.Button
1.2
0
0
33
35,064,426
2016-01-28T14:40:00.000
6
0
1
0
python,pip
61,537,392
5
false
0
0
It is important to note that pip uninstall can not uninstall a module that has been installed with pip install -e. So if you go down this route, be prepared for things to get very messy if you ever need to uninstall. A partial solution is to (1) reinstall, keeping a record of files created, as in sudo python3 -m setup.py install --record installed_files.txt, and then (2) manually delete all the files listed, as in e.g. sudo rm -r /usr/local/lib/python3.7/dist-packages/tdc7201-0.1a2-py3.7.egg/ (for release 0.1a2 of module tdc7201). This does not 100% clean everything up however; even after you've done it, importing the (removed!) local library may succeed, and attempting to install the same version from a remote server may fail to do anything (because it thinks your (deleted!) local version is already up to date).
2
174
0
When would the -e, or --editable option be useful with pip install? For some projects the last line in requirements.txt is -e .. What does it do exactly?
When would the -e, --editable option be useful with pip install?
1
0
0
90,707
35,064,426
2016-01-28T14:40:00.000
0
0
1
0
python,pip
67,320,308
5
false
0
0
As suggested in previous answers, there are no symlinks being created. How does the '-e' option work? It just updates the file "PYTHONDIR/site-packages/easy-install.pth" with the project path specified in the command pip install -e. So each time Python searches for a package it will check this directory as well, which means any changes to the files in this directory are instantly reflected.
2
174
0
When would the -e, or --editable option be useful with pip install? For some projects the last line in requirements.txt is -e .. What does it do exactly?
When would the -e, --editable option be useful with pip install?
0
0
0
90,707
35,066,307
2016-01-28T16:06:00.000
12
0
0
1
python,docker,docker-compose
35,066,625
1
true
1
0
It looks like you ran the pip install in a one-off container. That means your package isn't going to be installed in subsequent containers created with docker-compose up or docker-compose run. You need to install your dependencies in the image, usually by adding the pip install command to your Dockerfile. That way, all containers created from that image will have the dependencies available.
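For example, a minimal sketch (assuming your Django service is built from a Dockerfile in the project; the exact file layout is a guess):

    # in the Dockerfile used to build the Django image
    RUN pip install django-extra-views

Then rebuild with docker-compose build so every container created from the image has the package.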
1
6
0
I am running a Django project with docker. Now I want to install a Python package inside the Docker container and run the following command: docker-compose django run pip install django-extra-views Now when I do docker-compose up, I get an error ImportError: No module named 'extra_views'. docker-compose django run pip freeze doesn't show the above package either. Am I missing something?
Cannot install Python Package with docker-compose
1.2
0
0
9,159
35,069,440
2016-01-28T18:40:00.000
1
0
0
0
python,pandas
35,069,535
2
false
0
0
Look into the DataFrame.pivot method
1
2
1
I've been looking through the documentation (and stack overflow) and am having trouble figuring out how rearrange a pandas data frame the way described below. I wish to have a row where there is a column name, a row name and the value of that specific row and column: Input: A B C X 1 2 3 Y 4 5 6 Output: X A 1 X B 2 X C 3 Y A 4 Y B 5 Y C 6 Any help would be much appreciated
Rearranging Data in Pandas
0.099668
0
0
121
35,070,320
2016-01-28T19:29:00.000
0
0
1
0
javascript,python
35,071,049
1
true
0
0
My Solutions: 1) Just as other people said, you could try to use the Google Map APIs and code a bit. 2) Or you can use OpenStreetMap. I would prefer OpenStreetMap. I did several apps and websites based on LBS, so I know how to place coordinates on maps. If you want to finish this quickly and make it cooler, you may try this combination: Django as the framework, PostgreSQL as the DB backend, PostgreSQL PostGIS as the geolocation handler, OpenStreetMap as the map viewer. My summary: Solution 1) is quicker and faster, but it needs some hard-coding effort from you. Solution 2) is a bit slower but full-featured; it's very extensible for future development. Hope this could help you
1
0
0
I am familiar with coding in python for the work I do in bioinformatics. I've recently been asked to do a different type of analysis -- analyzing data and then overlaying that data over a map of the US. I figure I will need to use javascript after I write the python code to do the data analysis, but I am not familiar with creating images. What is the best way to incorporate my python data analysis with code that will produce a dynamic image? Thanks for your help.
python data analysis overlay image of US
1.2
0
0
127
35,074,209
2016-01-28T23:33:00.000
37
0
0
0
python,excel,google-sheets,ipython,ipython-notebook
35,090,610
6
true
0
0
Try using the to_clipboard() method. E.g., for a dataframe, df: df.to_clipboard() will copy said dataframe to your clipboard. You can then paste it into Excel or Google Docs.
2
19
1
I've been using iPython (aka Jupyter) quite a bit lately for data analysis and some machine learning. But one big headache is copying results from the notebook app (browser) into either Excel or Google Sheets so I can manipulate results or share them with people who don't use iPython. I know how to convert results to csv and save. But then I have to dig through my computer, open the results and paste them into Excel or Google Sheets. That takes too much time. And just highlighting a resulting dataframe and copy/pasting usually completely messes up the formatting, with columns overflowing. (Not to mention the issue of long resulting dataframes being truncated when printed in iPython.) How can I easily copy/paste an iPython result into a spreadsheet?
How to copy/paste a dataframe from iPython into Google Sheets or Excel?
1.2
0
0
17,289
35,074,209
2016-01-28T23:33:00.000
1
0
0
0
python,excel,google-sheets,ipython,ipython-notebook
66,239,699
6
false
0
0
Paste the output to an IDE like Atom and then paste in Google Sheets/Excel
2
19
1
I've been using iPython (aka Jupyter) quite a bit lately for data analysis and some machine learning. But one big headache is copying results from the notebook app (browser) into either Excel or Google Sheets so I can manipulate results or share them with people who don't use iPython. I know how to convert results to csv and save. But then I have to dig through my computer, open the results and paste them into Excel or Google Sheets. That takes too much time. And just highlighting a resulting dataframe and copy/pasting usually completely messes up the formatting, with columns overflowing. (Not to mention the issue of long resulting dataframes being truncated when printed in iPython.) How can I easily copy/paste an iPython result into a spreadsheet?
How to copy/paste a dataframe from iPython into Google Sheets or Excel?
0.033321
0
0
17,289
35,077,571
2016-01-29T05:32:00.000
1
0
1
0
markdown,ipython-notebook,jupyter-notebook,nbconvert
47,268,532
3
false
0
0
If you have the nb extensions active, you can see the contents on the left hand side. Just before the contents (but after the title "Contents"), there is a small letter "n". If you press the n, it removes the numbering from the headers.
1
40
0
When I use the heading tags (# ##, etc) using markdown, they are always converted to numbered sections during pdf-latex conversion. How to indicate unnumbered headings in markdown in Jupyter Notebook (IPython)? I realize this can be done by adding a '*' right next to each section directly in the latex document, but is there any other way to do this?
How to remove heading numbers in Jupyter during pdf conversion?
0.066568
0
0
18,612
35,078,548
2016-01-29T06:53:00.000
4
0
1
0
python,numpy,matplotlib,intel,anaconda
45,779,589
1
false
0
0
The error means that the program couldn't find the MKL library files under its library path, so that is what you need to fix. I had the issue when running matplotlib scripts on Windows with numpy+mkl, and I got it fixed by copying the files that start with "mkl_" in site-packages/numpy/core to my python.exe root. I'm not familiar with compiled python programs but the idea should be the same. Since you had this error I assume you are using mkl version packages. You need to figure out where the .exe tries to load libraries from (it could be the same path where the executable is located), and copy all the mkl dll's of any package there. Or there could be something like "compile options" that allows you to configure the path etc. Hope it helps you.
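A small sketch of the copy step (the destination folder is a placeholder for wherever PyInstaller put your executable):

    import glob
    import os
    import shutil

    import numpy

    # copy every mkl_*.dll that ships inside numpy/core next to the built .exe
    core_dir = os.path.join(os.path.dirname(numpy.__file__), "core")
    for dll in glob.glob(os.path.join(core_dir, "mkl_*.dll")):
        shutil.copy(dll, r"dist\yourapp")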
1
9
0
Hello fellow programmers, so I am having a spot of trouble getting this python .exe to function properly. I am using Anaconda 3 and the latest version of pyinstaller, and my code has nothing odd going on when I run it as a .py, but for the sake of distribution I need to have it as a ".exe". Whenever I try to run my .exe all I get is the error: Intel MKL FATAL ERROR: Cannot load mkl_intel_thread.dll. and then it closes. Again, I am not having this problem if I run my python code in .py format from the same command window. Any help would be greatly appreciated, thank you!
Python Pyinstaller 3.1 Intel MKL FATAL ERROR: Cannot load mkl_intel_thread.dll
0.664037
0
0
7,294
35,079,670
2016-01-29T08:10:00.000
0
0
1
0
python,windows
35,080,840
2
false
0
0
I couldn't find a solution anywhere to this, so I just deleted every trace of Python from my computer and installed Anaconda. I don't feel this was a very informed or optimal solution, but I now have consistent behavior in various places. Also, the Anaconda installer seems much more smooth than the pip installer.
2
0
0
I'm having some trouble tracking down where my pip modules are going, and I finally found what seems to be the root of the issue when I did a "pip list" command in two separate cmd windows. One window was running as admin, and the other not. They showed two completely different lists of modules installed. When I ran "python" in each window, one started python 3.4.3, and the other python 3.5.0a2. The reason I'm doing this in two separate types of windows is because I'm running into "access is denied" errors when trying to install modules with pip. (For example, requests.) When I check my PATH variable, it points to C:\Program Files\Python 3.5. Is there an admin PATH variable somewhere that I can modify so that I can run python3.5 as admin? Can someone help me understand how I can get around access is denied without using admin cmd, or how I can change admin Path variable, or something? I'm running Windows 7, 64 bit, with several versions of python installed. 2.7, 3.3, 3.4.3, 3.5.0a2. I can get more refined details if I need to. Edit Addition: I'd like to use virtualenv with python3.5, but when I try to install it with pip install virtualenv, I get Permission denied error.
Different versions of python when running cmd as admin, how do I alter admin version?
0
0
0
226
35,079,670
2016-01-29T08:10:00.000
1
0
1
0
python,windows
35,079,769
2
false
0
0
Although you are running Python on a Windows machine - I am assuming a client, i.e. a desktop - you should go and look at virtual Python environments. There are lots of resources documenting how this is accomplished. You are directly manipulating the system copy of the Python environment, and one mistake will screw the whole lot up. It is much better (and safer) for each project or group of projects to share a virtual env, which you can then upgrade using pip requirements.
2
0
0
I'm having some trouble tracking down where my pip modules are going, and I finally found what seems to be the root of the issue when I did a "pip list" command in two separate cmd windows. One window was running as admin, and the other not. They showed two completely different lists of modules installed. When I ran "python" in each window, one started python 3.4.3, and the other python 3.5.0a2. The reason I'm doing this in two separate types of windows is because I'm running into "access is denied" errors when trying to install modules with pip. (For example, requests.) When I check my PATH variable, it points to C:\Program Files\Python 3.5. Is there an admin PATH variable somewhere that I can modify so that I can run python3.5 as admin? Can someone help me understand how I can get around access is denied without using admin cmd, or how I can change admin Path variable, or something? I'm running Windows 7, 64 bit, with several versions of python installed. 2.7, 3.3, 3.4.3, 3.5.0a2. I can get more refined details if I need to. Edit Addition: I'd like to use virtualenv with python3.5, but when I try to install it with pip install virtualenv, I get Permission denied error.
Different versions of python when running cmd as admin, how do I alter admin version?
0.099668
0
0
226
35,080,416
2016-01-29T08:55:00.000
3
0
1
0
python,gitpython
35,080,417
1
false
0
0
The correct way to do this is to pass <argument>=True as part of the **kwargs. So, in the special case, this would be my_remote.pull(all=True).
1
2
0
The signature of many Repo functions includes **kwargs, of which the documentation says that you can pass arguments to the underlying wrapped git command. However, there is no place for *args in order to pass flag-like arguments like --all; I would have expected them to be passed like my_remote.pull('all'). So, for instance, how would you pass --all to the pull function of Remote?
How can I pass single options to push/pull of gitpython?
0.53705
0
0
1,049
35,081,055
2016-01-29T09:31:00.000
0
0
0
0
python,django,django-rest-framework
35,081,076
1
false
1
0
I don't know if it's "better" but it can helps to keep things DRY. I haven't done that yet but something I'm considering for my next projects.
1
0
0
I am trying to build a RESTful web app, and since serializers are very similar to Django forms and can be rendered in HTML, I was wondering whether it is better to use serializers rather than Django forms.
Is it ideal to replace django forms with django rest framework serializers in HTML terms
0
0
0
117
35,082,015
2016-01-29T10:20:00.000
2
0
0
0
java,python,ruby,activemq,stomp
35,082,799
1
true
1
0
Well, the simple answer is that you can't. That's not part of the Stomp protocol. The complex answer, as always, is "it depends". It's entirely possible that whatever is providing your Stomp service will have something that you can use. (In RabbitMQ, for example, you can log in to the web interface and look at the current queue names). However the whole point of Stomp (and to a certain extent of all messaging) is that there aren't really "destinations", just queues which can be read by one or more clients. And the queues are transient; you might find the information goes out of date pretty quickly...
1
1
0
In Stomp, how can I browse all queues or/and topics available? Is it possible at all? The key here is to get the result and the language is not important, it can be either python, ruby or java because as I've found out it's easier to do this particular task using them because of the existing libraries. Python seems to have only one most popular library, though.
How to get a list of Stomp queues or/and topics (their names) as a client?
1.2
0
0
1,119
35,083,133
2016-01-29T11:15:00.000
2
0
0
1
python,django,celery
35,083,287
3
false
1
0
The usual solution here is to offload the task to celery, and return a "please wait" response in your view. If you want, you can then use an Ajax call to periodically hit a view that will report whether the response is ready, and redirect when it is.
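A rough sketch of that pattern, assuming Celery is already configured for the project (the task and view names are made up, and with django-rest-framework you would likely return a DRF Response instead of JsonResponse):

    # tasks.py
    from celery import shared_task

    @shared_task
    def slow_network_call(payload):
        # the lengthy network IO happens here; the return value is what the
        # client will eventually receive
        return {"ok": True, "echo": payload}

    # views.py
    from celery.result import AsyncResult
    from django.http import JsonResponse

    def start(request):
        task = slow_network_call.delay(request.GET.get("payload"))
        return JsonResponse({"task_id": task.id}, status=202)

    def poll(request, task_id):
        result = AsyncResult(task_id)
        if not result.ready():
            return JsonResponse({"state": result.state}, status=202)
        return JsonResponse(result.get())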
1
5
0
In one of the views in my django application, I need to perform a relatively lengthy network IO operation. The problem is other requests must wait for this request to be completed even though they have nothing to do with it. I did some research and stumbled upon Celery but as I understand, it is used to perform background tasks independent of the request. (so I can not use the result of the task for the response to the request) Is there a way to process views asynchronously in django so while the network request is pending other requests can be processed? Edit: What I forgot to mention is that my application is a web service using django rest framework. So the result of a view is a json response not a page that I can later modify using AJAX.
Performing a blocking request in django view
0.132549
0
0
2,469
35,085,809
2016-01-29T13:30:00.000
3
0
1
0
python,opencv,python-3.5
39,967,831
3
false
0
0
For me the only working way was using conda: conda install --channel https://conda.anaconda.org/menpo opencv3 and then import it using import cv2
1
2
1
I have looked for a proper way to install OpenCV, but all I can find are people fudging around with Python 2.old or virtualenv or other things that are utterly irrelevant. I just want be able to run import cv2 without any import errors. How do I install OpenCV on OS X 10.11 for use with Python 3.5.1?
Installing OpenCV 3.1 on OS X El Capitan using Python 3.5.1
0.197375
0
0
4,202
35,086,705
2016-01-29T14:17:00.000
0
0
0
0
python,django
35,087,045
2
false
1
0
Yes, just create your desired groups in the admin panel, add your permissions to each group, and then assign your users to the defined groups.
2
0
0
Is there any way to create a group model with permissions already established? I'm trying to create a system with at least 4 pre-defined user types, and each user type will have some permissions.
Django Pre-defined groups
0
0
0
94
35,086,705
2016-01-29T14:17:00.000
0
0
0
0
python,django
35,088,494
2
false
1
0
You can add the commands that create the Groups and Permissions to a migration, using the RunPython operation.
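A hedged sketch of such a data migration (the app label, group name and permission codenames are examples, not anything your project necessarily has):

    from django.db import migrations

    def create_groups(apps, schema_editor):
        Group = apps.get_model("auth", "Group")
        Permission = apps.get_model("auth", "Permission")
        editors, _ = Group.objects.get_or_create(name="editors")
        # attach whichever permissions this user type should have
        perms = Permission.objects.filter(codename__in=["add_article", "change_article"])
        editors.permissions.add(*perms)

    class Migration(migrations.Migration):
        dependencies = [("myapp", "0001_initial")]
        operations = [migrations.RunPython(create_groups)]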
2
0
0
Is there any way to create a group model with permissions already established? I'm trying to create a system with at least 4 pre-defined user types, and each user type will have some permissions.
Django Pre-defined groups
0
0
0
94
35,086,949
2016-01-29T14:29:00.000
1
0
1
1
macos,python-3.x,vim
35,093,602
3
false
0
0
Finally found the solution - $ brew install vim --with-python3
3
2
0
I'm trying to work out how to integrate Python3 into Vim. I know I need to do it when compiling vim but I can't seem to get it right. I'm using homebrew to install with the following script: brew install vim --override-system-vim --with-python3 It installs vim, however when I check the version, python3 is still not supported.
Vim python3 integration on mac
0.066568
0
0
3,440
35,086,949
2016-01-29T14:29:00.000
1
0
1
1
macos,python-3.x,vim
47,591,845
3
false
0
0
I thought I had the same issue but realised I needed to re-start the shell. If the problem still persists, it may be that you have older versions that homebrew is still trying to install. brew cleanup would remove older bottles and perhaps allow you to install the latest. If this is still giving you trouble, I found removing vim with brew uninstall --force vim and then reinstalling with brew install vim --override-system-vim --with-python3 worked for me. EDIT 2018-08-22 Python 3 is now default when compiling vim. Therefore the command below should integrate vim with Python 3 automatically. brew install vim --override-system-vim
3
2
0
I'm trying to work out how to integrate Python3 into Vim. I know I need to do it when compiling vim but I can't seem to get it right. I'm using homebrew to install with the following script: brew install vim --override-system-vim --with-python3 It installs vim, however when I check the version, python3 is still not supported.
Vim python3 integration on mac
0.066568
0
0
3,440
35,086,949
2016-01-29T14:29:00.000
0
0
1
1
macos,python-3.x,vim
56,487,676
3
false
0
0
This worked for me with the latest OS for mac at this date. Hope it works for you. brew install vim python3
3
2
0
I'm trying to work out how to integrate Python3 into Vim. I know I need to do it when compiling vim but I can't seem to get it right. I'm using homebrew to install with the following script: brew install vim --override-system-vim --with-python3 It installs vim, however when I check the version, python3 is still not supported.
Vim python3 integration on mac
0
0
0
3,440
35,087,956
2016-01-29T15:21:00.000
6
0
1
0
python,multithreading,memory,heap-memory,stack-memory
35,088,014
1
true
0
0
Function parameters are put on the stack, and each thread has its own stack. You don't have to worry about their thread-safety. However, all Python objects are stored on the heap; the stack merely holds references to such objects. If multiple threads are accessing one such mutable object they can still interfere with one another if the access is not synchronised somehow. This has nothing to do with how functions are called however.
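A small illustration of that distinction: each thread gets its own parameter binding on its own stack frame, but a shared heap object still needs a lock.

    import threading

    shared = []                 # one heap object, reachable from every thread
    lock = threading.Lock()

    def worker(local_value):
        # "local_value" lives in this thread's own stack frame, so no other
        # thread can rebind it, but the list below is shared between threads
        with lock:              # synchronise access to the shared object
            shared.append(local_value)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(shared)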
1
1
0
I have a function inside a threaded object; this function accepts several parameters, and I don't know whether, when many threads try to use this function, the threads will change the parameter values of another thread. I can use a lock, but only after the parameters have been assigned. If the parameters are stored on the stack I guess they will live inside each thread's stack, but if they live on the heap how can I avoid threads changing another thread's function parameters?
Where are functions parameters stored in python? Stack? or Heap?
1.2
0
0
947
35,090,793
2016-01-29T17:46:00.000
1
0
0
1
google-app-engine,google-app-engine-python,google-cloud-debugger
35,107,768
1
true
1
0
It looks like you did everything correctly. The "Failed to update the snapshot" error shows up when there is some problem on the Cloud Debugger backend. Please contact the Cloud Debugger team through [email protected] or submit a feedback report in the Google Developer Console.
1
0
0
I'm trying to get the Google Cloud Debugger to work on my Python App Engine module. I've followed the instructions and: Connected to my Bitbucket hosted repository. Generated the source-context.json and source-contexts.json using gcloud preview app gen-repo-info-file Uploaded using appcfg.py update However when I try to set a snapshot using the console, there is message saying: The selected debug target does not have source revision information. The source shown here may not match the deployed source. And when I try to set the snapshot point, I get the error: Failed to update the snapshot
Google Cloud Debugger for Python App Engine module says "Deployment revision unknown"
1.2
0
0
147
35,091,235
2016-01-29T18:11:00.000
0
0
1
0
arrays,python-3.x,numpy,ipython-notebook
35,093,550
1
false
0
0
I have never noticed any performance penalty (5-6 million x 8 arrays here) with IPython/Jupyter, but even if there is some small difference it is unlikely to be noticeable. A much greater speed increase with a similarly low effort would come from writing performance sensitive code in cython, adding type annotations in cython would yield even greater increases. In my own work I have observed speedups of orders of magnitude from using cython smartly.
1
0
1
I'm working with huge multidimensional NumPy arrays in an IPython notebook with Python3 and things are slow going. Is it appreciably quicker to convert the .ipynb file into a .py file and run via the command line?
Are there performance benchmarks of NumPy arrays in an IPython Notebook versus a .py script file?
0
0
0
155
35,092,345
2016-01-29T19:17:00.000
0
0
0
0
python,django,mezzanine
35,800,848
1
true
1
0
So I ended up figuring out what the problem was. The templates in my project have a content block called 'main', mimicking the native template files. I needed to give the content block a new name across the board instead, because they were overwriting the Mezzanine templates somehow.
1
0
0
I just upgraded a website from Django 1.7/Mezzanine 3 to Django 1.8/Mezzanine 3. After doing so, I discovered that the admin site showed none of the previously created objects from my apps, even though they exist in the database and on the live site. When I inspect the object in my browser, it doesn't seem like the database is being searched at all. This affects all of my apps, plus the User app native to Django. It does not affect the pages app, comments app, or blog post app native to Django. I've tried deleting migration files, restarting the server, deleting and recreating the database, and dropping affected tables to recreate them. There are no error messages, the page just looks like no one has created any objects yet. When you create a new object and save it, you still can't see them, even though the new object is live and in the database.
Can't see objects created in Django-Mezzanine Admin site
1.2
0
0
86
35,094,371
2016-01-29T21:29:00.000
-2
0
0
0
javascript,jquery,python,html
35,094,515
5
false
1
0
Make sure to check the "referer" header in Python, and validate that the address is your login page.
1
0
0
I have a login page that sends a request to a Python script to authenticate the user. If it's not authenticated it redirects you to the login page. If it is, it redirects you to an HTML form. The problem is that you can access the HTML form by writing the URL. How can I make sure the user came from the login form with my Python script, without using modules, because I can't install anything on my server? I want it to be strictly Python; I can't use PHP. Is it possible? Can I use other methods to accomplish the task?
How to prevent a user from directly accessing my html page
-0.07983
0
1
1,098
35,096,281
2016-01-30T00:03:00.000
4
0
1
0
python,cython
37,523,539
5
false
0
0
Much simpler: try installing Cython from pip. On Windows, open the Python folder, press shift+right click, select "open command prompt here", then run: pip install cython
1
5
0
In order to install cython (for python 2.7, windows 8.1), I downloaded it in .zip format, extracted the whole file and ran the setup.py. The python shell then shows this: Unable to find pgen, not compiling formal grammar. What is the problem and how can it be solved?
Cython setup error : Unable to find pgen, not compiling formal grammar
0.158649
0
0
16,525
35,102,434
2016-01-30T13:35:00.000
0
0
1
0
python,windows,numpy
43,452,133
4
false
0
0
Use 'pip'. Just open command line in windows and type "pip install numpy". It will automatically download and install numpy in proper way. If you did not add the path of 'pip' just add C:\Python27\Scripts link to your PATH. If you don't know how to do it just type following in your command line (cmd): set PATH=%PATH%;C:\Python27\Scripts pip install numpy
1
1
0
I am trying to install numpy in 32 bit Python 2.7 on Win7. I believe that numpy is supposed to be in the directory C:\Python2.7\Lib\site-packages? I unzipped the numpy file - it appears as folder 'numpy-1.10.4' in 'site-packages' and yet I still can't import numpy. Is there anything else I am supposed to do to complete the installation?
Numpy installation via unzipping into site-packages
0
0
0
1,594
35,104,439
2016-01-30T16:58:00.000
0
0
1
0
python,pycparser
47,680,531
2
false
0
0
You should go to File -> Settings -> Project: -> Project Interpreter; on the right hand side there is a green plus, and after clicking it add "pycparser". Of course, this is after installing pycparser in cmd: "pip3 install pycparser".
1
1
0
I downloaded the 2.7.11 python package for Mac and installed the same. Now I am trying to execute a python file: python file1.py It throws up the error: File "file1.py", line 107, in from pycparser import parse_file, c_parser, c_ast ImportError: No module named pycparser How do I install this pycparser module? Can someone please help me here.
Pycparser not getting imported
0
0
0
3,431
35,104,966
2016-01-30T17:45:00.000
1
0
1
0
java,python,compilation,ide
35,105,145
4
true
0
0
The normal Java package (the JRE) only contains the stuff necessary to run Java programmes. The JDK is the package containing the compiler. Based on your experience with Pascal and C++, you obviously understand why you need the compiler to create your own programmes. Eclipse and Netbeans are IDEs, Integrated Development Environments. They make it easier for you to program, but they are not strictly needed, in the same way that you can write a program in C++ by just installing a C++ compiler and without using Visual Studio. There are many programmers, especially in the non-windows-world, who just use a text editor to write those programmes. As for Python and Perl, it's the same thing. You need to install their respective interpreters to run programmes written in those languages. Without them, how do you expect the computer to understand what you want from it? If you want to compare, for example, Eclipse and Visual Studio: the installation for Visual Studio contains both the IDE and the compiler. Eclipse is just the IDE. You also need to install the compiler, which is contained in the JDK.
3
0
0
My first programming language was Pascal and I did not have to install more than just the compiler. Same thing with C++, the environment was all set to write code by just installing Visual Studio. In the case of Java, why do we need to install this Java Development Kit besides having Eclipse, Netbeans or another compiler? I think that Python and Perl also have a package to be installed before writing code in those languages; otherwise we wouldn't be able to start off. What do these packages contain, and why do some languages require these files to be installed before compiling any code?
Why do we need to install a kit (JDK) to start programming in Java
1.2
0
0
3,758
35,104,966
2016-01-30T17:45:00.000
1
0
1
0
java,python,compilation,ide
35,105,087
4
false
0
0
With Pascal and C++ the compiler and related tools will convert the source code into machine code that will run directly on the hardware when called from the Operating System. In Java, Python and Perl the tools generate an intermediate code that does not run directly on the hardware; you need a runtime, which is the executable that the operating system calls. This executable will read the intermediate code and convert it to machine language. In Java this is the JRE, called java.exe; for python it is python.exe, etc (on non Windows/DOS OSes the .exe is not there, as it is not required for executables). In Java you see the intermediate code as .class files or packed into jars/wars etc, and you have to explicitly compile the Java to these. Python and Perl usually do the compile implicitly; python files show up as .pyc and so on.
3
0
0
My first programming language was Pascal and I did not have to install more than just the compiler. Same thing with C++, the environment was all set to write code by just installing Visual Studio. In the case of Java, why do we need to install this Java Development Kit besides having Eclipse, Netbeans or another compiler? I think that Python and Perl also have a package to be installed before writing code in those languages; otherwise we wouldn't be able to start off. What do these packages contain, and why do some languages require these files to be installed before compiling any code?
Why do we need to install a kit (JDK) to start programming in Java
0.049958
0
0
3,758
35,104,966
2016-01-30T17:45:00.000
0
0
1
0
java,python,compilation,ide
35,105,030
4
false
0
0
Writing Java applets and applications needs development tools like JDK. The JDK includes the Java Runtime Environment, the Java compiler and the Java APIs. For Java Developers. Includes a complete JRE plus tools for developing, debugging, and monitoring Java applications.
3
0
0
My first programming language was Pascal and I did not have to install more than just the compiler. Same thing with C++, the environment was all set to write code by just installing Visual Studio. In the case of Java, why do we need to install this Java Development Kit besides having Eclipse, Netbeans or another compiler? I think that Python and Perl also have a package to be installed before writing code in those languages; otherwise we wouldn't be able to start off. What do these packages contain, and why do some languages require these files to be installed before compiling any code?
Why do we need to install a kit (JDK) to start programming in Java
0
0
0
3,758
35,105,825
2016-01-30T19:03:00.000
1
0
0
0
python,django,tastypie
35,167,054
3
false
1
0
That'll only work for list endpoints though. My advice is to use a middleware to add X- headers; it's a cleaner, more generalized solution.
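A minimal sketch of such a middleware (the header name is made up, and this uses the classic MIDDLEWARE_CLASSES style; newer Django versions use the single callable style instead):

    import time

    class ExecutionTimeMiddleware(object):
        def process_request(self, request):
            # remember when the request started
            request._start_time = time.time()

        def process_response(self, request, response):
            start = getattr(request, "_start_time", None)
            if start is not None:
                elapsed_ms = int((time.time() - start) * 1000)
                response["X-Execution-Time-Ms"] = str(elapsed_ms)
            return response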
1
1
0
I want to measure execution time for some queries and add this data to responses, like: {"meta": {"execution_time_in_ms": 500 ...}} I know how to add fields to tastypie's responses, but I have no idea how to measure time in it, where I should initialize the timer and where I should stop it. Any ideas?
Tastypie. How to add time of execution to responses?
0.066568
0
0
107