Columns (name: dtype, observed range or string-length range):
Q_Id: int64, 2.93k to 49.7M
CreationDate: string (lengths 23 to 23)
Users Score: int64, -10 to 437
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
DISCREPANCY: int64, 0 to 1
Tags: string (lengths 6 to 90)
ERRORS: int64, 0 to 1
A_Id: int64, 2.98k to 72.5M
API_CHANGE: int64, 0 to 1
AnswerCount: int64, 1 to 42
REVIEW: int64, 0 to 1
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string (lengths 15 to 5.1k)
Available Count: int64, 1 to 17
Q_Score: int64, 0 to 3.67k
Data Science and Machine Learning: int64, 0 to 1
DOCUMENTATION: int64, 0 to 1
Question: string (lengths 25 to 6.53k)
Title: string (lengths 11 to 148)
CONCEPTUAL: int64, 0 to 1
Score: float64, -1 to 1.2
API_USAGE: int64, 1 to 1
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 15 to 3.72M
36,297,179
2016-03-29T23:51:00.000
1
0
0
0
0
python,django,web-applications
0
36,440,307
0
1
0
true
1
0
I have found that Django works fine with os.path, with no problems. In fact, if you are programming in Python, Django is a great choice for the server-side work.
1
0
0
0
I have a Python algorithm that accesses a huge database on my laptop. I want to create a web server to work with it. Can I use Django with the folder paths I have already used, and how do I communicate with it? I want to get an image from the web application, send it to my laptop, run the algorithm on it, and then send the result back to the web server. Would that still be possible without me changing my algorithm's paths? For example, I use os.path to access my database folder; would I still be able to do that with Django, or should I learn something else? I wanted to try Django as it runs in Python and I can learn it easily.
can i use django to access folders on my pc using same os.path for windows?
1
1.2
1
0
0
55
36,314,303
2016-03-30T16:14:00.000
4
0
0
0
0
python,kivy
0
36,314,717
0
1
0
true
0
1
This is an issue with Kivy 1.9.1, where the hint text disappears as soon as the TextInput is focused. It has been fixed in the development branch, and now only disappears when there is content in the field.
1
4
0
0
Hi i noticed that whenever you have the focus property of a textinput widget set to True, the hint_text is not displayed when the textinput is actually in focus. Please is there a way to combine them both, i.e the hint_text gets displayed even when the text input is in focus?
Kivy TextInput how to combine hint_text and focus
1
1.2
1
0
0
443
36,341,867
2016-03-31T19:18:00.000
0
0
1
1
0
python,python-2.7,scroll,console-application,windows-console
0
36,342,079
0
2
0
false
0
0
You cannot do it from your Python script (OK, it is possible, but you most probably don't want to). Scrolling depends on the environment (Windows or Linux terminal, it doesn't matter), so it is up to users to set it up in a way that works for them. On Linux you can use less or more: python script.py | less buffers the output from the script and lets the user scroll up and down without losing any information.
1
1
0
0
I have a Python (2.7) console application that I have written on Windows (8.1). I have used argparse.ArgumentParser() for handling the parameters when executing the program. The application has quite a few parameters, so when the --help parameter is used the documentation greatly exceeds the size of the console window. Even with the console window maximized. Which is fine, but the issue I'm encountering is that the user is unable to scroll up to view the rest of the help documentation. I have configured my windows console properties appropriately, such as the "Window Size" and "Screen Buffer Size". And I have verified that those changes are working, but they only work outside of the Python environment. As soon as I execute a Python script or run a --help command for a script, the console properties no longer apply. The scroll bar will disappear from the window and I can no longer scroll to the top to see the previous content. So basically, I need to figure out how to enable scrolling for my Python console programs. I need scrolling enabled both when executing a script and when viewing the --help documentation. I'm not sure how to go about doing that. I have been searching online for any info on the subject and I have yet to find anything even remotely helpful. At this point, I am completely stuck. So if someone knows how to get scrolling to work, I would greatly appreciate your help.
How to Enable Scrolling for Python Console Application
0
0
1
0
0
4,402
36,343,431
2016-03-31T20:46:00.000
1
0
1
0
0
python,break
0
36,359,087
0
3
0
false
0
0
Thanks for your reply. I already tried using sys.exit, raise and others in the third function, but nothing worked. What I did instead: the third function returns a pass/fail status. The second function tests the return value, and if it is a fail the script executes sys.exit(). When I do this in the second function the script stops as we want. Now it's working fine. This is probably the worst way to do it, but it worked. Best regards.
1
1
0
0
I'm trying to develop a script that will connect to our switch and do some tasks. In this script I have a main function that calls a second function. In the second function I pass a list of switches that Python will start to connect one by one. The second function will call a third function. In the third function the script makes some tests. If one of these tests fail I want to close the entire script. The problem is that I tried to put return, exit, raise System, os.exit but what happens is that the script doesn't stop, it just jumps to another switch and goes on. Anyone knows how can I close my entire script from a function? Best regards.
Python exit from all function and stop the execution
0
0.066568
1
0
0
5,270
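A minimal sketch of the pattern described in the answer above (the inner check returns a status and a caller invokes sys.exit()); all function names and the switch list are placeholders:

```python
import sys

def run_checks(switch):
    # Placeholder for the real tests performed against the switch.
    return switch != "bad-switch"

def process_switches(switches):
    for switch in switches:
        if not run_checks(switch):
            # sys.exit() raises SystemExit, which ends the whole script
            # as long as no caller catches it with a bare except.
            sys.exit("check failed on %s - aborting" % switch)

process_switches(["switch-a", "bad-switch", "switch-b"])
```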
36,352,167
2016-04-01T09:04:00.000
0
0
0
0
0
python,interactive-brokers,ibpy
0
37,589,275
0
1
0
false
0
0
a) As far as I know, IB API has no concept of 'Portfolio'. You probably need to keep a list of orders put in for which portfolio and then resolve the order data supplied by IB against your portfolio vs order data. b) IB does keep track of the client (i.e. your client that is calling the API code - normally defaulted to 0) that has put in the orders. If you want to know what orders are open that were input via your client then: client.reqOpenOrders(); If you want to know all open orders i.e. orders put in via your client plus other clients or TWS, then: client.reqAllOpenOrders();
1
0
0
0
I've been experimenting with IBPy for a while; however, the two following things have been eluding me: a) How does one get the name of the actual portfolio that a position belongs to? I know how to find positions, their costs, values etc. (using message.UpdatePortfolio), but our trading simulation will likely have many portfolios and it helps to know which portfolio each position belongs to. Is it even possible to send information to IB in multiple portfolios? b) How does one find the existing orders using IBPy? When I run the code, I want it to display all positions along with their order types and limits (e.g. if it's a limit order for AAPL, I want to find the limit price etc.) Many thanks!
Getting portfolio names and existing orders using Interactive Brokers IBPy
1
0
1
0
0
549
36,363,502
2016-04-01T18:34:00.000
1
0
0
0
0
python,apache-spark
0
36,363,565
0
1
0
false
0
0
You distribute the data you read among nodes. Every node finds its 5 local maxima. You then combine all the local maxima and keep the 5 largest of them, which is the answer.
1
0
1
0
total noob question. I have a file that contains a number on each line, there are approximately 5 millions rows, each row has a different number, how do i find the top 5 values in the file using spark and python.
spark python product top 5 numbers from a file
0
0.197375
1
0
0
54
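A minimal PySpark sketch of the distributed-maxima scheme in the answer above; the local master and file path are placeholders:

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "top5")   # placeholder master and app name
numbers = sc.textFile("numbers.txt")    # placeholder path, one number per line

# RDD.top() computes per-partition maxima and merges them on the driver,
# which is exactly the "local maxima, then combine" scheme described above.
print(numbers.map(lambda line: int(line.strip())).top(5))

sc.stop()
```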
36,381,523
2016-04-03T04:10:00.000
0
0
1
0
0
python,string,list,python-3.x,swap
0
36,381,647
0
3
0
false
0
0
I think this should go to the comment section, but I can't comment because of lack of reputation, so... You'll probably want to stick with list index swapping, rather than using .pop() and .append(). .pop() can remove elements from arbitrary index, but only one at once, and .append() can only add to the end of the list. So they're quite limited, and it would complicate your code to use them in this kind of problems. So, well, better stick with swapping with index.
1
0
0
0
Okay, I'm really new to Python and have no idea how to do this: I need to take a string, say 'ABAB__AB', convert it to a list, and then take the leading index of the pair I want to move and swap that pair with the __. I think the output should look something like this: move_chars('ABAB__AB', 0) '__ABABAB' and another example: move_chars('__ABABAB', 3) 'BAA__BAB' Honestly have no idea how to do it.
Swapping pairs of characters in a string
0
0
1
0
0
7,193
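A sketch of the index-swapping approach suggested in the answer above, reproducing the examples from the question:

```python
def move_chars(s, i):
    """Swap the pair starting at index i with the '__' gap."""
    chars = list(s)
    gap = s.find('__')   # leading index of the blank pair
    chars[i], chars[i + 1], chars[gap], chars[gap + 1] = (
        chars[gap], chars[gap + 1], chars[i], chars[i + 1])
    return ''.join(chars)

print(move_chars('ABAB__AB', 0))   # __ABABAB
print(move_chars('__ABABAB', 3))   # BAA__BAB
```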
36,386,528
2016-04-03T14:16:00.000
1
0
0
1
1
google-app-engine,google-app-engine-python
0
36,388,402
0
1
0
false
1
0
App Engine > Dashboard This view shows how much you are charged so far during the current billing day, and how many hours you still have until the reset of the day. This is equivalent to what the old console was showing, except there is no "total" line under all charges. App Engine > Quotas This view shows how much of each daily quota have been used. App Engine > Quotas > View Usage History This view gives you a summary of costs for each of the past 90 days. Clicking on a day gives you a detailed break-down of all charges for that day.
1
0
0
1
In the old (non-Ajax) Google Appengine's Developer Console Dashboard - showed estimated cost for the last 'n' hours. This was useful to quickly tell how the App engine is doing vis-a-vis the daily budget. This field seems to be missing in the new Appengine Developer Console. I have tried to search various tabs on the Console and looked for documentation, but without success. Looking for any pointers as to how do I get to this information in the new Console and any help/pointers are highly appreciated !
Estimated Cost field is missing in Appengine's new Developer Console
0
0.197375
1
0
0
18
36,388,952
2016-04-03T17:51:00.000
0
0
1
0
0
python-2.7
0
36,389,003
0
1
0
false
0
0
To my knowledge there's no "cheap trick": you'll have to have all your class elements and compare their variable values with what you have. (Couldn't comment, sorry.) At least that's what I understand you're trying to achieve; the question isn't very well constructed.
1
0
0
0
Can anyone tell me, whether there is a way to find which class a member variable belongs to, using the variable. I am trying to create a decorator that will allow only member method and variables of certain classes be used as method parameter. like @acceptmemberofclass(class1,class2) def method(memberfunc, membervar): #do something I have figured out how to do this with methods (using inspect.getmro(meth.im_class)) but I am unable to find a way for variables
Python how to find which class owns a variable
1
0
1
0
0
29
36,415,572
2016-04-04T19:00:00.000
0
0
0
0
0
python,orange
0
36,516,264
0
1
0
false
0
0
I don't understand what "exported with joblib" refers to, but you can save trained Orange models by pickling them, or with Save Classifier widget if you are using the GUI.
1
0
1
0
I'm evaluating orange as a potential solution to helping new entrants into data science to get started. I would like to have them save out model objects created from different algorithms as pkl files similar to how it is done in scikit-learn with joblib or pickle.
Can the model object for a learner be exported with joblib?
0
0
1
0
0
61
36,445,861
2016-04-06T08:42:00.000
4
0
0
1
0
python,root,python-2.x
0
36,445,985
0
2
0
true
0
0
Try to use os.seteuid(some_user_id) before os.system("some bash command").
2
2
0
0
I have a python 2 script that is run as root. I want to use os.system("some bash command") without root privileges, how do I go about this?
using os.system() to run a command without root
0
1.2
1
0
0
1,614
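A sketch of the accepted suggestion above; it assumes the script itself starts as root, and 1000 is a placeholder uid/gid for the unprivileged account on your system:

```python
import os

# Drop the group first, then the user -- once the effective uid is
# unprivileged, the process can no longer change its gid.
os.setegid(1000)
os.seteuid(1000)

os.system("whoami")   # the shell command now runs without root privileges
```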
36,445,861
2016-04-06T08:42:00.000
-1
0
0
1
0
python,root,python-2.x
0
47,768,790
0
2
0
false
0
0
I have tested this on my PC. If you run the python script like 'sudo test.py', the question is resolved.
2
2
0
0
I have a python 2 script that is run as root. I want to use os.system("some bash command") without root privileges, how do I go about this?
using os.system() to run a command without root
0
-0.099668
1
0
0
1,614
36,456,584
2016-04-06T16:10:00.000
1
0
0
0
0
python,pygame
0
36,456,629
0
1
0
false
0
1
Use python3.5 -mpip install pygame.
1
0
0
0
I am running El Capitan, when I type python --version the terminal prints Python 2.7.10, I have successfully downloaded pygame for Python 2.7.10 but I want to develop in python 3.5.1, I know I can do this by entering python3 in the terminal, but how do I properly set up pygame for this version of python?
How to download pygame for non default version of python
0
0.197375
1
0
0
44
36,462,875
2016-04-06T21:41:00.000
0
0
0
0
0
python,django,testing,automated-tests
0
36,485,158
0
1
0
true
1
0
Ok, so the key is quite simple, the file is not supposed to start with test. I named it blub_test.py and then called it with ./manage.py test --pattern="blub_test.py"
1
0
0
0
I have a test file with tests in it which will not be called with the regular manage.py test command, only when I specifically tell django to do so. So my file lives in the same folder as tests.py and its name is test_blub.py I tried it with manage.py test --pattern="test_*.py" Any idea?
how to use --patterns for tests in django
0
1.2
1
0
0
528
36,473,334
2016-04-07T10:23:00.000
1
0
1
0
0
python,python-2.7
0
36,473,657
0
1
0
false
0
0
If you have pip installed, try to run pip install parse as root.
1
0
0
0
Can somebody let me know how to install python "parse" module for python2.7 version? Server details : CloudLinux Server release 5.11 cPanel 54.0 (build 21)
Parse python module installation for python 2.7
0
0.197375
1
0
0
3,954
36,485,392
2016-04-07T19:25:00.000
0
0
1
1
0
ipython,beaker-notebook
0
37,124,128
0
2
0
false
0
0
If you want to change the current working directory, I don't think that's possible. But if you want to serve files as in make them available to the web server that creates the page, use ~/.beaker/v1/web as described in the "Generating and accessing web content" tutorial.
1
2
0
0
Trying to experiment with Beaker Notebooks, but I can not figure out how to launch from a specified directory. I've downloaded the .zip file (I'm on Windows 10), and can launch from that directory using the beaker.command batch file, but cannot figure out where to configure or set a separate launch directory for a specific notebook. With Jupyter notebooks, launching from the saved .ipynb file serves from that directory, but I cannot figure out how to do the same for Beaker notebooks. Does anyone know the correct method to serve a Beaker Notebook from various parent directories? Thanks.
How to serve Beaker Notebook from different directory
0
0
1
0
0
178
36,490,085
2016-04-08T01:40:00.000
2
1
0
0
0
python,python-2.7,twitter,tweepy
0
42,499,529
0
3
0
false
0
0
Once you have a tweet, the tweet includes a user, which belongs to the user model. To call the location just do the following tweet.user.location
1
5
0
1
I am starting to make a python program to get user locations, I've never worked with the twitter API and I've looked at the documentation but I don't understand much. I'm using tweepy, can anyone tell me how I can do this? I've got the basics down, I found a project on github on how to download a user's tweets and I understand most of it.
How to get twitter user's location with tweepy?
0
0.132549
1
0
1
13,479
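A short tweepy sketch of the answer above (tweet.user.location); the credentials and screen name are placeholders:

```python
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Every status object carries a User model; location is one of its fields.
for tweet in api.user_timeline(screen_name="some_user", count=5):
    print(tweet.user.location)
```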
36,552,029
2016-04-11T14:48:00.000
3
0
0
0
0
python,flask,formatting,pygal
0
36,914,709
0
1
0
true
0
1
graph.value_formatter = lambda y: "{:,}".format(y) will get you the commas. graph.value_formatter = lambda y: "${:,}".format(y) will get you the commas and the dollar sign. Note that this formatting seems to be valid for Python 2.7 but would not work on 2.6.
1
2
0
1
I'm using Pygal (with Python / Flask) relatively successfully in regards to loading data, formatting colors, min/max, etc., but can't figure out how to format a number in Pygal using dollar signs and commas. I'm getting 265763.557372895 and instead want $265,763. This goes for both the pop-up boxes when hovering over a data point, as well as the y-axes. I've looked through pygal.org's documentation to no avail. Does anyone know how to properly format those numbers? UPDATE: I'm not quite ready to mark this question "answered" as I still can't get the separating commas. However, I did find the following native formatting option in pygal. This eliminates trailing decimals (without using Python's int()) and adds a dollar sign: graph.value_formatter = lambda y: "$%.0f" % y Change the 0f to 2f if you prefer two decimals, etc.
Properly formatting numbers in pygal
0
1.2
1
0
0
784
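A sketch combining the two ideas in the thread above (the comma formatting from the accepted answer plus the .0f truncation from the question's update), assuming a simple pygal bar chart:

```python
import pygal

graph = pygal.Bar()
graph.add("Revenue", [265763.557372895, 98342.1])   # placeholder data

# Dollar sign, thousands separators, no trailing decimals
# (use "${:,.2f}" to keep two decimals instead).
graph.value_formatter = lambda y: "${:,.0f}".format(y)

graph.render_to_file("revenue.svg")
```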
36,576,158
2016-04-12T14:23:00.000
0
1
0
0
0
python,ssh,paramiko
0
36,601,638
0
1
0
true
0
0
I figured out a way to get the data, it was pretty straight forward to be honest, albeit a little hackish. This might not work in other cases, especially if there is latency, but I could also be misunderstanding what's happening: When the connection opens, the server spits out two messages, one saying it can't chdir to a particular directory, then a few milliseconds later it spits out another message stating that you need to connect to the other IP. If I send a command immediately after connecting (doesn't matter what command), exec_command will interpret this second message as the response. So for now I have a solution to my problem as I can check this string for a known message and change the flow of execution. However, if what I describe is accurate, then this may not work in situations where there is too much latency and the 'test' command isn't sent before the server response has been received. As far as I can tell (and I may be very wrong), there is currently no proper way to get the stdout stream immediately after opening the connection with paramiko. If someone knows a way, please let me know.
1
1
0
0
I'm writing a script that uses paramiko to ssh onto several remote hosts and run a few checks. Some hosts are setup as fail-overs for others and I can't determine which is in use until I try to connect. Upon connecting to one of these 'inactive' hosts the host will inform me that you need to connect to another 'active' IP and then close the connection after n seconds. This appears to be written to the stdout of the SSH connection/session (i.e. it is not an SSH banner). I've used paramiko quite a bit, but I'm at a loss as to how to get this output from the connection, exec_command will obviously give me stdout and stderr, but the host is outputting this immediately upon connection, and it doesn't accept any other incoming requests/messages. It just closes after n seconds. I don't want to have to wait until the timeout to move onto the next host and I'd also like to verify that that's the reason for not being able to connect and run the checks, otherwise my script works as intended. Any suggestions as to how I can capture this output, with or without paramiko, is greatly appreciated.
Paramiko get stdout from connection object (not exec_command)
0
1.2
1
0
1
391
36,593,464
2016-04-13T09:02:00.000
1
0
1
0
0
python,multithreading,zeromq,pyzmq
0
36,602,302
0
1
0
true
0
0
There is no free lunch, even if some marketing blurb promises one; do not take it for granted. "Efficient" usually means complex resource handling, and "simplest to implement" usually fights with overheads and efficient resource handling. Simplest? Using Henry Ford's point of view, a component that is not present in one's design simply cannot fail. In this very sense, we strive here not to programmatically control anything beyond the simplest possible use of the elementary components of an otherwise smart ZeroMQ library: Scenario SIMPLEST: Rule a) The central HQ-unit (be it just a thread or a fully isolated process) .bind()-s its receiving port (pre-set as a ZMQ_SUBSCRIBE behaviour-archetype) and "subscribes" its topic-filter to "everything" with .setsockopt( ZMQ_SUBSCRIBE, "" ) before it spawns the first DAQ-{ thread | process }, further ref'd as DAQ-unit. Rule b) Each DAQ-unit simply .connect()-s to the already set up & ready port on the HQ-unit with a unit-local socket access-port, pre-set as a ZMQ_PUBLISH behaviour-archetype. Rule c) Any DAQ-unit simply .send( ..., ZMQ_NOBLOCK )-s its local data as needed via a message, which is delivered in the background by the ZeroMQ layer into the hands of the HQ-unit, where it is queued & available for further processing at the HQ-unit's will. Rule d) The HQ-unit regularly loops and .poll( 1 )-s for the presence of a collected message from any DAQ-unit + .recv( ZMQ_NOBLOCK )-s in case any such was present. That's all. Asynchronous: yes. Non-blocking: yes. Simplest: yes. Scaleable: yes. Almost linearly, until I/O-bound (still some tweaking possible to handle stressed I/O operations) as a bonus point...
1
1
0
0
The parent process launches a few tens of threads that receive data (up to few KB, 10 requests per second), which has to be collected in a list in the parent process. What is the recommended way to achieve this, which is efficient, asynchronous, non-blocking, and simpler to implement with least overhead? The ZeroMQ guide recommends using a PAIR socket archetype for coordinating threads, but how to scale that with a few hundred threads?
How to collect data from multiple threads in the parent process with ZeroMQ while keeping the solution both as simple & as scaleable as possible?
0
1.2
1
0
0
214
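A condensed pyzmq sketch of rules a) to d) above, with both ends in one process and a localhost endpoint as placeholders; in the real setup each DAQ-unit runs in its own thread or process:

```python
import time
import zmq

ctx = zmq.Context.instance()

# Rule a) HQ-unit: bind a SUB socket and subscribe to everything.
hq = ctx.socket(zmq.SUB)
hq.bind("tcp://127.0.0.1:5556")
hq.setsockopt_string(zmq.SUBSCRIBE, "")

# Rule b) DAQ-unit: connect a PUB socket to the HQ endpoint.
daq = ctx.socket(zmq.PUB)
daq.connect("tcp://127.0.0.1:5556")
time.sleep(0.2)   # let the subscription propagate (PUB/SUB slow-joiner)

# Rule c) DAQ-unit sends without blocking.
daq.send_string("local-data-sample", zmq.NOBLOCK)

# Rule d) HQ-unit polls and receives without blocking.
if hq.poll(1000):
    print(hq.recv_string(zmq.NOBLOCK))
```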
36,643,784
2016-04-15T09:49:00.000
2
0
0
0
0
python-2.7,wxpython
0
36,649,037
0
2
0
true
0
1
Set the style on the text ctrl as TE_PASSWORD: The text will be echoed as asterisks.
1
0
0
0
I want to add a simple password check to a Python/wxPython/MySQL application to confirm that the user wants to carry out a particular action. So far I have a DialogBox with a textCtrl for password input and Buttons for Submit or Cancel. At the moment the password appears in the textCtrl. I would prefer this to appear as asterisks whilst the user input is captured but cannot figure out how to do this. How could I implement this?
Creating Simple Password Check In wxPython
0
1.2
1
0
0
493
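A minimal wxPython sketch of the accepted answer above; the dialog itself is a bare placeholder with no layout:

```python
import wx

app = wx.App(False)
dlg = wx.Dialog(None, title="Confirm action")

# wx.TE_PASSWORD makes the control echo every typed character as an asterisk.
password = wx.TextCtrl(dlg, style=wx.TE_PASSWORD)

dlg.ShowModal()   # read password.GetValue() after the user submits
```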
36,645,076
2016-04-15T10:46:00.000
3
0
0
0
1
python,django
1
52,244,829
1
3
0
true
1
0
I know it has been a while since I asked the question. I finally fixed this by changing hosts. I went with DigitalOcean (created a new droplet), which supports WSGI. I deployed the app using Gunicorn (application server) and Nginx (proxy server). It is not a good idea to deploy a Django app on shared hosting, as you will be limited, especially when installing the required packages.
2
5
0
0
I am trying to deploy a django app on hostgator shared hosting. I followed the hostgator django installation wiki and i deployed my app. The issue is that i am getting a 500 error internal page when entering the site url in the browser. I contacted the support team but could not provide enough info on troubleshooting the error Premature end of script headers: fcgi.This was the error found on the server error log. I am installed django 1.9.5 on the server and from the django documentation it does not support fastcgi. So my question 500 error be caused by the reason that i am running django 1.9.5 on the server and it does not support fastcgi. if so do i need to install lower version of django to support the fastcgi supported by hostgator shared hosting First i thought the error was caused by my .htaccess file but it has no issue from the what i heard from support team. Any Leads to how i can get the app up and running will be appreciated. This is my first time with django app deployment. Thank you in advance
Django app deployment on shared hosting
0
1.2
1
0
0
6,154
36,645,076
2016-04-15T10:46:00.000
0
0
0
0
1
python,django
1
36,646,426
1
3
0
false
1
0
As you say, Django 1.9 does not support FastCGI. You could try using Django 1.8, which is a long term support release and does still support FastCGI. Or you could switch to a different host that supports deploying Django 1.9 with wsgi.
2
5
0
0
I am trying to deploy a django app on hostgator shared hosting. I followed the hostgator django installation wiki and i deployed my app. The issue is that i am getting a 500 error internal page when entering the site url in the browser. I contacted the support team but could not provide enough info on troubleshooting the error Premature end of script headers: fcgi.This was the error found on the server error log. I am installed django 1.9.5 on the server and from the django documentation it does not support fastcgi. So my question 500 error be caused by the reason that i am running django 1.9.5 on the server and it does not support fastcgi. if so do i need to install lower version of django to support the fastcgi supported by hostgator shared hosting First i thought the error was caused by my .htaccess file but it has no issue from the what i heard from support team. Any Leads to how i can get the app up and running will be appreciated. This is my first time with django app deployment. Thank you in advance
Django app deployment on shared hosting
0
0
1
0
0
6,154
36,669,500
2016-04-16T20:42:00.000
0
0
0
1
0
python,azure,web-applications,azure-webjobs
0
36,669,596
0
2
0
false
1
0
You would need to provide some more information about what kind of interface your web app exposes. Does it only handle normal HTTP1 requests or does it have a web socket or HTTP2 type interface? If it has only HTTP1 requests that it can handle then you just need to make multiple requests or try and do long polling. Otherwise you need to connect with a web socket and stream the data over a normal socket connection.
2
0
0
0
I have a python script that runs continuously as a WebJob (using Microsoft Azure), it generates some values (heart beat rate) continuously, and I want to display those values in my Web App. I don't know how to proceed to link the WebJob to the web app. Any ideas ?
Streaming values from a python script to a web app
0
0
1
0
0
63
36,669,500
2016-04-16T20:42:00.000
1
0
0
1
0
python,azure,web-applications,azure-webjobs
0
36,671,291
0
2
0
true
1
0
You have two main options: You can have the WebJobs write the values to a database or to Azure Storage (e.g. a queue), and have the Web App read them from there. Or if the WebJob and App are in the same Web App, you can use the file system. e.g. have the WebJob write things into %home%\data\SomeFolderYouChoose, and have the Web App read from the same place.
2
0
0
0
I have a python script that runs continuously as a WebJob (using Microsoft Azure), it generates some values (heart beat rate) continuously, and I want to display those values in my Web App. I don't know how to proceed to link the WebJob to the web app. Any ideas ?
Streaming values from a python script to a web app
0
1.2
1
0
0
63
36,687,929
2016-04-18T07:35:00.000
0
0
0
0
0
python-2.7,scikit-learn,text-classification
0
36,693,072
0
1
0
true
0
0
As with most supervised learning algorithms, Random Forest Classifiers do not use a similarity measure, they work directly on the feature supplied to them. So decision trees are built based on the terms in your tf-idf vectors. If you want to use similarity then you will have to compute a similarity matrix for your documents and use this as your features.
1
1
1
1
I am doing some work in document classification with scikit-learn. For this purpose, I represent my documents in a tf-idf matrix and feed a Random Forest classifier with this information, works perfectly well. I was just wondering which similarity measure is used by the classifier (cosine, euclidean, etc.) and how I can change it. Haven't found any parameters or informatin in the documentation. Thanks in advance!
similarity measure scikit-learn document classification
0
1.2
1
0
0
350
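A sketch of the last suggestion in the answer above (feeding a precomputed document-to-document similarity matrix to the forest instead of raw tf-idf terms); the corpus and labels are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.ensemble import RandomForestClassifier

docs = ["first sample document", "second sample document", "something unrelated"]
labels = [0, 0, 1]

tfidf = TfidfVectorizer().fit_transform(docs)

# Each document is now described by its cosine similarity to every
# other document, rather than by its individual tf-idf terms.
similarity_features = cosine_similarity(tfidf)

clf = RandomForestClassifier(n_estimators=100).fit(similarity_features, labels)
```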
36,694,973
2016-04-18T13:10:00.000
0
1
0
0
0
java,python,server,client,hessian
0
36,695,144
0
1
0
true
1
0
Burlap and Hessian are 2 different (but related) RPC protocols, with Burlap being XML based and Hessian being binary. They're both also pretty ancient, so if you have an opportunity to use something else, I'd highly recommend it. If not, then you're going to have to find a Burlap lib for Python. Since it seems that a Burlap lib for Python simply doesn't exist (at least anymore), your best choice is probably to make a small Java proxy that communicates with a more recent protocol with the Python side and in Burlap with the Java server.
1
0
0
0
I'm trying to connect a burlap java server with a python client but I can't find any detail whatsoever regarding how to use burlap with python or if it even is implemented for python. Any ideas? Can I build burlap python clients? Any resources? Would using a hessian python client work with a java burlap server?
Burlap java server to work with python client
0
1.2
1
0
0
205
36,717,654
2016-04-19T11:56:00.000
0
0
0
0
0
python,aws-lambda,aws-api-gateway
0
36,757,815
0
2
0
false
1
0
You could return it base64-encoded...
2
0
0
0
I have a lambdas function that resizes and image, stores it back into S3. However I want to pass this image to my API to be returned to the client. Is there a way to return a png image to the API gateway, and if so how can this be done?
Passing an image from Lambda to API Gateway
0
0
1
0
1
835
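A sketch of the base64 suggestion above; the handler name, file path and response shape are placeholders (at the time, API Gateway only passed text payloads through):

```python
import base64

def lambda_handler(event, context):
    # /tmp/resized.png stands for the image the function just produced
    # (e.g. downloaded back from S3 after resizing).
    with open("/tmp/resized.png", "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")

    # The client decodes image_base64 back into PNG bytes.
    return {"image_base64": encoded}
```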
36,717,654
2016-04-19T11:56:00.000
0
0
0
0
0
python,aws-lambda,aws-api-gateway
0
36,727,013
0
2
0
false
1
0
API Gateway does not currently support passing through binary data either as part of a request nor as part of a response. This feature request is on our backlog and is prioritized fairly high.
2
0
0
0
I have a lambdas function that resizes and image, stores it back into S3. However I want to pass this image to my API to be returned to the client. Is there a way to return a png image to the API gateway, and if so how can this be done?
Passing an image from Lambda to API Gateway
0
0
1
0
1
835
36,722,975
2016-04-19T15:31:00.000
3
1
0
0
1
python,g++,theano
1
40,705,647
0
6
0
false
0
1
This is the error that I experienced on my Mac running a Jupyter notebook with a Python 3.5 kernel; hope this helps someone (I am sure rggir is well sorted at this stage :) ). Error: Using Theano backend. WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string. Cause: an update of Xcode (the g++ compiler) without accepting its terms and conditions, as was pointed out above, thanks Emiel. Resolution: type g++ --version in the Mac terminal; "Agreeing to the Xcode/iOS license requires admin privileges, please re-run as root via sudo." is output as an error. Launch Xcode and accept the terms and conditions, then run g++ --version in the terminal again. Something similar to the following will be returned to show that Xcode has been fully installed and g++ is now available to Keras: Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1 Apple LLVM version 8.0.0 (clang-800.0.42.1) Target: x86_64-apple-darwin15.6.0 Thread model: posix InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin. Restart your machine (I am sure there are more refined steps that someone smarter than me can add here to make this faster). Run the model.fit function of the Keras application, which should run faster now... win!
3
18
0
0
I installed theano but when I try to use it I got this error: WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. I installed g++, and put the correct path in the environment variables, so it is like theano does not detect it. Does anyone know how to solve the problem or which may be the cause?
theano g++ not detected
0
0.099668
1
0
0
31,119
36,722,975
2016-04-19T15:31:00.000
7
1
0
0
1
python,g++,theano
1
39,568,992
0
6
0
false
0
1
I had this occur on OS X after I updated XCode (through the App Store). Everything worked before the update, but after the update I had to start XCode and accept the license agreement. Then everything worked again.
3
18
0
0
I installed theano but when I try to use it I got this error: WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. I installed g++, and put the correct path in the environment variables, so it is like theano does not detect it. Does anyone know how to solve the problem or which may be the cause?
theano g++ not detected
0
1
1
0
0
31,119
36,722,975
2016-04-19T15:31:00.000
6
1
0
0
1
python,g++,theano
1
37,846,308
0
6
0
false
0
1
On Windows, you need to install mingw to support g++. Usually, it is advisable to use Anaconda distribution to install Python. Theano works with Python3.4 or older versions. You can use conda install command to install mingw.
3
18
0
0
I installed theano but when I try to use it I got this error: WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. I installed g++, and put the correct path in the environment variables, so it is like theano does not detect it. Does anyone know how to solve the problem or which may be the cause?
theano g++ not detected
0
1
1
0
0
31,119
36,759,037
2016-04-21T03:24:00.000
1
0
0
0
0
python,tensorflow
0
36,870,610
0
1
0
true
0
0
The model_with_buckets() function in seq2seq.py returns 2 tensors: the output and the losses. The outputs variable contains the raw output of the decoder that you're looking for (that would normally be fed to the softmax).
1
0
1
0
I was using Tensorflow sequence to sequence example code. for some reason, I don't want to add softmax to output. instead, I want to get the raw output of decoder without softmax. I was wondering if anyone know how to do it based on sequence to sequence example code? Or I need to create it from scratch or modify the the seq2seq.py (under the /tensorflow/tensorflow/python/ops/seq2seq.py)? Thank you
tensorflow sequence to sequence without softmax
0
1.2
1
0
0
389
36,779,522
2016-04-21T20:12:00.000
2
0
0
0
0
python,r,csv
0
36,780,531
0
3
1
false
0
0
Use sed '2636759d' file.csv > fixedfile.csv. As a test on a 40,001-line, 1.3 GB csv, removing line 40,000 this way takes 0m35.710s. The guts of the python solution from @en_Knight (just stripping the line and writing to a temp file) is ~2 seconds faster for this same file. Edit: OK, sed (or some implementations) may not work (based on feedback from the questioner). In plain bash, to remove row n from a file of N rows, file.csv, you can do head -[n-1] file.csv > file_fixed.csv followed by tail -[N-n] file.csv >> file_fixed.csv (in both of these the expression in brackets is replaced by a plain number). To do this, though, you need to know N. The python solution is better...
1
6
1
0
I have a ~220 million row, 7 column csv file. I need to remove row 2636759. This file is 7.7GB, more than will fit in memory. I'm most familiar with R, but could also do this in python or bash. I can't read or write this file in one operation. What is the best way to build this file incrementally on disk, instead of trying to do this all in memory? I've tried to find this on SO but have only been able to find how to do this with files that are small enough to read/write in memory, or with rows that are at the beginning of the file.
remove known exact row in huge csv
0
0.132549
1
0
0
1,468
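A pure-Python sketch of the streaming approach referred to in the answer above (equivalent to the sed one-liner, never holding the 7.7 GB file in memory); the filenames are placeholders:

```python
SKIP = 2636759   # 1-based number of the row to drop

with open("file.csv") as src, open("fixedfile.csv", "w") as dst:
    for lineno, line in enumerate(src, start=1):
        if lineno != SKIP:
            dst.write(line)   # every other line is copied through unchanged
```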
36,824,269
2016-04-24T14:17:00.000
1
0
1
1
0
python,python-3.x,cmd,python-import
0
36,824,295
0
1
0
false
0
0
Save the program with a .py extension, for example hello.py. Then run it with python <script_name>.py, for example python hello.py.
1
0
0
0
I am trying to run a python program import random random.random() Written in notepad in two different lines,I want to run it in cmd.how to do it?
Running python from notepad in cmd
0
0.197375
1
0
0
45
36,831,877
2016-04-25T04:08:00.000
0
0
1
0
1
python
1
36,831,904
0
5
0
false
0
0
Instead of just double-clicking the file, run it from the command line. A terminal window that your program automatically created will also be automatically closed when the program ends, but if you open a terminal yourself and run the program from the command line, it won't touch the open terminal and you can read the error.
1
3
0
0
So I know I can make Python executable using pyinstaller. However, every time it raises an error, it will instantly end the program, so I can't find what is the error. I know I probably can use time.sleep(30000) to stop it. But if the code raises error before it meets time.sleep(30000), it will just shut down. To sum up, how to make it keep not shutting down, so I can see where is the mistake?
How to make Python executable pause when it raises an error?
0
0
1
0
0
5,599
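Besides running from the command line as the answer above suggests, another common workaround (not from that answer) is to catch the exception at the top level and wait for a key press before the window closes; a minimal sketch, assuming Python 3 (use raw_input on Python 2):

```python
import traceback

def main():
    raise ValueError("placeholder for whatever the real program does")

if __name__ == "__main__":
    try:
        main()
    except Exception:
        traceback.print_exc()              # show the full error first
        input("Press Enter to close...")   # keep the console window open
```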
36,836,101
2016-04-25T09:04:00.000
0
0
0
0
0
python-2.7,web-applications,pyramid
0
36,873,321
0
2
0
false
1
0
If you're using jinja, try this: <div class="html-content">{{scraped_html|safe}}</div>
1
0
0
0
please I am working on a site where I would scrap another website's html table source code and append it to my template before rendering my page. I have written the script which stores the html code in a variable but don't know how to appendix it. Kindly suggest.
How to modify a pyramid template on the fly before rendering
0
0
1
0
0
45
36,859,840
2016-04-26T08:49:00.000
0
0
0
0
0
python,numpy,fft
0
36,976,133
0
1
0
true
0
0
Yes. Apply fftfreq to each spatial vector (x and y) separately. Then create a meshgrid from those frequency vectors. Note that you need to use fftshift if you want the typical representation (zero frequencies in center of spatial spectrum) to both the output and your new spatial frequencies (before using meshgrid).
1
0
1
0
I know that for fft.fft, I can use fft.fftfreq. But there seems to be no such thing as fft.fftfreq2. Can I somehow use fft.fftfreq to calculate the frequencies in 2 dimensions, possibly with meshgrid? Or is there some other way?
How to extract the frequencies associated with fft2 values in numpy?
1
1.2
1
0
0
343
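A numpy sketch of the answer above (fftfreq per axis, then meshgrid, with fftshift applied consistently); the grid shape and sample spacings are placeholders:

```python
import numpy as np

ny, nx = 128, 256          # grid shape (rows, columns)
dy, dx = 0.5, 0.25         # sample spacing along each axis

data = np.random.rand(ny, nx)
spectrum = np.fft.fftshift(np.fft.fft2(data))

fy = np.fft.fftshift(np.fft.fftfreq(ny, d=dy))   # frequencies along axis 0
fx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))   # frequencies along axis 1
FX, FY = np.meshgrid(fx, fy)                     # same shape as spectrum
```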
36,861,358
2016-04-26T09:51:00.000
0
1
0
0
0
python,node.js,django,nginx
0
36,862,095
0
1
0
false
0
0
You really should move away from using the IP as the restriction. Not only can the IP be changed, allowing an intermediary to replay the OTP, but a combination of the visiting IP with additional unique vectors would serve as a better way of identifying the visitor and associating the OTP with their access. Because of this, the throttling you wish to implement would be better served at the code or application level rather than at your web server. You should be doing that anyway in order to better protect the OTP and follow the best practices associated with them: expiring them, only using them once, etc.
1
2
0
0
Scenario : I have an OTP generation API. As of now , if I do POST with contact number in body, it will be generating OTP code irrespective of how many times, it gets invoked by same ip. There is no security at code level and nginx level. Suggestions are accepted whether blocking IP should be done at code level or Nginx. I want to restrict access to api 5 times in a day from same IP .
Securing OTP API at code level or Nginx level?
0
0
1
0
0
219
36,869,258
2016-04-26T15:23:00.000
3
0
1
0
0
python-3.x,anaconda,graphviz,spyder
0
64,227,029
0
3
0
false
0
0
Open the Anaconda Prompt and run conda install python-graphviz. After installing graphviz, copy the directory C:\Users\Admin\anaconda3\Library\bin\graphviz. Then open Control Panel\System\Advanced system settings, go to Environment Variables\Path\Edit\New, paste the copied directory and click OK.
1
9
0
0
I am attempting to use Graphviz from Spyder (via an Anaconda install). I am having trouble understanding what is needed to do this and how to go about loading packages, setting variables, etc. I straight forward approach for a new Python and Graphviz and Spyder user would be great! Also, apart from just creating and running Graphviz, how can one run Graphviz from python with a pre-generated .gv file?
How to use Graphviz with Anaconda/Spyder?
0
0.197375
1
0
0
36,755
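For the last part of the question above (rendering a pre-generated .gv file from Python), a sketch using the graphviz Python package that conda installs; the filename is a placeholder and assumes the Graphviz binaries are on PATH:

```python
from graphviz import Source

# Load an existing DOT file and render it; this writes example.gv.png
# next to the source file.
src = Source(open("example.gv").read())
src.render("example.gv", format="png", cleanup=True)
```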
36,883,631
2016-04-27T07:53:00.000
0
0
1
0
0
python,arrays,cluster-computing
0
36,883,848
0
1
0
false
0
0
I also work with really big datasets (complete genomes or all possible gene combinations) and I store these in a zipped database with pickle. This way it is RAM-efficient and uses a lot less hard disk space. I suggest you try that.
1
1
0
0
I need to create a big array in python from Sqlite database. It's size is 1000_000_000*1000_000_000 and each item is one or zero. Actually, my computer can't store in RAM this volume of information. Maybe someone have idea how to work in this situation? Maybe store these vectors in database or there is some framework for similar needs? If i am able to do this, then i need to build clusters, that problem frighten me not less, with this information size. Thanks in advance/
How do you work with big array in python?
0
0
1
0
0
259
36,884,019
2016-04-27T08:12:00.000
0
0
1
0
0
python,equality
0
36,884,102
0
2
0
false
0
0
is checks whether the two items are the exact same object; this checks identity. == checks whether the two objects have equal values. You use is not None to make sure that the object is the "real" None and not just something false-y.
1
0
0
0
Two items may be unequal in many ways. Can python tell what is the reason? For example: 5 is not 6, int(5) is not float(5), "5" is not "5 ", ... Edit: I did not ask what kinds of equality test there are, but why and how those are not equal. I think my question is not duplicate.
If "a is not b" how to know what is the difference?
1
0
1
0
0
99
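A few lines illustrating the identity/equality distinction from the answer above, plus one simple way to see why two unequal objects differ:

```python
a, b = 5, 5.0

print(a == b)             # True  -- equal values
print(a is b)             # False -- not the same object
print(type(a), type(b))   # int vs float: one concrete reason they differ

s, t = "5", "5 "
print(s == t, repr(s), repr(t))   # repr() exposes the trailing space
```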
36,894,191
2016-04-27T15:24:00.000
0
0
0
0
0
python,numpy,random,machine-learning,normal-distribution
0
60,298,484
0
4
0
false
0
0
You can subdivide your target range into equal partitions, calculate the integral of the normal density over each partition and over the whole area, and then sample uniformly within each partition in proportion to its share of the total area. The integration part can be done in Python with: quad_vec(eval('scipy.stats.norm.pdf'), 1, 4, points=[0.5,2.5,3,4], full_output=True)
1
44
1
0
In machine learning task. We should get a group of random w.r.t normal distribution with bound. We can get a normal distribution number with np.random.normal() but it does't offer any bound parameter. I want to know how to do that?
How to get a normal distribution within a range in numpy?
0
0
1
0
0
43,386
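An alternative to the partitioning idea in the answer above (not what that answer describes): scipy.stats.truncnorm draws bounded normal samples directly. The bounds and parameters below are placeholders:

```python
from scipy.stats import truncnorm

low, high = 1.0, 4.0   # desired range
mu, sigma = 2.5, 1.0   # mean and std of the underlying normal

# truncnorm takes its bounds in units of sigma around loc.
a, b = (low - mu) / sigma, (high - mu) / sigma
samples = truncnorm.rvs(a, b, loc=mu, scale=sigma, size=1000)
```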
36,901,709
2016-04-27T22:01:00.000
0
0
0
0
0
jquery,python,mysql,api,scripting
0
36,901,878
0
1
0
true
0
0
You don't mention what server side language you're using, but the concepts would be the same for all - make your query to get your 200K variables, loop through the result set, making the curl call to the API for each, store the results in an array, json encode the array at the end of the loop, and then dump the result to a file. As for the limit to requests per time period, most languages have some sort of pause function, in PHP it's sleep(). If all else fails, you could put a loop that does nothing (except take time) into each call to put a delay in the process.
1
0
0
0
I am 'kind of' new to programming and must have searched a large chunk of the web in connection with this question. I am sure the answer is somewhere out there but I am probably simply not using the right terminology to find it. Nevertheless, I did my best and I am totally stuck. I hope people here understand the feeling and won't mind helping. I am currently working on a data driven web app that I am building together with an outsourced developer while also learning more about programming. I've got some rusty knowledge of it but I've been working in business-oriented non-technical roles for a few years now and the technical knowledge gathered some dust. The said web app uses MySql database to store information. In this MySql database there is currently a table containing 200,000 variables (Company Names). I want to run those Company Names through a third-party json RESTful API to return some additional data regarding those Companies. There are 2 questions here and I don't expect straight answers. Pointing me in the right learning direction would be sufficient: 1. How would I go about taking those 200,000 variables and executing a script that would automatically make 200,000 calls to the API to obtain the data I am after. How do I then save this data to a json or csv file to import to MySql? I know how to make single API requests, using curl but making automated large volume requests like that is a mystery to me. I don't know whether I should create a json file out of it or somehow queue the requests, I am lost. 2. The API mentioned above is limited to 600 calls per 5 minutes perios, how do I introduce some sort of control system so that when the maximum volume of API calls is reached the script pauses and only returns to working when the specified amount of time goes by? What language is best to interact with the json RESTful API and to write the script described in question no1? Thank you for your help. Kam
Most effective way to run execute API calls
0
1.2
1
0
1
392
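A Python sketch of the loop-and-pause pattern the answer above describes; the endpoint, parameters and limits are placeholders for the real API:

```python
import json
import time

import requests   # any HTTP client would do

def fetch_all(company_names, calls_per_window=600, window_seconds=300):
    results = []
    for i, name in enumerate(company_names, start=1):
        resp = requests.get("https://api.example.com/companies",   # hypothetical endpoint
                            params={"name": name})
        results.append(resp.json())
        if i % calls_per_window == 0:
            time.sleep(window_seconds)   # respect the 600-calls / 5-minute limit
    return results

with open("companies.json", "w") as f:
    json.dump(fetch_all(["Acme Ltd", "Globex"]), f)   # then import into MySQL
```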
36,905,809
2016-04-28T05:16:00.000
0
0
0
0
0
python,python-2.7,url
0
36,947,097
0
1
0
false
0
0
This is typically a function of the terminal that you're using. In iterm2, you can click links by pressing cmd+alt+left click. In gnome-terminal, you can click links by pressting ctrl+left click or right clicking and open link.
1
0
0
0
I know writing print "website url" does not provide a clickable url. So is it possible to get a clickable URL in python, which would take you directly to that website? And if so, how can it be done?
Clickable website URL
1
0
1
0
1
46
36,911,785
2016-04-28T10:21:00.000
0
0
1
0
0
python,artificial-intelligence
0
37,138,718
0
2
0
false
0
0
You need to assign score (evaluation) to each move based on the rules of the game. When you choose appropriate scoring method you can evaluate which out of 5 possible actions is the best one. As an example let's assume simple game where you must take all opponents pawns by placing your pawn on top of theirs (checkers with simplified/relaxed rules). When you move pawn to the next free cell without exposing your pawn to the danger you can assign score +1 and when you take opponent's pawn +3. If opponent takes your pawn in next move you subtract your score -3. You can define other scoring rules. When you apply scoring to all possible moves you can then select best move either using MinMax algorithm for 2 players game or some greedy search algorithm which just maximizes the score selecting action which yields highest score on next move without predicting opponent's move.
1
0
0
0
I don't quite see how to set this up: I must code a small AI for an asymmetric board game for 2 players. Each turn, each player has a number of action points to use to move their pieces on the board (10x10). For now I know how to generate the list of possible moves for each pawn based on the number of action points given, but I am stuck at the next step: selecting the best move. How could I encode the fact that, for example, with 5 action points it is better to move one pawn 3 cells and another 2 cells than to move one pawn 5 cells? Do I have to use a particular algorithm or apply a programming concept...? Well, I'm lost. Hope you can help me :)
define the best possible move (AI - game)
0
0
1
0
0
839
36,921,961
2016-04-28T17:46:00.000
0
0
1
0
0
python,performance,opencv,compilation,cython
0
68,894,961
0
4
0
false
0
0
If you try using Cython with Visual Studio to convert Python code into a .pyd (Python dynamic module), you will get a blurry answer: the Visual Studio version you expect to work might not, due to compatibility issues between msvc versions (for instance 1900 vs 1929), and you would need to edit the cygwin compiler settings in distutils to get things done. If you want to use MinGW, you also need to add the compiler configuration to distutils. A very simple alternative is Nuitka: it is a simple and reliable way to convert Python code into C and a Python dynamic library, with no extra configuration or support required. The basics: 1) Install Nuitka: pip install nuitka. 2) Install MinGW from SourceForge. 3) Add MinGW to PATH. Everything is good to go now. 4) Open cmd as admin and type python -m nuitka --module file.py. Nuitka will create file.c, file.pyi (for imports) and file.cp39_architecture.pyd in the current directory. From file.pyd you can import the module directly into your main.py and it will be lightning fast. If you want to create a standalone application, try python -m nuitka file.py instead.
1
10
0
0
I'm creating a project that uses Python OpenCV. My image processing is a bit slow, so I thought I can made the code faster by creating a .pyd file (I read that somewhere). I am able to create a .c file using Cython, but how to make a .pyd? While they are a kind of .dll, should I make a .dll first and convert it? And I think they're not platform-independent, what are equivalents on Unix? Thanks for any help!
How to create a .pyd file?
1
0
1
0
0
28,584
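For the Cython route the question above asks about (as opposed to the Nuitka route in the answer), a minimal build-script sketch; image_ops.pyx is a placeholder module name:

```python
# setup.py -- build with:  python setup.py build_ext --inplace
# On Windows this produces image_ops.*.pyd, on Unix the equivalent .so.
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("image_ops.pyx"))
```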
36,924,296
2016-04-28T19:54:00.000
0
0
1
0
0
python,raw-input
0
36,924,497
0
2
0
false
0
0
Well, I'm not sure about having a program understand the English language; it will only take a string literal as a string literal. "Good" does not mean good or bad to the Python interpreter. What I'd suggest is making a dictionary of all of the good phrases you want, such as "I'm good", "Feelin' great", "I'm A OK". You can store all of these good-feeling string literals in your "good feels" dictionary and vice versa for your bad-feels string literals. I'm not too sure how you'd work around spelling with <100% accuracy and still have the program pick it up (I'm a bit inexperienced myself), but I'd say a predefined dictionary is your best bet; maybe throw in an else statement that prompts the user to spell correctly if the input doesn't match any of the known sayings.
1
0
0
0
(I am new to Python and programming) I am using raw_input() so I can make a program that talks to the user. For example: Program: How are you? User: I am doing great!/I feel terrible. I need my program to respond accordingly, as in "YAY!" or "Aw man... I hope you feel better soon." so can you please give me ways to scan for words such as "good" or "bad" in the user's raw input so my program knows how to respond? I know a few ways to do this, but the problem is, I want multiple words for it to look for, like great, amazing, and awesome can all be classified into the "good" group. AND, I need it where it doesn't have to be exact. I keep on running into problems where the user has to exactly type, "I am good." instead of all the different variations that they could possibly say it. THANK YOU IN ADVANCE!
Python: If Raw_Input Contains...BLAH
0
0
1
0
0
362
36,941,823
2016-04-29T15:08:00.000
-1
0
0
0
1
python,django,entity-attribute-value
0
40,278,064
0
2
0
false
1
0
Let me try to answer; tell me whether we are on the same page. I think you need to formulate the EAV database schema first. For that, identify what the entities, attributes, and associated values are. In the example you mention, the entity may be the device and its attribute may be the setting. To take another example, in car sales the entity is the sales receipt, the attribute is the product purchased by the customer (a car), and the values are price, car model, car colour, etc. Make master tables and tables that store mappings, if any. Implementing this schema in models.py will give you your models; then insert values into those models through the shell or an insert script.
1
6
0
0
I need to implement a fairly standard entity-attribute-value hierarchy. There are devices of multiple types, each type has a bunch of settings it can have, each individual device has a set of particular values for each setting. It seems that both django-eav and eav-django packages are no longer maintained, so I guess I need to roll my own. But how do I architect this? So far, I am thinking something like this (skipping a lot of detail) class DeviceType(Model): name = CharField() class Device(Model): name = CharField() type = ForeignKey(DeviceType) class Setting(Model): name = CharField() type = CharField(choices=(('Number', 'int'), ('String', 'str'), ('Boolean', 'bool'))) device_type = ForeignKey(DeviceType) class Value(Model): device = ForeignKey(Device) setting = ForeignKey(Setting) value = CharField() def __setattr__(self, name, value): if name == 'value': ... do validation based on the setting type ... def __getattr__(self, name): if name == 'value': ... convert string to whatever is the correct value for the type ... Am I missing something? Is there a better way of doing this? Will this work?
How to implement EAV in Django
0
-0.099668
1
0
0
2,167
36,946,288
2016-04-29T19:40:00.000
1
1
0
1
0
python,git
0
36,946,689
0
1
0
true
0
0
Check .git/FETCH_HEAD for the timestamp and the content. Every time you fetch, git updates both the content and the modification time of that file.
1
0
0
0
TL;DR I would like to be able to check if a git repo (located on a shared network) was updated without using a git command. I was thinking checking one of the files located in the .git folder to do so, but I can't find the best file to check. Anyone have a suggestion on how to achieve this? Why: The reason why I need to do this is because I have many git repos located on a shared drive. From a python application I built, I synchronize the content of some of these git repo on a local drive on a lot of workstation and render nodes. I don't want to use git because the git server is not powerful enough to support the amount of requests of all the computers in the studio would need to perform constantly. This is why I ended up with the solution of putting the repos on the network server and syncing the repo content on a local cache on each computer using rsync That works fine, but the as time goes by, the repos are getting larger and the rsync is taking too much time to perform. So I would like to be have to (ideally) check one file that would tell me if the local copy is out of sync with the network copy and perform the rsync only when they are out of sync. Thanks
How to check if a git repo was updated without using a git command
1
1.2
1
0
0
63
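A sketch of the check suggested above, comparing the modification time of .git/FETCH_HEAD against the time of the last sync; the path and the stored timestamp are placeholders:

```python
import os

repo = "/mnt/shared/myrepo"   # network copy of the repo
fetch_head = os.path.join(repo, ".git", "FETCH_HEAD")
last_sync = 1462000000.0      # timestamp saved after the previous rsync

if os.path.getmtime(fetch_head) > last_sync:
    print("repo changed since last sync -- run rsync")
```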
36,956,477
2016-04-30T15:03:00.000
1
0
0
1
1
python,linux,python-3.x,dbus,bluez
0
36,988,374
0
1
0
true
0
0
A system update resolved this problem.
1
1
0
0
I have a BLE device which has a bunch of GATT services running on it. My goal is to access and read data from the service characteristics on this device from a Linux computer (BlueZ version is 5.37). I have enabled experimental mode - therefore, full GATT support should be available. BlueZ's DBUS API, however, only provides the org.bluez.GattManager1 interface for the connected device, and not the org.bluez.GattCharacteristic1 or org.bluez.GattService1 interfaces which I need. Is there something I'm doing wrong? The device is connected and paired, and really I've just run out of ideas as how to make this work, or what may be wrong. If it helps, I'm using Python and the DBUS module to interface with BlueZ.
BlueZ DBUS API - GATT interfaces unavailable for BLE device
0
1.2
1
0
0
1,158
36,957,843
2016-04-30T17:14:00.000
0
1
1
0
0
python,shell,path,environment-variables
0
36,957,901
0
2
0
false
0
0
PYTHONPATH is the default search path for importing modules. If you use bash, you could type echo $PYTHONPATH to look at it.
1
3
0
0
What is the $PYTHONPATH variable, and what's the significance in setting it? Also, if I want to know the content of my current pythonpath, how do I find that out?
Trying to understand the pythonpath variable
0
0
1
0
0
464
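Two lines that show both the raw variable and the effective search path it feeds into (works from any shell, not just bash):

```python
import os
import sys

print(os.environ.get("PYTHONPATH", "(not set)"))  # the environment variable itself
print(sys.path)                                   # the full module search path
```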
36,960,576
2016-04-30T21:42:00.000
2
0
1
0
0
python,sorting
0
36,960,631
0
1
0
true
0
0
Use a tuple of (my_date, my_time) as the "single element" you're sorting on. You could build a datetime.datetime object from the two, but that seems unnecessary just to sort them. This applies in general to any situation where you want a lexicographical comparison between multiple quantities. "Lexicographical" meaning, most-significant first with less-significant quantities as tie-breakers, which is exactly what the standard comparisons do for tuple.
1
0
0
0
I have a list of objects that each contain a datetime.date() and a datetime.time() element. I know how to sort the array based on a single element using insertion sort, or any other sorting algorithm. However, how would I sort this list in chronological order using date AND time?
Python: sort by date and time?
0
1.2
1
0
0
797
36,972,296
2016-05-01T21:40:00.000
0
0
0
0
0
python,amazon-sqs
0
36,972,378
0
2
0
false
1
0
It looks like you can do the following: the Assigner reads from the assigner queue and assigns the proper ids, packs the data in bulk and uploads it to S3, then sends the S3 path to the Dumper queue. The Dumper reads the bulks and dumps them to the DB in bulk.
2
1
0
0
I am trying to scale an export system that works in the following steps: Fetch a large number of records from a MySQL database. Each record is a person with an address and a product they want. Make an external API call to verify address information for each of them. Make an internal API call to get store and price information about the product on each record. Assign identifiers to each record in a specific format, which is different for each export. Dump all the data into a file, zip it and email it. As of now all of this happens in one monolithic python script which is starting to show its age. As the number of records being exported at a time has grown by about 10x, the script takes a lot of memory and whole export process is slow because all the steps are blocking and sequential. In order to speed up the process and make it scalable I want to distribute the work into a chain of SQS queues. This is quite straightforward for the first 4 steps: Selector queue - takes a request, decides which records will be exported. Creates a msg for each of them in the verifier queue with export_id and record_id. Verifier queue - takes the id of the record, makes the API call to verify its address. Creates a msg in the price queue with export_id and record_id. Price queue - takes the id of a record, makes the API call to get prices and attaches it to the record. Creates a msg in the assigner queue with export_id and record_id. Assigner queue - takes the id of a record, assigns it the sequential export ID. Creates a msg in the dumper queue with export_id and record_id. Dumper queue - ??? This is all fine and dandy till now. Work is parallelized and we can add more workers to whichever step needs them the most. I'm stumped by how to add the last step in the process? Till now all the queues have been (suitably) dumb. They get a msg, perform an action and pass it on. In the current script, by the time we reach the last step, the program can be certain that all previous steps are complete for all the records and it is time to dump the information. How should I replicate this in the distributed case? Here are the options I could think of: The dumper queue just saves it's incoming msgs in a DB table till it gets a msg flagged "FINAL" and then it dumps all msgs of that export_id. This makes the final msg a single point of failure. If multiple exports are being processed at the same time, order of msgs is not guaranteed so deciding which msg is final is prone to failure. Pass an expected_total and count in each step and the dumper queue waits till it gets enough msgs. This would cause the dumper queue to get blocked and other exports will have to wait till all msgs of a previously started export are received. Will also have to deal with possibly infinite wait time in some way if msgs get lost. None of the above options seem good enough. What other options do I have? At a high level, consistency is more important than availability in this problem. So the exported files can arrive late, but they should be correct. Msg Delay Reasons As asked in the comments: Internal/External API response times may vary. Hard to quantify. If multiple exports are being processed at the same time, msgs from one export may get lagged behind or be received in a mixed sequence in queues down the line.
Scaling a sequential program into chain of queues
0
0
1
0
0
121
36,972,296
2016-05-01T21:40:00.000
0
0
0
0
0
python,amazon-sqs
0
61,071,601
0
2
0
false
1
0
You should probably use a cache instead of a queue.
2
1
0
0
I am trying to scale an export system that works in the following steps: Fetch a large number of records from a MySQL database. Each record is a person with an address and a product they want. Make an external API call to verify address information for each of them. Make an internal API call to get store and price information about the product on each record. Assign identifiers to each record in a specific format, which is different for each export. Dump all the data into a file, zip it and email it. As of now all of this happens in one monolithic python script which is starting to show its age. As the number of records being exported at a time has grown by about 10x, the script takes a lot of memory and whole export process is slow because all the steps are blocking and sequential. In order to speed up the process and make it scalable I want to distribute the work into a chain of SQS queues. This is quite straightforward for the first 4 steps: Selector queue - takes a request, decides which records will be exported. Creates a msg for each of them in the verifier queue with export_id and record_id. Verifier queue - takes the id of the record, makes the API call to verify its address. Creates a msg in the price queue with export_id and record_id. Price queue - takes the id of a record, makes the API call to get prices and attaches it to the record. Creates a msg in the assigner queue with export_id and record_id. Assigner queue - takes the id of a record, assigns it the sequential export ID. Creates a msg in the dumper queue with export_id and record_id. Dumper queue - ??? This is all fine and dandy till now. Work is parallelized and we can add more workers to whichever step needs them the most. I'm stumped by how to add the last step in the process? Till now all the queues have been (suitably) dumb. They get a msg, perform an action and pass it on. In the current script, by the time we reach the last step, the program can be certain that all previous steps are complete for all the records and it is time to dump the information. How should I replicate this in the distributed case? Here are the options I could think of: The dumper queue just saves it's incoming msgs in a DB table till it gets a msg flagged "FINAL" and then it dumps all msgs of that export_id. This makes the final msg a single point of failure. If multiple exports are being processed at the same time, order of msgs is not guaranteed so deciding which msg is final is prone to failure. Pass an expected_total and count in each step and the dumper queue waits till it gets enough msgs. This would cause the dumper queue to get blocked and other exports will have to wait till all msgs of a previously started export are received. Will also have to deal with possibly infinite wait time in some way if msgs get lost. None of the above options seem good enough. What other options do I have? At a high level, consistency is more important than availability in this problem. So the exported files can arrive late, but they should be correct. Msg Delay Reasons As asked in the comments: Internal/External API response times may vary. Hard to quantify. If multiple exports are being processed at the same time, msgs from one export may get lagged behind or be received in a mixed sequence in queues down the line.
Scaling a sequential program into chain of queues
0
0
1
0
0
121
36,978,007
2016-05-02T08:19:00.000
2
0
0
0
0
python,selenium,switch-statement
0
36,978,124
0
1
0
false
0
0
With .get(url), just like you got to the first page.
1
0
0
0
My main question: how do I switch pages? I did some things on a page and then switched to another one; how do I update the driver to be on the current page?
Selenium Python, New Page
0
0.379949
1
0
1
108
36,993,230
2016-05-02T23:30:00.000
0
0
1
0
1
python-sphinx,restructuredtext
0
66,927,504
0
2
0
false
0
0
Perhaps indicate the start and end of the section where the files should go with a comment (.. START_GLOB_INCLUDE etc), and then have a build pre-process step that finds the files you want and rewrites that section of the master file.
1
7
0
0
I am trying to write documentation and have multiple files used by multiple toc trees. Previously I used an empty file with .. include:: <isonum.txt>; however, this does not work for multiple files in a directory with subdirectories. Another solution I tried was to use a relative file path to the index file I am linking to, but this messes up the Sphinx nav tree. So my question is: how do I include a directory of files with RST and Sphinx?
How to include a directory of files with RST and Sphinx
0
0
1
0
0
5,575
37,002,150
2016-05-03T10:51:00.000
2
0
1
0
0
python-3.x,dll,ctypes
0
38,547,145
0
1
0
true
0
1
I would recommend using Cython to do your wrapping. Cython allows you to use C/C++ code directly with very little changes (in addition to some boilerplate). For wrapping large libraries, it's often straightforward to get something up and running very quickly with minimal extra wrapping work (such as in Ctypes). It's also been my experience that Cython scales better... although it takes more front end work to stand Cython up rather than Ctypes, it is in my opinion more maintainable and lends itself well to the programmatic generation of wrapping code to which you allude.
1
2
0
1
I know how to use ctypes to call a function from a C++ .dll in Python by creating a "wrapper" function that casts the Python input types to C. I think of this as essentially recreating the function signatures in Python, where the function body contains the type cast to C and a corresponding .dll function call. I currently have a set of C++ .dll files. Each library contains many functions, some of which are overloaded. I am tasked with writing a Python interface for each of these .dll files. My current way forward is to "use the hammer I have" and go through each function, lovingly crafting a corresponding Python wrapper for each... this will involve my looking at the API documentation for each of the functions within the .dlls and coding them up one by one. My instinct tells me, though, that there may be a much more efficient way to go about this. My question is: Is there a programmatic way of interfacing with a Windows C++ .dll that does not require crafting corresponding wrappers for each of the functions? Thanks.
How to programmatically wrap a C++ dll with Python
0
1.2
1
0
0
1,193
37,061,089
2016-05-05T22:08:00.000
4
0
1
0
0
python,tensorflow,jupyter
0
46,785,026
0
14
0
false
0
0
Here is what I did to enable tensorflow in Anaconda -> Jupyter. Install Tensorflow using the instructions provided. Go to /Users/username/anaconda/env and ensure Tensorflow is installed. Open the Anaconda navigator and go to "Environments" (located in the left navigation). Select "All" in the first drop down and search for Tensorflow. If it's not enabled, enable it in the checkbox and confirm the process that follows. Now open a new Jupyter notebook and tensorflow should work.
2
34
1
0
I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely, if start python the standard way (in the terminal) then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.
Trouble with TensorFlow in Jupyter Notebook
0
0.057081
1
0
0
87,389
37,061,089
2016-05-05T22:08:00.000
-1
0
1
0
0
python,tensorflow,jupyter
0
67,094,115
0
14
0
false
0
0
Open an Anaconda Prompt screen: (base) C:\Users\YOU>conda create -n tf tensorflow After the environment is created type: conda activate tf Prompt moves to (tf) environment, that is: (tf) C:\Users\YOU> then install Jupyter Notebook in this (tf) environment: conda install -c conda-forge jupyterlab - jupyter notebook Still in (tf) environment, that is type (tf) C:\Users\YOU>jupyter notebook The notebook screen starts!! A New notebook then can import tensorflow FROM THEN ON To open a session click Anaconda prompt, type conda activate tf the prompt moves to tf environment (tf) C:\Users\YOU> then type (tf) C:\Users\YOU>jupyter notebook
2
34
1
0
I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely, if start python the standard way (in the terminal) then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.
Trouble with TensorFlow in Jupyter Notebook
0
-0.014285
1
0
0
87,389
37,064,168
2016-05-06T04:19:00.000
0
0
0
0
0
python,kivy
0
37,085,902
0
1
0
false
0
1
How can you imagine a user scrolling through 40000 labels? You should rethink your app design. Consider adding a text input, and based on the given string, fetch filtered data from the database you have.
1
0
0
0
I have a GridView inside of a ScrollView. I am trying to create and display approximately ~12,000 items in the GridView (which clearly will not display appropriately on screen), but the number of items could feasible be ~40,000. Currently ~18 seconds are spent constructing all of the items (Labels), and any resizing of the window results in another significant delay. How can I speed up the construction and rendering of the items? I don't know how to do paging or delayed, on-demand loading on a ScrollView.
Kivy ScrollView (with Gridview) Suffering Performance Issues
0
0
1
0
0
525
37,098,546
2016-05-08T09:57:00.000
92
0
0
0
0
python,tensorflow
0
37,102,908
0
4
0
true
0
0
I'd recommend to always use tf.get_variable(...) -- it will make it way easier to refactor your code if you need to share variables at any time, e.g. in a multi-gpu setting (see the multi-gpu CIFAR example). There is no downside to it. Pure tf.Variable is lower-level; at some point tf.get_variable() did not exist so some code still uses the low-level way.
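A minimal sharing sketch in the TF 1.x graph-style API (the shapes and initializers here are placeholders I chose for illustration, not anything prescribed by the question):

    import tensorflow as tf  # TF 1.x style API

    def linear(x):
        # created on the first call, returned again when the scope allows reuse
        w = tf.get_variable("w", shape=[128, 10],
                            initializer=tf.random_normal_initializer())
        b = tf.get_variable("b", shape=[10],
                            initializer=tf.constant_initializer(0.0))
        return tf.matmul(x, w) + b

    x1 = tf.placeholder(tf.float32, [None, 128])
    x2 = tf.placeholder(tf.float32, [None, 128])

    with tf.variable_scope("model"):
        y1 = linear(x1)
    with tf.variable_scope("model", reuse=True):
        y2 = linear(x2)   # reuses the same w and b as y1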
1
134
0
1
As far as I know, Variable is the default operation for making a variable, and get_variable is mainly used for weight sharing. On the one hand, there are some people suggesting using get_variable instead of the primitive Variable operation whenever you need a variable. On the other hand, I rarely see get_variable used in TensorFlow's official documents and demos. Thus I want to know some rules of thumb on how to correctly use these two mechanisms. Are there any "standard" principles?
Difference between Variable and get_variable in TensorFlow
0
1.2
1
0
0
43,849
37,123,971
2016-05-09T19:31:00.000
0
0
1
0
0
python,python-3.x
0
37,124,089
1
2
0
true
0
0
Fix your PATH environment variable so it has the global python directory declared before the anaconda directory.
1
0
0
0
Good afternoon. I have been learning Virtualenv and Virtualenvwrapper. I then decided I wanted to install Anaconda Python again so I could continue learning how to do data analysis. Then I saw where you can use conda to make a virtual environment for Anaconda. I installed it and told it not to add the path to my bashrc file but then conda was not recognized. So then I reinstalled and said yes. But now my global python is set to anaconda 3.5 which I do not want. How can I use conda to set up a virtual environment without affecting my global python of 2.7? Thank you.
Installing Anaconda Python in a virtual world without changing global Python version
0
1.2
1
0
0
255
37,127,292
2016-05-10T00:05:00.000
0
0
1
0
0
python,python-2.7,import,module,anaconda
0
45,157,942
0
2
0
false
0
0
To add the python path for anaconda if you are on windows: Right click my computer Go to advanced settings Click on environment variables Find the PATH variable and click edit Add the path where your python.exe file is located Example: C:\Anaconda3 - might not work C:\Anaconda3 - then this should work Same thing for those, who have other installations.
1
1
0
0
I used IDLE for some time, then for a class they told us to download Anaconda, which I ended up not using, but still downloaded it anyway. I uninstalled anaconda and deleted all the files from my CPU and started using IDLE again. I now can't import a module to IDLE because it can't find it. I think anaconda messed up the python path, but I don't know how to change it so I can import modules back to python. How can I determine what the python path is and how can I change it so when I download modules I can import them to IDLE again? I am running OsX 10.10.5 and Python 2.7.10.
Reset python path after Anaconda
0
0
1
0
0
3,476
37,177,322
2016-05-12T04:31:00.000
1
0
0
0
0
python,zeromq,multicast,pyzmq
1
45,479,765
0
1
0
false
0
0
Here is the general procedure which works for me: 1. download zeromq package (using zeromq-4.1.5.tar.gz as example) 2. tar zxvf zeromq-4.1.5.tar.gz 3. cd zeromq-4.1.5 4. apt-get install libpgm-dev 5. ./configure --with-pgm && make && make install 6. pip install --no-binary :all: pyzmq Then you can use pgm/epgm as you want.
1
1
0
0
I have zmq version 4.1.3 and pyzmq version 15.2.0 installed on my machine (I assume through pip but I dont remember now). I have a need to connect to a UDP epgm socket but get the error "protocol not supported". I have searched the vast expanses of the internet and have found the answer: "build zero mq with --with-pgm option". Does anyone know how to do that? I searched around the harddrive and found the zeromq library in pkgs in my python directory and found some .so files but I dont see any setup.py or anything to recompile with the mysterious --with-pgm option.
How to install pyzmq "--with-pgm"
0
0.197375
1
0
1
1,035
37,178,582
2016-05-12T06:10:00.000
2
0
0
0
0
python,flask
0
37,179,018
0
3
0
false
1
0
CTRL+C is the right way to quit the app; I do not think you can visit the url after CTRL+C. In my environment it works well. What is the terminal output after CTRL+C? Maybe you can add some details. You can try to visit the url with curl to test whether the browser cache or anything else related to the browser causes this problem.
2
2
0
0
I made a flask app following flask's tutorial. After python flaskApp.py, how can I stop the app? I pressed ctrl + c in the terminal but I can still access the app through the browser. I'm wondering how to stop the app. Thanks. I even rebooted the vps. After the vps is restarted, the app is still running!
How to stop flask app.run()?
0
0.132549
1
0
0
13,550
37,178,582
2016-05-12T06:10:00.000
1
0
0
0
0
python,flask
0
43,197,195
0
3
0
false
1
0
Have you tried pkill python? WARNING: do not do so before consulting your system admin if you are sharing a server with others.
2
2
0
0
I made a flask app following flask's tutorial. After python flaskApp.py, how can I stop the app? I pressed ctrl + c in the terminal but I can still access the app through the browser. I'm wondering how to stop the app. Thanks. I even rebooted the vps. After the vps is restarted, the app is still running!
How to stop flask app.run()?
0
0.066568
1
0
0
13,550
37,184,618
2016-05-12T10:44:00.000
4
0
0
0
0
python,c++,macos,numpy,blas
0
37,185,292
0
3
0
false
0
0
numpy.show_config() just tells that info is not available on my Debian Linux. However /usr/lib/python3/dist-packages/scipy/lib has a subdirectory for blas which may tell you what you want. There are a couple of test programs for BLAS in subdirectory tests. Hope this helps.
1
33
1
0
I use numpy and scipy in different environments (MacOS, Ubuntu, RedHat). Usually I install numpy by using the package manager that is available (e.g., mac ports, apt, yum). However, if you don't compile Numpy manually, how can you be sure that it uses a BLAS library? Using mac ports, ATLAS is installed as a dependency. However, I am not sure if it is really used. When I perform a simple benchmark, the numpy.dot() function requires approx. 2 times the time than a dot product that is computed using the Eigen C++ library. I am not sure if this is a reasonable result.
Find out if/which BLAS library is used by Numpy
1
0.26052
1
0
0
28,680
37,188,623
2016-05-12T13:37:00.000
1
0
1
1
0
python,opencv,ubuntu
0
37,188,746
0
9
0
false
0
0
This is because you have multiple installations of python on your machine. You should make python3 the default, because the default is currently python2.7.
1
27
0
0
I want to install OpenCV for python3 in ubuntu 16.04. First I tried running sudo apt-get install python3-opencv, which is how I install pretty much all of my python software. This could not find a repository. The install does work, however, if I do sudo apt-get install python-opencv; the issue with this is that by not adding the three to python it installs for python 2, which I do not use. I would really prefer not to have to build and install from source, so is there a way I can get a repository? I also tried installing it with pip3 and it could not find it either.
Ubuntu, how to install OpenCV for python3?
1
0.022219
1
0
0
54,785
37,207,589
2016-05-13T10:25:00.000
2
0
0
0
0
algorithm,python-3.x,discrete-mathematics,binomial-coefficients
0
37,212,468
0
2
0
true
0
0
First you could start with the fact that C(n,k) = (n/k) C(n-1,k-1). You can prove that C(n,k) is divisible by n/gcd(n,k), and if n is prime then n divides C(n,k). Check Kummer's theorem: if p is a prime number, n a positive number, and k a positive number with 0 < k < n, then the greatest exponent r for which p^r divides C(n,k) is the number of carries needed in the subtraction n-k in base p. Let us suppose that n > 4. If p > n then p cannot divide C(n,k), because in base p, n and k are only one digit wide → no carry in the subtraction, so we only have to check for prime divisors in [2;n]. As C(n,k) = C(n,n-k) we can suppose k ≤ n/2 and n/2 ≤ n-k ≤ n. For the prime divisors in the range [n/2;n] we have n/2 < p ≤ n, or equivalently p ≤ n < 2p. We have p ≥ 2, so p ≤ n < p², which implies that n has exactly 2 digits when written in base p and the first digit has to be 1. As k ≤ n/2 < p, k can only be one digit wide. Either the subtraction has one carry, and one only, when n-k < p ⇒ p divides C(n,k); or the subtraction has no carry and p does not divide C(n,k). The first result is: every prime number in [n-k;n] is a prime divisor of C(n,k) with exponent 1, and no prime number in [n/2;n-k] is a prime divisor of C(n,k). In [sqrt(n); n/2] we have 2p ≤ n < p², n is exactly 2 digits wide in base p, and k < n implies k has at most 2 digits. Two cases: exactly one carry or no carry at all. A carry exists only if the last digit of k is greater than the last digit of n, i.e. iff n mod p < k mod p. The second result is: for every prime number p in [sqrt(n);n/2], p divides C(n,k) with exponent 1 iff n mod p < k mod p, and p does not divide C(n,k) iff n mod p ≥ k mod p. In the range [2; sqrt(n)] we have to check all the prime numbers; it's only in this range that a prime divisor can have an exponent greater than 1.
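A small Python sketch of the Kummer-style carry count, written as an illustration rather than taken from the answer: the exponent of a prime p in C(n, k) equals the number of carries when adding k and n-k in base p.

    def prime_exponent_in_binomial(n, k, p):
        # Kummer's theorem: count carries when adding k and (n - k) in base p
        a, b = k, n - k
        carries = carry = 0
        while a > 0 or b > 0:
            carry = 1 if (a % p) + (b % p) + carry >= p else 0
            carries += carry
            a //= p
            b //= p
        return carries

    # example: C(10, 4) = 210 = 2 * 3 * 5 * 7, so each of these primes has exponent 1
    for p in (2, 3, 5, 7, 11):
        print(p, prime_exponent_in_binomial(10, 4, p))   # 11 gives 0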
1
3
1
0
I'm interested in tips for my algorithm that I use to find out the divisors of a very large number, more specifically "n over k" or C(n, k). The number itself can range very high, so it really needs to take time complexity into the 'equation' so to say. The formula for n over k is n! / (k!(n-k)!) and I understand that I must try to exploit the fact that factorials are kind of 'recursive' somehow - but I havent yet read too much discrete mathematics so the problem is both of a mathematical and a programming nature. I guess what I'm really looking for are just some tips heading me in the right direction - I'm really stuck.
Smart algorithm for finding the divisors of a binomial coefficient
0
1.2
1
0
0
846
37,219,045
2016-05-13T20:43:00.000
1
0
1
0
0
python,windows,bash,cygwin
0
68,331,346
0
6
0
false
0
0
A better way (in my opinion): Create a shortcut: Set the target to %systemroot%\System32\cmd.exe /c "python C:\Users\MyUsername\Documents\MyScript.py" Start In: C:\Users\MyUsername\Documents\ Obviously change the path to the location of your script. May need to add escaped quotes if there is a space in it.
1
8
0
0
I have a python script I run using Cygwin and I'd like to create a clickable icon on the windows desktop that could run this script without opening Cygwin and entering in the commands by hand. how can I do this?
Windows: run python command from clickable icon
0
0.033321
1
0
0
33,470
37,227,938
2016-05-14T14:37:00.000
0
0
0
0
0
python,deep-learning
0
37,228,094
0
1
0
false
0
0
Why not add a preprocessing step, where you would either (a) physically move the images to folders associated with bucket and/or rename them, or (b) first scan through all images (headers only) to build the in-memory table of image filenames and their sizes/buckets, and then the random sampling step would be quite simple to implement.
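A rough sketch of option (b) using Pillow, which reads only the image header to get the size; the folder path, bucket edges and batch size are assumptions for illustration:

    import os, random
    from PIL import Image

    ROOT = "/data/images"                       # hypothetical image folder

    index = []
    for name in os.listdir(ROOT):
        if name.endswith(".png"):
            path = os.path.join(ROOT, name)
            width = Image.open(path).size[0]    # header only, pixel data is not decoded yet
            index.append((path, width))

    buckets = {}
    for path, width in index:
        key = min(width // 2000, 4)             # e.g. 2k-4k, 4k-6k, 6k-8k, 8k-10k
        buckets.setdefault(key, []).append(path)

    # later, a random batch comes from one random bucket
    bucket = random.choice(list(buckets.values()))
    batch = random.sample(bucket, min(32, len(bucket)))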
1
0
1
0
For a Deep Learning application I am building, I have a dataset of about 50k grayscale images, ranging from about 300*2k to 300*10k pixels. Loading all this data into memory is not possible, so I am looking for a proper way to handle reading in random batches of data. One extra complication with this is, I need to know the width of each image before building my Deep Learning model, to define different size-buckets within the data (for example: [2k-4k, 4k-6k, 6k-8k, 8k-10k]. Currently, I am working with a smaller dataset and just load each image from a png file, bucket them by size and start learning. When I want to scale up this is no longer possible. To train the model, each batch of the data should be (ideally) fully random from a random bucket. A naive way of doing this would be saving the sizes of the images beforehand, and just loading each random batch when it is needed. However, this would result in a lot of extra loading of data and not very efficient memory management. Does anyone have a suggestion how to handle this problem efficiently? Cheers!
Proper way of loading large amounts of image data
1
0
1
0
0
811
37,240,431
2016-05-15T15:55:00.000
1
0
1
0
0
python,visual-studio,ptvs
0
49,196,161
0
2
0
false
1
0
workaround rather than full answer. I encountered this problem while importing my own module which ran code (which was erroring) on import. By setting the imported module to the startup script, I was able to step through the startup code in the module and debug. My best guess is that visual studio 2015 decided the imported module was a python standard library, but it really isn't viable to turn on the 'debug standard library option' as many standard library modules generate errors on import themselves.
1
2
0
0
I am trying to debug a scrapy project , built in Python 2.7.1 in visual studio 2013. I am able to reach breakpoints, but when I do step into/ step over the debugger seems to continue the exceution as if I did resume (F5). I am working with standard python launcher. Any idea how to make the step into/over functionality work?
python tools visual studio - step into not working
0
0.099668
1
0
0
2,148
37,252,527
2016-05-16T11:11:00.000
36
0
0
0
0
python,pyspark,py4j
0
37,252,533
0
3
0
false
1
0
using the logging module run: logging.getLogger("py4j").setLevel(logging.ERROR)
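For instance, placed once near the top of the script (a sketch; your own log level may differ):

    import logging

    logging.basicConfig(level=logging.INFO)            # keep your own INFO logs
    logging.getLogger("py4j").setLevel(logging.ERROR)  # silence py4j chatter, incl. java_gateway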
1
22
0
0
Once logging is started in INFO level I keep getting bunch of py4j.java_gateway:Received command c on object id p0 on your logs. How can I hide it?
how to hide "py4j.java_gateway:Received command c on object id p0"?
0
1
1
0
0
6,581
37,289,244
2016-05-18T02:29:00.000
1
0
1
0
0
c#,python,debugging,visual-studio-2015,ptvs
1
37,337,168
0
1
1
true
0
0
There seems to be a bug/issue in PTVS and/or Visual Studio in that the watch window does not realize that the context has switched to Python unless there is at least one call to a Python method in the call stack. So if the embedded script only does print ('foo'), the watch window thinks it's still in the C# context. If the embedded script instead defines and calls a function, e.g. def Test(): print ('foo') followed by Test(), the watch window switches to Python.
1
1
0
0
How does Visual Studio switch between python and C# expressions when debugging a process that mixes both C# an Python by embedding and invoking python interpreter? For background: My Visual Studio 2015 with PTVS 2.2.2 did not allow me to specify any python expressions in the watch window (on at least two machines), until something switched, and now it only allows using Python expressions in the same watch window (but not C#). I am not sure what I did, is there a proper way to switch between the two languages? Once Python expressions started working, the C# expressions now all fall back on 'internal error in expression evaluator' both in watch and immediate window. The whole thing might have been related to me playing around with Python Debug Interactive window, but it feels very ad hoc and I am wondering how to properly configure this.
How to specify language for watch window expressions in a multi-language debugging environment?
1
1.2
1
0
0
55
37,311,172
2016-05-18T22:33:00.000
3
0
0
0
0
python,xml,django
0
37,311,205
0
1
0
true
1
0
You might want to consider using PycURL or Twisted. These should have the asynchronous capabilities you're looking for.
1
1
0
0
Using the form I create several strings that look like xml data. One part of these strings I need to send to several servers using urllib, and another part to a soap server, for which I use the suds library. When I receive the responses, I need to compare all of this data and show it to the user. There are currently nine servers, and the number of servers can grow. When I make these requests successively, it takes a lot of time. So my question is: is there some python library that can make different requests at the same time? Thank you for your answer.
Django sending xml requests at the same time
1
1.2
1
0
1
112
37,313,320
2016-05-19T03:10:00.000
1
0
0
0
0
python,audio
0
37,313,404
0
2
0
false
0
0
I handle this by using Matlab; Python can do the same: (left channel + right channel) / 2.0
2
1
1
0
I am playing around with some audio processing in python. Right now I have the audio as a 2x(Large Number) numpy array. I want to combine the channels since I only want to try some simple stuff. I am just unsure how I should do this mathematically. At first I thought this is kind of like converting an RGB image to gray-scale where you would average each of the color channels to create a gray pixel. Then I thought that maybe I should add them due to the superposition principle of waves (then again, averaging is just adding and dividing by two). Does anyone know the best way to do this?
How to convert two channel audio into one channel audio
0
0.099668
1
0
0
3,971
37,313,320
2016-05-19T03:10:00.000
1
0
0
0
0
python,audio
0
37,313,414
0
2
0
true
0
0
To convert any stereo audio to mono, what I have always seen is the following: For each pair of left and right samples: Add the values of the samples together in a way that will not overflow Divide the resulting value by two Use this resulting value as the sample in the mono track - make sure to round it properly if you are converting it to an integer value from a floating point value
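With the 2 x N numpy array from the question, that procedure is essentially a one-liner; the integer variant is only needed if your samples are e.g. int16:

    import numpy as np

    # audio[0] = left channel, audio[1] = right channel, float samples
    mono = audio.mean(axis=0)

    # integer samples: widen before adding so the sum cannot overflow
    # mono = ((audio[0].astype(np.int32) + audio[1].astype(np.int32)) // 2).astype(np.int16)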
2
1
1
0
I am playing around with some audio processing in python. Right now I have the audio as a 2x(Large Number) numpy array. I want to combine the channels since I only want to try some simple stuff. I am just unsure how I should do this mathematically. At first I thought this is kind of like converting an RGB image to gray-scale where you would average each of the color channels to create a gray pixel. Then I thought that maybe I should add them due to the superposition principle of waves (then again, averaging is just adding and dividing by two). Does anyone know the best way to do this?
How to convert two channel audio into one channel audio
0
1.2
1
0
0
3,971
37,328,773
2016-05-19T16:12:00.000
3
1
0
1
0
python,plugins,jenkins
0
37,616,786
0
2
0
true
0
0
As far as my experiments with Jenkins and Python go, the ShiningPanda plugin doesn't install Python on slave machines; in fact it uses the existing Python installation set in the Jenkins configuration to run Python commands. In order to install Python on slaves, I would recommend using the Python virtual environment support that comes along with ShiningPanda, which allows you to run the Python commands and then close the virtual environment.
1
7
0
0
The Jenkins ShiningPanda plugin provides a Manage Jenkins - Configure System setting for Python installations... which includes the ability to Install automatically. This should allow me to automatically set up Python on my slaves. But I'm having trouble figuring out how to use it. When I use the Add Installer drop down it gives me the ability to Extract .zip/.tar.gz Run Batch Command Run Shell Command But I can't figure out how people use these options to install Python. Especially as I need to install Python on Windows, Mac, & Linux. Other plugins like Ant provide an Ant installations... setting which installs Ant automatically. Is this possible with Python?
How to configure the Jenkins ShiningPanda plugin Python Installations
1
1.2
1
0
0
4,515
37,340,848
2016-05-20T07:43:00.000
0
0
0
0
0
javascript,jquery,python,raspberry-pi,rfid
0
37,479,489
0
3
0
false
1
0
To update page data without delay you need to use websockets. There is no need to use heavy frameworks. Once the page is loaded for the first time, you open a websocket with js and listen to it. Every time you read a tag you post all the necessary data to this open socket and it instantly appears on the client side.
1
0
0
0
So I'm using a Raspberry Pi 2 with a rfid scanner and wrote a script in python that logs people in and out of our attendance system, connects to our postgresql database and returns some data like how much overtime they have and whether their action was a login or logout. This data is meant to be displayed on a very basic webpage (that is not even on a server or anything) that just serves as a graphical interface to display said data. My problem is that I cannot figure out how to dynamically display that data that my python script returns on the webpage without having to refresh it. I'd like it to simply fade in the information, keep it there for a few seconds and then have it fade out again (at which point the system becomes available again to have someone else login or logout). Currently I'm using BeautifulSoup4 to edit the Html File and Chrome with the extension "LivePage" to then automatically update the page which is obviously a horrible solution. I'm hoping someone here can point me in the right direction as to how I can accoumplish this in a comprehensible and reasonably elegant way. TL;DR: I want to display the results of my python script on my web page without having to refresh it.
Dynamically update static webpage with python script
1
0
1
0
0
1,371
37,381,061
2016-05-23T00:28:00.000
0
0
0
0
0
python-3.x,pygame
0
37,381,094
0
1
0
true
0
1
In my experience, most uses of sprites can be handled with rectangular collision boxes; only in cases where, for example, a ball needs more precise collision would you have to use an actual sprite. In the long run, I would recommend learning sprites, and if you need help with them, the pygame documentation has loads of information!
1
2
0
1
Ok so I am very bad at sprites and I just don't get how to use sprites with blitted images. So I was wondering, if I make the image and just make a rectangle around that object that follows that image around, would that be a good replacement for sprites, the rectangle would be the one that's colliding for instances... Or should I try learning sprites. If I should is there any tutorial that could help me with using blitted images to make sprite characters that use python 3.4? Thank you!
Replacing sprites idea
1
1.2
1
0
0
25
37,386,595
2016-05-23T08:52:00.000
4
0
0
0
1
python,r,text-mining,lda,mallet
0
37,416,493
0
1
0
true
0
0
Thank you for this thorough summary! As an alternative to topicmodels try the package mallet in R. It runs Mallet in a JVM directly from R and allows you to pull out results as R tables. I expect to release a new version soon, and compatibility with tm constructs is something others have requested. To clarify, it's a good idea for documents to be at most around 1000 tokens long (not vocabulary). Any more and you start to lose useful information. The assumption of the model is that the position of a token within a given document doesn't tell you anything about that token's topic. That's rarely true for longer documents, so it helps to break them up. Another point I would add is that documents that are too short can also be a problem. Tweets, for example, don't seem to provide enough contextual information about word co-occurrence, so the model often devolves into a one-topic-per-doc clustering algorithm. Combining multiple related short documents can make a big difference. Vocabulary curation is in practice the most challenging part of a topic modeling workflow. Replacing selected multi-word terms with single tokens (for example by swapping spaces for underscores) before tokenizing is a very good idea. Stemming is almost never useful, at least for English. Automated methods can help vocabulary curation, but this step has a profound impact on results (much more than the number of topics) and I am reluctant to encourage people to fully trust any system. Parameters: I do not believe that there is a right number of topics. I recommend using a number of topics that provides the granularity that suits your application. Likelihood can often detect when you have too few topics, but after a threshold it doesn't provide much useful information. Using hyperparameter optimization makes models much less sensitive to this setting as well, which might reduce the number of parameters that you need to search over. Topic drift: This is not a well understood problem. More examples of real-world corpus change would be useful. Looking for changes in vocabulary (e.g. proportion of out-of-vocabulary words) is a quick proxy for how well a model will fit.
1
4
1
0
Introduction I'd like to know what other topic modellers consider to be an optimal topic-modelling workflow all the way from pre-processing to maintenance. While this question consists of a number of sub-questions (which I will specify below), I believe this thread would be useful for myself and others who are interested to learn about best practices of end-to-end process. Proposed Solution Specifications I'd like the proposed solution to preferably rely on R for text processing (but Python is fine also) and topic-modelling itself to be done in MALLET (although if you believe other solutions work better, please let us know). I tend to use the topicmodels package in R, however I would like to switch to MALLET as it offers many benefits over topicmodels. It can handle a lot of data, it does not rely on specific text pre-processing tools and it appears to be widely used for this purpose. However some of the issues outline below are also relevant for topicmodels too. I'd like to know how others approach topic modelling and which of the below steps could be improved. Any useful piece of advice is welcome. Outline Here is how it's going to work: I'm going to go through the workflow which in my opinion works reasonably well, and I'm going to outline problems at each step. Proposed Workflow 1. Clean text This involves removing punctuation marks, digits, stop words, stemming words and other text-processing tasks. Many of these can be done either as part of term-document matrix decomposition through functions such as for example TermDocumentMatrix from R's package tm. Problem: This however may need to be performed on the text strings directly, using functions such as gsub in order for MALLET to consume these strings. Performing in on the strings directly is not as efficient as it involves repetition (e.g. the same word would have to be stemmed several times) 2. Construct features In this step we construct a term-document matrix (TDM), followed by the filtering of terms based on frequency, and TF-IDF values. It is preferable to limit your bag of features to about 1000 or so. Next go through the terms and identify what requires to be (1) dropped (some stop words will make it through), (2) renamed or (3) merged with existing entries. While I'm familiar with the concept of stem-completion, I find that it rarely works well. Problem: (1) Unfortunately MALLET does not work with TDM constructs and to make use of your TDM, you would need to find the difference between the original TDM -- with no features removed -- and the TDM that you are happy with. This difference would become stop words for MALLET. (2) On that note I'd also like to point out that feature selection does require a substantial amount of manual work and if anyone has ideas on how to minimise it, please share your thoughts. Side note: If you decide to stick with R alone, then I can recommend the quanteda package which has a function dfm that accepts a thesaurus as one of the parameters. This thesaurus allows to to capture patterns (usually regex) as opposed to words themselves, so for example you could have a pattern \\bsign\\w*.?ups? that would match sign-up, signed up and so on. 3. Find optimal parameters This is a hard one. I tend to break data into test-train sets and run cross-validation fitting a model of k topics and testing the fit using held-out data. Log likelihood is recorded and compared for different resolutions of topics. 
Problem: Log likelihood does help to understand how good is the fit, but (1) it often tends to suggest that I need more topics than it is practically sensible and (2) given how long it generally takes to fit a model, it is virtually impossible to find or test a grid of optimal values such as iterations, alpha, burn-in and so on. Side note: When selecting the optimal number of topics, I generally select a range of topics incrementing by 5 or so as incrementing a range by 1 generally takes too long to compute. 4. Maintenance It is easy to classify new data into a set existing topics. However if you are running it over time, you would naturally expect that some of your topics may cease to be relevant, while new topics may appear. Furthermore, it might be of interest to study the lifecycle of topics. This is difficult to account for as you are dealing with a problem that requires an unsupervised solution and yet for it to be tracked over time, you need to approach it in a supervised way. Problem: To overcome the above issue, you would need to (1) fit new data into an old set of topics, (2) construct a new topic model based on new data (3) monitor log likelihood values over time and devise a threshold when to switch from old to new; and (4) merge old and new solutions somehow so that the evolution of topics would be revealed to a lay observer. Recap of Problems String cleaning for MALLET to consume the data is inefficient. Feature selection requires manual work. Optimal number of topics selection based on LL does not account for what is practically sensible Computational complexity does not give the opportunity to find an optimal grid of parameters (other than the number of topics) Maintenance of topics over time poses challenging issues as you have to retain history but also reflect what is currently relevant. If you've read that far, I'd like to thank you, this is a rather long post. If you are interested in the suggest, feel free to either add more questions in the comments that you think are relevant or offer your thoughts on how to overcome some of these problems. Cheers
What is the optimal topic-modelling workflow with MALLET?
1
1.2
1
0
0
1,478
37,402,110
2016-05-23T23:29:00.000
0
0
1
0
0
windows,python-3.x,polyglot
1
61,607,500
0
4
0
false
0
0
This will work by typing it on Anaconda Prompt: pip install polyglot==14.11
1
0
0
0
Hi, I want to install polyglot on python version 3.5. It requires numpy to be installed, which I already have, and also libicu-dev. My OS is windows X86. I went to cmd and pip and wrote "pip install libicu-dev" but it gives me an error that it could not find a version that satisfies the requirements. How can I install libicu-dev?
Getting error on installing polyglot for python 3.5?
0
0
1
0
0
2,805
37,413,302
2016-05-24T12:15:00.000
-1
0
0
0
0
python,scikit-learn,cross-validation
0
53,760,310
0
4
0
false
0
0
For individual scores of each class, use this: f1 = f1_score(y_test, y_pred, average=None); print("f1 list non intent: ", f1)
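One way to combine this with cross-validation is to collect out-of-fold predictions first and score them once; a sketch assuming clf, X and y are already defined (the import path is sklearn.model_selection in newer scikit-learn versions):

    from sklearn.cross_validation import cross_val_predict
    from sklearn.metrics import f1_score, precision_score, recall_score

    y_pred = cross_val_predict(clf, X, y, cv=5)
    print(f1_score(y, y_pred, average=None))         # one value per class
    print(precision_score(y, y_pred, average=None))
    print(recall_score(y, y_pred, average=None))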
1
5
1
1
I'm using cross_val_score from scikit-learn (package sklearn.cross_validation) to evaluate my classifiers. If I use f1 for the scoring parameter, the function will return the f1-score for one class. To get the average I can use f1_weighted but I can't find out how to get the f1-score of the other class. (precision and recall analogous) The functions in sklearn.metrics have a labels parameter which does this, but I can't find anything like this in the documentation. Is there a way to get the f1-score for all classes at once or at least specify the class which should be considered with cross_val_score?
f1 score of all classes from scikits cross_val_score
0
-0.049958
1
0
0
13,883
37,414,515
2016-05-24T13:04:00.000
0
0
1
0
0
python,list,operators
0
37,414,636
0
3
0
true
0
0
Why not create one list for the ints and one for the operators and then append from each list step by step? Edit: you can first convert your ints to strings, then create a string by using string = ''.join(list); after that you can just eval(string). Edit 2: you can also take a look at the Sympy module, which allows you to use symbolic math in python.
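For the strict left-to-right evaluation described in the question, a reduction over operator functions avoids eval's normal precedence; this sketch assumes the operators are stored as strings (if they are already operator.* functions you can call them directly):

    import operator

    OPS = {'+': operator.add, '-': operator.sub,
           '*': operator.mul, '/': operator.truediv}

    tokens = [1, '+', 2, '-', 2, '/', 3, '*', 8]      # the STEP 2 result

    result = tokens[0]
    for op, value in zip(tokens[1::2], tokens[2::2]):
        result = OPS[op](result, value)               # strictly left to right
    # result == 8/3; eval(''.join(map(str, tokens))) would instead apply BODMAS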
2
0
0
0
EDIT: When I say function in the title, I mean mathematical function not programming function. Sorry for any confusion caused. I'm trying to create a function from randomly generated integers and operators. The approach I am currently taking is as follows: STEP 1: Generate random list of operators and integers as a list. STEP 2: Apply a set of rules to the list so I always end up with an integer, operator, integer, operator... etc list. STEP 3: Use the modified list to create a single answer once the operations have been applied to the integers. For example: STEP 1 RESULT: [1,2,+,-,2,/,3,8,*] STEP 2 RESULT: [1,+,2,-,2,/,3,*,8] - Note that I am using the operator command to generate the operators within the list. STEP 3 RESULT: The output is intended to be a left to right read function rather than applying BODMAS, so in this case I'd expect the output to be 8/3 (the output doesn't have to be an integer). So my question is: What function (and within what module) is available to help me combine the list as defined above. OR should I be combining the list in a different way to allow me to use a particular function? I am considering changing the way I generate the list in the first place so that I do the sort on the fly, but I think I'll end up in the same situation that I wouldn't know how to combine the integers and operators after going through the sort process. I feel like there is a simple solution here and I am tying myself up in knots unnecessarily! Any help is greatly appreciated, Dom
creating a simple function using lists of operators and integers
1
1.2
1
0
0
60
37,414,515
2016-05-24T13:04:00.000
0
0
1
0
0
python,list,operators
0
37,438,697
0
3
0
false
0
0
So, for those that are interested. I achieved what I was after by using the eval() function. Although not the most robust, within the particular loop I have written the inputs are closely controlled so I am happy with this approach for now.
2
0
0
0
EDIT: When I say function in the title, I mean mathematical function not programming function. Sorry for any confusion caused. I'm trying to create a function from randomly generated integers and operators. The approach I am currently taking is as follows: STEP 1: Generate random list of operators and integers as a list. STEP 2: Apply a set of rules to the list so I always end up with an integer, operator, integer, operator... etc list. STEP 3: Use the modified list to create a single answer once the operations have been applied to the integers. For example: STEP 1 RESULT: [1,2,+,-,2,/,3,8,*] STEP 2 RESULT: [1,+,2,-,2,/,3,*,8] - Note that I am using the operator command to generate the operators within the list. STEP 3 RESULT: The output is intended to be a left to right read function rather than applying BODMAS, so in this case I'd expect the output to be 8/3 (the output doesn't have to be an integer). So my question is: What function (and within what module) is available to help me combine the list as defined above. OR should I be combining the list in a different way to allow me to use a particular function? I am considering changing the way I generate the list in the first place so that I do the sort on the fly, but I think I'll end up in the same situation that I wouldn't know how to combine the integers and operators after going through the sort process. I feel like there is a simple solution here and I am tying myself up in knots unnecessarily! Any help is greatly appreciated, Dom
creating a simple function using lists of operators and integers
1
0
1
0
0
60
37,419,778
2016-05-24T17:03:00.000
1
0
0
0
0
python,screen,freeze,display
0
37,421,436
0
1
0
true
0
1
If by "the screen" you're talking about the terminal then I highly recommend checking out the curses library. It comes with the standard version of Python. It gives control of many different aspects of the terminal window including the functionality you described.
1
0
0
0
I searched the web and SO but did not find an answer. Using Python, I would like to know how (if possible) I can stop the screen from updating its changes to the user. In other words, I would like to build a function in Python that, when called, would freeze the whole screen, preventing the user from viewing its changes, and, when called again, would set the screen back to normal. Something like the Application.ScreenUpdating property of Excel VBA, but applied directly to the whole screen. Something like: FreezeScreen(On) FreezeScreen(Off) Is it possible? Thanks for the help!!
Using Python, how to stop the screen from updating its content?
1
1.2
1
0
0
1,178
37,463,782
2016-05-26T14:29:00.000
0
0
0
0
0
python,virtual-machine,virtualization,proxmox
0
37,565,566
0
2
0
false
0
0
The description parameter is only a message to show in proxmox UI, and it's not related to any function
1
1
0
0
I am using proxmoxer to manipulate machines on ProxMox (create, delete etc). Every time I am creating a machine, I provide a description which is being written in ProxMox UI in section "Notes". I am wondering how can I retrieve that information? Best would be if it can be done with ProxMox, but if there is not a way to do it with that Python module, I will also be satisfied to do it with plain ProxMox API call.
How can I retrieve Proxmox node notes?
1
0
1
0
1
256
37,480,048
2016-05-27T09:38:00.000
4
0
0
0
1
django,pythonanywhere
0
37,480,695
0
1
0
true
1
0
I figured it out, thanks for the hint Mr. Raja Simon. On the Web tab of my PythonAnywhere dashboard I set something like this: URL: /media/, Directory: /home//media_cdn (media_cdn is where my images are located).
1
2
0
0
I am using Django and PythonAnywhere, and I want to make the DEBUG to False. But when I set it to False and make ALLOWED_HOSTS = ['*'], it works fine. But the problem is the media (or the images) is not displaying. Anyone encounter this and know how to resolve it?
Django + Pythonanywhere: How to disable Debug Mode
0
1.2
1
0
0
1,288
37,480,728
2016-05-27T10:10:00.000
0
0
0
0
0
python,r,csv
0
37,480,887
0
1
0
false
0
0
you can use list.data[[1]]$name1
1
0
1
0
Suppose that i have multiple .csv files with columns of same kind . If i wanted to access data of a particular column from a specified .csv file , how is it possible? All .csv files have been stored in list.data for ex: Suppose that here , list.data[1] gives me the first .csv file. How will i access a column of this file? I have tried list.data[1]$nameofthecolumn. But this is giving me null values. I am not much familiar with R. list.data[1]$name1 NULL list.data[1]$name1 NULL list.data[1] $ NULL
how to access a particular column of a particular csv file from many imported csv files
1
0
1
0
0
51
37,502,942
2016-05-28T19:17:00.000
3
0
0
0
1
python,python-2.7,user-interface,tkinter
0
37,503,535
0
1
0
true
0
1
Every tkinter program needs exactly one instance of Tk. Tkinter is a wrapper around an embedded tcl interpreter. Each instance of Tk gets its own copy of the interpreter, so two Tk instances have two different namespaces. If you need multiple windows, create one instance of Tk and then additional windows should be instances of Toplevel. While you can create, destroy, and recreate a root window, there's really no point. Instead, create the root window for the login screen, and then just delete the login screen widgets and replace them with your second window. This becomes trivial if you make each of your "windows" a separate class that inherits from tk.Frame. Because tkinter will destroy all child widgets when a frame is destroyed, it's easy to switch from one "window" to another. Create an instance of LoginFrame and pack it in the root window. When they've input a correct password, destroy that instance, create an instance of MainWindow and pack that.
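A bare-bones sketch of that structure (the widget contents and the password check are placeholders I added, not part of the original answer):

    import Tkinter as tk   # "tkinter" on Python 3

    class LoginFrame(tk.Frame):
        def __init__(self, master, on_success):
            tk.Frame.__init__(self, master)
            self.on_success = on_success
            self.entry = tk.Entry(self, show="*")
            self.entry.pack()
            tk.Button(self, text="Log in", command=self.check).pack()

        def check(self):
            if self.entry.get() == "secret":   # placeholder password check
                self.on_success()

    class MainFrame(tk.Frame):
        def __init__(self, master):
            tk.Frame.__init__(self, master)
            tk.Label(self, text="Welcome!").pack()

    def show_main():
        login.destroy()            # destroying the frame destroys all its children
        MainFrame(root).pack()

    root = tk.Tk()                 # the single Tk instance
    login = LoginFrame(root, show_main)
    login.pack()
    root.mainloop()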
1
0
0
0
I am starting to learn Tkinter and have been creating new windows with new instances of Tk every time. I just read that that wasn't a good practice. If so, why? And how could this be done better? I have seen others create windows with Toplevel and Frame instances. What are the benefits/drawbacks of using these instead? In case this makes a difference: The application that I am writing code for starts off with a login window and then proceeds to a second window if the entered password is correct.
Tkinter Creating Multiple Windows - Use new Tk instance or Toplevel or Frame?
0
1.2
1
0
0
1,757
37,527,124
2016-05-30T13:38:00.000
1
0
0
0
0
python,html,qt,groupbox
0
37,535,825
0
3
0
true
1
1
QGroupBox's title property does not support HTML. The only customization you can do through the title string (besides the text itself) is the addition of an ampersand (&) for keyboard accelerators. In short, unlike QLabel, you can't use HTML with QGroupBox.
1
1
0
0
I want to set my QGroupBox's title with HTML expressions in python program, e.g. : ABC. (subscript) Does anybody have an idea how to do this?
How can I set QGroupBox's title with HTML expressions? (Python)
0
1.2
1
0
0
372
37,534,440
2016-05-30T22:39:00.000
1
0
1
0
0
python,jupyter-notebook,command-line-arguments,papermill
0
60,984,594
0
11
0
false
0
0
A workaround is to make the jupyter notebook read the arguments from a file. From the command line, modify the file and run the notebook.
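A sketch of that pattern with a hypothetical params.json file sitting next to the notebook:

    # edit params.json from the command line, e.g. {"flag": false, "n_iter": 100},
    # then execute the notebook (e.g. jupyter nbconvert --to notebook --execute test.ipynb)
    import json

    with open("params.json") as f:
        params = json.load(f)

    flag = params["flag"]          # plays the role of the old sys.argv[1]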
1
67
0
0
I'm wondering if it's possible to populate sys.argv (or some other structure) with command line arguments in a jupyter/ipython notebook, similar to how it's done through a python script. For instance, if I were to run a python script as follows: python test.py False Then sys.argv would contain the argument False. But if I run a jupyter notebook in a similar manner: jupyter notebook test.ipynb False Then the command line argument gets lost. Is there any way to access this argument from within the notebook itself?
Passing command line arguments to argv in jupyter/ipython notebook
0
0.01818
1
0
0
117,203
37,538,068
2016-05-31T06:11:00.000
0
0
0
0
0
python,svm
0
37,538,260
0
1
0
false
0
0
You can create one more feature in your training data: if the name of the book contains one of your predefined words, make it one, otherwise zero.
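A rough sklearn sketch of that idea: stack the indicator column next to the n-gram features and, if you want it to count for more in a linear SVM, scale it by a constant. The keywords, the toy data and the scaling factor are illustrative assumptions, not part of the original answer:

    import numpy as np
    from scipy.sparse import hstack
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import LinearSVC

    KEYWORDS = {"fairy", "alice"}                     # your predefined words

    titles = ["Alice in Wonderland", "A Brief History of Time"]
    y = ["children", "science"]

    X_ngrams = CountVectorizer(ngram_range=(1, 2)).fit_transform(titles)

    # 1.0 if the title contains any predefined word, else 0.0
    extra = np.array([[any(w in t.lower() for w in KEYWORDS)] for t in titles], dtype=float)

    X = hstack([X_ngrams, extra * 5.0])               # crude up-weighting of the extra feature
    clf = LinearSVC().fit(X, y)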
1
1
1
0
I'm using svm to predict the label from the title of a book. However, I want to give more weight to some predefined features. For example, if the title of the book contains words like fairy or Alice, I want to label it as a children's book. I'm using a word n-gram svm. Please suggest how to achieve this using sklearn.
giving more weight to a feature using sklearn svm
0
0
1
0
0
156
37,556,808
2016-05-31T22:40:00.000
0
0
0
0
0
user-interface,python-3.x,tkinter,textbox,messagebox
0
37,559,356
0
1
0
true
0
1
simpledialog.askstring() was exactly what I needed.
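For reference, a minimal sketch of how askstring can be used (Python 3 import paths; the title and prompt strings are illustrative):

```python
import tkinter as tk
from tkinter import simpledialog

root = tk.Tk()
root.withdraw()  # hide the empty root window

answer = simpledialog.askstring("Input", "Enter some text:", parent=root)
print("You typed:", answer)
```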
1
1
0
1
I know how to make message boxes, and I know how to create textboxes. However, I don't know how to make a message box WITH a textbox in it. I've looked through the tkinter documentation and it seems message boxes are kind of limited. Is there a way to make a message box that contains a text entry field, or would I have to create a new pop up window from scratch?
Python: Create a message box with a text entry field using tkinter
0
1.2
1
0
0
392
37,559,502
2016-06-01T04:34:00.000
0
0
0
0
0
python
0
37,658,540
0
1
0
false
0
1
I found an alternative solution to this problem. In a Python while loop, push a single file to the SD card, then use "adb mv" to rename the file on the SD card after the push, and repeat until the device memory is full.
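A hedged sketch of that loop using subprocess. The device path, the rename via "adb shell mv", and the stop condition (assuming adb push returns a nonzero exit code once storage is full) are assumptions, not from the original answer.

```python
import subprocess

SRC = "file.txt"
DEST_DIR = "/sdcard"

i = 1
while True:
    target = "{}/file{}.txt".format(DEST_DIR, i)
    # Push under the original name, then rename on the device.
    push = subprocess.call(["adb", "push", SRC, DEST_DIR + "/file.txt"])
    if push != 0:
        break  # assumed: push fails once the device runs out of space
    subprocess.call(["adb", "shell", "mv", DEST_DIR + "/file.txt", target])
    i += 1
```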
1
1
0
0
I know "adb push" command could be used to push files to android device. But in Python how do i push single file for example file.txt(size of about 50 KB) into android sdcard with different names(for ex. file1.txt, file2.txt etc) until the device storage is full. Any idea much appreciated.
Python push single file into android device with different names until the device runs out of memory
0
0
1
0
0
227
37,568,811
2016-06-01T12:36:00.000
1
0
0
0
0
python,django,url,sharing
0
37,651,587
0
2
0
true
1
0
As Seluck suggested I decided to go with base64 encoding and decoding: In the model my "link" property is now built from the standard url + base64.urlsafe_b64encode(str(media_id)) The url pattern I use to match the base64 pattern: base64_pattern = r'(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$' And finally in the view we decode the id to load the proper data: media_id = base64.urlsafe_b64decode(str(media_id)) media = Media.objects.get(pk=media_id)
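For reference, a consolidated sketch of the encode/decode round trip outside of Django. The id value is illustrative, and the explicit bytes handling shown is for Python 3 (the original snippets pass str directly, which suggests Python 2).

```python
import base64

media_id = 42

# Building the shareable token (Python 3: encode to bytes first).
token = base64.urlsafe_b64encode(str(media_id).encode()).decode()
print(token)  # 'NDI='

# In the view: decode the token back to the original id.
decoded_id = int(base64.urlsafe_b64decode(token).decode())
assert decoded_id == media_id
```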
1
0
0
0
I'm having a bit of a problem figuring out how to generate user-friendly links to products for sharing. I'm currently using /product/{uuid4_of_said_product}, which works fine, but it's a bit user-unfriendly: it's long and ugly. And I do not wish to use a plain id, as it would allow users to "guess" products. Not that that is too much of an issue, but I would like to avoid it. Do you have any hints on how to generate unique, user-friendly, short sharing URLs based on the unique item id or UUID?
Sharing URL generation from uuid4?
0
1.2
1
0
1
491
37,603,610
2016-06-02T23:18:00.000
4
0
1
0
0
python,multithreading
0
37,603,825
0
2
0
false
0
0
Premature optimization: This is a classic example of premature optimization. Without knowing how much time your threads spend blocking, presumably waiting for other writes to happen, it's unclear what you have to gain from creating the added complexity of managing thousands of locks. The Global Interpreter Lock: Threading itself can be a premature optimization. Is your task easily threadable? Can many threads safely work in parallel? Tasks that require a large amount of shared state (i.e. many and frequent locks) are typically poor candidates for high thread counts. In Python, you're likely to see even less benefit because of the GIL. Are your threads doing a lot of IO, or calling out to external applications, or using Python modules written in C that properly release the GIL? If not, threading might not actually give you any benefit. You can sidestep the GIL by using the multiprocessing module, but there's an overhead to passing locks and writes across process boundaries, and ironically, it might make your application much slower. Queues: Another option is to use a write queue. If threads don't actually need to share state, but they all need to write to the same object (i.e. very little reading from that object), you can simply add the writes to a queue and have a single thread process the writes, with no need for any locks.
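A minimal sketch of the write-queue idea: worker threads enqueue writes and a single writer thread applies them, so the shared structure itself never needs fine-grained locks. The dict standing in for the tree and the (key, value) write format are illustrative assumptions.

```python
import queue
import threading

write_queue = queue.Queue()
tree = {}  # stand-in for the real tree structure

def writer():
    """Single thread that applies all writes in order."""
    while True:
        item = write_queue.get()
        if item is None:        # sentinel to shut down
            break
        key, value = item
        tree[key] = value       # the only place the structure is mutated
        write_queue.task_done()

def worker(n):
    # Worker threads never touch the tree directly; they just enqueue.
    write_queue.put((n, n * n))

threading.Thread(target=writer, daemon=True).start()
workers = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in workers:
    t.start()
for t in workers:
    t.join()
write_queue.join()      # wait until the writer has applied every write
write_queue.put(None)   # shut the writer down
```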
1
7
0
0
I have a single-threaded Python 3 program that I'm trying to convert to use many threads. I have a tree-like data structure that gets read from and written to. Potentially many threads will want to read and write at the same time. One obvious way to handle this is to have a single lock for the entire data structure: no one can read while a write is happening, no more than one write can happen at a time, and no write can happen when there are pending reads. However, I'd like to make the locking more fine-grained for greater performance. It's a full 16-ary tree, and when fully populated has about 5 to 6 million leaves (mostly well-balanced in practice, but no guarantee). If I wanted the finest-grained locking, I could lock the parents of the leaves. That would mean over 100 thousand locks. I must admit, I haven't tried this yet. But I thought I'd ask first: are there any hardware limitations or performance reasons that should stop me from creating so many lock objects? That is, should I consider just locking down to, say, depth 2 from the root (256 locks)? Thanks for any insights. EDIT: More details: I don't know how many cores yet as we're still experimenting as to just how much computing power we'll need, but I'd hazard a guess that just a handful of cores will be used. I'm aiming for around 50,000 threads. There's async I/O, and one thread per socket. During a bootstrapping phase of the code, as many threads as possible will be running simultaneously (as limited by hardware), but that's a one-time cost. The one we're more interested in is once things are up and running. At that point, I'd hazard a guess that only several thousand per second are running. I need to measure the response time, but I'd guess it's around 10ms per wake period. That's a few 10s of threads active at a time (on average). Now that I write that out, maybe that's the answer to my question. If I only need a few 10s of threads reading or writing at a time, then I don't really need that fine-grained locking on the tree.
are there any limitations on the number of locks a python program can create?
1
0.379949
1
0
0
643
37,632,393
2016-06-04T16:13:00.000
1
0
0
0
0
python,tensorflow,jupyter-notebook,tensorboard
0
37,682,991
0
1
1
false
0
0
The jupyter stuff seems fine. In general, if you don't close TensorBoard properly, you'll find out as soon as you try to turn on TensorBoard again and it fails because port 6006 is taken. If that isn't happening, then your method is fine. As regards the logdir, passing in the top level logdir is generally best because that way you will get support for comparing multiple "runs" of the same code in TensorBoard. However, for this to work, it's important that each "run" be in its own subdirectory, e.g. logs/run1/..tfevents.. and logs/run2/..tfevents.., and then start TensorBoard with: tensorboard --logdir=logs
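A hedged sketch of that per-run layout: give each run its own timestamped subdirectory and point TensorBoard at the parent. The writer class follows the TF 1.x API (tf.summary.FileWriter); the exact name differs in other TensorFlow versions, so treat this as an assumption.

```python
import os
import time
import tensorflow as tf

base_logdir = "logs"
run_dir = os.path.join(base_logdir, time.strftime("run_%Y%m%d_%H%M%S"))

writer = tf.summary.FileWriter(run_dir)  # TF 1.x API; differs in TF 2.x
# ... add summaries during training ...
writer.close()

# Then, from a terminal (not inside the notebook):
#   tensorboard --logdir=logs
```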
1
1
1
0
What is the proper way to close TensorBoard when using a Jupyter notebook? I'm writing TensorFlow code in my Jupyter notebook. To launch, I'm doing: 1. !tensorboard --logdir=logs/ 2. open a new browser tab and type in localhost:6006. To close, I just: close the TensorBoard tab in my browser, and in the Jupyter notebook I click on interrupt kernel. Just wondering if this is the proper way. BTW, in my code I set my log file as './log/log1'. When starting TensorBoard, should I use --logdir=./log or --logdir=./log/log1? Thank you very much.
how to close tensorboard server with jupyter notebook
0
0.197375
1
0
0
2,652
37,642,502
2016-06-05T13:31:00.000
1
0
1
0
1
python,ipython,jupyter
0
37,645,847
0
1
0
false
0
0
Ended up solving it with: pip install -U ipython followed by ipython3 kernelspec install-self
1
1
0
0
I had anaconda2 installed, and manually added the python3 kernel, so I could choose between python2 and python3. The problem was that I added my system's python3 binary, not anaconda's, so I was missing all the libraries that anaconda brings. Specifically, I couldn't run 'from scipy.misc import imread'. So I deleted anaconda2 and installed anaconda3, but my Jupyter notebook still uses my system's old python3 kernel. When I run sys.version inside the Jupyter notebook I get Python 3.4, but when I run it inside IPython in the console I get Python 3.5, with all the modules I need good to go. So how do I tell Jupyter notebook specifically which binary to use as a kernel?
Jupyter Notebook uses my system's python3.4 instead of anaconda's python3.5 as a kernel
0
0.197375
1
0
0
519