Dataset columns (name: dtype, value range or string-length range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
Each record below lists its fields in this order, one value per line.
38,707,350
2016-08-01T20:14:00.000
1
0
0
1
python,subprocess,command-line-arguments
38,707,485
2
false
0
0
There are two things this needs to do: handle command-line flags, and send signals to another process. For the flags, you could use the argparse library, or simply sys.argv. For sending signals, you will need the process ID (pid) of the already running process. Under Linux you can call ps and check whether another instance of the script is running; if there is, send it a signal. An alternative to signal handling is DBus, though that is less cross-platform capable. (A minimal sketch of both pieces follows this record.)
1
0
0
I'm trying to make a python script run in the background, and listen to commands. For example if I run my script: python my_script.py it will start running (and wait for commands). Then I wish to run: python my_script.py --do_something, open a different python process, and it will run a function do_something() in the previous process. I've seen this works on programs like PDPlayer where the flag --play causes the player to start playing a video. Can this also be done in python? I know how to handle the command line arguments using argparse. I need help regarding the communication between the two python processes. Also, I plan to cx_freeze the app, so the PID of the app can be found using psutil by searching the executable name. Thanks. btw, I'm using Windows...
Send commands to running script by running it with flags
0.099668
0
0
1,144
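The answer above splits the problem into flag parsing and signalling an already running process. Below is a hedged sketch of both pieces under the answer's Linux assumptions; the PID-file path, the choice of SIGUSR1 and the do_something handler are illustrative, not part of the original answer, and Windows would need a different IPC mechanism.

    import argparse
    import os
    import signal
    import sys

    PID_FILE = "/tmp/my_script.pid"   # hypothetical location for the running PID

    def do_something(signum, frame):
        print("received signal, doing something")

    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument("--do_something", action="store_true")
        args = parser.parse_args()

        if args.do_something:
            # Second invocation: signal the already running instance and exit.
            with open(PID_FILE) as fh:
                os.kill(int(fh.read()), signal.SIGUSR1)
            sys.exit(0)

        # First invocation: record our PID and wait for commands.
        with open(PID_FILE, "w") as fh:
            fh.write(str(os.getpid()))
        signal.signal(signal.SIGUSR1, do_something)
        while True:
            signal.pause()   # POSIX only: sleep until a signal arrives

    if __name__ == "__main__":
        main()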
38,708,645
2016-08-01T21:46:00.000
0
0
0
0
python,orm,sqlalchemy
38,882,242
2
false
0
0
Turns out I needed to use association tables and the joinedload() function. The documentation is a bit wonky but I got there after playing with it for a while.
1
0
0
I'm trying to serialize results from a SQLAlchemy query. I'm new to the ORM so I'm not sure how to filter a result set after I've retrieved it. The result set looks like this, if I were to flatten the objects: A1 B1 V1 A1 B1 V2 A2 B2 V3 I need to serialize these into a list of objects, 1 per unique value for A, each with a list of the V values. I.E.: Object1: A: A1 B: B1 V: {V1, V2} Object2: A: A2 B: B2 V: {V3} Is there a way to iterate through all unique values on a given column, but with the ability to return a list of values from the other columns?
How do I filter SQLAlchemy results based on a column's value?
0
1
0
591
38,709,029
2016-08-01T22:24:00.000
4
0
1
0
python
38,709,062
1
true
0
0
Both usages invoke the built-in function float. In Python, functions are just values, so (float) is the same function reference as float; there is no casting involved. I would prefer the first usage because it's clearer. (A quick interactive check follows this record.)
1
0
0
If you just type float(44*2.2) on the interpreter and (float)(44*2.2) they return the same result. Is one explicitly casting the result, and one using it as a function? And what is the use of each case and any pro's/con's for each case? This is Python 3, I'm sure it'll work with Python 2 as well.
What is the difference between float(44*2.2) and (float)(44*2.2) in PYTHON 3?
1.2
0
0
51
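As a quick check of the claim in the answer above that (float) is just the same built-in function and no cast is involved, an interactive session in any Python 3 interpreter shows:

    >>> float is (float)                      # parentheses only group; same object
    True
    >>> float(44 * 2.2) == (float)(44 * 2.2)  # identical calls, identical result
    True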
38,709,439
2016-08-01T23:11:00.000
3
0
0
0
python,seaborn
38,718,933
1
true
0
0
Changes were made in 0.7.1 to clean up the top-level namespace a bit. axlabel was not used anywhere in the documentation, so it was moved to make the main functions more discoverable. You can still access it with sns.utils.axlabel. Sorry for the inconvenience. Note that it's usually just as easy to do ax.set(xlabel="...", ylabel="..."), though that won't get you exactly what you want here, because you can't set the size to something different from the default in that line. (A sketch of both routes follows this record.)
1
2
1
Having trouble with my upgrade to Seaborn 0.7.1. Conda only has 0.7.0 so I removed it and installed 0.7.1 with pip. I am now getting this error: AttributeError: module 'seaborn' has no attribute 'axlabel' from this line of code sns.axlabel(xlabel="SAMPLE GROUP", ylabel=y_label, fontsize=16) I removed and reinstalled 0.7.0 and it fixed the issue. However, in 0.7.1, axlabel appears to still be there and I didn't see anything about changes to it in the release notes. What am I missing?
Upgraded Seaborn 0.7.0 to 0.7.1, getting AttributeError for missing axlabel
1.2
0
0
853
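A hedged sketch of the two routes mentioned in the answer above: the relocated sns.utils.axlabel helper, and plain matplotlib label calls that do accept a per-label font size. The labels and size are placeholders taken from the question, not a prescribed usage.

    import matplotlib.pyplot as plt
    import seaborn as sns

    ax = plt.gca()

    # Route 1: the helper that moved out of the top-level namespace in 0.7.1
    sns.utils.axlabel(xlabel="SAMPLE GROUP", ylabel="value", fontsize=16)

    # Route 2: plain matplotlib, which lets you size each label individually
    ax.set_xlabel("SAMPLE GROUP", fontsize=16)
    ax.set_ylabel("value", fontsize=16)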
38,711,658
2016-08-02T04:14:00.000
1
0
0
1
python,rest,nginx,docker
38,712,037
1
false
1
0
Consider doing a docker ps -a to get the stopped container's identifier. -a here just means listing all of the containers you got on your machine. Then do docker inspect and look for the LogPath attribute. Open up the container's log file and see if you could identify the root cause on why the process died inside the container. (You might need root permission to do this) Note: A process can die because of anything, e.g. code fault If nothing suspicious is presented in the log file then you might want to check on the State attribute. Also check the ExitCode attribute to see if you can work backwards to see which line of your application could have exited using that code. Also check the OOMKilled flag, if this is true then it means your container could be killed due to out of memory error. Well if you still can't figure out why then you might need to add more logging into your application to give you more insight on why it died.
1
0
0
I have deployed a REST service inside a Docker container using uWSGI and nginx. When I run this Python Flask REST service inside the Docker container, the service works fine for the first hour, but after some time nginx and the REST service stop for some reason. Has anyone faced a similar issue? Is there any known fix for this issue?
Service inside docker container stops after some time
0.197375
0
0
1,799
38,711,966
2016-08-02T04:48:00.000
0
0
0
0
python,pandas,numbers,row
38,721,746
2
false
0
0
Sorry I couldn't add a code sample, but I'm on my phone. piRSquared confirmed my fears when he said the info is lost. I guess I'll have to do a loop every time or add a column with numbers (which will get scrambled if I sort them :/). Thanks everyone. (A sketch of both workarounds follows this record.)
1
2
1
In pandas, is it possible to reference the row number for a function. I am not talking about .iloc. iloc takes a location i.e. a row number and returns a dataframe value. I want to access the location number in the dataframe. For instance, if the function is in the cell that is 3 rows down and 2 columns across, I want a way to return the integer 3. Not the entry that is in that location. Thanks.
Function that depends on the row number
0
0
0
118
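The answer above mentions either looping or adding a column of numbers. A small sketch of that workaround, plus an on-demand lookup of a label's current position; the frame and labels are made up for illustration.

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"a": [10, 20, 30, 40]}, index=list("wxyz"))

    # Option 1: materialise the positional number as a column (as the answer
    # notes, it goes stale if the frame is later sorted or reordered).
    df["row_number"] = np.arange(len(df))

    # Option 2: ask for the current position of a given index label on demand.
    pos = df.index.get_loc("y")   # -> 2
    print(df, pos, sep="\n")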
38,723,681
2016-08-02T14:46:00.000
0
0
0
1
python,google-app-engine
38,742,762
2
false
1
0
The main question will be how to ensure it only runs once for a particular version. Here is an outline of how you might approach it. You create a HasRun model, which you use to store each version of the deployed app; this indicates whether the one-time code has been run. Then make sure you increment your version whenever you deploy new code. In your warmup handler or appengine_config.py, grab the deployed version, then in a transaction try to fetch the HasRun entity by key (the version number). If you get the entity, don't run the one-time code. If you cannot find it, create it and run the one-time code, either in a task (make sure the process is idempotent, as tasks can be retried) or in the warmup/front-facing request. You will probably want to wrap all of that in a memcache CAS operation to provide a lock of some sort, to prevent some other instance trying to do the same thing. Alternately, if you want to use the task queue, consider naming the task after the version number: you can only submit a task with a particular name once. It still needs to be idempotent (again, it could be scheduled to retry), but there will only ever be one task scheduled for that version - at least for a few weeks. Or a combination/variation of all of the above.
2
1
0
I have been looking for a solution for my app that does not seem to be directly discussed anywhere. My goal is to publish an app and have it reach out, automatically, to a server I am working with. This just needs to be a simple Post. I have everything working fine, and am currently solving this problem with a cron job, but it is not quite sufficient - I would like the job to execute automatically once the app has been published, not after a minute (or whichever the specified time it may be set to). In concept I am trying to have my app register itself with my server and to do this I'd like for it to run once on publish and never be ran again. Is there a solution to this problem? I have looked at Task Queues and am unsure if it is what I am looking for. Any help will be greatly appreciated. Thank you.
Google App Engine - run task on publish
0
0
0
66
38,723,681
2016-08-02T14:46:00.000
2
0
0
1
python,google-app-engine
38,794,445
2
false
1
0
Personally, this makes more sense to me as a responsibility of your deploy process, rather than of the app itself. If you have your own deploy script, add the post request there (after a successful deploy). If you use google's command line tools, you could wrap that in a script. If you use a 3rd party tool for something like continuous integration, they probably have deploy hooks you could use for this purpose.
2
1
0
I have been looking for a solution for my app that does not seem to be directly discussed anywhere. My goal is to publish an app and have it reach out, automatically, to a server I am working with. This just needs to be a simple Post. I have everything working fine, and am currently solving this problem with a cron job, but it is not quite sufficient - I would like the job to execute automatically once the app has been published, not after a minute (or whichever the specified time it may be set to). In concept I am trying to have my app register itself with my server and to do this I'd like for it to run once on publish and never be ran again. Is there a solution to this problem? I have looked at Task Queues and am unsure if it is what I am looking for. Any help will be greatly appreciated. Thank you.
Google App Engine - run task on publish
0.197375
0
0
66
38,724,255
2016-08-02T15:10:00.000
0
0
0
0
java,python,pandas,numpy,jython
38,724,313
2
false
1
0
Have you tried using XML to transfer the data between the two applications? My next suggestion would be to output the data in JSON format to a text file and then call the Java application, which will read the JSON from the text file. (A sketch of the JSON route follows this record.)
2
2
1
I have created a Python script for predictive analytics using pandas, numpy etc. I want to send my result set to a Java application. Is there a simple way to do it? I found that we can use Jython for Java-Python integration, but it doesn't support many data analysis libraries. Any help will be great. Thank you.
Sending pandas dataframe to java application
0
0
0
2,769
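A minimal sketch of the suggestion in the answer above: dump the result set as JSON to a text file and let the Java application read it. The frame, file name and orient choice are assumptions for illustration.

    import pandas as pd

    result = pd.DataFrame({"id": [1, 2, 3], "score": [0.9, 0.4, 0.7]})

    # orient="records" writes a plain list of objects, which is easy to parse
    # with any JSON library on the Java side.
    result.to_json("result.json", orient="records")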
38,724,255
2016-08-02T15:10:00.000
0
0
0
0
java,python,pandas,numpy,jython
57,166,461
2
false
1
0
A better approach here is to use a pipe, like python pythonApp.py | java read. The output of the Python application can be used as the input of the Java application, as long as the format of the data is consistent and known. The above solution of creating a file and then reading it also works, but is more prone to errors.
2
2
1
I have created a Python script for predictive analytics using pandas, numpy etc. I want to send my result set to a Java application. Is there a simple way to do it? I found that we can use Jython for Java-Python integration, but it doesn't support many data analysis libraries. Any help will be great. Thank you.
Sending pandas dataframe to java application
0
0
0
2,769
38,727,035
2016-08-02T17:35:00.000
0
0
1
0
python,matplotlib,ipython
38,729,045
1
false
0
0
You need to install PyGTK; how to do so depends on what you're using to run Python. You could also not use '%matplotlib inline', and then it will default to whatever backend is installed on your system. (A sketch of picking a backend explicitly follows this record.)
1
0
1
I have two questions about plotting graphs in IPython. Once I use %matplotlib inline, I don't know how to switch back to using floating windows. When I searched for the method to switch back, people told me to use %matplotlib osx or %matplotlib; however, I then get an error: Gtk* backend requires pygtk to be installed. Can anyone help me and give me some ideas? P.S. I am using Windows 10 and Python 2.7.
How to turn off matplotlib inline function and install pygtk?
0
0
0
336
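A hedged sketch related to the answer above: instead of installing PyGTK, you can ask for a windowed backend that is usually already present (Tk ships with the python.org installer on Windows). This is an alternative named plainly, not the answer's own fix; the plot is a placeholder.

    # In an IPython session the switch away from 'inline' is a magic command,
    # e.g.  %matplotlib tk  (instead of the GTK backend that needs PyGTK).
    # In a plain script, the equivalent is selecting the backend before pyplot:
    import matplotlib
    matplotlib.use("TkAgg")
    import matplotlib.pyplot as plt

    plt.plot([1, 2, 3])
    plt.show()   # opens a floating window again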
38,728,501
2016-08-02T19:00:00.000
4
0
0
0
python,neural-network,tensorflow,recurrent-neural-network
38,740,100
1
false
0
0
I think when you use the tf.nn.rnn function it is expecting a list of tensors and not just a single tensor. You should unpack input in the time direction so that it is a list of tensors of shape [?, 22501]. You could also use tf.nn.dynamic_rnn which I think can handle this unpack for you.
1
1
1
I have some very basic lstm code with tensorflow and python, where my code is output = tf.nn.rnn(tf.nn.rnn_cell.BasicLSTMCell(10), input_flattened, initial_state=tf.placeholder("float", [None, 20])) where my input flattened is shape [?, 5, 22501] I'm getting the error TypeError: inputs must be a sequence on the state parameter of the lstm, and I'm ripping my hair out trying to find out why it is giving me this error. Any help would be greatly appreciated.
Inputs not a sequence with RNNs and TensorFlow
0.664037
0
0
3,118
38,730,867
2016-08-02T21:35:00.000
0
0
1
0
python,windows,pip
38,767,497
2
false
0
0
Seems I found the reason: the Windows computer is within a state agency behind a hefty firewall. I tried the same install with Python 3.4.3 from my laptop connected through my phone and had no problem at all. So, I think the firewall was in the way. Thanks for looking into this, @froost1999 and @Rockse .
1
0
0
I am struggling with adding to my Python 34 installation: C:> pip install --upgrade pip Could not find any downloads that satisfy the requirement pip in c:\python34\lib \site-packages Collecting pip No distributions at all found for pip in c:\python34\lib\site-packages What am I doing wrong? Please help.
Python 3.4 windows, pip install fails
0
0
0
816
38,733,220
2016-08-03T02:07:00.000
59
0
0
0
python,python-2.7,scikit-learn
38,733,854
2
true
0
0
You might need to reinstall numpy. It doesn't seem to have installed correctly. sklearn is how you type the scikit-learn name in python. Also, try running the standard tests in scikit-learn and check the output. You will have detailed error information there. Do you have nosetests installed? Try: nosetests -v sklearn. You type this in bash, not in the python interpreter.
1
57
1
On OS X 10.11.6 and python 2.7.10 I need to import from sklearn manifold. I have numpy 1.8 Orc1, scipy .13 Ob1 and scikit-learn 0.17.1 installed. I used pip to install sklearn(0.0), but when I try to import from sklearn manifold I get the following: Traceback (most recent call last): File "", line 1, in File "/Library/Python/2.7/site-packages/sklearn/init.py", line 57, in from .base import clone File "/Library/Python/2.7/site-packages/sklearn/base.py", line 11, in from .utils.fixes import signature File "/Library/Python/2.7/site-packages/sklearn/utils/init.py", line 10, in from .murmurhash import murmurhash3_32 File "numpy.pxd", line 155, in init sklearn.utils.murmurhash (sklearn/utils/murmurhash.c:5029) ValueError: numpy.dtype has the wrong size, try recompiling. What is the difference between scikit-learn and sklearn? Also, I cant import scikit-learn because of a syntax error
Difference between scikit-learn and sklearn
1.2
0
0
81,337
38,734,521
2016-08-03T04:37:00.000
6
0
1
0
python,windows,command-prompt
38,734,522
2
false
0
0
Searching did not yield good results, so I thought I should share the process I took with anyone looking for this in the future. Make sure the Python 3 folder is present in the PATH environment variable. Locate the "python.exe" file in the Python 3 folder. Copy and Paste the "python.exe" file within the Python 3 folder. Rename the copied file to "python3" (or whatever you want the command to be). Now, when you input python3 script.py to Command Prompt, the script will run through the copied Python 3 file. Also, by copying python.exe (instead of renaming it) you allow other interpreters - such as PyCharm - to continue using their default "python.exe" path settings. I hope this helps! EDIT: A "symlink" has the same effect, but keeps things a bit tidier.
2
7
0
Having various projects in both Python 2 and Python 3 (with both python versions installed), I was looking for a more intuitive way to run scripts via Command Prompt than py -3 script.py. Python 2 already took python script.py, so ideally python3 script.py should invoke Python 3. My question: How can I add python3 as a Command Prompt command?
Python 2 and Python 3 - Running in Command Prompt
1
0
0
25,378
38,734,521
2016-08-03T04:37:00.000
7
0
1
0
python,windows,command-prompt
55,441,142
2
false
0
0
If Python 2 and 3 are both installed and in the PATH variable, you can do: py -2 or py -3
2
7
0
Having various projects in both Python 2 and Python 3 (with both python versions installed), I was looking for a more intuitive way to run scripts via Command Prompt than py -3 script.py. Python 2 already took python script.py, so ideally python3 script.py should invoke Python 3. My question: How can I add python3 as a Command Prompt command?
Python 2 and Python 3 - Running in Command Prompt
1
0
0
25,378
38,737,144
2016-08-03T07:31:00.000
1
1
1
0
python,excel,python-3.5,execution-time,miniconda
38,737,237
2
false
0
0
It runs on a single core, and computer 1 has a higher clock rate, which means faster single-threaded processing.
1
0
0
I was faced with the following problem: on a computer (number 2) script execution time is significantly greater than on another computer (computer 1). Computer 1 - i3 - 4170 CPU @ 3.7 GHz (4 core), 4GB RAM (Execution time 9.5 minutes) Computer 2 - i7 - 3.07GHz (8 core), 8GB RAM (Execution time 15-17 minutes) I use Python to process Excel files. I import for these three libraries: xlrd, xlsxwriter, win32com Why is the execution time different? How can I fix it?
Script execution time on different computers (python 3.5, miniconda)
0.099668
0
0
61
38,738,706
2016-08-03T08:46:00.000
0
0
0
0
python,xlsxwriter
38,740,317
1
false
0
0
Is it possible? Unfortunately not. Shapes aren't supported in XlsxWriter, apart from textboxes. (A textbox example follows this record.)
1
2
0
I want to draw some simple shapes in an Excel file, such as arrows, lines, rectangles and ovals, using XLSXWriter, but I can't find any example of how to do it. Is it possible? If not, what Python library can do that? Thanks!
How to draw shapes using XLSXWriter
0
1
0
1,339
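A small sketch of the one shape-like object the answer above says XlsxWriter does support, the textbox; the file name, cell and options are illustrative.

    import xlsxwriter

    workbook = xlsxwriter.Workbook("shapes.xlsx")
    worksheet = workbook.add_worksheet()

    # Textboxes are supported; arrows, lines, rectangles and ovals are not.
    worksheet.insert_textbox("B2", "Hello from a textbox",
                             {"width": 256, "height": 100})

    workbook.close()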
38,741,327
2016-08-03T10:43:00.000
1
0
0
1
python,google-app-engine
38,788,820
1
true
1
0
You have multiple layers of caches beyond memcache; Google's edge cache will definitely cache static content, especially if your app is referenced by your own domain and not appspot.com. You will probably need to use some cache-busting techniques. You can test this by requesting the URL that is presenting old content, with the same URL but appending something like ?x=1. If you then get current content, the edge cache is your problem and you therefore need to use cache-busting techniques.
1
0
0
I've deployed a new version which contains just one image replacement. After migrating traffic (100%) to the new version I can see that only this version now has active instances. However, 2 days later App Engine is still intermittently serving the old image, so I assume the previous version. When I ping the domain I can see that the latest version has one IP address and the old version has another. My question is: how do I force App Engine to only serve the new version? I'm not using traffic splitting either. Any help would be much appreciated. Regards, Danny
App Engine serving old version intermittently
1.2
0
0
190
38,742,893
2016-08-03T11:54:00.000
9
0
0
1
python,curl,urllib2,http-upload,seafile-server
38,743,242
2
false
1
0
It took 2 hours to find a solution with curl; it needs two steps. First, make a GET request to the public upload URL with the repo id as a query parameter, as follows: curl 'https://cloud.seafile.com/ajax/u/d/98233edf89/upload/?r=f3e30b25-aad7-4e92-b6fd-4665760dd6f5' -H 'Accept: application/json' -H 'X-Requested-With: XMLHttpRequest' The answer is (JSON) an upload link to use in the next upload POST, e.g.: {"url": "https://cloud.seafile.com/seafhttp/upload-aj/c2b6d367-22e4-4819-a5fb-6a8f9d783680"} Use this link to initiate the upload POST: curl 'https://cloud.seafile.com/seafhttp/upload-aj/c2b6d367-22e4-4819-a5fb-6a8f9d783680' -F file=@./tmp/index.html -F filename=index.html -F parent_dir="/my-repo-dir/" The answer is JSON again, e.g. [{"name": "index.html", "id": "0a0742facf24226a2901d258a1c95e369210bcf3", "size": 10521}] Done ;) (A Python requests translation of the same two steps follows this record.)
1
5
0
With Seafile one is able to create a public upload link (e.g. https://cloud.seafile.com/u/d/98233edf89/) to upload files via Browser w/o authentication. Seafile webapi does not support any upload w/o authentication token. How can I use such kind of link from command line with curl or from python script?
How to use a Seafile generated upload-link w/o authentication token from command line
1
0
1
2,271
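A hedged translation of the two curl steps in the answer above into Python requests, since the question asked for a Python option. The URLs, repo id, file name and target directory are copied from the answer's example and are placeholders for your own values.

    import requests

    SHARE_LINK = "https://cloud.seafile.com/ajax/u/d/98233edf89/upload/"
    REPO_ID = "f3e30b25-aad7-4e92-b6fd-4665760dd6f5"
    HEADERS = {"Accept": "application/json",
               "X-Requested-With": "XMLHttpRequest"}

    # Step 1: ask the public share link for a one-off upload URL.
    upload_url = requests.get(SHARE_LINK, params={"r": REPO_ID},
                              headers=HEADERS).json()["url"]

    # Step 2: post the file itself to that URL.
    with open("index.html", "rb") as fh:
        reply = requests.post(upload_url,
                              files={"file": ("index.html", fh)},
                              data={"filename": "index.html",
                                    "parent_dir": "/my-repo-dir/"})
    print(reply.json())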
38,745,067
2016-08-03T13:29:00.000
1
0
0
0
python,django,django-rest-framework
38,746,370
3
false
1
0
You don't need to use models, but you really should. Django's ORM (the way it handles reading from and writing to databases) is fantastic and really useful. If you're executing raw SQL statements all the time, you either have a highly specific case where Django's functions fail you, or you're using Django inefficiently and should rethink why you're using Django to begin with.
1
0
0
I am new to Django-Rest Framework and I wanted to develop API calls. I am currently using Mysql database so if I have to make changes in the database, do I have to write models in my project or Can I execute the raw data operation onto my database. Like: This is my urls.py file which contains a list of URLs and if any of the URL is hit it directly calls to view function present in views.py file and rest I do the particular operation in that function, like connecting to MySQL database, executing SQL queries and returning JSON response to the front end. Is this a good approach to making API calls? If not Please guide me. Any advice or help will be appreciated.
Using Models in Django-Rest Framework
0.066568
0
0
911
38,747,073
2016-08-03T14:54:00.000
0
0
1
0
python
38,747,336
2
false
0
0
If your third-party lib does not have any dependencies, you can take its source files (.py), put them into your project folder and use it as a package via import. If it does have dependencies, your project size will grow, and you are better off creating an exe for your script.
1
0
0
I have a Python script which uses open source third party libraries for geoprocessing (OGR and Shapely). My plan is to execute this script on a computer without having to install the required libraries. I know that there are tools such as py2exe available for this purpose. However, compiling an executable file is not my first choice as I noticed that such files can get pretty large in size. Besides, I would like to use the code within another script. I would therefore like to create a portable python script which already includes the third party methods needed for executing. Is it possible to include third party methods in a Python script in order to avoid the installation of third party libraries? And if not, what can I do instead, besides compiling an executable file? I work on Windows OS.
Portable python script: Is it possible to include third party libraries in script?
0
0
0
1,524
38,751,024
2016-08-03T18:22:00.000
-1
0
0
0
python,macos,os.system
38,751,112
3
true
1
0
You can try writing the data you want to share to a file and have the other script read and interpret it. Have the other script run in a loop to check whether there is a new file or the file has been changed. (A minimal polling sketch follows this record.)
2
0
0
I am scraping data from multiple websites. To do that I have written multiple web scrapers using Selenium and PhantomJS. Those scrapers return values. My question is: is there a way I can feed those values to a single Python program that will sort through that data in real time? What I want to do is not save that data to analyze later; I want to send it to a program that will analyze it in real time. What I have tried: I have no idea where to even start.
Sharing data with multiple python programs
1.2
0
1
87
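A minimal sketch of the file-based hand-off described in the answer above: each scraper dumps its values to a shared JSON file and the analysing program polls the file's modification time. The file name, poll interval and the analyse() stub are assumptions.

    import json
    import os
    import time

    DATA_FILE = "scraped.json"   # hypothetical shared file

    def publish(values):                      # called by a scraper
        with open(DATA_FILE, "w") as fh:
            json.dump(values, fh)

    def analyse(values):                      # stand-in for the real analysis
        print("got", len(values), "values")

    def watch(poll_seconds=1.0):              # run by the analysing program
        last_mtime = 0.0
        while True:
            try:
                mtime = os.path.getmtime(DATA_FILE)
            except OSError:                   # nothing written yet
                mtime = 0.0
            if mtime > last_mtime:            # file is new or has changed
                last_mtime = mtime
                with open(DATA_FILE) as fh:
                    analyse(json.load(fh))
            time.sleep(poll_seconds)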
38,751,024
2016-08-03T18:22:00.000
-1
0
0
0
python,macos,os.system
38,751,162
3
false
1
0
Simply use files for data exchange and a trivial locking mechanism. Each writer or reader (only one reader, it seems) gets a unique number. If a writer or reader wants to use the file, it renames it to its original name plus that number, then writes or reads, and renames it back afterwards. The others wait until the file is available again under its own name and then access it by locking it in a similar way. Of course there are shared memory and such, or memory-mapped files and semaphores, but this mechanism has worked flawlessly for me for over 30 years, on any OS, over any network, since it's trivially simple. It is in fact a poor man's mutex/semaphore. To find out whether a file has changed, look at its write timestamp. But the locking is necessary too, otherwise you'll end up in a mess.
2
0
0
I am scraping data from multiple websites. To do that I have written multiple web scrapers using Selenium and PhantomJS. Those scrapers return values. My question is: is there a way I can feed those values to a single Python program that will sort through that data in real time? What I want to do is not save that data to analyze later; I want to send it to a program that will analyze it in real time. What I have tried: I have no idea where to even start.
Sharing data with multiple python programs
-0.066568
0
1
87
38,751,240
2016-08-03T18:35:00.000
0
0
1
0
python,py2exe
38,751,432
1
true
0
0
It can, if you exec() the contents of the .py file. (A sketch follows this record.)
1
0
0
I want to distribute a python program as an executable file. I use py2exe with options = {"py2exe":{"bundle_files": 1}}. But I want to keep a py file as configure script ( including not just variables but also functions and classes ). Can py2exe do this? Or any other way to do it?
Can py2exe keep a py file as configure script?
1.2
0
0
23
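A minimal sketch of the exec() idea from the answer above, run inside the frozen executable; config.py, SOME_SETTING and some_function are hypothetical names.

    # Load settings, functions and classes from an editable config.py that
    # ships next to the frozen executable instead of being bundled into it.
    namespace = {}
    with open("config.py") as fh:
        exec(fh.read(), namespace)

    print(namespace["SOME_SETTING"])   # a variable defined in config.py
    namespace["some_function"]()       # functions and classes work the same way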
38,751,364
2016-08-03T18:42:00.000
1
0
0
0
python,statistics,scikit-learn,cluster-analysis,k-means
38,751,473
2
false
0
0
The behavior you are experiencing probably has to do with the under-the-hood implementation of k-means clustering that you are using. k-means clustering is an NP-hard problem, so all the implementations out there are heuristic methods. What this means in practice is that for a given seed, it will converge toward a local optimum that isn't necessarily consistent across multiple seeds. (A seed-pinning sketch follows this record.)
2
1
1
I am using the MiniBatchKMeans model from the sklearn.cluster module in anaconda. I am clustering a data-set that contains approximately 75,000 points. It looks something like this: data = np.array([8,3,1,17,5,21,1,7,1,26,323,16,2334,4,2,67,30,2936,2,16,12,28,1,4,190...]) I fit the data using the process below. from sklearn.cluster import MiniBatchKMeans kmeans = MiniBatchKMeans(batch_size=100) kmeans.fit(data.reshape(-1,1) This is all well and okay, and I proceed to find the centroids of the data: centroids = kmeans.cluster_centers_ print centroids Which gives me the following output: array([[ 13.09716569], [ 2908.30379747], [ 46.05089228], [ 725.83453237], [ 95.39868475], [ 1508.38356164], [ 175.48099948], [ 350.76287263]]) But, when I run the process again, using the same data, I get different values for the centroids, such as this: array([[ 29.63143489], [ 1766.7244898 ], [ 171.04417206], [ 2873.70454545], [ 70.05295277], [ 1074.50387597], [ 501.36134454], [ 8.30600975]]) Can anyone explain why this is?
MiniBatchKMeans gives different centroids after subsequent iterations
0.099668
0
0
1,020
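Following on from the answer above (the result depends on the seed), a hedged sketch showing that pinning random_state makes the centroids repeatable between runs; the data here is a stand-in for the 75,000-point array in the question.

    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    data = np.random.RandomState(0).randint(1, 3000, size=5000)

    # With a fixed random_state, repeated runs walk to the same local optimum,
    # so cluster_centers_ stops changing from run to run.
    kmeans = MiniBatchKMeans(n_clusters=8, batch_size=100, random_state=0)
    kmeans.fit(data.reshape(-1, 1))
    print(kmeans.cluster_centers_)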
38,751,364
2016-08-03T18:42:00.000
1
0
0
0
python,statistics,scikit-learn,cluster-analysis,k-means
38,754,035
2
false
0
0
Read up on what mini-batch k-means is. It will never even converge; do one more iteration and the result will change again. It is designed for data sets so huge you cannot load them into memory at once: you load a batch, pretend it were the full data set, do one iteration, and repeat with the next batch. If your batches are large enough and random, then the result will be "close enough" to be usable, while never optimal. Thus: the mini-batch results are even more random than regular k-means results; they change every iteration. If you can load your data into memory, don't use mini-batch. Instead use a fast k-means implementation (most are surprisingly slow). P.S. On one-dimensional data, sort your data set and then use an algorithm that benefits from the sorting instead of k-means.
2
1
1
I am using the MiniBatchKMeans model from the sklearn.cluster module in anaconda. I am clustering a data-set that contains approximately 75,000 points. It looks something like this: data = np.array([8,3,1,17,5,21,1,7,1,26,323,16,2334,4,2,67,30,2936,2,16,12,28,1,4,190...]) I fit the data using the process below. from sklearn.cluster import MiniBatchKMeans kmeans = MiniBatchKMeans(batch_size=100) kmeans.fit(data.reshape(-1,1) This is all well and okay, and I proceed to find the centroids of the data: centroids = kmeans.cluster_centers_ print centroids Which gives me the following output: array([[ 13.09716569], [ 2908.30379747], [ 46.05089228], [ 725.83453237], [ 95.39868475], [ 1508.38356164], [ 175.48099948], [ 350.76287263]]) But, when I run the process again, using the same data, I get different values for the centroids, such as this: array([[ 29.63143489], [ 1766.7244898 ], [ 171.04417206], [ 2873.70454545], [ 70.05295277], [ 1074.50387597], [ 501.36134454], [ 8.30600975]]) Can anyone explain why this is?
MiniBatchKMeans gives different centroids after subsequent iterations
0.099668
0
0
1,020
38,751,800
2016-08-03T19:08:00.000
4
0
0
1
python,python-2.7,hashlib,frozenset
38,767,427
2
true
0
0
Removal of this package have helped me: sudo rm -rf /Library/Python/2.7/site-packages/hashlib-20081119-py2.7-macosx-10.11-intel.egg
1
2
0
I have this error on each my command with Python: ➜ /tmp sudo easy_install pip Traceback (most recent call last): File "/usr/bin/easy_install-2.7", line 11, in load_entry_point('setuptools==1.1.6', 'console_scripts', 'easy_install')() File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 357, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2394, in load_entry_point return ep.load() File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2108, in load entry = __import__(self.module_name, globals(),globals(), ['__name__']) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/setuptools/__init__.py", line 11, in from setuptools.extension import Extension File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/setuptools/extension.py", line 5, in from setuptools.dist import _get_unpatched File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/setuptools/dist.py", line 15, in from setuptools.compat import numeric_types, basestring File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/setuptools/compat.py", line 17, in import httplib File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 80, in import mimetools File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/mimetools.py", line 6, in import tempfile File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tempfile.py", line 35, in from random import Random as _Random File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/random.py", line 49, in import hashlib as _hashlib File "build/bdist.macosx-10.11-intel/egg/hashlib.py", line 115, in """ TypeError: 'frozenset' object is not callable What can I do with this?
Python 2.7 on OS X: TypeError: 'frozenset' object is not callable on each command
1.2
0
0
8,871
38,755,220
2016-08-03T23:29:00.000
2
1
1
0
python,windows,git,raspberry-pi
38,760,276
3
true
0
0
You can run a SAMBA server on your Raspberry Pi, set your python project folder as a network disk. Then you can use any windows IDE you like, just open the file which is on the network disk. Currently I am using VS2015 + Python Tools for Visual Studio for remote debugging purpose.
1
1
0
Is it possible to open files from a Raspberry pi in windows for editing (using for example notepad++)? I am currently using the built in python IDE in Raspbian but i feel that it would speed up the development process if i could use a windows IDE for development. I have also tried using a git repo to share files between the PI and Windows but it is a bit cumbersome to. Or does anyone have any other ideas about workflow between Windows and Raspberry?
Develop Raspberry apps from windows
1.2
0
0
606
38,757,611
2016-08-04T02:34:00.000
1
0
1
0
python,visual-studio-2015
43,836,514
1
false
0
0
I know this is probably far too late, but I had this problem as well. I found the solution and that is to right click the "Python Environments" in the solution explorer of the project and add/remove python environments and then choose the one you're attempting to go to.
1
0
0
I am trying to modify the default Python compilation environment options in Visual Studio. The option directory is at Options -> Python Tools -> Environment Options. However, after I added a new custom environment, changed the default environment to the custom one, and clicked "OK" to save, I found that the settings go back to the default "Python 3.4". How can I really save these changes?
How can I change the default python environment in VS2015?
0.197375
0
0
475
38,758,342
2016-08-04T04:09:00.000
0
0
0
0
python,facebook,wit.ai
42,297,616
2
false
0
0
You could also use api.ai , which by default provides a default fallback intent. In default fallback event, if api.ai does not understand the input or does not have an answer , it would reply with "I did not understand what you just said"
1
3
0
I'm working on a chatbot project based on Facebook's Wit.ai and was wondering if it is possible to set a default intent? For example, my bot currently supports only a handful of questions, such as "Where are you located?" or "What is your phone number?", each of these questions has an intent and story associated with it but if someone asks something the bot doesn't understand, wit seems (I haven't been able to find any info about this) to choose a story at random and execute it. I would like to set a default intent that will respond with something like "I don't understand what you mean." in the event that no other intent is recognized. Is it possible to do this? Specifically, I would like to know if there is an officially accepted way to do this as I currently have a way to achieve this but it is a bit hacky and requires me to edit the wit package from facebook which I would prefer not to do.
Is it possible to set a default intent in Wit.ai?
0
0
0
1,233
38,759,647
2016-08-04T06:08:00.000
2
0
0
0
python-2.7,machine-learning,neural-network
38,763,173
3
true
0
0
Yes - this is a really important issue. Basically there are two ways to do it: Try different topologies and choose the best: because the numbers of neurons and layers are discrete parameters, you cannot differentiate your loss function with respect to them in order to use gradient descent methods. So the easiest way is to simply set up different topologies and compare them using either cross-validation or a division of your training set into training / testing / validating parts. You can also use grid / random search schemes to do that; libraries like scikit-learn have appropriate modules for it. Dropout: the training technique called dropout could also help. In this case you set up a relatively big number of nodes in your layers and try to adjust a dropout parameter for each layer. In this scenario - e.g. assuming a two-layer network with 100 nodes in the hidden layer and dropout_parameter = 0.6 - you are learning a mixture of models, where every model is a neural network of size 40 (approximately 60 nodes are turned off). This can also be seen as figuring out the best topology for your task. (A cross-validation sketch follows this record.)
2
0
1
In neural network theory - setting up the size of hidden layers seems to be a really important issue. Is there any criteria how to choose the number of neurons in a hidden layer?
How should we set the number of the neurons in the hidden layer in neural network?
1.2
0
0
638
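A hedged sketch of the answer's first suggestion, comparing a handful of topologies with cross-validation. It assumes a scikit-learn version that ships MLPClassifier (0.18+); the data is synthetic and the candidate layer sizes are arbitrary.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    # Treat the hidden-layer size as just another hyper-parameter and let
    # cross-validation pick the best of the candidate topologies.
    grid = GridSearchCV(
        MLPClassifier(max_iter=1000, random_state=0),
        param_grid={"hidden_layer_sizes": [(10,), (30,), (50, 20)]},
        cv=3,
    )
    grid.fit(X, y)
    print(grid.best_params_)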
38,759,647
2016-08-04T06:08:00.000
1
0
0
0
python-2.7,machine-learning,neural-network
38,776,068
3
false
0
0
You have to set the number of neurons in the hidden layer in such a way that it isn't more than the number of your training examples. There is no rule of thumb for the number of neurons. For example: if you are using the MNIST dataset then you might have ~78K training examples. So make sure that the weight count of the neural network (784-30-10) = 784*30 + 30*10 is less than the number of training examples; but if you use something like (784-100-10) then it exceeds the number of training examples and is highly likely to over-fit. In short, make sure you are not over-fitting, and then you have a good chance of getting a good result.
2
0
1
In neural network theory - setting up the size of hidden layers seems to be a really important issue. Is there any criteria how to choose the number of neurons in a hidden layer?
How should we set the number of the neurons in the hidden layer in neural network?
0.066568
0
0
638
38,766,962
2016-08-04T12:10:00.000
0
0
0
0
python,mysql,analytics
38,770,424
1
false
1
0
You have to store a date with your data and use it instead of now().
1
0
0
I have two separate programs; one counts the daily view stats and another calculates earning based on the stats. Counter runs first and followed by Earning Calculator a few seconds later. Earning Calculator works by getting stats from counter table using date(created_at) > date(now()). The problem I'm facing is that let's say at 23:59:59 Counter added 100 views stats and by the time the Earning Calculator ran it's already the next day. Since I'm using date(created_at) > date(now()), I will miss out the last 100 views added by the Counter. One way to solve my problem is to summarise the previous daily report at 00:00:10 every day. But I do not like this. Is there any other ways to solve this issue? Thanks.
How to solve mysql daily analytics that happens when date changes
0
1
0
53
38,767,481
2016-08-04T12:34:00.000
1
0
0
0
python,scikit-learn,text-classification
38,788,040
1
false
0
0
In the supervised learning approach as it is, you cannot add an extra category. Therefore I would use some heuristics: try to predict a probability for each category; then, if all 4 (or at least 3) probabilities are approximately equal, you can say that the sample is "unknown". For this approach LinearSVC or another type of support vector classifier is ill-suited, because it does not naturally give you probabilities. Another classifier (logistic regression, Bayes, trees, forests) would be better. (A probability-threshold sketch follows this record.)
1
3
1
I am using Scikit-Learn to classify texts (in my case tweets) using LinearSVC. Is there a way to classify texts as unclassified when they are a poor fit with any of the categories defined in the training set? For example if I have categories for sport, politics and cinema and attempt to predict the classification on a tweet about computing it should remain unclassified.
Scikit-Learn- How to add an 'unclassified' category?
0.197375
0
0
198
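A hedged sketch of the heuristic in the answer above: use a classifier that exposes probabilities and fall back to "unclassified" when no class is clearly ahead. The tiny training set and the 0.5 threshold are purely illustrative.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = ["goal scored in the match",
                   "parliament passed the bill",
                   "new film premieres tonight"]
    train_labels = ["sport", "politics", "cinema"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(train_texts, train_labels)

    probs = clf.predict_proba(["my laptop needs a new cpu"])[0]
    best = probs.argmax()
    # If the winning probability isn't convincing, refuse to classify.
    label = clf.classes_[best] if probs[best] >= 0.5 else "unclassified"
    print(label, probs)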
38,767,532
2016-08-04T12:36:00.000
0
0
1
0
python,pip,virtualenv
66,693,644
1
false
0
0
When you try to install a package using pip (or even uninstall one), try running your command prompt or PowerShell as administrator.
1
5
0
By necessity I have a number of python 3.4 distributions on my system some of which are installed in non-user writable locations and some I have built myself using different compilers and 3rd party libraries (such as MKL). I need to be able to reliably isolate each of these for use but can't use virtualenv / pyenv ... etc. As long as windows knows which executable to use the isolation is reasonably good. I then use "pip -t" to install to a directory which I can add to PYTHONPATH. The problem is that if packages installed by any distribution using the --user argument to pip placed in a single shared location, and cause me problems. I was wondering if there is an easy way to tell python not to look in the AppData\Roaming\Python... directory when searching for packages.
Stop python looking in AppData\Roaming\Python
0
0
0
1,827
38,767,584
2016-08-04T12:39:00.000
0
0
0
1
python,linux,archive,7zip,rar
38,768,586
2
false
0
0
I can't give you a native Python answer, but, if you need to fall back on os.system, the command-line utilities for handling all four formats have switches which can be used to list the contents of the archive, including the size of each file and possibly a total size: rar: unrar l FILENAME.rar lists information on each file and the total size. zip: unzip -l FILENAME.zip lists size, timestamp, and name of each file, along with the total size. 7z: 7z l FILENAME.7z lists the details of each file and the total size. tar: tar -tvf FILENAME.tar or tar -tvzf FILENAME.tgz (or .tar.gz) lists details of each file including file size. No total size is provided, so you'll need to add them up yourself. If you're looking at native Python libraries, you can also check whether they have a "list" or "test" function. Those are the terms used by the command-line tools to describe the switches I mentioned above, so the same names are likely to have been used by the library authors. (A standard-library sketch for zip and tar follows this record.)
1
0
0
I need to validate the resulting size of an unpacked archive without unpacking it, so as to prevent huge archives being stored on my server. Alternatively, start unpacking and stop when the size exceeds a certain limit. I have already tried the lib pyunpack, but it only allows unpacking archives. I need to handle these archive extensions: rar, zip, 7z, tar. Maybe I can do it using some Linux features by calling them through os.system.
Find out result size of unpacked archive without unpacking it, or stop unpacking when a certain size is exceeded
0
0
0
1,601
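For the standard-library route hinted at in the answer above, zip and tar archives can report their unpacked size without extraction; rar and 7z would still need the external tools listed there. A sketch, with the path and the size limit as placeholders.

    import tarfile
    import zipfile

    def unpacked_size(path):
        """Total uncompressed size in bytes, computed without extracting."""
        if zipfile.is_zipfile(path):
            with zipfile.ZipFile(path) as zf:
                return sum(info.file_size for info in zf.infolist())
        if tarfile.is_tarfile(path):
            with tarfile.open(path) as tf:
                return sum(member.size for member in tf.getmembers())
        raise ValueError("unsupported archive type: " + path)

    MAX_BYTES = 1 * 1024 ** 3          # reject anything over ~1 GiB unpacked
    print(unpacked_size("upload.zip") <= MAX_BYTES)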
38,767,616
2016-08-04T12:40:00.000
4
1
0
0
python,django,multithreading,webserver,pyramid
38,782,897
2
false
1
0
There's no magic in pyramid or django that gets you past process boundaries. The answers depend entirely on the particular server you've selected and the settings you've selected. For example, uwsgi has the ability to run multiple threads and multiple processes. If uwsgi spins up multiple processes then they will each have their own copies of data which are not shared unless you take the time to create some IPC (this is why you should keep state in a third party like a database instead of in-memory objects, which are not shared across processes). Each process initializes a WSGI object (let's call it app) which the server calls via body_iter = app(environ, start_response). This app object is shared across all of the threads in the process and is invoked concurrently, thus it needs to be threadsafe (usually the structures the app uses are either threadlocal or readonly to deal with this, for example a connection pool to the database). In general the answers to your questions are that things happen concurrently, and objects may or may not be shared based on your server model, but in general you should take anything that you want to be shared and store it somewhere that can handle concurrency properly (a database).
2
7
0
I am having a hard time trying to figure out the big picture of the handling of multiple requests by the uwsgi server with django or pyramid application. My understanding at the moment is this: When multiple http requests are sent to uwsgi server concurrently, the server creates a separate processes or threads (copies of itself) for every request (or assigns to them the request) and every process/thread loads the webapplication's code (say django or pyramid) into computers memory and executes it and returns the response. In between every copy of the code can access the session, cache or database. There is a separate database server usually and it can also handle concurrent requests to the database. So here some questions I am fighting with. Is my above understanding correct or not? Are the copies of code interact with each other somehow or are they wholly separated from each other? What about the session or cache? Are they shared between them or are they local to each copy? How are they created: by the webserver or by copies of python code? How are responses returned to the requesters: by each process concurrently or are they put to some kind of queue and sent synchroniously? I have googled these questions and have found very interesting answers on StackOverflow but anyway can't get the whole picture and the whole process remains a mystery for me. It would be fantastic if someone can explain the whole picture in terms of django or pyramid with uwsgi or whatever webserver. Sorry for asking kind of dumb questions, but they really torment me every night and I am looking forward to your help:)
What exactly happens on the computer when multiple requests come to the webserver serving a django or pyramid application?
0.379949
0
0
1,964
38,767,616
2016-08-04T12:40:00.000
3
1
0
0
python,django,multithreading,webserver,pyramid
38,767,725
2
false
1
0
The power and weakness of webservers is that they are in principle stateless. This enables them to be massively parallel. So indeed for each page request a different thread may be spawned. Whether or not this indeed happens depends on the configuration settings of your webserver. There's also a cost to spawning many threads, so if possible threads are reused from a thread pool. Almost all serious webservers have a page cache, so if the same page is requested multiple times, it can be retrieved from cache. In addition, browsers do their own caching. A webserver has to be clever about what to cache and what not. Static pages aren't a big problem, although they may be replaced, in which case it is quite confusing to still get the old page served because of the cache. One way to defeat the cache is by adding (dummy) parameters to the page request. The statelessness of the web was initially welcomed as a necessity to achieve scalability, where webpages of busy sites could even be served concurrently from different servers at nearby or remote locations. However the trend is to have stateful apps. State can be maintained on the browser, easing the burden on the server. If it's maintained on the server it requires the server to know 'who's talking'. One way to do this is saving and recognizing cookies (small identifiable bits of data) on the client. For databases the story is a bit different. As soon as anything gets stored that relates to a particular user, the application is in principle stateful. While there's no conceptual difference between retaining state on disk and in RAM memory, traditionally statefulness was left to the database, which in turn used thread pools and load balancing to do its job efficiently. With the advent of very large internet shops like Amazon and Google, mandatory disk access to achieve statefulness created a performance problem. The answer was in-memory databases. While they may be accessed traditionally using e.g. SQL, they offer much more flexibility in the way data is stored conceptually. A type of database that enjoys growing popularity is the persistent object store. With this database, while the distinction can still be made formally, the boundary between webserver and database is blurred. Both have their data in RAM (but can swap to disk if needed), and both work with objects rather than flat records as in SQL tables. These objects can be interconnected in complex ways. In short there's an explosion of innovative storage / thread pooling / caching / persistence / redundancy / synchronisation technology, driving what has become popularly known as 'the cloud'.
2
7
0
I am having a hard time trying to figure out the big picture of the handling of multiple requests by the uwsgi server with django or pyramid application. My understanding at the moment is this: When multiple http requests are sent to uwsgi server concurrently, the server creates a separate processes or threads (copies of itself) for every request (or assigns to them the request) and every process/thread loads the webapplication's code (say django or pyramid) into computers memory and executes it and returns the response. In between every copy of the code can access the session, cache or database. There is a separate database server usually and it can also handle concurrent requests to the database. So here some questions I am fighting with. Is my above understanding correct or not? Are the copies of code interact with each other somehow or are they wholly separated from each other? What about the session or cache? Are they shared between them or are they local to each copy? How are they created: by the webserver or by copies of python code? How are responses returned to the requesters: by each process concurrently or are they put to some kind of queue and sent synchroniously? I have googled these questions and have found very interesting answers on StackOverflow but anyway can't get the whole picture and the whole process remains a mystery for me. It would be fantastic if someone can explain the whole picture in terms of django or pyramid with uwsgi or whatever webserver. Sorry for asking kind of dumb questions, but they really torment me every night and I am looking forward to your help:)
What exactly happens on the computer when multiple requests come to the webserver serving a django or pyramid application?
0.291313
0
0
1,964
38,772,455
2016-08-04T16:10:00.000
1
0
1
1
python,dask
38,772,791
1
true
0
0
The workers themselves are just Python processes, so you could do tricks with globals(). However, it is probably cleaner to emit values and pass these between tasks. Dask retains the right to rerun functions and run them on different machines, so depending on global state or worker-specific state can easily get you into trouble.
1
2
0
Is there a way with dask to have a variable that can be retrieved from one task to another. I mean a variable that I could lock in the worker and then retrieve in the same worker when i execute another task.
Dask worker persistent variables
1.2
0
0
340
38,774,748
2016-08-04T18:23:00.000
0
0
0
0
python,opencv,camera,kivy,motion-detection
38,774,932
1
false
0
1
OpenCV is a computer vision framework (hence the "cv") which can interact with device cameras. Kivy is a cross-platform development tool which can also interact with device cameras. It makes sense that there are good motion detection tutorials for OpenCV but not for the Kivy camera, since that isn't really what Kivy is for.
1
0
1
What is the difference between Kivy Camera and opencv ? I am asking this because in Kivy Camera the image gets adjusted according to frame size but in opencv this does not happen. Also I am not able to do motion detection in kivy camera whereas I found a great tutorial for motion detection on opencv. If someone can clarify the difference it would be appreciated ! Thanks :)
Difference between Kivy camera and opencv camera
0
0
0
386
38,776,447
2016-08-04T20:02:00.000
5
0
1
0
python,git
38,776,587
2
true
0
0
Don't do that! Suppose that git stash save saves nothing, but there are already some items in the stash. Then, when you're all done, you pop the most recent stash, which is not one you created. What did you just do to the user? One way to do this in shell script code is to check the result of git rev-parse refs/stash before and after git stash save. If it changes (from failure to something, or from something to something else), you have created a new stash, which you can then pop when you are done. More recent versions of Git have git stash create, which creates the commit pair as usual but does not put it into the refs/stash reference. If there is nothing to save, git stash create does nothing and outputs nothing. This is a better way to deal with the problem, but it is Git-version-dependent. (A subprocess sketch of the rev-parse check follows this record.)
2
2
0
I am creating a post-commit script in Python and calling git commands using subprocess. In my script I want to stash all changes before I run some commands and then pop them back. The problem is that if there was nothing to stash, stash pop returns a none-zero error code resulting in an exception in subprocess.check_output(). I know how I can ignore the error return code, but I don't want to do it this way. So I have been thinking. Is there any way to get the number of items currently in stash? I know there is a command 'git stash list', but is there something more suited for my needs or some easy and safe way to parse the output of git stash list? Also appreciate other approaches to solve this problem.
Only call 'git stash pop' if there is anything to pop
1.2
0
0
441
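Since the question is a Python post-commit script, here is a hedged translation of the rev-parse check from the answer above into subprocess calls; it assumes Python 3.3+ (for subprocess.DEVNULL) and that git is on PATH.

    import subprocess

    def stash_ref():
        # Current refs/stash commit, or None when the stash is empty.
        try:
            return subprocess.check_output(["git", "rev-parse", "refs/stash"],
                                           stderr=subprocess.DEVNULL).strip()
        except subprocess.CalledProcessError:
            return None

    before = stash_ref()
    subprocess.check_output(["git", "stash", "save"])
    stashed_something = stash_ref() != before

    # ... run the post-commit work here ...

    if stashed_something:
        subprocess.check_output(["git", "stash", "pop"])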
38,776,447
2016-08-04T20:02:00.000
2
0
1
0
python,git
38,776,533
2
false
0
0
You can simply try calling git stash show stash@{0}. If this returns successfully, there is something stashed.
2
2
0
I am creating a post-commit script in Python and calling git commands using subprocess. In my script I want to stash all changes before I run some commands and then pop them back. The problem is that if there was nothing to stash, stash pop returns a none-zero error code resulting in an exception in subprocess.check_output(). I know how I can ignore the error return code, but I don't want to do it this way. So I have been thinking. Is there any way to get the number of items currently in stash? I know there is a command 'git stash list', but is there something more suited for my needs or some easy and safe way to parse the output of git stash list? Also appreciate other approaches to solve this problem.
Only call 'git stash pop' if there is anything to pop
0.197375
0
0
441
38,778,079
2016-08-04T21:57:00.000
0
0
0
1
python,linux,shell,cmd
38,778,425
1
false
0
0
You can't, but the default command prompt dimensions are 677, 343 - the height being 677 and the width being 343. Hope this helps.
1
2
0
I'm working on a Python shell script that is supposed to fill a percentage of the user's screen. The shell's width, however, is calculated in characters instead of pixels, and I find it difficult to compare them to the screen resolution (which is obviously in pixels). How can I effectively calculate the width in characters with only the screen pixels while still being able to support both Windows and Linux? For the sake of the question, let's assume none of the users have changed their shell settings from the default ones.
How to calculate the width of the Windows CMD shell in pixels?
0
0
0
206
38,780,223
2016-08-05T02:33:00.000
0
0
1
1
python,linux,windows,matlab,gurobi
38,780,526
2
false
0
0
Yes, you can write Gurobi Python code on one system, then copy it and run it on another. You can go from Windows to Linux, Mac to Windows, etc. Alternately, if you have Gurobi Compute Server, your Windows computer can be a client of your Linux server.
1
1
0
I have a huge MILP in Matlab, which I want to re-program in Gurobi using python language, on a Windows desktop. But after that I want to run it on a super computer which has a Linux os. I know python is cross-platform. Does this mean anything I create in Gurobi on Windows will run on Linux too? If this question is dumb I'm sorry, I just want to know for sure.
Can a LP created on a Windows platform be run on a Linux platform?
0
0
0
51
38,782,969
2016-08-05T06:53:00.000
1
0
1
0
python,linux,matplotlib,build,fedora
38,783,475
1
false
0
0
I strongly recommend to use virtualenv and build your package inside. Is it really necessary to install via setup.py? If not, you can consider using pip to install your package inside virtualenv.
1
1
0
I need to build a python module from source. It is just my second build and I'm a bit confused regarding the interaction between built packages and binaries installed through package manager. Do I need to uninstall the binary first? If I don't need to Will it overwrite the installed version or will both be available? If it will not overwrite how can I import the built version into python? Thank you all! p.s: If it is case sensitive I'm on fedora 24 and the package is matplotlib which is installed through a setup.py.
building a package from source whose binary is already installed
0.197375
0
0
64
38,787,989
2016-08-05T11:20:00.000
2
0
1
0
python,python-3.x,concurrency,python-asyncio,shared-resource
38,791,950
2
false
0
0
There is a lot of info missing from your question. Is your app threaded? If yes, then you have to wrap your list in a threading.Lock. Do you switch context (e.g. use await) between writes to the list in the request handler? If yes, then you have to wrap your list in an asyncio.Lock. Do you do multiprocessing? If yes, then you have to use multiprocessing.Lock. Is your app divided across multiple machines? Then you have to use some external shared database (e.g. Redis). If the answer to all of those questions is no, then you don't have to do anything, since a single-threaded async app cannot update a shared resource in parallel.
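For the single-process asyncio case mentioned above, a rough sketch of guarding the shared list with an asyncio.Lock might look like this (the handler name and list name are illustrative, not from the question):

    import asyncio

    clients = []                     # shared list of connected clients
    clients_lock = asyncio.Lock()

    async def handle_connection(client_data):
        # Only needed if an await happens between reads/writes of the list;
        # the lock serialises those critical sections across tasks.
        async with clients_lock:
            clients.append(client_data)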
1
7
0
I've got a network application written in Python 3.5 which takes advantage of Python's asyncio, which concurrently handles each incoming connection. On every concurrent connection, I want to store the connected client's data in a list. I'm worried that if two clients connect at the same time (which is a possibility) then both tasks will attempt to write to the list at the same time, which will surely raise an issue. How would I solve this?
Python3 Asyncio shared resources between concurrent tasks
0.197375
0
1
7,818
38,788,955
2016-08-05T12:11:00.000
1
0
1
0
python
38,789,203
2
false
0
0
block if necessary until an item is available This simply means the queue is empty when you make the request, and it will block until you add an item to the queue, unless you pass the argument block=False or set some timeout. immediately available This means there is some item on the queue when you make the request, and it will be returned immediately.
2
0
0
That is the Official Docs about multiprocessing.Queue.get get([block[, timeout]]) Remove and return an item from the queue. If optional args block is True (the default) and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Queue.Empty exception if no item was available within that time. Otherwise (block is False), return an item if one is immediately available, else raise the Queue.Empty exception (timeout is ignored in that case). The question is what is the difference between available and immediately available Thanks in advance.
python multiprocessing.Queue.put/get's block parameter
0.099668
0
0
2,240
38,788,955
2016-08-05T12:11:00.000
1
0
1
0
python
38,789,231
2
false
0
0
In the first case where block=True is set, "available" means when an item is present on the queue and ready to be removed by Queue.get(). The point is that the thread/process will block until there is an item ready to be removed from the queue. In the second case, block=False so the calling thread will not block if there is no item in the queue (no item is "immediately available" on the queue). Instead Queue.get() will raise Queue.Empty to signify that there is nothing on the queue to read. Your application needs to handle that exception, possibly by performing other tasks and then trying again later.
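A small sketch illustrating the two behaviours described above (names and values are illustrative):

    from multiprocessing import Queue
    import queue  # provides the Empty exception in Python 3

    q = Queue()

    # Non-blocking: raises queue.Empty immediately because nothing is available.
    try:
        item = q.get(block=False)
    except queue.Empty:
        item = None  # nothing was immediately available

    q.put("hello")

    # Blocking with a timeout: returns as soon as an item is available,
    # or raises queue.Empty after 2 seconds if nothing arrives.
    item = q.get(block=True, timeout=2)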
2
0
0
That is the Official Docs about multiprocessing.Queue.get get([block[, timeout]]) Remove and return an item from the queue. If optional args block is True (the default) and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Queue.Empty exception if no item was available within that time. Otherwise (block is False), return an item if one is immediately available, else raise the Queue.Empty exception (timeout is ignored in that case). The question is what is the difference between available and immediately available Thanks in advance.
python multiprocessing.Queue.put/get's block parameter
0.099668
0
0
2,240
38,789,873
2016-08-05T12:58:00.000
2
0
1
0
python
38,790,377
8
false
0
0
The first requirement is to decide exactly what you mean by "number of digits." For example, -2.1352 contains ... how many digits? One? Five? Six? Seven? An argument could be made in favor of each of these. Then, in the case of floating point numbers, there's the question of rounding. Float-binary is base-two which must be converted to base-ten at some number of digits' (decimal) precision. Is that number fixed? Would -2.3 (two digits? one? three?) be displayed as -2.3000 hence five digits (four? six?). A "code golf" exercise like this can be tackled in any number of ways. Step-one is to hammer out exactly what you mean in your statement of the problem to be coded.
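If the intended meaning is simply "length of the decimal representation as written", one straightforward sketch (treating the sign and any decimal point as characters, which is exactly the kind of choice the answer says you must pin down first) would be:

    def max_digit_count(numbers):
        # len(str(n)) counts every character, including '-' and '.',
        # so decide first whether that matches your definition of "digit".
        return max(len(str(n)) for n in numbers)

    print(max_digit_count([3, 14, 24, 6, 157, 132, 12]))  # -> 3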
1
2
0
For example, if the list contains the numbers 3, 14, 24, 6, 157, 132, 12, it should give the maximum number of digits as 3.
How to find the maximum no of digits in the list in python
0.049958
0
0
776
38,793,001
2016-08-05T15:35:00.000
0
0
1
0
python,algorithm,list,iterator,traversal
38,793,437
2
false
0
0
I would just create a dictionary for each list (could be dict1 and dict2), and when you iterate through the first list, you set both dict1[e] = e' and dict2[e'] = e. Then, when iterating through L2, you just check first that the current e isn't in dict2. Could also add this check to the first iteration in L1 if there are duplicate e values. Hope that makes sense - writing this on my phone.
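A sketch along the lines of this idea, using a set of matched indices instead of dictionaries so the elements do not need to be hashable (the function and variable names are illustrative):

    def both_directions_hold(L1, L2, p):
        matched_in_L2 = set()            # indices of L2 already known to have a partner
        for e1 in L1:
            found = False
            for j, e2 in enumerate(L2):
                if p(e1, e2):
                    matched_in_L2.add(j)  # by symmetry, p(e2, e1) also holds
                    found = True
                    break
            if not found:
                return False
        # Second direction: only the L2 elements not seen above still need checking.
        for j, e2 in enumerate(L2):
            if j not in matched_in_L2:
                if not any(p(e2, e1) for e1 in L1):
                    return False
        return True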
1
0
0
Ok, not sure if that was the best title, but I have two python lists. L1 and L2 which both have elements of type T and do not have the same length. I have a function p(T,T) which is a predicate checking a property about two elements of type T. I would like to check that for all elements e in L1, p(e,e') holds, where e' is SOME element in L2. so basically for each element in L1 I go over the second list, and check if the predicate holds for any of the elements. But I also want to check the same thing for the other list. p(T,T) is symmetric. So if p(e,e') then p(e',e). I do not want to do the same thing twice because of that symmetry. I need to somehow record that if I see p(e,e') then I know p(e',e) and not have to check again for the second list. What is the best way to do this in python? I thought about having another field for each element e1 in each list, telling us whether p(e1, e2) holds, where e2 is a member of the other list. But I think that requires copying both of those lists so I don't mutate them. Is there any good way to do this?
For two lists, l1 and l2, how to check that for all e1 ∈ l1, ∃p(e1,e2) where e2 is some element in l2, efficiently in python?
0
0
0
211
38,794,622
2016-08-05T17:14:00.000
0
0
0
0
python,numpy,array-broadcasting
38,794,707
3
false
0
0
Use x.fill(1). Make sure to return it properly as fill doesn't return a new variable, it modifies x
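A short sketch of the two options (fill in place, or build a matching array of ones), assuming NumPy is imported as usual:

    import numpy as np

    A = np.arange(6, dtype=float).reshape(2, 3)

    # Option 1: modify in place; note fill() returns None, so return the array itself.
    def f_inplace(x):
        x.fill(1)
        return x

    # Option 2: leave the input untouched and return a new array of ones
    # with the same shape and dtype.
    def f_const(x):
        return np.ones_like(x)

    print(f_const(A).shape)   # (2, 3)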
1
0
1
I have a NumPy array A with shape (m,n) and want to run all the elements through some function f. For a non-constant function such as for example f(x) = x or f(x) = x**2 broadcasting works perfectly fine and returns the expected result. For f(x) = 1, applying the function to my array A however just returns the scalar 1. Is there a way to force broadcasting to keep the shape, i.e. in this case to return an array of 1s?
How to get constant function to keep shape in NumPy
0
0
0
1,182
38,794,937
2016-08-05T17:35:00.000
0
0
1
1
python,windows,bash,windows-subsystem-for-linux
38,797,912
3
false
0
0
You have at least four options: Specify the complete absolute path to the python executable you want to use. Define an alias in your .bashrc file Modify the PATH variable in your .bashrc file to include the location of the python version you wish to use. Create a symlink in a directory which is already in your PATH.
2
11
0
I am using Windows 10 and have Python installed. The new update brought bash to windows, but when I call python from inside bash, it refers to the Python installation which came with the bash, not to my Python installed on Windows. So, for example, I can't use the modules which I have already installed on Windows and would have to install them separately on the bash installation. How can I (and can I?) make bash point to my original Windows Python installation? I see that in /usr/bin I have a lot of links with "python" inside their name, but I am unsure which ones to change, and if changing them to Windows directories would even work because of different executable formats.
Pointing bash to a python installed on windows
0
0
0
7,716
38,794,937
2016-08-05T17:35:00.000
5
0
1
1
python,windows,bash,windows-subsystem-for-linux
40,900,477
3
true
0
0
As of Windows 10 Insider build #14951, you can now invoke Windows executables from within Bash. You can do this by explicitly calling the absolute path to an executable (e.g. c:\Windows\System32\notepad.exe), or by adding the executable's path to the bash path (if it isn't already), and just calling, for example, notepad.exe. Note: Be sure to append the .exe to the name of the executable - this is how Linux knows that you're invoking something foreign and routes the invocation request to the registered handler - WSL in this case. So, in your case, if you've installed Python 2.7 on Windows at C:\, you might invoke it using a command like this from within bash: $ /mnt/c/Python2.7/bin/python.exe (or similar - check you have specified each folder/filename case correctly, etc.) HTH.
2
11
0
I am using Windows 10 and have Python installed. The new update brought bash to windows, but when I call python from inside bash, it refers to the Python installation which came with the bash, not to my Python installed on Windows. So, for example, I can't use the modules which I have already installed on Windows and would have to install them separately on the bash installation. How can I (and can I?) make bash point to my original Windows Python installation? I see that in /usr/bin I have a lot of links with "python" inside their name, but I am unsure which ones to change, and if changing them to Windows directories would even work because of different executable formats.
Pointing bash to a python installed on windows
1.2
0
0
7,716
38,795,545
2016-08-05T18:15:00.000
0
0
0
0
java,python,performance,orientdb
38,802,276
1
false
1
0
AFAIK on remote connection (with a standalone OrientDB server) performance would be the same. The great advantage of using the Java native driver is the option to go embedded. If your deployment scenario allows it, you can avoid the standalone server and use OrientDB embedded into your Java application, avoiding network overhead.
1
0
0
I want to develop a project that needs a noSQL database. After searching a lot, I chose OrientDB. I want to make a REST API that can connect to OrientDB. Firstly, I wanted to use Flask to develop it, but I don't know whether it's better to use the Java native driver or the Python binary driver to connect with the database. Does anyone have performance results comparing these drivers?
Performance between Python and Java drivers with OrientDB
0
1
0
127
38,795,912
2016-08-05T18:43:00.000
2
0
0
0
python,scikit-learn,cluster-analysis,k-means
38,803,263
1
false
0
0
It is not a good idea to do this during optimization, because it changes the optimization procedure substantially. It will essentially reset the whole optimization. There are strategies such as bisecting k-means that try to learn the value of k during clustering, but they are a bit more tricky than increasing k by one - they decide upon one particular cluster to split, and try to choose good initial centroids for this cluster to keep things somewhat stable. Furthermore, increasing k will not necessarily improve Silhouette. It will trivially improve SSQ, so you cannot use SSQ as a heuristic for choosing k, either. Last but not least, computing the Silhouette is O(n^2). It is too expensive to run often. If you have large enough amount of data to require MiniBatchKMeans (which really is only for massive data), then you clearly cannot afford to compute Silhouette at all.
1
0
1
I am attempting to use MiniBatchKMeans to stream NLP data in and cluster it, but have no way of determining how many clusters I need. What I would like to do is periodically take the silhouette score and if it drops below a certain threshold, increase the number of centroids. But as far as I can tell, n_clusters is set when you initialize the clusterer and can't be changed without restarting. Am I wrong here? Is there another way to approach this problem that would avoid this issue?
Is it possible to increase the number of centroids in KMeans during fitting?
0.379949
0
0
77
38,796,252
2016-08-05T19:08:00.000
0
0
0
0
python,sockets,networking,tun
38,833,280
1
false
0
0
If I am understanding your problem, you should be able to write an application that connects to the tun device and also maintains another network socket. You will need some sort of multiplexing such as epoll or select. But, basically, whenever you see data on the tun interface, you can receive the data into a buffer and then provide this buffer (with the correct number of received octets) to the send call of the other socket. Typically you use such a setup when you insert some custom header or something to e.g., implement a custom VPN solution.
1
0
0
I want to connect a Tun to a socket so that whatever data is stored in the Tun file will then end up being pushed out to a socket which will receive the data. I am struggling with the higher level conceptual understanding of how I am supposed to connect the socket and the Tun. Does the Tun get a dedicated socket that then communicates with another socket (the receive socket)? Or does the Tun directly communicate with the receive socket? Or am I way off all together? Thanks!
Connecting a Tun to a socket
0
0
1
420
38,796,441
2016-08-05T19:20:00.000
1
0
1
1
python,wing-ide,python-packaging
38,796,484
2
false
0
0
pip install hypothesis Assuming you have pip. If you want to install it from the downloaded package just open command prompt and cd to the directory where you downloaded it and do python setup.py install
1
0
0
I'm using Wing IDE; how do I install the hypothesis Python package on my computer? I have already downloaded the zip file. Do I use the command prompt to install it, or is there an option in Wing IDE to do it?
how to install hypothesis Python package?
0.099668
0
0
1,175
38,803,437
2016-08-06T10:51:00.000
0
0
0
0
python,user-interface,tkinter,py2app
39,405,027
3
false
0
1
I actually figured it out: a bit of a stupid mistake, but since I'm using python 3.x I have to type in python3 before doing it.
1
0
0
I am building a GUI tkinter python3 application and attempting to compile it with py2app. For some reason when I try to launch the .app bundle in the dist folder it gives me this error: A main script could not be located in the Resources folder I was wondering why it is doing this, as it is rather frustrating, and I cannot find anything about it anywhere. I copied my .py file into the resources folder (Networking.py). Previous to this error I also found an error in the Info.plist. In the key where it states the runtime executable, I found it was trying to get python2.7, which I have updated and am no longer using. I changed it to my current version, the path for which looks like this: /Library/Frameworks/Python.framework/Versions/3.6/Python It may be worth noting that it had a strange path previously, which did not look like a proper path to me. It was @executable_path/../Frameworks/Python.framework/Versions/2.7/Python. I removed this completely... Was this wrong? I have no idea about anything about XML, which is what it seemed to be... Also when compiling this happened: error: [Errno 1] Operation not permitted: '/Users/Ember/dist/Networking.app/Contents/MacOS/Networking' Any help would be highly appreciated! Thanks! EDIT I actually figured it out: a bit of a stupid mistake, but since I'm using python 3.x I have to type in python3 before doing it.
Py2app: A main script could not be located in the Resources folder
0
0
0
1,292
38,803,824
2016-08-06T11:34:00.000
0
0
1
0
java,python,string,challenge-response
38,963,429
1
false
0
0
I am not sure, but I would avoid matching the first and last character of the chunk, and replace all the others.
1
0
0
I am currently passing 4 of the 5 hidden test cases for this challenge and would like some input. Quick problem description: You are given two input strings, String chunk and String word. The string "word" has been inserted into "chunk" some number of times. The task is to find the shortest string possible when all instances of "word" have been removed from "chunk". Keep in mind that during removal, more instances of "word" might be created in "chunk". "word" can also be inserted anywhere, including between "word" instances. If there is more than one shortest possible string after removal, return the one that is lexicographically the earliest. This is easier understood with examples: Inputs: (string) chunk = "lololololo" (string) word = "lol" Output: (string) "looo" (since "looo" is earlier than "oolo") Inputs: (string) chunk = "goodgooogoogfogoood" (string) word = "goo" Output: (string) "dogfood" Right now I am iterating forwards then backwards, removing all instances of word and then comparing the two results of the two iterations. Is there a case I am overlooking? Is it possible there is a case where you have to remove from the middle first or something along those lines? Any insight is appreciated.
Google Challenge Dilemma, Insights into possible errors?
0
0
0
47
38,807,068
2016-08-06T17:41:00.000
0
0
1
0
python,qt,python-3.x,pyqt5
40,221,119
1
false
0
1
Had the exact same issue. It looks like Eric wants pyuic5.bat (somewhere in the path). I created such a batch file with the following contents, and it worked: @"pyuic5.exe" %1 %2 %3 %4 %5 %6 %7 %8 %9 PS: In my setup these files are both located in the folder C:\Python35-32\Scripts
1
0
0
I want to compile a form designed by Qt Designer in eric6, but it shows "Could not start pyuic5, Ensure that it is in the search path." However, the path of pyuic5.exe is in the system PATH, and pyuic5.exe can also be run by typing pyuic5 in the cmd of Windows 7. The environment is python3.5 + qt5.7 + pyqt5.7 + eric6. Why can't I compile the form in eric6? How can I fix the error?
Compile form in eric6, but show "Could not start pyuic5, Ensure that it is in the search path."
0
0
0
974
38,807,772
2016-08-06T18:59:00.000
2
1
1
0
windows,python-2.7,python-module
38,808,033
2
false
0
0
I suppose you mean "copy the python installation from one system to another" (else the answer is: put your modules on a USB key and copy them to the other system). the best way The best way of course would be to install Python properly on the other system using setup. But as you said, all dependencies/external libraries that you could easily get using pip for instance would have to be re-done. Nothing impossible with a small batch script, even if you don't have internet, but you would have to get hold of all the .whl files. the full treatment, portable-style But if you cannot you can create a "portable" version of python like this: zip the contents of C:\python27 to an USB key copy all python DLLS: copy C:\windows\system32\py*DLL K: (if K is your usb drive) unzip the contents of the archive somewhere on the second machine add the DLLs directly in the python27 directory. (those DLLs were installed in the windows system in previous Python versions, now it's even simpler since they are natively installed in the python directory) The advantage of this method is that it can be automated to be performed on several machines. There are some disadvantages too: python is not seen as "installed" in the registry, so no "uninstall" is proposed. It's a portable install associations with .py and .pyw are not done. But you can do it manually by altering some registry keys. another method, better You can have best of both worlds like this: perform a basic install of python on the second machine overwrite the install with the zip file => you get the registered install + the associations + the PATH... I would recommend that last method. Last partial method, maybe best matching your question Try copying the Lib directory only. It's where the libraries are installed. I'm not 100% sure but it worked for me when I wanted to put wx on a python install lacking wx. Of course you will copy over already existing files, but they are the same so no problem. I let other people comment if this is acceptable or not. I'm not sure of all the installation mechanism, maybe that will fail in some particular case.
2
13
0
Is it possible to copy all of the python modules from one Windows computer to another computer? They are both running the same version of Python 2.7.12. The reason for doing so is that I have internet access on one of them, and manually installing modules on the other requires too much time because of dependencies.
Copy python modules from one system to another
0.197375
0
0
25,254
38,807,772
2016-08-06T18:59:00.000
0
1
1
0
windows,python-2.7,python-module
59,914,406
2
false
0
0
In my case, copy-pasting the python installation didn't do the job. You need to check the "C:\Users\\AppData\Roaming\Python*" folder. You may find installed python modules there. Copying and pasting these into your source folder will add these modules to your python.
2
13
0
Is it possible to copy all of the python modules from one Windows computer to another computer? They are both running the same version of Python 2.7.12. The reason for doing so is that I have internet access on one of them, and manually installing modules on the other requires too much time because of dependencies.
Copy python modules from one system to another
0
0
0
25,254
38,808,229
2016-08-06T19:58:00.000
0
0
1
0
python,python-3.x,dictionary
38,808,423
2
false
0
0
Assuming the value is not in the dictionary when passed to the callback, dict.setdefault is not really the problem - this operation is one of many available for changing a python dictionary. Specifically, it ensures SOMETHING is stored for the given key, which can be done directly anyway with an indexed assignment. As long as your code and the invoker are both maintaining a reference to the same dictionary, you have no real choice but to trust the invoker (and any all other reference holders to this dictionary). The mutability of the dictionary is the problem, so the possible solutions orient around the leeway in the design. I can only think of: copy the dictionary when it is passed into the callback, and/or change the return value to a copy or a (readonly) view of the dictionary
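If the goal is simply to drop the key that setdefault added before returning the dictionary, a tiny sketch could be (here result stands for the dictionary passed to the callback, using the key from the question):

    # dict.pop with a default removes the key if present and
    # does not raise KeyError if it is absent.
    result.pop("Server", None)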
1
2
0
I currently have an object passed to my callback which is a dictionary and which I have to return. The invoker called obj.setdefault("Server", "Something") on the dictionary so that it has a default value even if the key/value pair does not exist. How can I revert/remove that setdefault (remove the default value)? I mean I simply don't want that key/value pair in the dict, and it seems that it doesn't have complications if the key doesn't exist, but it is always added because of the default.
Dictionary: Revert/remove setdefault
0
0
0
304
38,808,690
2016-08-06T21:03:00.000
4
0
1
1
python,debugging,intellij-idea,pycharm,remote-debugging
55,854,571
4
false
0
0
I just contacted JetBrains and was informed that their documentation is out of date and that it's now located in /Users/<user>/Library/Application Support/<product_version>/python.
1
9
0
I can't find pycharm-debug.egg in IntelliJ Idea (2016.2) installation directory, where can I get it?
Where can I get pycharm-debug.egg for Idea?
0.197375
0
0
6,168
38,814,666
2016-08-07T13:35:00.000
4
0
0
1
python,google-app-engine,null,google-cloud-datastore,app-engine-ndb
38,815,611
1
true
1
0
You have to specifically set the value to NULL, otherwise it will not be stored in the Datastore and you see it as missing in the Datastore viewer. This is an important distinction. NULL values can be indexed, so you can retrieve a list of entities where date of birth, for example, is null. On the other hand, if you do not set a date of birth when it is unknown, there is no way to retrieve a list of entities with date of birth property missing - you'll have to iterate over all entities to find them. Another distinction is that NULL values take space in the Datastore, while missing values do not.
1
1
0
I recently updated an entity model to include some extra properties, and noticed something odd. For properties that have never been written, the Datastore query page shows a "—", but for ones that I've explicitly set to None in Python, it shows "null". In SQL, both of those cases would be null. When I query an entity that has both types of unknown properties, they both read as None, which fits with that idea. So why does the NDB datastore viewer differentiate between "never written" and "set to None", if I can't differentiate between them programmatically?
Why does the Google App Engine NDB datastore have both "—" and "null" for unknown data?
1.2
1
0
184
38,818,981
2016-08-07T22:03:00.000
0
1
0
0
python,twitter,tweepy
38,819,049
1
false
0
0
I don't know much about the limits with Tweepy, but you can always write a basic web scraper with urllib and BeautifulSoup to do so. You could take a website such as www.doesfollow.com which accomplishes what you are trying to do. (not sure about request limits with this page, but there are dozens of other websites that do the same thing) This website is interesting because the url is super simple. For example, in order to check if Google and Twitter are "friends" on Twitter, the link is simply www.doesfollow.com/google/twitter. This would make it very easy for you to run through the users as you can just append the users to the url such as 'www.doesfollow.com/'+ user1 + '/' + user2 The results page of doesfollow has this tag if the users are friends on Twitter: <div class="yup">yup</div>, and this tag if the users are not friends on Twitter: <div class="nope">nope</div> So you could parse the page source code and search to find which of those tags exist to determine if the users are friends on Twitter. This might not be the way that you wanted to approach the problem, but it's a possibility. I'm not entirely sure how to approach the graphing part of your question though. I'd have to look into that.
1
3
1
I'm facing a problem like this. I used tweepy to collect over 10,000 tweets, then used an NLTK naive-Bayes classification and filtered the tweets down to over 5,000. I want to generate a graph of user friendship from those classified 5,000 tweets. The problem is that I am able to check it with tweepy.api.show_friendship(), but it takes a very long time and sometimes ends up with endless rate-limit errors. Is there any way I can check the friendships more efficiently?
Most efficient way to check twitter friendship? (over 5000 check)
0
0
1
592
38,819,322
2016-08-07T22:58:00.000
28
0
1
0
ipython-notebook,jupyter-notebook,recovery
44,044,643
12
false
0
0
This is a bit of additional info on the answer by Thuener; I did the following to recover my deleted .ipynb file. The cache is in ~/.cache/chromium/Default/Cache/ (I use chromium). I used grep in binary search mode, grep -a 'import math' (replace the search string with a keyword specific to your code). Edit the binary file in vim (it doesn't open in gedit). The Python ipynb file should start with '{ "cells":' and end with '"nbformat": 4, "nbformat_minor": 2}'. Remove everything outside these start and end points. Rename the file as .ipynb, open it in your jupyter-notebook, and it works.
4
23
0
I have iPython Notebook through Anaconda. I accidentally deleted an important notebook, and I can't seem to find it in trash (I don't think iPy Notebooks go to the trash). Does anyone know how I can recover the notebook? I am using Mac OS X. Thanks!
How to recover deleted iPython Notebooks
1
0
0
68,420
38,819,322
2016-08-07T22:58:00.000
11
0
1
0
ipython-notebook,jupyter-notebook,recovery
59,777,233
12
false
0
0
On linux: I made the same mistake and I finally found the deleted file in the trash: /home/$USER/.local/share/Trash/files
4
23
0
I have iPython Notebook through Anaconda. I accidentally deleted an important notebook, and I can't seem to find it in trash (I don't think iPy Notebooks go to the trash). Does anyone know how I can recover the notebook? I am using Mac OS X. Thanks!
How to recover deleted iPython Notebooks
1
0
0
68,420
38,819,322
2016-08-07T22:58:00.000
0
0
1
0
ipython-notebook,jupyter-notebook,recovery
52,046,886
12
false
0
0
Sadly my file was neither in the checkpoints directory, nor in chromium's cache. Fortunately, I had an ext4 formatted file system and was able to recover my file using extundelete: Figure out the drive your deleted file was stored on: df /your/deleted/file/directory/ Switch to a folder located on another drive you have write access to: cd /your/alternate/location/ It is preferred to run extundelete on an unmounted partition. Thus, if your deleted file wasn't stored on the same drive as your operating system, it's recommended you unmount the partition of the deleted file (though you may want to ensure extundelete is already installed before proceeding): sudo umount /dev/sdax where sdax is the partition returned by your df command earlier. Use extundelete to restore your file: sudo extundelete --restore-file /your/deleted/file/directory/deleted.file /dev/sdax If successful, your recovered file will be located at: /your/alternate/location/your/deleted/file/directory/deleted.file
4
23
0
I have iPython Notebook through Anaconda. I accidentally deleted an important notebook, and I can't seem to find it in trash (I don't think iPy Notebooks go to the trash). Does anyone know how I can recover the notebook? I am using Mac OS X. Thanks!
How to recover deleted iPython Notebooks
0
0
0
68,420
38,819,322
2016-08-07T22:58:00.000
1
0
1
0
ipython-notebook,jupyter-notebook,recovery
53,780,758
12
false
0
0
If you're using windows, it sends it to the recycle bin, thankfully. Clearly, it's a good idea to make checkpoints.
4
23
0
I have iPython Notebook through Anaconda. I accidentally deleted an important notebook, and I can't seem to find it in trash (I don't think iPy Notebooks go to the trash). Does anyone know how I can recover the notebook? I am using Mac OS X. Thanks!
How to recover deleted iPython Notebooks
0.016665
0
0
68,420
38,825,512
2016-08-08T09:24:00.000
0
0
1
0
python,unit-testing,pycharm,nose
41,159,526
1
true
0
0
PyCharm does support nose-testconfig. It has to be configured in Run->Edit configurations->Defaults->Python tests->Nosetests; check the Params check box and set the required info there: "--tc-file=default.ini"
1
0
0
I have most of my test cases programmed using nose-testconfig. I'm trying to configure PyCharm in order to have a better IDE to improve my productivity, but after investigating a lot and even asking JetBrains, I discovered PyCharm does not support the nose-testconfig plugin, so my ini file cannot be loaded before starting my tests and my tests cannot be executed from PyCharm (I tried with an external tool configuration but I could not execute them either; apparently the same problem with the config file --tc-file=/home/iniFiles/884_firefox.ini"). Is there a way to use nose-testconfig with PyCharm? I would like to have a testing tool in PyCharm, but most important to me is the debugging functionality. I have a huge bunch of legacy test cases which I need to be able to use in PyCharm.
PyCharm does not support nose-testconfig and I need an alternative
1.2
0
0
128
38,827,889
2016-08-08T11:19:00.000
89
0
1
1
python-3.x,window
38,842,351
3
true
0
0
Hey, just right-click on the exe file and run as administrator. It worked for me :)
2
25
0
I have Python 2.7 on my Windows 7. The problem is with the Python 3.5 and 3.6 versions only.
Getting the targetdir variable must be provided when invoking this installer while installing python 3.5
1.2
0
0
28,893
38,827,889
2016-08-08T11:19:00.000
2
0
1
1
python-3.x,window
52,044,332
3
false
0
0
There are a few ways to solve the issue: 1. As suggested above, right-click on the exe file and run as administrator. 2. Open a command prompt in administrator mode. Take note of where your setup file is located, e.g. cd C:\Users\ABC\Downloads, then type python-3.7.0.exe TargetDir=C:\Python37 (note: my setup file was python-3.7.0.exe) and follow the steps. 3. Try doing a custom installation and choose a clean folder location. In the custom installation you can tick or un-tick some options; choose only the one or two options which are required and leave the rest. Sometimes this troubleshooting step also helps the install. 4. Go to the properties of the Python setup file, go to the advanced settings and change the owner to administrator; also go to compatibility and tick "Run as administrator".
2
25
0
I have Python 2.7 on my Windows 7. The problem is with the Python 3.5 and 3.6 versions only.
Getting the targetdir variable must be provided when invoking this installer while installing python 3.5
0.132549
0
0
28,893
38,828,829
2016-08-08T12:08:00.000
1
0
1
0
python,pycharm,tensorflow,anaconda,ubuntu-16.04
39,021,770
3
true
0
0
Anaconda defaults doesn't provide tensorflow yet, but conda-forge do, conda install -c conda-forge tensorflow should see you right, though (for others reading!) the installed tensorflow will not work on CentOS < 7 (or other Linux Distros of a similar vintage).
1
2
1
I am using Ubuntu 16.04. I tried to install TensorFlow using Anaconda 2, but it installed an environment inside Ubuntu, so I had to create a virtual environment and then use TensorFlow. Now how can I use both TensorFlow and scikit-learn together in a single environment?
How to use Tensorflow and Sci-Kit Learn together in one environment in PyCharm?
1.2
0
0
2,492
38,830,610
2016-08-08T13:33:00.000
85
0
0
1
python,docker,jupyter-notebook
38,936,551
11
false
0
0
You need to run your notebook on 0.0.0.0: jupyter notebook --ip 0.0.0.0. Running on localhost makes it available only from inside the container.
7
96
0
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
1
0
0
99,109
38,830,610
2016-08-08T13:33:00.000
0
0
0
1
python,docker,jupyter-notebook
48,486,958
11
false
0
0
In the container you can run the following to make it available on your local machine (using your docker machine's ip address). jupyter notebook --ip 0.0.0.0 --allow-root You may not need to provide the --allow-root flag depending on your container's setup.
7
96
0
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
0
0
0
99,109
38,830,610
2016-08-08T13:33:00.000
2
0
0
1
python,docker,jupyter-notebook
51,117,257
11
false
0
0
You can use the command jupyter notebook --allow-root --ip <ip of your container>, or give access on all IPs using the option --ip 0.0.0.0.
7
96
0
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
0.036348
0
0
99,109
38,830,610
2016-08-08T13:33:00.000
0
0
0
1
python,docker,jupyter-notebook
62,352,225
11
false
0
0
docker run -i -t -p 8888:8888 continuumio/anaconda3 /bin/bash -c "/opt/conda/bin/conda install jupyter -y --quiet && mkdir /opt/notebooks && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser --allow-root" I had to add --allow-root to the command and now it's running.
7
96
0
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
0
0
0
99,109
38,830,610
2016-08-08T13:33:00.000
0
0
0
1
python,docker,jupyter-notebook
71,815,877
11
false
0
0
Go into the Docker container and check cat /etc/jupyter/jupyter_notebook_config.py: you should see / add this line: c.NotebookApp.allow_origin = 'https://colab.research.google.com'
7
96
0
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
0
0
0
99,109
38,830,610
2016-08-08T13:33:00.000
65
0
0
1
python,docker,jupyter-notebook
48,986,548
11
false
0
0
Host machine: docker run -it -p 8888:8888 image:version Inside the container: jupyter notebook --ip 0.0.0.0 --no-browser --allow-root On the host machine, access this url: localhost:8888/tree When you are logging in for the first time there will be a link displayed on the terminal to log on with a token.
7
96
0
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
1
0
0
99,109
38,830,610
2016-08-08T13:33:00.000
12
0
0
1
python,docker,jupyter-notebook
46,086,088
11
false
0
0
To get the link to your Jupyter notebook server: After your docker run command, a hyperlink should be automatically generated. It looks something like this: http://localhost:8888/?token=f3a8354eb82c92f5a12399fe1835bf8f31275f917928c8d2 :: /home/jovyan/work If you want to get the link again later down the line, you can type docker exec -it <docker_container_name> jupyter notebook list.
7
96
0
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
1
0
0
99,109
38,832,347
2016-08-08T14:50:00.000
0
0
0
1
python,scapy,pcap
38,832,386
1
true
0
0
If you're running on linux or OS X try running as root or with sudo, otherwise if you're on windows try running as administrator.
1
0
0
I am trying to run pcapy_sniffer.py but I get this: pcapy.PcapError: eth1: You don't have permission to capture on that device (socket: Operation not permitted)
pcapy.PcapError: eth1: You don't have permission to capture on that device
1.2
0
1
1,739
38,833,309
2016-08-08T15:36:00.000
0
1
0
0
python,rabbitmq,amqp,pika
38,833,846
1
true
0
0
RabbitMQ can do this. You only want to read from the queue when you're ready - so spin up a thread that can spawn the external process and watch it, then fetch the next message from the queue when the process is done. You can then have multiple threads running in parallel to manage multiple queues. I'm not sure what you want an ack for? Are you trying to stop RabbitMQ from adding new elements to that queue if it gets too full (because its elements are being processed too slowly/not at all)? There might be a way to do this when you add messages to the queues - before adding an item, check to make sure that the number of messages already in that queue is not "much greater than" the average across all queues?
1
1
0
I'm trying to stay connected to multiple queues in RabbitMQ. Each time I pop a new message from one of these queue, I'd like to spawn an external process. This process will take some time to process the message, and I don't want to start processing another message from that specific queue until the one I popped earlier is completed. If possible, I wouldn't want to keep a process/thread around just to wait on the external process to complete and ack the server. Ideally, I would like to ack in this external process, maybe passing some identifier so that it can connect to RabbitMQ and ack the message. Is it possible to design this system with RabbitMQ? I'm using Python and Pika, if this is relevant to the answer. Thanks!
RabbitMQ: Consuming only one message at a time from multiple queues
1.2
0
0
1,536
38,834,104
2016-08-08T16:19:00.000
0
0
0
0
python,python-3.x,url
38,834,146
1
false
0
0
Find out what the source code of the Proxy Block page is. Use urllib and BeautifulSoup to try and scrape the page and parse the page's source code to see if you can find something unique that can tell you if the site is accessible or not. For example, in my office, when a page is blocked by our proxy the title tag of the source code is <title>Network Error</title>. Something such as that could be an identifier for you. Just a quick idea. So for example you could have the URLs to test in a list and iterate through the list in a loop and try and scrape each site.
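A rough sketch of that idea using urllib and BeautifulSoup; the 'Network Error' title is the example marker from this answer and will differ for other proxies, and the URL list is of course illustrative:

    from urllib.request import urlopen
    from bs4 import BeautifulSoup

    urls = ["http://example.com", "http://example.org"]  # sites to test

    for url in urls:
        try:
            html = urlopen(url, timeout=10).read()
        except Exception:
            print(url, "-> not reachable at all")
            continue
        soup = BeautifulSoup(html, "html.parser")
        title = soup.title.string if soup.title else ""
        blocked = "Network Error" in (title or "")
        print(url, "-> blocked by proxy" if blocked else "-> accessible")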
1
0
0
If our network has a proxy, then some sites cannot be opened. I want to check iteratively how many sites can be accessed through our network.
Python : How to check if a given site is accessible through a proxy network?
0
0
1
243
38,836,411
2016-08-08T18:46:00.000
1
1
0
0
python,robotframework
38,837,243
1
false
0
0
No, it is not possible to dynamically create tests via an argument file. It is, however, possible to write a script that reads a data file and generates a suite of tests before running pybot.
1
0
0
I am learning Robot Framework and creating a test framework. I want to give people an easy way to add more tests. Is it possible to dynamically create tests based on arguments passed in an argument file? I have all my tests in a .rst file and right now users have to populate the test table, but I want to make it simpler so other people actually use the framework.
RobotFramework adding tests through argument file
0.197375
0
0
71
38,836,898
2016-08-08T19:15:00.000
1
0
0
0
python,linux,sockets,unix
38,838,673
1
false
0
0
At the TCP level, the only control you have is how many bytes you pass off to send(), and how often you call it. Once send() has handed over some bytes to the networking stack, it's entirely up to the networking stack how fast (or slow) it wants to send them. Given the above, you can roughly limit your transmission rate by monitoring how many bytes you have sent, and how much time has elapsed since you started sending, and holding off subsequent calls to send() (and/or the number of data bytes your pass to send()) to keep the average rate from going higher than your target rate. If you want any finer control than that, you'll need to use UDP instead of TCP. With UDP you have direct control of exactly when each packet gets sent. (Whereas with TCP it's the networking stack that decides when to send each packet, what will be in the packet, when to resend a dropped packet, etc)
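A minimal sketch of the pacing idea described above, using only the standard library (the socket, chunk size and target rate are illustrative):

    import time

    def send_rate_limited(sock, data, max_bps, chunk_size=4096):
        """Send `data` over `sock`, aiming for at most `max_bps` bytes per second."""
        start = time.time()
        sent = 0
        while sent < len(data):
            chunk = data[sent:sent + chunk_size]
            n = sock.send(chunk)              # may send fewer bytes than requested
            sent += n
            # If the average rate so far exceeds the target, sleep the difference.
            expected = sent / float(max_bps)
            elapsed = time.time() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)

As the answer notes, this only bounds the average rate handed to the kernel; the networking stack still decides the actual packet timing.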
1
0
0
TCP flows by their own nature will grow until they fill the maximum capacity of the links used from src to dst (if all those links are empty). Is there an easy way to limit that ? I want to be able to send TCP flows with a maximum X mbps rate. I thought about just sending X bytes per second using the socket.send() function and then sleeping the rest of the time. However if the link gets congested and the rate gets reduced, once the link gets uncongested again it will need to recover what it could not send previously and the rate will increase.
Limiting TCP sending rate
0.197375
0
1
1,845
38,838,772
2016-08-08T21:26:00.000
0
0
1
0
python,user-interface,exe,pyinstaller
67,105,487
2
false
0
1
The error is very generic, and at this point it may not be possible to find the actual error that is preventing the execution. In order to get the actual error, please exclude the '--windowed' or '--noconsole' parameters when building and then execute with 'pyinstaller filename.py --onefile'. Then on execution it will show the exact error instead of 'Fatal error! Failed to execute script'. Then you can proceed accordingly.
1
0
0
I'm trying to make my python program run on other systems without a python installation. I made use of tkinter to create a GUI. After creating an exe file with pyinstaller, it throws a fatal error "Failed to run script". I've checked my code several times and it works well. I don't know what's wrong.
Fatal error! failed to execute script when creating executable file
0
0
0
6,275
38,839,215
2016-08-08T22:04:00.000
0
1
0
1
python,command-line,emacs
38,909,330
1
true
0
0
People use different tools for different purposes. An important question about the interface into any program is who is the user? You, as a programmer, will use the interpreter to test a program and check for errors. Often times, the user doesn't really need to access the variables inside because they are not interacting with the application/script with an interpreter. For example, with Python web applications, there is usually a main.py script to redirect client HTTP requests to appropriate handlers. These handlers execute a python script automatically when a client requests it. That output is then displayed to the user. In Python web applications, unless you are the developer trying to eliminate a bug in the program, you usually don't care about accessing variables within a file like main.py (in fact, giving the client access to those variables would pose a security issue in some cases). Since you only need the output of a script, you'd execute that script function in command line and display the result to the client. About best practices: again, depends on what you are doing. Using the python interpreter for computation is okay for smaller testing of isolated functions but it doesn't work for larger projects where there are more moving parts in a python script. If you have a python script reaching a few hundred lines, you won't really remember or need to remember variable names. In that case, it's better to execute the script in command-line, since you don't need access to the internal components. You want to create a new script file if you are fashioning that script for a single set of tasks. For example with the handlers example above, the functions in main.py are all geared towards handling HTTP requests. For something like defining x, defining y, and then adding it, you don't really need your own file since you aren't creating a function that you might need in the future and adding two numbers is a built-in method. However, say you have a bunch of functions you've created that aren't available in a built-in method (complicated example: softmax function to reduce K dimension vector to another K dimension vector where every element is a value between 0 and 1 and all the elements sum to 1), you want to capture in a script file and cite that script's procedure later. In that case, you'd create your own script file and cite it in a different python script to execute.
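One small aside that is not in the answer above but may be relevant to the question: CPython's -i flag keeps the interpreter open after a script finishes, so you can still inspect the variables when running from the command line. A tiny illustration (the script name and values are made up):

    # compute.py -- hypothetical example script
    x = 2
    y = 3
    total = x + y
    print(total)

    # Run it as:   python -i compute.py
    # It prints 5, then drops you at a >>> prompt where x, y and total
    # are still accessible, much like the Emacs C-c C-c workflow.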
1
0
0
I am a beginner to python and programming in general. As I am learning python, I am trying to develop a good habit or follow a good practice. So let me first explain what I am currently doing. I use Emacs (prelude) to execute python scripts. The keybinding C-c C-c evaluates the buffer which contains the python script. Then I get a new buffer with a python interpreter with a >>> prompt. In this environment all the variables used in the script are accessible. For example, if x and y were defined in the script, I can do >>> x + y to evaluate it. I see many people (if not most) around me using the command line to execute the python script (i.e., $ python scriptname.py). If I do this, then I return to the shell prompt, and I am not able to access the variables x and y to perform x + y. So I wasn't sure what the advantage of running python scripts using the command line is. Should I just use Emacs as an editor and use the Terminal (I am using Mac) to execute the script? What is a better practice? Thank you!
What is the advantage of running python script using command line?
1.2
0
0
1,721
38,839,269
2016-08-08T22:10:00.000
1
1
0
0
php,python,json,laravel
38,839,396
2
false
0
0
Use python to extract data from the serial ports of the Raspberry Pi, JSON-encode it, and store it in the web directory of your laravel project files. Later, JSON-decode it and present the data on the web end via laravel php. This is all good. That being said, another way is to get the data from python and then make a curl POST request to your php project and collect the data.
2
1
0
I have a Raspberry Pi collecting data from a break beam sensor which I wish to use as part of an already developed Laravel application. I was just wondering what the best way to transfer the data would be. I was thinking of creating a JSON file, uploading it to a directory, then running a cron job hourly to pick up on new files before running them through the Laravel controller to update the database and send emails. I would like to pass the data through the Laravel application rather than sending from Python for management purposes. Could anyone see any issues with my way / know a better way?
What is the best way to send Python generated data to PHP?
0.099668
0
0
93
38,839,269
2016-08-08T22:10:00.000
2
1
0
0
php,python,json,laravel
38,839,403
2
true
0
0
Your approach sounds fine - the only caveat would be that you will not have "real time" data. You rely on the schedule of your cron jobs to sync the data around - of course you could do this every minute if you wanted to, which would minimize most of the effect of that delay. The other option is to expose an API in your Laravel application which can accept the JSON payload from your python script and process it immediately. This approach offers the benefits of real-time processing and less processing overall because it's on demand, but also requires you to properly secure your API endpoint, which you wouldn't need to do with a cron based approach. For the record, I highly recommend using JSON as the data transfer format. Unless you need to implement schema validation (in which case possibly look at XML), using JSON is easy on both the PHP and python side.
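A rough sketch of the on-demand option from the Python side, using the requests library; the endpoint URL, payload fields and token header are placeholders, and the real endpoint would need whatever authentication the Laravel app enforces:

    import requests

    payload = {"sensor": "break_beam", "count": 42, "timestamp": "2016-08-09T01:00:00Z"}

    resp = requests.post(
        "https://example.com/api/sensor-readings",    # hypothetical Laravel route
        json=payload,                                 # sent as application/json
        headers={"Authorization": "Bearer <token>"},  # placeholder auth
        timeout=5,
    )
    resp.raise_for_status()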
2
1
0
I have a Raspberry Pi collecting data from a break beam sensor which I wish to use as part of an already developed Laravel application. I was just wondering what the best way to transfer the data would be. I was thinking of creating a JSON file, uploading it to a directory, then running a cron job hourly to pick up on new files before running them through the Laravel controller to update the database and send emails. I would like to pass the data through the Laravel application rather than sending from Python for management purposes. Could anyone see any issues with my way / know a better way?
What is the best way to send Python generated data to PHP?
1.2
0
0
93
38,840,728
2016-08-09T01:34:00.000
2
0
1
0
python,pickle,python-2.x,python-module
38,840,848
1
true
0
0
Because they didn't code support for it. C level types (and even modules written in Python are implemented with a C level type) require pickle support to be coded explicitly. It's not very easy to determine what should be pickled if a module is allowed to be pickled; importing the same name on the other side would seem simple, but if you're actually trying to pickle the module itself, the worry would be that you want to pickle module state as well. It's even more confusing if the module is a C extension module, where module state may not even be exposed to Python itself, only used internally at the C layer. Given that usually you want specific things from a module, not the whole module (which is usually not referenced as state, just imported at the top level), the benefits of supporting pickling for modules are limited, and the semantics are unclear, they haven't bothered to implement it.
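A quick illustration of the difference described above, using only the standard library:

    import pickle
    import math

    # Functions are pickled by reference (module name + qualified name),
    # so this works and unpickling simply re-imports math.sqrt.
    data = pickle.dumps(math.sqrt)
    print(pickle.loads(data)(9.0))   # 3.0

    # Modules have no such pickle support coded for them, so this raises
    # a TypeError (a "cannot pickle module" style message).
    try:
        pickle.dumps(math)
    except TypeError as exc:
        print(exc)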
1
0
0
I noticed that when my object contains an explicit reference to a module, pickling it will fail because of this. However, if I stick a reference to a function from that module into my object instead, it can be pickled and unpickled successfully. How come Python can pickle functions, but not modules?
Why can functions be pickled, but not modules?
1.2
0
0
112
38,841,720
2016-08-09T03:49:00.000
16
0
0
0
python,django,django-admin,python-3.5
38,841,784
1
true
1
0
While it does not show you what you are typing, it is still taking the input. So just type in the password both times, press enter and it will work even though it does not show up.
1
3
0
I am doing python manage.py createsuperuser in PowerShell and CMD, and I can type when it prompts me for the Username and Email, but when it gets to Password it just won't let me type. It is not freezing though, because when I press enter it re-prompts me for the password... Using Django 1.10 and Windows 10.
Cannot type password. (creating a Django admin superuser)
1.2
0
0
4,366
38,841,865
2016-08-09T04:07:00.000
2
0
1
0
python
38,842,389
3
false
0
0
You can use logging with debug level and once the debugging is completed, change the level to info. So any statements with logger.debug() will not be printed.
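A minimal sketch of that approach:

    import logging

    # While debugging: level=logging.DEBUG shows the debug messages.
    # When done: switch to level=logging.INFO and they silently disappear,
    # with no need to hunt down and delete anything.
    logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
    logger = logging.getLogger(__name__)

    x = 42
    logger.debug("x is currently %r", x)
    logger.info("program finished")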
2
2
0
To python experts: I put lots of print() calls in to check the values of my variables. Once I'm done, I need to delete the print() calls. It is quite time-consuming and prone to human error. I would like to learn how you guys deal with print(). Do you delete them while coding or delete them at the end? Or is there a method to delete them automatically, or do you not use print() to check variable values?
How do you deal with print() once you're done with debugging/coding
0.132549
0
0
306
38,841,865
2016-08-09T04:07:00.000
0
0
1
0
python
38,843,280
3
false
0
0
What I do is put print statements in with a special text marker in the string. I usually use print("XXX", thething). Then I just search for and delete the line with that string. It's also easier to spot in the output.
2
2
0
To python experts: I put lots of print() calls in to check the values of my variables. Once I'm done, I need to delete the print() calls. It is quite time-consuming and prone to human error. I would like to learn how you guys deal with print(). Do you delete them while coding or delete them at the end? Or is there a method to delete them automatically, or do you not use print() to check variable values?
How do you deal with print() once you're done with debugging/coding
0
0
0
306
38,841,875
2016-08-09T04:08:00.000
1
0
1
0
python,python-3.x
38,841,951
4
false
0
0
Then modifying the contents of list "b" won't change list "a" as per what I've read. So, this means its a deep copy. No, it does not. A shallow copy differs from a deep copy in whether contained values are copied or not. In your case, the list is copied, but the two resulting lists will contain the same objects. Adding or removing a value in one list won't affect the other list, but changes to a contained object will be reflected.
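A short example of the distinction:

    a = [1, [10, 20]]
    b = a[:]              # shallow copy

    b.append(3)           # changes only b; a is untouched
    b[1].append(30)       # mutates the inner list both a and b share

    print(a)              # [1, [10, 20, 30]]
    print(b)              # [1, [10, 20, 30], 3]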
1
7
0
Say I have a list a with some values, and I did b = a[:]. Then modifying the contents of list b won't change list a, as per what I've read. So, this means it's a deep copy. But python documentation still refers to this as a shallow copy. Can someone clear this up for me?
Copying a list using a[:] or copy() in python is shallow?
0.049958
0
0
2,887
38,842,666
2016-08-09T05:23:00.000
1
1
1
0
python,postgresql,internationalization,datetime-format,datetimeoffset
38,843,085
1
true
1
0
Keep as much in UTC as possible. Do your timezone conversion at your edges (client display and input processing), but keep anything stored server side in UTC.
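A small sketch of the edge conversion using only the standard library (full timezone handling for names like 'Asia/Kolkata' would need pytz or similar, so only the UTC side is shown):

    from datetime import datetime, timezone

    # Store and exchange timestamps in UTC, e.g. as ISO-8601 strings.
    created_at = datetime.now(timezone.utc)
    wire_value = created_at.isoformat()        # '2016-08-09T05:23:00+00:00' style

    # Parse what a client sent back and keep it timezone-aware.
    parsed = datetime.strptime("2016-08-09T05:23:00+0000", "%Y-%m-%dT%H:%M:%S%z")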
1
1
0
I am building a mobile app and would like to follow best practice for datetime handling. Initially, we launched it in India and set our server, database and app time to IST. Now that we are launching the app in other countries (timezones), how should I store the datetime? Should the server time be set to UTC, with the app displaying times based on the user's timezone? What's the best practice for storing datetimes and exchanging datetime formats between client and server? Should the client send datetimes to the server in UTC, or in its own timezone along with the locale?
Internationalization for datetime
1.2
0
0
224
38,843,404
2016-08-09T06:17:00.000
1
1
0
0
python,pyramid,pylons
38,886,655
2
false
1
0
Here's how I managed my last Pyramid app: I had both a development.ini and a production.ini. I actually had a development.local.ini in addition to the other two - one for local development, one for our "test" system, and one for production. I used git for version control and had a main branch for production deployments. On my prod server I created the virtual environment, etc., then would pull my main branch and run using the production.ini config file. Updates basically involved jumping back into the virtualenv, pulling the latest updates from the repo, then restarting the Pyramid server.
1
2
0
So I have this Python Pyramid-based application, and my development workflow has basically just been to upload changed files directly to the production area. We're coming close to launch, and obviously that's not going to work anymore. I managed to edit the connection strings and development.ini and point the development instance to a secondary database. Now I just have to figure out how to create another copy of the project somewhere where I can work on things and then make the changes live. At first, I thought that I could just make a copy of the project directory somewhere else and run it with different arguments pointing to the new location. That didn't work. Then, I basically set up an entirely new project called myproject-dev. I went through the setup instructions: I used pcreate, and then setup.py develop, and then I copied over my development.ini from my project and carefully edited the various references to myproject-dev instead of myproject. Then I ran initialize_myproject-dev_db /var/www/projects/myproject/development.ini. Finally, I got a nice Pyramid welcome page showing that everything is working correctly. I thought at that point I could just blow out everything in the project directory and copy over the main project files, but then I got that feeling in the pit of my stomach when I noticed that a lot of things weren't working, like static URLs. Apparently, I'm referencing myproject in includes and also static URLs, and who knows where else. I don't think this idea is going to work, so I've given up for now. Can anyone give me an idea of how people go about setting up a development instance for a Python Pyramid project?
Trying to make a development instance for a Python pyramid project
0.099668
1
0
166
38,844,651
2016-08-09T07:31:00.000
1
0
1
0
python,json,openerp,odoo-8,odoo-10
38,847,214
1
false
1
0
Maybe this will help you: Step 1: Create JS that sends a request to /example_link. Step 2: Create a controller that listens on that link with @http.route('/example_link', type="json"). Step 3: Return JSON from that function with return json.dumps(res), where res is a Python dictionary; don't forget to import json. That's all, it's not very hard. Hope I helped you, good luck.
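A rough sketch of the controller side (the route path, auth mode, and payload are assumptions; on Odoo 8 the import is from openerp rather than odoo, and in the Odoo versions I'm familiar with a type='json' route serializes the returned dict itself, so json.dumps is often unnecessary):

from odoo import http        # 'from openerp import http' on Odoo 8

class ExampleController(http.Controller):

    @http.route('/example_link', type='json', auth='user')
    def example_link(self, **kwargs):
        # build a plain dict from your browse records here
        res = {'name': 'example', 'count': 3}
        return res            # the returned dict is sent back as JSON to the JS side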
1
3
0
I'm trying to integrate ReactJS with Odoo, and have successfully created components. Now my problem is that I can't get the JSON via Odoo. The Odoo programmer has to write a special API request to make this happen. This takes more time and leads to plenty of code repetition. I tried many suggestions and none worked. Is there a better way to convert the browse objects that Odoo generates to JSON? Note: I'm entirely new to Python and Odoo, so please forgive any mistakes in the above.
Can I convert an Odoo browse object to JSON
0.197375
0
1
3,015
38,846,395
2016-08-09T09:03:00.000
2
0
1
0
python,installation
38,846,673
1
true
0
0
You can put your example data in the repository in an examples folder next to your project sources and exclude it from the package with prune examples in your manifest file. There is actually no universal, standard advice for this. Do whatever suits your needs.
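A small sketch of what that MANIFEST.in might look like (the package name and file patterns are just placeholders):

include README.rst
recursive-include mypackage/data *
prune examples        # example data stays in the repository only, not in the distributed package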
1
1
0
I'm setting up my first Python package, and I want to install it with some example data so that users can run the code straight away. In case it's relevant, my package is on GitHub and I'm using pip. At the moment my example data is being installed with the rest of the package into site-packages/, by setting include_package_data=True in setup.py and referencing the files I want to include in MANIFEST.in. However, while this makes sense to me for files used by the code as part of its processing, it doesn't seem especially appropriate for example data. What is the best/standard practice for deploying example data with a Python package?
Where to place example data in python package?
1.2
0
0
191