Q_Id
int64
337
49.3M
CreationDate
stringlengths
23
23
Users Score
int64
-42
1.15k
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
Tags
stringlengths
6
105
A_Id
int64
518
72.5M
AnswerCount
int64
1
64
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
6
11.6k
Available Count
int64
1
31
Q_Score
int64
0
6.79k
Data Science and Machine Learning
int64
0
1
Question
stringlengths
15
29k
Title
stringlengths
11
150
Score
float64
-1
1.2
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
8
6.81M
34,703,125
2016-01-10T07:28:00.000
0
1
0
0
python,cron
34,703,928
2
false
0
0
I would suggest that you put the values that were read by GetTableTimes.py and CreateRelayControlConfig.py in a JSON file and always read them in RelayControlMainGH.py. This way your cron jobs will be simple. I'm not sure you need a while True loop anyway, since your cron will run the script every * minute/hour/day... I hope this helps you structure your solution better.
2
0
0
I have a script (RelayControlMainGH.py) that monitors temperature sensors and controls relays. It uses a while True statement with a time.sleep() and runs forever. I also created a script (GetTableTimes.py) that reads 3 database table files, and when they get modified a script (CreateRelayControlConfig.py) re-creates the script (RelayControlMainGH.py). So any time I change those 3 tables in my database, this new config file needs to be made because of the path changes or temp changes or logic used on the relays. What would be a good way to stop the script (RelayControlMainGH.py) from running, allow some time for the new script to be re-created, and start it up again? I tried using cron without the while loop, but the script (RelayControlMainGH.py) will not run. I am sure if I put it in cron with the while loop I will have to find it in the system to start and stop it. What would be the best way to do this? I am using a Raspberry Pi with Raspbian.
start stop python script that gets created dynamically
0
0
0
122
34,704,684
2016-01-10T11:09:00.000
0
1
1
0
python-3.x,pytest
57,849,472
4
false
0
0
I have the same problem, and found three solutions: (1) reload(some_lib); (2) patch the SUT: as the imported method is a key and value in the SUT, you can patch the SUT; for example, if you use f2 of m2 in m1, you can patch m1.f2 instead of m2.f2; (3) import the module and use module.function.
1
16
0
I have a (python3) package that has completely different behaviour depending on how it's init()ed (perhaps not the best design, but rewriting is not an option). The module can only be init()ed once; a second time gives an error. I want to test this package (both behaviours) using py.test. Note: the nature of the package makes the two behaviours mutually exclusive, there is no possible reason to ever want both in a single program. I have several test_xxx.py modules in my test directory. Each module will init the package in the way it needs (using fixtures). Since py.test starts the python interpreter once, running all test-modules in one py.test run fails. Monkey-patching the package to allow a second init() is not something I want to do, since there is internal caching etc. that might result in unexplained behaviour. Is it possible to tell py.test to run each test module in a separate python process (thereby not being influenced by inits in another test-module)? Is there a way to reliably reload a package (including all sub-dependencies, etc.)? Is there another solution (I'm thinking of importing and then unimporting the package in a fixture, but this seems excessive)?
restart python (or reload modules) in py.test tests
0
0
0
8,670
34,705,575
2016-01-10T12:52:00.000
2
0
0
0
python,macos,applescript
34,711,005
2
false
0
0
tell application "Firefox" to activate is the way to do it in AppleScript
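For illustration, a minimal sketch of calling that AppleScript line from Python before each critical step; invoking the macOS osascript tool via subprocess is an assumption about how the scraper would wire this in:

import subprocess

def focus_firefox():
    # run the one-line AppleScript above through the macOS osascript tool
    subprocess.check_call(["osascript", "-e", 'tell application "Firefox" to activate'])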
1
1
0
I'm running OS X 10.11, and I have created a web scraper using Python and Selenium. The scraper uses Firefox as the browser to collect data. The Firefox window must remain active at all critical steps for the scraper to work. When I leave the computer with Firefox as the active window, I often find on returning that the active window focus has changed to something else. Some process is stealing the window focus. Is there a way that I can programmatically tell the OS to activate the Firefox window? If so, I can tell the script to do that before every critical action in the script. Preferably, this is something that I would like to achieve using Python. But launching a secondary AppleScript to do this specific task may also be a solution. Note: at the moment, I'm not looking at rewriting my script to use a headless browser – just to make it work by forcing the active window.
Python: Activate window in OS X
0.197375
0
1
1,426
34,705,917
2016-01-10T13:25:00.000
1
0
1
0
python,r,anaconda,conda
47,464,519
11
false
0
0
Someone suggested a not-so-elegant way around it, but actually it doesn't matter as long as it works fine: install.packages('package','/Users/yourusernamehere/anaconda/lib/R/library'). I spent almost an entire morning looking for an answer to this problem. I was able to install the libraries in RStudio but not in Jupyter Notebook (they have different versions of R). The above solution "almost" worked; it's just that I found the Jupyter Notebook was trying to install into a different directory, and it will report which directory. So I only changed that and it worked like a charm... thanks to Dninhos.
3
68
0
I use an out-of-the-box Anaconda installation to work with Python. Now I have read that it is possible to also "include" the R world within this installation and to use the IR kernel within the Jupyter/Ipython notebook. I found the command to install a number of famous R packages: conda install -c r r-essentials My beginner's question: How do I install R packages that are not included in the R-essential package? For example R packages that are available on CRAN. "pip" works only for PyPI Python packages, doesn't it?
How to install R packages that are not available in "R-essentials"?
0.01818
0
0
88,993
34,705,917
2016-01-10T13:25:00.000
1
0
1
0
python,r,anaconda,conda
70,905,426
11
false
0
0
What worked for me is install.packages("package_name", type="binary"). None of the other answers have worked.
3
68
0
I use an out-of-the-box Anaconda installation to work with Python. Now I have read that it is possible to also "include" the R world within this installation and to use the IR kernel within the Jupyter/Ipython notebook. I found the command to install a number of famous R packages: conda install -c r r-essentials My beginner's question: How do I install R packages that are not included in the R-essential package? For example R packages that are available on CRAN. "pip" works only for PyPI Python packages, doesn't it?
How to install R packages that are not available in "R-essentials"?
0.01818
0
0
88,993
34,705,917
2016-01-10T13:25:00.000
3
0
1
0
python,r,anaconda,conda
38,934,208
11
false
0
0
I had a problem when trying to install a package from GitHub using install_github("user/package") in conda with r-essentials. The errors were multiple and not descriptive. I was able to resolve the problem using these steps: (1) download and unzip the package locally; (2) activate the correct conda environment (if required); (3) run R from the command line; (4) library(devtools) and install('/path/to/unzipped-package'); the command failed due to missing dependencies, but now I know what's missing! (5) run install.packages('missing-package', repos='http://cran.us.r-project.org') for all dependencies; (6) run install('/path/to/unzipped-package') again. Now it should work!
3
68
0
I use an out-of-the-box Anaconda installation to work with Python. Now I have read that it is possible to also "include" the R world within this installation and to use the IR kernel within the Jupyter/Ipython notebook. I found the command to install a number of famous R packages: conda install -c r r-essentials My beginner's question: How do I install R packages that are not included in the R-essential package? For example R packages that are available on CRAN. "pip" works only for PyPI Python packages, doesn't it?
How to install R packages that are not available in "R-essentials"?
0.054491
0
0
88,993
34,706,795
2016-01-10T14:53:00.000
2
1
0
1
python,tesseract,py2exe,pyinstaller
34,722,890
1
false
0
0
Reading the pytesseract docs, I have found the following section: Install google tesseract-ocr from http://code.google.com/p/tesseract-ocr/. You must be able to invoke the tesseract command as "tesseract". If this isn't the case, for example because tesseract isn't in your PATH, you will have to change the "tesseract_cmd" variable at the top of 'tesseract.py'. This means you need to have tesseract installed on your target machine whether or not your script has been turned into an executable. Tesseract is a requirement for your script to work. You will need to ask your users to have tesseract installed, or use an install-wizard tool which will check whether tesseract is installed and, if not, install it for your users. But this is not the task of PyInstaller. PyInstaller only turns your Python script into an executable.
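As a small illustration of the tesseract_cmd override mentioned in the quoted docs (the install path and image file below are hypothetical, not something PyInstaller can bundle for you):

import pytesseract
from PIL import Image

# point pytesseract at an explicitly installed tesseract binary on the target machine
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
print(pytesseract.image_to_string(Image.open("sample.png")))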
1
0
0
I am using a C-based OCR engine known as tesseract with the Python interface library pytesseract to access its core features. Essentially, the library reads the local contents of the installed engine for use in a Python program. However, the library continues to look for the engine when distributed as an executable. How do I instead include the engine, self-contained, in the executable?
Distributed C/C++ Engine with Python
0.379949
0
0
347
34,709,633
2016-01-10T19:08:00.000
1
0
1
0
python,range
34,710,005
5
false
0
0
If you want to be able to compute the size of the sequence based on the parameters to the range() function call, you should use range(0,105,5). Consider the following: len(range(0,6,1)) == 6 == 6/1; len(range(0,6,2)) == 3 == 6/2; len(range(0,6,3)) == 2 == 6/3; len(range(0,6,4)) == 2 == ceil(6/4). By that token, you would have len(range(0,105,5)) == 21 == 105/5.
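A quick sanity check of that length rule (Python 3, where len(range(0, stop, step)) equals ceil(stop / step) for a positive step):

import math

for step in (1, 2, 3, 4, 5):
    assert len(range(0, 6, step)) == math.ceil(6 / step)

assert len(range(0, 105, 5)) == 105 // 5 == 21
assert len(range(0, 101, 5)) == 21   # both spellings produce 0, 5, ..., 100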
1
7
0
If I wanted a list from 0 to 100 in steps of five I could use range(0,105,5), but I could also use range(0,101,5). Honestly, neither of these makes sense to me because excluding the last number seems non-intuitive. That aside, what is the "correct" way to create a list from 0 to 100 in steps of five? And if anyone has the time, in what instance would excluding the last number make code easier to read?
Pythonic way to use range with excluded last number?
0.039979
0
0
2,506
34,710,059
2016-01-10T19:50:00.000
0
1
0
0
java,android,python
34,710,122
1
false
1
1
Instead of running it as one app, what about running the Python script separately from the original script? I believe it would be possible, as Android is in fact a UNIX-based OS. Other readers could give their input on this idea and whether it would work.
1
0
0
I want to develop an app to track people's WhatsApp "last seen" and other stuff, and found out that there are APIs out there to deal with it, but the thing is they are written in Python and are normally run on Linux, I think. I have Java and Android knowledge but not Python, and wonder if there's a way to develop most of the app in Java and get the info I want via calls using these Python APIs, but without having to install a Python interpreter or similar on the device, so the final user just has to download and run the Android app as he would do with any other. I want to know if it would be very hard for someone as inexperienced as me (this is the 2nd and final year of my development degree), for it's what I have in mind for the final project. Thanks in advance.
how to write an Android app in Java which needs to use a Python library?
0
0
0
49
34,710,277
2016-01-10T20:10:00.000
0
0
1
0
python,inheritance,monkeypatching
34,710,599
2
false
0
1
What most GUI toolkits (wxPython, Kivy, PyQt) in Python are doing is the inheritance approach. I guess both approaches should work, but using inheritance will be more familiar to your potential users. What you may also want to look at is a template language (such as kv lang or HTML) to lay out the UI, since it is a modern approach.
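To make the comparison concrete, a rough sketch of the two approaches from the question; NativeComponent and the configuration hook are made-up names, not part of any real toolkit:

import types

class NativeComponent(object):
    def configuration(self):
        return {}

# 1) inheritance: the user overrides the hook methods in a subclass
class MyPanel(NativeComponent):
    def configuration(self):
        return {"title": "My Panel"}

# 2) factory: instantiate the native class and bind the supplied function as a method
def make_component(cls, configuration):
    component = cls()
    component.configuration = types.MethodType(configuration, component)
    return component

panel = make_component(NativeComponent, lambda self: {"title": "My Panel"})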
1
4
0
I'm writing an experimental GUI framework. The GUI is constructed by combining components (similar to widgets). There are a few "native" classes of components. The framework user specializes by providing certain methods that define configuration, bindings, etc. This can be done by extending a native class and overriding its methods, which is fine, but many of the derived classes will be instantiated just once. Alternatively, I could provide a factory function that would take a native class and the specializing methods (functions, really). This function would instantiate the native class and replace the appropriate methods. Any reason to prefer one approach over the other?
Monkey patching vs inheritance and overriding in Python
0
0
0
1,540
34,711,799
2016-01-10T23:01:00.000
0
1
0
1
python,raspberry-pi,executable,sensors
35,046,201
3
true
0
0
Well, still a little puzzled why it happened, but anyway this solved the problem: As a workaround, I copied the contents of "thermostaatgui.py" over the contents of a working script ("mysimpletest.py"), saved it and it runs OK.
2
1
0
I have a python script on a Raspberry Pi reading the temperature and humidity from a sensor. It works fine when started in IDLE, but when I try starting it in a terminal I get the message:sudo: unable to execute .thermostaatgui.py: No such file or directory. The first line in the script is: #! /usr/bin/python, the same as in other scripts that run without problems and the script is made executable with chmod +x. In the script Adafruit_DHT, datetime and time are imported, other scripts that work do the same.
python executing in IDLE, but not in terminal
1.2
0
0
395
34,711,799
2016-01-10T23:01:00.000
1
1
0
1
python,raspberry-pi,executable,sensors
34,711,852
3
false
0
0
+1 on the above solution. To debug, try this: type "pwd" in your terminal. This will tell you where you are in the shell. Then type "ls -lah" and look for your script. If you cannot find it, then you need to "cd" to the directory where the script exists and then execute the script.
2
1
0
I have a python script on a Raspberry Pi reading the temperature and humidity from a sensor. It works fine when started in IDLE, but when I try starting it in a terminal I get the message:sudo: unable to execute .thermostaatgui.py: No such file or directory. The first line in the script is: #! /usr/bin/python, the same as in other scripts that run without problems and the script is made executable with chmod +x. In the script Adafruit_DHT, datetime and time are imported, other scripts that work do the same.
python executing in IDLE, but not in terminal
0.066568
0
0
395
34,719,592
2016-01-11T10:48:00.000
0
0
0
1
python,multithreading,eventlet,green-threads,setfsuid
37,549,590
2
false
0
0
The kernel is ignorant of green threads. If a process has a uid and gid, they are used by all green threads running as part of this process. At first glance, what you are seeking to do is equivalent to having a privileged process do a setuid prior to opening/creating a file, then doing a second setuid to open/create a second file, etc., all to ensure that each file has the right ownership. I never tried such a scheme, but it sounds very, very wrong. It is also extremely bad security-wise. You are running at high privileges and may find yourself processing user X's data while having user Y's uid. At second glance, green threads are cooperative, meaning that under the hood, some of the operations you do will yield. Following such a yield, you may switch to a different green thread that will change the uid again... Bottom line: forget about changing the uid and gid of a green thread - there is no such thing. Create the file with whatever ID you have and chown it to the right ID afterwards. Find a way to do that without running as root, for security reasons.
1
1
0
We have an existing project using Eventlet module. There is a server handling client request using green threads. All the requests are handled by a single user 'User A' I now need to change this to do a setfsuid/setfsgid on the threads so that the underlying files are all created with the ownership of the requesting user only. I understand that I need setid Linux capability to make the setfsid calls. But will setfsid calls work with green threads like they do with the native threads ? By reading through various texts over the net regarding 'green threads', I couldn't gather much :(
setfs(u/g)id or set(u/g)id with eventlet(python green thread)
0
0
0
309
34,726,376
2016-01-11T16:23:00.000
0
0
1
0
python-2.7
34,726,494
4
false
0
0
map(int, "1 2 3 4 5".split()) This will take your string and convert it to a list of ints. split() with no argument splits on any whitespace, so you don't need to pass one. For raw_input(), you can do: map(int, raw_input().split())
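A minimal sketch of that suggestion (the question is tagged python-2.7, where map() already returns a list; the Python 3 equivalent is noted in a comment):

numbers = map(int, raw_input().split())   # typing "3 4 6 8 9" gives [3, 4, 6, 8, 9]
print numbers
# on Python 3: numbers = list(map(int, input().split()))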
2
1
0
I want to insert 5 integers by simply typing 3 4 6 8 9 and hitting enter. I know how to insert strings in a list by using list=raw_input().split(" ",99), but how can I insert integers using space?
how to insert integers in list/array separated by space in python
0
0
0
6,406
34,726,376
2016-01-11T16:23:00.000
0
0
1
0
python-2.7
34,726,835
4
false
0
0
The above answer is perfect if you are looking to parse strings into a list. Otherwise you can parse them into a list of integers this way:

integers = '22 33 11'
integers_list = []
try:
    integers_list = [int(i) for i in integers.split(' ')]
except ValueError:
    print "Error Parsing Integer"
print integers_list
2
1
0
I want to insert 5 integers by simply typing 3 4 6 8 9 and hitting enter. I know how to insert strings in a list by using list=raw_input().split(" ",99), but how can I insert integers using space?
how to insert integers in list/array separated by space in python
0
0
0
6,406
34,729,149
2016-01-11T19:07:00.000
0
0
0
0
python,django,nginx,permissions,file-permissions
36,338,260
1
true
1
0
Couldn't find out who exactly it is created by; however, the permissions depend on the user (root or non-root). This means if you run the commands (for example: python manage.py runserver) with sudo or as root, the folder gets root permissions and can't be edited by a non-root user.
1
1
0
I set up Django using nginx and gunicorn. I am looking at the permissions in my project folder and I see that the permission for the media folder is set to root (all others are set to debian): -rw-r--r-- 1 root root 55K Dec 2 13:33 media. I am executing all app-relevant commands like makemigrations, migrate, collectstatic from debian, therefore everything else is owned by debian. But the media folder doesn't exist when I start my app. It will be created once I upload stuff. But who creates it and how do I change the permissions to debian?
who creates the media folder in django and how to change permission rights?
1.2
0
0
343
34,731,279
2016-01-11T21:18:00.000
0
0
0
1
python,linux,bash,shell
34,731,389
2
true
0
0
Short answer: you can't. The return value of a *nix-style executable is an unsigned integer from 0-255. That usually indicates whether it failed or not, but you could co-opt it for your own uses. In this case, I don't think a single unsigned byte is enough. Thus, you need to output it some other way. You have a few options: (1) the simplest (and probably best in this case) is to continue outputting your data on stdout, and send your logs/debugging information somewhere else; that could be a file, or (it's sort of what it's for) stderr; (2) output your data to a file (such as one given in a command line parameter); (3) arrange some kind of named pipe scheme; in practice, this is pretty much the same thing as sending it to a file.
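A minimal sketch of the first option, assuming the script is the calculate-version.py mentioned in the question; the version string and log line are placeholders:

import sys

sys.stderr.write("computing version from build metadata...\n")   # log lines go to stderr
version = "1.2.3"                                                 # placeholder for the real calculation
sys.stdout.write(version + "\n")                                  # only the version reaches stdout
# the bash script can then capture it with: VERSION=$(python calculate-version.py)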
1
1
0
I have a build.sh script that my automated build server executes as part of a build. A big portion of logic of the build is calculating and building a version number. All of this logic is in a python script such as calculate-version.py. Typically what I would do in this case is setup the python script to ONLY print the version number, from which I would read stdout from the bash script, and assign that to an environment variable. However, the python script is becoming sufficiently complex that I'd like to start adding logs to it. I need to be able to output (stdout) logs from the Python script (via print()) while at the same time when it is done, propagate a "return value" from the python script back to the parent shell script. What is the best way of doing this? I thought of doing this through environment variables, but my understanding is those won't be available to the parent process.
How to call into python script like a function from bash?
1.2
0
0
163
34,733,183
2016-01-11T23:43:00.000
2
0
1
0
python,python-2.7,python-3.x,pip
34,733,624
1
true
0
0
I didn't have a Mac to test this on, but I think it may work. First, find the path of your pip and pip3 executable. From the terminal run which pip and which pip3. Once you have the path open the file. The first line should be something like: #!/Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4 Try changing the version from 3.4 to 2.7.
1
1
0
At the moment, it appears my pip is the pip for Python 3.4 I would like this version of pip to be used for Python 2.7, and the Python 3.4 version of pip to be pip3. How can I do this? I tried installing both with Macports.... EDIT: In /opt/local/bin I have pip, pip-2.7, and pip-3.4
How to set two different pip versions for Python 2.7 and Python 3.4?
1.2
0
0
244
34,734,714
2016-01-12T02:41:00.000
12
0
1
0
ipython,jupyter,jupyter-notebook
59,724,231
4
false
0
0
Maybe it is easier to just use unix to unzip the data. Steps: transform the folder into a .zip file on your computer; upload the .zip file to the Jupyter home; in a Jupyter notebook run ! unzip ~/yourfolder.zip -d ~/ where: ! tells the Jupyter notebook that you are going to give code directly to unix, not Python code; unzip is the unzip command; ~/yourfolder.zip tells the command where your .zip file is (at ~/ if you uploaded it to the home folder); -d ~/ tells the command where you want to put the unzipped folder (this assumes you want to put it in the home folder, but you can also put it in any other subfolder with -d ~/my_first_level_subfolder or -d ~/my_first_level_subfolder/my_second_level_subfolder, etc.). If you want to delete the original .zip file, delete it manually at the Jupyter home or use !rm ~/yourfolder.zip. Hope it helps somebody.
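A pure-Python alternative to the shell command, using only the standard library (same ~/yourfolder.zip placeholder as above):

import os
import zipfile

zip_path = os.path.expanduser("~/yourfolder.zip")
with zipfile.ZipFile(zip_path) as archive:
    archive.extractall(os.path.expanduser("~/"))   # same effect as: ! unzip ~/yourfolder.zip -d ~/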
1
37
0
Can you upload entire folders in IPython Jupyter? If so, how? I know how to upload individual files, of course, but this can get tedious if there are a large number of files and/or subdirectories.
IPython Jupyter: uploading folder
1
0
0
61,675
34,735,016
2016-01-12T03:17:00.000
2
0
0
0
python,machine-learning,scikit-learn,feature-selection
34,736,355
2
false
0
0
Naive Bayes and MultinomialNB are the same algorithm. The difference that you get is from the tf-idf transformation, which penalises words that occur in lots of documents in your corpus. My advice: use tf-idf and tune the sublinear_tf, binary and normalization parameters of TfidfVectorizer for features. Also try all kinds of different classifiers available in scikit-learn, which I suspect will give you better results if you properly tune the regularization type (penalty, either l1 or l2) and the regularization parameter (alpha). If you tune them properly I suspect you can get much better results using SGDClassifier with 'log' loss (logistic regression) or 'hinge' loss (SVM). The way people usually tune the parameters is through the GridSearchCV class in scikit-learn.
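A hypothetical sketch of that setup: tf-idf features feeding an SGDClassifier, with penalty and alpha tuned via GridSearchCV; texts and labels stand for the pre-classified training data, and the parameter values are only examples:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(sublinear_tf=True)),
    ("clf", SGDClassifier(loss="hinge")),   # 'hinge' behaves like a linear SVM
])
param_grid = {
    "clf__penalty": ["l1", "l2"],
    "clf__alpha": [1e-5, 1e-4, 1e-3],
}
search = GridSearchCV(pipeline, param_grid, cv=5)
# search.fit(texts, labels)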
1
2
1
I'm working on a machine learning application in Python (using the sklearn module), and am currently trying to decide on a model for performing inference. A brief description of the problem: Given many instances of user data, I'm trying to classify them into various categories based on relative keyword containment. It is supervised, so I have many, many instances of pre-classified data that are already categorized. (Each piece of data is between 2 and 12 or so words.) I am currently trying to decide between two potential models: CountVectorizer + Multinomial Naive Bayes. Use sklearn's CountVectorizer to obtain keyword counts across the training data. Then, use Naive Bayes to classify data using sklearn's MultinomialNB model. Use tf-idf term weighting on keyword counts + standard Naive Bayes. Obtain a keyword count matrix for the training data using CountVectorizer, transform that data to be tf-idf weighted using sklearn's TfidfTransformer, and then dump that into a standard Naive Bayes model. I've read through the documentation for the classes use in both methods, and both seem to address my problem very well. Are there any obvious reasons for why tf-idf weighting with a standard Naive Bayes model might outperform a multinomial Naive Bayes for this type of problem? Are there any glaring issues with either approach?
Choosing an sklearn pipeline for classifying user text data
0.197375
0
0
2,014
34,736,964
2016-01-12T06:30:00.000
0
0
0
0
python,web,flask,host
34,775,584
1
false
1
0
Enable port forwarding on your router, start Flask bound to the 0.0.0.0 address on your computer, and set the forwarded port to be the one the app is listening on. This will allow your LAN, and calls to your ISP-provided address, to be directed to your laptop. To clarify, the LAN can reach it without port forwarding in my experience.
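A minimal sketch of a Flask app bound to all interfaces so other machines on the LAN (and, with port forwarding, the outside world) can reach it; the route and port are just examples:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from my PC"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)   # forward this port on the router for external access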
1
0
0
I'm developing an app using the Flask framework in Python. I want to host it on my PC for a few people to be able to visit it, similar to WAMP's "Put Online" feature but for Flask instead. I don't want to deploy it to the cloud just yet. How can I do it?
How to host a flask web app on my own pc?
0
0
0
2,566
34,737,287
2016-01-12T06:54:00.000
0
0
0
1
python,celery-task
34,738,281
2
false
0
0
How can I get which worker is executing which input? There are 2 options to use multiple workers: (1) you run each worker separately with separate run commands; (2) you run in one command using the command line option -c, i.e. concurrency. With the first method, Flower will support it and will show you all the workers, all the tasks (what you call inputs), which worker processed which task, and other information too. With the second method, Flower will show you all the tasks being processed by a single worker. In this case you can only differentiate by viewing the logs generated by the celery worker, as the logs do store which worker thread executed which task. So I think you will be better off using the first option, given your requirements. Each worker executed how many inputs and its status? As I mentioned, using the first approach, Flower will give you this information. If any task has failed, how can I get the failed input data separately and re-execute it with an available worker? Flower does provide filters to filter the failed tasks and does show what status the tasks returned when exiting. Also, you can set how many times celery should retry a failed task. But if the task fails even after retries, then you will have to relaunch the task yourself.
1
3
0
I have a celery task with 100 input data items in a queue and need to execute them using 5 workers. How can I get which worker is executing which input? How many inputs did each worker execute, and what is their status? If any task fails, how can I get the failed input data separately and re-execute it with an available worker? Are there any possible ways to customize celery per worker? We can combine celery worker limits and Flower. I am not using any framework.
Celery worker details
0
0
0
834
34,737,505
2016-01-12T07:10:00.000
2
0
0
0
python,django
34,737,908
2
true
1
0
Premature optimisation is the root of all evil... This being said, what you want is a cache, not an async queue. Django has a good built-in cache framework, you just have to choose your backend (redis comes to mind but there are other options)
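For illustration, a minimal read-through cache sketch with Django's cache framework; Item is a placeholder model name, the cache key is arbitrary, and a shared backend such as Redis is assumed so that all 3 servers see the same entry:

from django.core.cache import cache
from myapp.models import Item   # hypothetical model holding the list data

def get_shared_list():
    items = cache.get("shared_item_list")
    if items is None:
        items = list(Item.objects.values_list("name", flat=True))
        cache.set("shared_item_list", items)
    return items

# when the data changes, any server can invalidate the entry:
# cache.delete("shared_item_list")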
1
0
0
I plan to use 3 servers (there will be an HAProxy to dispatch to the 3 servers, but I am not covering it now) to do load balancing, and I face a problem: I create an object which has a function that queries the database to get a list when Django starts (because the list seldom changes but is very frequently used, so I initialize it at startup). If the data changes, a message is pushed to RabbitMQ, and the 3 servers have RabbitMQ clients to get it. But the problem is that the RabbitMQ listener's process is not the same as the Django process. How can it notify the Django process? Currently my solution is to call an API (using localhost) when the RabbitMQ client gets the change (so the guest can visit the website and I can change the list). But it has to bind 0.0.0.0, and I am not sure it's a good idea. What is a better way to sync between the 3 servers?
How to sync variable between servers?
1.2
0
0
125
34,739,315
2016-01-12T09:02:00.000
7
0
1
0
python,python-3.x
60,182,624
3
false
0
0
The PYW file type is primarily associated with Python by the Python Software Foundation. PYW files are used on Windows to indicate that a script needs to be run using PYTHONW.EXE instead of PYTHON.EXE in order to prevent a DOS console from popping up to display the output.
2
56
0
I am new to Python programming. Can anybody provide an explanation on what a *.pyw file is and how it works.
.pyw files in python program
1
0
0
58,022
34,739,315
2016-01-12T09:02:00.000
1
0
1
0
python,python-3.x
68,725,566
3
false
0
0
It's just a file extension that tells Windows to run the script with pythonw.exe, i.e. in the background without a console window.
2
56
0
I am new to Python programming. Can anybody provide an explanation on what a *.pyw file is and how it works.
.pyw files in python program
0.066568
0
0
58,022
34,740,756
2016-01-12T10:10:00.000
2
1
0
1
python,debian,uninstallation,reinstall
34,743,144
3
false
0
0
The directory you removed is controlled and maintained by pip. If you have a record of which packages you have installed with pip, you can force it to reinstall them again. If not, too late to learn to make backups; but this doesn't have to be a one-shot attempt -- reinstall the ones you know are missing, then live with the fact that you'll never know if you get an error because you forgot to reinstall a module, or because something is wrong with your code. By and by, you will discover a few more missing packages which you failed to remember the first time; just reinstall those as well as you discover them. As an aside, using virtualenv sounds like a superior solution for avoiding a situation where you need to muck with your system Python installation.
1
3
0
I did something very stupid. I was copying some self written packages to the python dist-packages folder, then decided to remove one of them again by just rewriting the cp command to rm. Now the dist-packages folder is gone. What do I do now? Can I download the normal contents of this folder from somewhere, or do I need to reinstall python completely. If so - is there something I need to be careful about? The folder I removed is /usr/local/lib/python2.7 so not the one maintained by dpkg and friends.
Accidentally removed dist-packages folder, what to do now?
0.132549
0
0
4,302
34,743,371
2016-01-12T12:11:00.000
1
0
0
1
python,kubernetes,google-cloud-logging
34,750,390
1
false
0
0
If you're running at least version 1.1.0 of Kubernetes (you most likely are), then if the logs you write are JSON formatted, they'll show up as structured logs in the Cloud Logging console. Then certain JSON keys are interpreted specially when imported into Cloud Logging, for example 'severity' will be used to set the log level in the console, or 'timestamp' can be used to set the time.
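A minimal sketch of writing such structured logs from Python: one JSON object per line on stdout, carrying the severity and timestamp keys the answer mentions (any field names beyond those two are just examples):

import datetime
import json
import sys

def log(severity, message, **fields):
    record = {"severity": severity,
              "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
              "message": message}
    record.update(fields)
    sys.stdout.write(json.dumps(record) + "\n")
    sys.stdout.flush()

log("ERROR", "request failed", status_code=500)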
1
1
0
I have a Python service running in a Kubernetes container and writing logs to stdout. I can see the logs in the Cloud Logging console, but they are not structured, meaning: 1. I can't filter log levels; 2. a log record with multiple lines is interpreted as multiple log records; 3. dates are not parsed; etc. How can I address this problem? Can I configure the fluentd daemon somehow? Or should I write in a specific format? Thanks
Writing logs in kubernetes
0.197375
0
0
1,269
34,745,607
2016-01-12T13:56:00.000
4
0
1
0
python
34,745,814
2
false
0
0
An extension language is just what it sounds like: a language that is used to extend other applications. For example, when you write a macro in Excel using Visual Basic, that's using VB as an extension language. When you write a plugin for your browser with JavaScript, that is using JS as an extension language. In short, an extension language is a language used to write extensions.
2
0
0
How is it different from scripting languages?
What is an extension language? Example: python can be used as an extension language
0.379949
0
0
1,487
34,745,607
2016-01-12T13:56:00.000
1
0
1
0
python
34,745,771
2
false
0
0
This means you can "easily" connect Python with other languages. One example would be to have a main program in C, and use external Python scripts inside. To illustrate, imagine a program in C that computes a labyrinth, and a Python script that gives a strategy to walk through the labyrinth. A user could define his/her own strategies in Python instead of diving in the C code. The user would execute the C code, giving the Python script filepath as an argument, and the C code would execute the Python code as the strategy to use. One nice property is that you can change the Python script and never recompile the C code.
2
0
0
How is it different from scripting languages?
What is an extension language? Example: python can be used as an extension language
0.099668
0
0
1,487
34,750,575
2016-01-12T17:53:00.000
0
0
1
0
indexing,rethinkdb,rethinkdb-python
34,750,764
1
false
0
0
I'm not 100% sure I understand the question, but if you have a secondary index and insert a new document or change an old document, the document will be in the correct place in the index once the write completes. So if you had a secondary index on a timestamp, you could write r.table('items').orderBy(index: r.desc('timestamp')).limit(n) to get the most recent n documents (and you could also subscribe to changes on that).
1
0
0
Let's say that I need to maintain an index on a table where multiple documents can relate do the same item_id (not primary key of course). Can one secondary compound index based on the result of a function which of any item_id returns the most recent document based on a condition, update itself whenever a newer document gets inserted? This table already holds 1.2 million documents in just 25 days, so it's a big-data case here as it will keep growing and must always keep the old records to build whatever pivots needed over the years.
RethinkDb do function based secondary indexes update themselves dynamically?
0
1
0
65
34,751,064
2016-01-12T18:19:00.000
4
0
0
0
python,indexing,algolia
34,751,577
1
true
1
0
Browse is the right way to go. The good thing is that you can specify arguments while performing a browse_all and one of them can be attributesToRetrieve: [] to not retrieve any attributes. You'll therefore only get the objectID.
1
1
0
Is there a way to retrieve all objectIDs from an Algolia Index? I know there is [*Index Name*].browse_all() which in the docs say it can retrieve 1000 objects at a time but it retrieves the entire object rather than just the objectIDs. I can work with pagination but would rather not and do not want to pull the entire object because our indexes are not small.
List of ObjectIDs for an Algolia Index
1.2
0
0
359
34,751,695
2016-01-12T18:57:00.000
0
0
1
0
python,list,python-2.7
34,752,209
1
false
0
0
Your implementation is not very efficient for large numbers. Maybe it's not fast enough? For the special case of divisibility by three you can compute the sum of the digits in the number you are testing. If the sum is divisible by three, so is the number you started with. For example 4789 => 4+7+8+9=28 => 2+8=10 => 1+0=1 (not divisible by three) 4788 => 4+7+8+8=27 => 2+7=9 (9 is divisible by three, and therefore 4788 is as well)
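A quick check of that digit-sum rule in Python (for the original question, the any(x % 3 == 0 for x in xs) expression already shown is the idiomatic test):

def divisible_by_3(n):
    # repeatedly sum the decimal digits until a single digit remains
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n in (0, 3, 6, 9)

assert divisible_by_3(4788)
assert not divisible_by_3(4789)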
1
1
0
The question in Python Koans specifies "Return True if any number in the list xs is divisible by 3. Otherwise, or if the list is empty, return False." So far I've written return any(x % 3 == 0 for x in xs) but this only gives 2/3 stars.
How to write an efficient divisibility test?
0
0
0
231
34,755,334
2016-01-12T22:55:00.000
0
1
0
0
python,unit-testing,supervisord
34,911,457
1
false
0
0
I decided to use Python Celery which is already installed on my machine. My API queries are wrapped as tasks and send to Celery. Given this setup I created my testrunner as just another task that runs the API tests. The web application tests do not need the stored credentials but run fine in the Celery context as well.
1
0
0
I am building a complex Python application that distributes data between very different services, devices, and APIs. Obviously, there is a lot of private authentication information. I am handling it by passing it with environmental variables within a Supervisor process using the environment= keyword in the configuration file. I have also a test that checks whether all API authentication information is set up correctly and whether the external APIs are available. Currently I am using Nosetest as test runner. Is there a way to run the tests in the Supervisor context without brute force parsing the supervisor configuration file within my test runner?
How could I run unit tests in Supervisor context?
0
0
0
100
34,757,016
2016-01-13T01:41:00.000
0
0
1
0
python,csv,types
34,757,111
2
false
0
0
It is generally not possible. CSV data must be accompanied by metadata, that is, information about the data itself. But.... What you can do is read some part of your file (or read it wholly) and decide which datatype to use for each column using heuristics. Then do a 2nd pass reading the data and casting it to the appropriate data type (found on the 1st pass). On the 1st pass you could keep some true/false (aka boolean) information about each column, like "hasDecimalDigit" (if in some line the characters '0' to '9' were found), "hasHexadecimalDigit" (for characters 'a' to 'f'), "hasPeriod" (for '.'), "hasMoreThanOnePeriod" (for '.' when "hasPeriod" is already true), "hasAlphaCharacter" (for characters 'a' to 'z'), and so on. After an arbitrary number of rows is read you could decide on specific pattern of your information set which datatype is applicable (for example: hasDecimalDigit & !hasHexadecimalDigit & !hasAlphaCharacter & !hasPeriod -> datatype = int, format = decimal).
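As a small illustration of such per-token heuristics (the classification rules below are an assumption about how the four categories from the question would be distinguished):

def classify(token):
    t = token.strip()
    if t.isdigit():
        return "integer"
    try:
        float(t)
        return "float"
    except ValueError:
        pass
    if t.isalpha():
        return "alphabets"
    return "alphanumeric"

for token in "bgh5w ,12, 5.223, ab4ft55".split(","):
    print(token.strip(), "-", classify(token))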
2
0
0
I have a several MB text file with random data types separated by commas: bgh5w ,12, 5.223, ab4ft55, .... There are only four types of data: integer, float, alphabets and alphanumeric. How to print out the data types in column form?: bgh5w - alphanumeric 12 - integer 5.223 - float ab4ft55 - alphanumeric . . .
Determine the data type of csv and print as column in Python
0
0
0
805
34,757,016
2016-01-13T01:41:00.000
0
0
1
0
python,csv,types
34,776,411
2
false
0
0
If you have a text file, the only datatype you have is a string. My suggestion is to use a mapping file where you can look up the column number to a datatype; at that point it should be pretty straightforward for your process to load the data. The mapping file should be delivered together with the CSV, since it requires insight into the nature of the data contained in the CSV.
2
0
0
I have a several MB text file with random data types separated by commas: bgh5w ,12, 5.223, ab4ft55, .... There are only four types of data: integer, float, alphabets and alphanumeric. How to print out the data types in column form?: bgh5w - alphanumeric 12 - integer 5.223 - float ab4ft55 - alphanumeric . . .
Determine the data type of csv and print as column in Python
0
0
0
805
34,757,084
2016-01-13T01:47:00.000
0
0
1
0
python,linux,scripting,virtual-machine
34,761,507
2
false
0
0
For AWS use boto. For GCE use the Google API Python Client Library. For OpenStack use the python-openstackclient and import its methods directly. For VMware, google it. For Opsware, abandon all hope, as their API is undocumented and has about 12 years of accumulated abandoned methods to dig through and an equally insane data model backing it. For direct libvirt control there are Python bindings for libvirt. They work very well and closely mimic the C libraries. I could go on.
1
0
0
I want to manage virtual machines (any flavor) using Python scripts. Example, create VM, start, stop and be able to access my guest OS's resources. My host machine runs Windows. I have VirtualBox installed. Guest OS: Kali Linux. I just came across a software called libvirt. Do any of you think this would help me ? Any insights on how to do this? Thanks for your help.
Controlling VMs using Python scripts
0
0
0
1,117
34,758,458
2016-01-13T04:31:00.000
0
0
0
1
python,permission-denied,3dr
34,758,499
2
false
0
0
This is going to sound simple but are you running an elevated command line?
2
0
0
I am trying to install 3DR solo command line on Windows 10. Below is the exception that i get. i have been doing a lot of reading and googling. I couldnt figure out the permission denied problem. I have this part shutil.copyfile(srcfile, destfile), but i still get denied. Exception: Traceback (most recent call last): File "c:\python35\lib\site-packages\pip\basecommand.py", line 211, in main status = self.run(options, args) File "c:\python35\lib\site-packages\pip\commands\install.py", line 311, in run root=options.root_path, File "c:\python35\lib\site-packages\pip\req\req_set.py", line 646, in install **kwargs File "c:\python35\lib\site-packages\pip\req\req_install.py", line 803, in install self.move_wheel_files(self.source_dir, root=root) File "c:\python35\lib\site-packages\pip\req\req_install.py", line 998, in move_wheel_files isolated=self.isolated, File "c:\python35\lib\site-packages\pip\wheel.py", line 339, in move_wheel_files clobber(source, lib_dir, True) File "c:\python35\lib\site-packages\pip\wheel.py", line 317, in clobber shutil.copyfile(srcfile, destfile) File "c:\python35\lib\shutil.py", line 115, in copyfile with open(dst, 'wb') as fdst: PermissionError: [Errno 13] Permission denied: 'c:\python35\Lib\site-packages\_cffi_backend.cp35-win32.pyd'
Python on Windows, installing 3dr solo command line, PermissionError: [Errno 13]
0
0
0
364
34,758,458
2016-01-13T04:31:00.000
0
0
0
1
python,permission-denied,3dr
35,596,119
2
false
0
0
If you are upgrading the cffi package, i.e. you already had it installed and a pip install of some package is trying to upgrade cffi to its latest version, all you have to do is simply delete c:\python35\Lib\site-packages\_cffi_backend.cp35-win32.pyd and then try again.
2
0
0
I am trying to install 3DR solo command line on Windows 10. Below is the exception that i get. i have been doing a lot of reading and googling. I couldnt figure out the permission denied problem. I have this part shutil.copyfile(srcfile, destfile), but i still get denied. Exception: Traceback (most recent call last): File "c:\python35\lib\site-packages\pip\basecommand.py", line 211, in main status = self.run(options, args) File "c:\python35\lib\site-packages\pip\commands\install.py", line 311, in run root=options.root_path, File "c:\python35\lib\site-packages\pip\req\req_set.py", line 646, in install **kwargs File "c:\python35\lib\site-packages\pip\req\req_install.py", line 803, in install self.move_wheel_files(self.source_dir, root=root) File "c:\python35\lib\site-packages\pip\req\req_install.py", line 998, in move_wheel_files isolated=self.isolated, File "c:\python35\lib\site-packages\pip\wheel.py", line 339, in move_wheel_files clobber(source, lib_dir, True) File "c:\python35\lib\site-packages\pip\wheel.py", line 317, in clobber shutil.copyfile(srcfile, destfile) File "c:\python35\lib\shutil.py", line 115, in copyfile with open(dst, 'wb') as fdst: PermissionError: [Errno 13] Permission denied: 'c:\python35\Lib\site-packages\_cffi_backend.cp35-win32.pyd'
Python on Windows, installing 3dr solo command line, PermissionError: [Errno 13]
0
0
0
364
34,763,600
2016-01-13T10:01:00.000
1
1
0
0
python,node.js,ibm-cloud
34,790,983
2
false
1
0
I finally fixed this by adding an entry to the dependencies in the project's package.json, which causes npm install to be called for the linked GitHub repo. It is kind of straightforward, but I found no explanation for that in the Bluemix resources.
1
1
0
I'd like to run text processing Python scripts after submitting searchForms of my node.js application. I know how the scripts can be called with child_process and spawn within js, but what should I set up on the app (probably some package.json entries?) so that it will be able to run Python after deploying to Bluemix? Thanks for any help!
How to invoke python scripts in node.js app on Bluemix?
0.099668
0
0
358
34,769,148
2016-01-13T14:24:00.000
0
0
1
0
python,google-places-api
53,329,283
2
false
0
0
Also note that Python 3 requires its own pip for installing googleplaces or any other library. I experienced such an error :/
1
2
0
Why am I getting an error while installing the package for Google Places? I tried pip, easy_install and myenv but couldn't install it. This is the error: Could not find a version that satisfies the requirement googleplaces (from versions: ) No matching distribution found for googleplaces
Python- pip install googleplaces
0
0
0
4,183
34,769,208
2016-01-13T14:26:00.000
6
0
0
0
python,plone,dexterity
34,772,414
2
false
1
0
A different approach can simply be to add event handlers for IObjectAddedEvent, and add your subcontent there using common APIs.
1
1
0
I thought it would be possible to create a custom Dexterity factory that calls the default factory and then adds some subcontent (in my case Archetypes-based) to the created 'parent' Dexterity content. I have no problem creating and registering the custom factory. However, regardless of what method I use (to create the AT subcontent), the subcontent creation fails when attempted from within the custom factory. I've tried everything from plone.api to invokeFactory to direct instantiation of the AT content class. In most cases, traceback shows the underlying Plone/CMF code tries to get portal_types tool using getToolByName and fails; similarly when trying to instantiate the AT class directly, the manage_afterAdd then tries to access reference_catalog, which fails. Is there any way to make this work?
custom Plone Dexterity factory to create subcontent
1
0
0
124
34,771,013
2016-01-13T15:47:00.000
2
1
0
0
python,smpp
34,810,025
1
false
0
0
Take a look at the Jasmin SMS gateway; it's Pythonic and has an SMPP server implementation.
1
1
0
Does anyone know a tool to implement a Python SMPP server and some tips on how to proceed? I found Pythomnic3k framework, but did not find material needed for me to use it as SMPP server ...
Implementing an SMPP Server in Python
0.379949
0
0
491
34,774,326
2016-01-13T18:28:00.000
1
0
0
0
python,instance,pymssql,named
66,684,103
2
false
0
0
According to the pymssql documentation on the pymssql Connection class, for a named instance containing the database theDatabase, looking like this: myhost\myinstance, you could connect as follows: pymssql.connect(host=r'myhost\myinstance', database='theDatabase', user='user', password='pw'). The r-string is a so-called raw string that does not treat the backslash as an escape character.
1
3
0
I'm trying to connect to a SQL Server named instance from python 3.4 on a remote server, and get an error. File "C:\Scripts\Backups Integrity Report\Backup Integrity Reports.py", line 269, in conn = pymssql.connect(host=r'hwcvcs01\HDPS', user='My-office\romano', password='PASS', database='CommServ') File "pymssql.pyx", line 636, in pymssql.connect (pymssql.c:10178) pymssql.OperationalError: (20002, b'DB-Lib error message 20002, severity 9:\nAdaptive Server connection failed\n') Other SQLs are connected without a problem. Also I manage to connect to the SQL using the Management Studio, from the same remote server. Tried different ports, tried to connect to the host itself rather than the instance, and also tried pypyodbc. What might be the problem?
Python pymssql - Connecting to Named Instance
0.099668
1
0
4,865
34,777,755
2016-01-13T21:50:00.000
1
0
0
0
python,mysql,django,pymysql
53,195,032
2
false
1
0
The short answer is no they are not the same. The engine, in a Django context, is in reference to RDBMS technology. The driver is the library developed to facilitate communication to that actual technology when up and running. Letting Django know what engine to use tells it how to translate the ORM functions from a backend perspective. The developer doesn't see a change in ORM code but Django will know how to convert those actions to a language the technology understands. The driver then takes those actions (e.g. selects, updates, deletes) and sends them over to a running instance to facilitate the action.
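A hypothetical sketch of the usual PyMySQL arrangement, where the driver masquerades as MySQLdb so the standard ENGINE keeps working; the connection details are placeholders:

# in the project package's __init__.py, before Django loads the database backend
import pymysql
pymysql.install_as_MySQLdb()

# settings.py: ENGINE names the backend, the driver is whatever provides the MySQLdb module
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mydb",
        "USER": "dbuser",
        "PASSWORD": "dbpass",
        "HOST": "127.0.0.1",
        "PORT": "3306",
    }
}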
1
20
0
I'm new to Django. It took me a whole afternoon to configure the MySQL engine. I am very confused about the database engine and the database driver. Is the engine also the driver? All the tutorials said that the ENGINE should be 'django.db.backends.mysql', but how does the ENGINE decide which driver is used to connect to MySQL? Every time it says 'django.db.backends.mysql'. Sadly I can't install MySQLdb or mysqlclient, but PyMySQL and the official mysql connector 2.1.3 have been installed. How could I set the driver to PyMySQL or the mysql connector? Many thanks! OS: OS X El Capitan. Python: 3.5. Django: 1.9. This question is not yet solved: is the ENGINE also the DRIVER?
How to config Django using pymysql as driver?
0.099668
1
0
18,120
34,778,771
2016-01-13T23:07:00.000
1
1
1
0
python,eclipse-rcp,pydev
34,788,905
1
true
0
0
You shouldn't change what you changed... The proper way would be changing PyDev itself to support your use case. You should provide your IPyHoverParticipant (instead of doing your own text hover) and create a pull request for PyDev so that the hover works in comments/strings (i.e.: skip the "if (!pythonCommentOrMultiline) {" in org.python.pydev.editor.hover.PyTextHover.getHoverInfo(ITextViewer, IRegion) if your hover implements IPyHoverParticipant2).
1
1
0
I'm building an RCP app that serves as an IDE for a custom domain. One of the things we do in that domain is write python scripts that use domain-specific commands which have been wrapped as python functions. I implemented hover text support integrated with PyDev, so that if there is any domain-specific hover text available, it calls a custom ITextHover instead of PyDev's. I have this working, but I see that if I have a string literal argument to a function, the getTextHover() method is never called on the IHoverText instance. I traced this behavior to the partitioning implementation provided by getConfiguredDocumentPartitioning in PyEditConfiguration. Is there a way I can use PyDev's partitioning scheme but somehow override the above behavior, so that getTextHover() is called for String literal arguments? I don't see anything in the preferences, and trying to follow the implementation in the PyDev source code was not successful. EDIT: overriding TextSourceViewerConfiguration#getConfiguredDocumentPartitioning() to return IPythonPartitions.PY_DEFAULT solves the problem. But I'm not sure what the implications are of returning this rather than IPythonPartitions.PYTHON_PARTITION_TYPE, which is the behavior provided by PyEditCOnfigurationWithoutEditor.
Pydev No Hover Text for String arguments
1.2
0
0
88
34,780,851
2016-01-14T02:48:00.000
0
0
0
1
python,django,git
34,781,374
3
false
1
0
Probably the best solution is to identify exactly which code is shared between the two projects and make that a reusable app. Then each installation can install that django app, and then has their own site specific code as well.
2
0
0
I'm building a webapp using Django which needs to have two different versions: an Enterprise version and a standard public version. Up until now, I've been only developing the Enterprise version and am now looking for the best way to separate the two versions in the simplest way while avoiding duplication of code as much as possible. The main difference between the two versions will be that they need different URLs and different Views. I intend to differentiate based on subdomain using a multi-tenant architecture, where the www.example.com is the public version, and company1.example.com hits the enterprise version. I've come up with a couple potential solutions, but I'm not happy with any of them. Separate Git repositories and entirely separate projects, with all common code duplicated. This much duplication of code is bound to be error prone where things will get out of sync and is expected to be ridden with copy-paste mistakes. This is a last-resort solution. Separate Git repositories, with common code shared via Git Submodules (a single common 'base' repository containing base models and shared views). I've read horror stories about git submodules, though, so I'm wary of this solution. Single Git repository containing multiple 'project' folders (public/enterprise) each with their own base urls.py, settings.py, wsgi.py, etc...) and multiple manage.py files to choose which "Project" to run. I'm afraid that this solution would become an utter mess because it wouldn't be possible to have the public and enterprise versions use different versions of the common library if one needs an update before the other. Separate Git repositories, with all shared code developed as 'Re-usable apps' and installed into the python path. This would be a somewhat clean solution, but would be difficult to work with any time changes needed to be made to the common modules. Single project where all features are managed via conditional logic in the views. This would be most prone to bugs and confusion of all, and I'd prefer to avoid this solution. Does anyone have any experience with this type of solution or could anyone help me find the best solution to this problem?
How can I have Enterprise and Public version of Django application sharing some code?
0
0
0
98
34,780,851
2016-01-14T02:48:00.000
1
0
0
1
python,django,git
34,781,480
3
false
1
0
What about "a single Git repository, with all shared code developed as 'Re-usable apps'"? That is configure the options enabled with the INSTALLED_APPS setting. First you need to decide on your release process. If you intend on releasing both versions simultaneously, using the one git repository makes sense. An overriding concern might be if you have different distribution requirements for the code, e.g. if you want the code in the public version to be publicly available and the enterprise version to be private. Then you might have to use two git repositories.
2
0
0
I'm building a webapp using Django which needs to have two different versions: an Enterprise version and a standard public version. Up until now, I've been only developing the Enterprise version and am now looking for the best way to separate the two versions in the simplest way while avoiding duplication of code as much as possible. The main difference between the two versions will be that they need different URLs and different Views. I intend to differentiate based on subdomain using a multi-tenant architecture, where the www.example.com is the public version, and company1.example.com hits the enterprise version. I've come up with a couple potential solutions, but I'm not happy with any of them. Separate Git repositories and entirely separate projects, with all common code duplicated. This much duplication of code is bound to be error prone where things will get out of sync and is expected to be ridden with copy-paste mistakes. This is a last-resort solution. Separate Git repositories, with common code shared via Git Submodules (a single common 'base' repository containing base models and shared views). I've read horror stories about git submodules, though, so I'm wary of this solution. Single Git repository containing multiple 'project' folders (public/enterprise) each with their own base urls.py, settings.py, wsgi.py, etc...) and multiple manage.py files to choose which "Project" to run. I'm afraid that this solution would become an utter mess because it wouldn't be possible to have the public and enterprise versions use different versions of the common library if one needs an update before the other. Separate Git repositories, with all shared code developed as 'Re-usable apps' and installed into the python path. This would be a somewhat clean solution, but would be difficult to work with any time changes needed to be made to the common modules. Single project where all features are managed via conditional logic in the views. This would be most prone to bugs and confusion of all, and I'd prefer to avoid this solution. Does anyone have any experience with this type of solution or could anyone help me find the best solution to this problem?
How can I have Enterprise and Public version of Django application sharing some code?
0.066568
0
0
98
34,784,149
2016-01-14T07:49:00.000
0
0
0
1
java,python,macos,sockets,udp
34,790,670
1
false
0
0
You can access the device via a UDP socket, provided you have the IP address of the device as well as the UDP port number. Both Java and Python have socket APIs so you can use either one. Just make sure you follow the network protocol defined by the device to be able to read to / write from the device properly.
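A minimal Python sketch of talking to such a device over UDP; the IP address, port and payload below are placeholders, and the real payload must follow the device's own protocol:

```python
import socket

DEVICE_IP = "192.168.1.50"   # hypothetical address of the smartplug
DEVICE_PORT = 5000           # hypothetical UDP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5.0)
sock.sendto(b"STATUS", (DEVICE_IP, DEVICE_PORT))  # payload must follow the device protocol
data, addr = sock.recvfrom(1024)                  # read the device's reply
print(data)
sock.close()
```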
1
0
0
I need to access a smartplug device using socket programming. I have the MAC address and UDP port number of the device. Other information like SSID, password, Apps Id, Dev Id, and Cmd ID is also present. Could you please let me know if this can be achieved using a Python or Java API? Is there a way in socket programming to access a device using its MAC address and get the information sent from a specific UDP port? Thanks in advance for your help.
Smartplug socket programming using Mac Address and UDP port
0
0
1
362
34,785,420
2016-01-14T09:06:00.000
0
0
0
0
python,flask
34,786,153
1
false
1
0
I have used Windows Task Scheduler to schedule a .bat file. The .bat file contained some short code to run the Python script. This way the script is not idling in the background when you are not using it. As for storing data in between runs, I would save it to a file.
1
0
0
I have a Flask web service application with some daily, weekly and monthly events. I want to store these events and calculate their start times; for example, for an order with a count of two and a weekly period, the first payment is today and the other one is next week. I want to store the repeated times and then, for each of them, send a notification at the start time periodically. What is the best solution?
Create scheduled job and run the periodically
0
0
0
72
34,785,863
2016-01-14T09:28:00.000
6
1
0
0
python,amazon-web-services,amazon-s3,aws-lambda
41,511,055
2
false
1
0
I am also facing the same issue: in my case, a Lambda should trigger on every PUT event in the S3 bucket, but it triggers twice with the same aws_request_id and aws_lambda_arn. To fix it, keep track of the aws_request_id (this id will be unique for each Lambda event) somewhere and add a check in the handler. If the same aws_request_id already exists, do nothing; otherwise process as usual.
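A minimal sketch of that check; note the in-memory set only survives while the same Lambda container is reused, so a real implementation would need shared storage (e.g. a small table) instead:

```python
_seen_request_ids = set()  # only persists for the lifetime of this Lambda container

def handler(event, context):
    request_id = context.aws_request_id
    if request_id in _seen_request_ids:
        return  # already handled this request id: skip the duplicate delivery
    _seen_request_ids.add(request_id)
    # ... process the S3 event and send the email here ...
```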
2
10
0
I'm using an AWS Lambda function (written in python) to send an email whenever an object is uploaded into a preset S3 bucket. The object is uploaded via the AWS PHP SDK into the S3 bucket and is using a multipart upload. Whenever I test out my code (within the Lambda code editor page) it seems to work fine and I only get a single email. But when the object is uploaded via the PHP SDK, the Lambda function runs twice and sends two emails, both with different message ID's. I've tried different email addresses but each address receives exactly two, duplicate emails. Can anyone guide me where could I be going wrong? I'm using the boto3 library that is imported with the sample python code to send the email.
AWS Lambda function firing twice
1
0
1
5,712
34,785,863
2016-01-14T09:28:00.000
13
1
0
0
python,amazon-web-services,amazon-s3,aws-lambda
34,795,499
2
true
1
0
Yes, we have this as well and it's not linked to the email, it's linked to S3 firing multiple events for a single upload. Like a lot of messaging systems, Amazon does not guarantee "once only delivery" of event notifications from S3, so your Lambda function will need to handle this itself. Not the greatest, but doable. Some form of cache with details of the previous few requests so you can see if you've already processed the particular event message or not.
2
10
0
I'm using an AWS Lambda function (written in python) to send an email whenever an object is uploaded into a preset S3 bucket. The object is uploaded via the AWS PHP SDK into the S3 bucket and is using a multipart upload. Whenever I test out my code (within the Lambda code editor page) it seems to work fine and I only get a single email. But when the object is uploaded via the PHP SDK, the Lambda function runs twice and sends two emails, both with different message ID's. I've tried different email addresses but each address receives exactly two, duplicate emails. Can anyone guide me where could I be going wrong? I'm using the boto3 library that is imported with the sample python code to send the email.
AWS Lambda function firing twice
1.2
0
1
5,712
34,786,665
2016-01-14T10:05:00.000
0
0
0
0
python,flask,flask-admin
34,786,896
1
false
1
0
You have to follow these steps. JavaScript: bind an onchange event to your Department select; when the selection changes you get the selected value, and you send it to the server through an AJAX request. Flask: implement a view that reads the value, loads the associated Subdepartments and sends them back to the page as a JSON response (a minimal sketch is shown below). JavaScript: in your AJAX request implement a success function; by default its first parameter is the data received from the server, so loop over it and append the options to the desired select.
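A minimal Flask sketch of the server-side step; the route name and the stand-in data are hypothetical:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the real sub_department table, keyed by department_id
SUBDEPARTMENTS = {1: ['Payroll', 'Recruiting'], 2: ['Servers', 'Networking']}

@app.route('/subdepartments')
def subdepartments():
    department_id = request.args.get('department_id', type=int)
    return jsonify(SUBDEPARTMENTS.get(department_id, []))
```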
1
0
0
I have a Flask-Admin application and I have a class with "Department" and "Subdepartment" fields. In the create form, I want that when a Department is selected, the Subdepartment select automatically loads all the corresponding subdepartments. In the database, I have a "department" table and a "sub_department" table that has a foreign key "department_id". Any clues on how I could achieve that? Thanks in advance.
Load a select list when selecting another select
0
1
0
44
34,787,213
2016-01-14T10:30:00.000
2
0
1
0
python,numpy
34,787,266
2
false
0
0
It has nothing to do with the array. 1. means 1.0. 1. is a float, 1 is an int.
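You can see the difference directly:

```python
import numpy as np

a = np.array([[1., 2], [3, 4], [5, 6]])  # one float literal promotes the whole array
b = np.array([[1, 2], [3, 4], [5, 6]])   # all ints -> integer array
print(a.dtype)  # float64
print(b.dtype)  # int64 (or int32, depending on the platform)
```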
1
1
1
What is the difference between numpy.array([[1., 2], [3, 4], [5, 6]]) and numpy.array([[1, 2], [3, 4], [5, 6]])? I came across code using the two different declarations but could not find out what the distinction means.
meaning of "." after first element in numpy array
0.197375
0
0
38
34,787,840
2016-01-14T10:57:00.000
0
0
1
0
python,graph,neo4j,package
34,790,833
1
false
0
0
First download the Python package and unpack it; you will see a file named 'setup.py'. Then open a terminal in this folder and execute the command 'python setup.py install'. Or you can simply use pip.
1
0
0
I need to install the package "graph" in Python 2.6.6 to use it with neo4j. Is there a command I can execute to do that?
How to install a Python package manually on CentOS / Red Hat?
0
0
0
71
34,787,957
2016-01-14T11:02:00.000
1
0
0
0
python,excel,csv
34,788,487
2
true
0
0
Using Python is recommended for the scenarios below. Repeated actions: performing a similar set of actions over a similar dataset repeatedly. For example, say you get monthly forecast data and you have to perform various slicing, dicing and plotting. Here the structure of the data and the steps of the analysis are more or less the same, but the data differs every month. Using Python and Pandas will save you a lot of time and also reduce manual errors. Exploratory analysis: once you establish a certain familiarity with Pandas, NumPy and Matplotlib, analysis using these Python libraries is faster and more efficient than Excel analysis. One simple use case to justify this statement is backtracking. With Pandas, you can quickly trace back and restore the dataset to its original form or an earlier analysed form. With Excel, you could get lost in a maze of analysis and be unable to backtrack to an earlier form beyond Ctrl+Z. Teaching tool: in my opinion, this is the most underutilized feature. An IPython notebook can be an excellent teaching tool and reference document for data analysis. Using it, you can efficiently transfer knowledge between colleagues rather than sharing a complicated Excel file.
2
1
1
I am currently working on large data sets in csv format. In some cases, it is faster to use excel functions to get the work done. However, I want to write python scripts to read/write csv and carry out the required function. In what cases would python scripts be better than using excel functions for data manipulation tasks? What would be the long term advantages?
CSV format data manipulation: why use python scripts instead of MS excel functions?
1.2
0
0
236
34,787,957
2016-01-14T11:02:00.000
0
0
0
0
python,excel,csv
34,788,093
2
false
0
0
After learning Python, you are more flexible. The operations you can do on the user interface of MS Excel are limited, whereas there are no limits if you use Python. Another benefit is that you automate the modifications, e.g. you can re-use them or re-apply them to a different dataset. The speed depends heavily on the algorithm and library you use and on the operation. You can also use VB script / macros in Excel to automate things, but usually Python is less cumbersome and more flexible.
2
1
1
I am currently working on large data sets in csv format. In some cases, it is faster to use excel functions to get the work done. However, I want to write python scripts to read/write csv and carry out the required function. In what cases would python scripts be better than using excel functions for data manipulation tasks? What would be the long term advantages?
CSV format data manipulation: why use python scripts instead of MS excel functions?
0
0
0
236
34,788,159
2016-01-14T11:13:00.000
5
0
0
0
revit-api,revit,revitpythonshell,revit-2015
34,803,157
2
false
1
0
I think the most preferred way is the UIDocument.RequestViewChange() method. The tricky part about this is that unless you've designed your application to be modeless with external events or idling, it may not actually happen until later when control returns back to Revit from your addin. (There's also setting the UIDocument.ActiveView property - not positive if this has different constraints). The other way that I have done it historically is through the use of the UIDocument.ShowElements() command. The trick here is that you don't have control of the exact view - but if you can figure out the elements that appear only in that view, you can generally make it happen (even if you have to do a separate query to get a bunch of elements that are only in the given floorplan view). Good Luck!
1
4
0
I am trying to activate a view using Revit API. What I want to do exactly is to prompt the user to select some walls, but when the user is asked that, he can't switch views to select more walls (everything is greyed out at that point). So the view I want to activate (by that I mean, I want this view to be actually shown on screen) already exist, and I can access its Id. I have seen threads about creating, browsing, filtering views, but nothing on activating it... It's a Floor Plan view. So far I can access its associated ViewPlan object, and associated parameters (name, Id, ..). Is it possible to do ? Thanks a lot ! Arnaud.
How can I activate (display) a view using Revit API?
0.462117
0
0
2,994
34,790,474
2016-01-14T13:08:00.000
0
0
0
0
javascript,python,redirect,http-referer
34,791,352
1
true
1
0
Thanks to NetHawk it can be done using history.replaceState() or history.pushState().
1
1
0
The flow is this: we get a request to www.oursite.com?secretinfo=banana, then we have people do some stuff on that page and we send them to another site. Is it possible to remove the part "secretinfo=banana" from the referer in the header info? We do this now by redirecting to another page without these parameters, which does another redirect by a meta-refresh to the other party. As you can imagine this is not very good for the user experience. Doing it directly would be great, but even doing it with a 302 or 303 redirect would be better; however, these don't change the referer. We are using Python 3 with Flask, or it can be done with JavaScript.
Is it possible to remove the get parameters from the referer in the header without meta-refresh?
1.2
0
1
141
34,790,796
2016-01-14T13:25:00.000
1
0
0
0
python,shell
34,791,633
4
false
0
0
For ease of use, try Selenium. Although it is slower compared to using headless browsers, the good thing is you don't need to use other libraries to enable Javascript since your script will simulate an actual human browsing a website. You can also check the behaviour of your script visually since it opens the website in your browser. You can easily find boilerplate codes and tutorials about it too :)
1
0
0
A friend of mine wants to get some data from certain webpages. He wants it in XML, because he will feed it to some mighty application. That's not a problem; any scripting language can do this. The problem is that the content is "hidden" and can only be seen when the user is logged in. Which means that, in whatever language I use, I have to find a way to simulate a web browser - to store cookies (session id) - because without that I won't be able to get data from restricted sections of the website. I don't want to have to write my own "web browser", but I am not certain whether I need one. Also, I think there must be a library for this. Any ideas? Yes, we asked them about APIs, data dumps, etc. They don't want to cooperate. Thanks for any tips.
"Datamining" from websites
0.049958
0
1
2,248
34,793,777
2016-01-14T15:47:00.000
0
0
1
0
python-3.x
34,801,497
1
false
0
1
Assuming your question is to open 2 programs at the same time, couldn't you just open 2 tabs and open the 2 programs side by side like that? Ex. on mac, double click open new window and open a new tab.
1
0
0
Hey, I'm new here, so if I've done something wrong I apologize. I have been searching for hours though, and I figured I would just open a new question. I want to have a Python program that will open another window so it can display different information in that window whilst still displaying its own information. For example: I have "hello.py" and it opens up another Python window, "goodbye.py". I don't want whatever is in "goodbye.py" to show up in "hello.py"; I want it to stay in its own window. Any ideas?
Python 3 opening more than one .py program instead of just calling it in the same window
0
0
0
17
34,793,824
2016-01-14T15:49:00.000
1
0
1
0
python,rpy2
34,796,466
1
true
0
0
Yes to the first question. No to the second. One instance of R per process.
1
2
0
If I start up multiple threads in a single python program that all use rpy2, will they be communicating with the same instance of R? What if I have multiple separate python processes running that do the same?
rpy2: How many instances of R are running?
1.2
0
0
65
34,795,776
2016-01-14T17:20:00.000
1
0
1
1
python,terminal,cron,tmux
34,796,268
4
false
0
0
Okay, I see what you're saying. I've done some similar stuff in the past. For cron to run your script at 3pm and append to a log file you can do it simply like this: 0 15 * * * command >> log # just logs stdout or 0 15 * * * command &>> log # logs both stdout and stderr If you want it in the terminal I can think of two possibilities: Like you said, you could do a while True loop that checks the time every n seconds and does something when it's 3pm (a sketch of this option follows below). Alternatively you could set up an API endpoint that's always on and trigger it with some other program at 3pm; this could be triggered by the cron, for example. Personally I also like the convenience of having a tmux or screen session to log in to and see what's been happening rather than just checking a log file. So I hope you figure out a workable solution for your use case!
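A minimal sketch of the first option (a loop that watches the clock):

```python
import datetime
import time

def daily_task():
    # Replace this with the real work; here it just logs to the terminal.
    print("running the 3pm job at", datetime.datetime.now())

already_ran_on = None
while True:
    now = datetime.datetime.now()
    if now.hour == 15 and already_ran_on != now.date():
        daily_task()
        already_ran_on = now.date()  # ensure the job fires only once per day
    time.sleep(30)                   # check the clock every 30 seconds
```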
1
3
0
I have a python script, or should I say a python service which needs to run every day at 3pm. Basically, there is a while True : time.sleep(1) at the end of the file. I absolutely need this script to execute in a terminal window (because I need the logs). Bonus if the solution makes it possible to run in a tmux window. I tried cron jobs but can't figure out how to put this in a terminal.
How to run a python script at a certain time in a tmux terminal?
0.049958
0
0
2,519
34,796,053
2016-01-14T17:33:00.000
1
0
0
0
python,http,command-line,wget,httrack
41,248,744
3
false
1
0
This is an old post so you might have figured it out by now. I just came across your post looking for another answer about using Python and HTTrack. I was having the same issue you were having and I passed the argument -r2 and it downloaded the images. My arguments basically look like this: cmd = [httrack, myURL,'-%v','-r2','-F',"Mozilla/5.0 (Windows NT 6.1; Win64; x64)",'-O',saveLocation]
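Expanding that snippet into a runnable call (the httrack path, URL and save location below are hypothetical placeholders, not values from the original post):

```python
import subprocess

httrack = r"C:\Program Files\WinHTTrack\httrack.exe"  # hypothetical install path
my_url = "http://example.com/page.html"
save_location = r"C:\mirror"

cmd = [httrack, my_url, '-%v', '-r2', '-F',
       'Mozilla/5.0 (Windows NT 6.1; Win64; x64)', '-O', save_location]
subprocess.call(cmd)  # runs HTTrack and waits for it to finish
```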
2
2
0
I've been attempting to use HTTrack to mirror a single page (downloading html + prerequisites: style sheets, images, etc), similar to the question [mirror single page with httrack][1]. However, the accepted answer there doesn't work for me, as I'm using Windows (where wget "exists" is but actually a wrapper for Invoke-WebRequest and doesn't function at all the same way). HTTrack really wants to either (a) download the entire website I point it at, or (b) only download the page I point it to, leaving all images still living on the web. Is there a way to make HTTrack download only enough to view a single page properly offline - the equivalent of wget -p?
Using HTTrack to mirror a single page
0.066568
0
1
1,960
34,796,053
2016-01-14T17:33:00.000
-1
0
0
0
python,http,command-line,wget,httrack
59,127,033
3
false
1
0
Saving the page with your browser should download the page and all its prerequisites.
2
2
0
I've been attempting to use HTTrack to mirror a single page (downloading html + prerequisites: style sheets, images, etc), similar to the question [mirror single page with httrack][1]. However, the accepted answer there doesn't work for me, as I'm using Windows (where wget "exists" is but actually a wrapper for Invoke-WebRequest and doesn't function at all the same way). HTTrack really wants to either (a) download the entire website I point it at, or (b) only download the page I point it to, leaving all images still living on the web. Is there a way to make HTTrack download only enough to view a single page properly offline - the equivalent of wget -p?
Using HTTrack to mirror a single page
-0.066568
0
1
1,960
34,796,147
2016-01-14T17:37:00.000
0
0
0
0
python,arrays,numpy
34,796,644
3
false
0
0
Combine your arrays into one, then take the min/max of the stacked array: A = np.array([a1, a2, ..., an]); A.min() and A.max() give the single smallest/largest value overall, while A.min(axis=0) / A.max(axis=0) give the element-wise result along the new axis.
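For example:

```python
import numpy as np

a1 = np.array([3, 7, 1])
a2 = np.array([5, 2, 9])
a3 = np.array([4, 8, 6])

A = np.array([a1, a2, a3])
print(A.max(), A.min())   # 9 1  -> single largest/smallest value over all arrays
print(A.max(axis=0))      # [5 8 9] -> element-wise maximum across the arrays
```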
1
0
1
I'm trying to get the largest/smallest number out of two or more numpy arrays of equal length. Since the max()/min() functions don't work on multiple arrays, this is some of the best (worst) I've come up with: max(max(a1), max(a2), max(a3), ...) / min(min(a1), min(a2), min(a3), ...) Alternatively one can use numpy's maximum, but that only works for two arrays at a time. Thanks in advance
How do you find the largest/smallest number amongst several array's?
0
0
0
143
34,800,524
2016-01-14T21:55:00.000
1
0
1
0
python,python-3.x,compilation,pygame,py2exe
34,801,623
2
false
0
1
I had the same problem using cx_Freeze, so hopefully this will work for you as well. Open up your pygame package folder; it should be C:\Python34\Lib\site-packages\pygame. There should be a TrueType font file named freesansbold.ttf. Copy that file, then open the folder containing your exe program. There should be a zipped file called library. Open it up and go to the pygame folder inside the zipped file; it should look something like \build\exe.win32-3.4\library.zip\pygame. Just paste the freesansbold.ttf file in that folder and it should work perfectly.
2
0
0
I made a program using the pygame module on Python 3 and it works fine within python, but when I try to compile is using py2exe it won't run. (I just get the programName.exe has stopped working error upon trying to run it). I managed to narrow down this problem to the pygame.font module as when I comment all the lines that use that module everything works fine. I tried to forcefully include the module using the -i flag in py2exe, but it doesn't appear to change anything... What am I doing terribly wrong? Edit: I managed to get the reason of the program not working - it crashes as it can not find build\executable.exe\pygame\freesansbold.ttf . What I don't understand is why the hell is the pygame folder supposed to be located in a folder with the name of my executable? (Of course, I can not create a folder with the same name as an existing file in the directory). If anyone has a clue to how to fix it, please help!!
Py2Exe won't successfully compile the pygame.font module on Python 3
0.099668
0
0
319
34,800,524
2016-01-14T21:55:00.000
0
0
1
0
python,python-3.x,compilation,pygame,py2exe
34,867,493
2
false
0
1
I managed to find a way! By including the -l library.zip argument in the build_exe command and then following the instructions given by DeliriousSyntax in the answer above, I got it to work!
2
0
0
I made a program using the pygame module on Python 3 and it works fine within python, but when I try to compile is using py2exe it won't run. (I just get the programName.exe has stopped working error upon trying to run it). I managed to narrow down this problem to the pygame.font module as when I comment all the lines that use that module everything works fine. I tried to forcefully include the module using the -i flag in py2exe, but it doesn't appear to change anything... What am I doing terribly wrong? Edit: I managed to get the reason of the program not working - it crashes as it can not find build\executable.exe\pygame\freesansbold.ttf . What I don't understand is why the hell is the pygame folder supposed to be located in a folder with the name of my executable? (Of course, I can not create a folder with the same name as an existing file in the directory). If anyone has a clue to how to fix it, please help!!
Py2Exe won't successfully compile the pygame.font module on Python 3
0
0
0
319
34,801,254
2016-01-14T22:49:00.000
0
0
0
0
python-3.x,opengl,pyqtgraph
34,801,690
4
false
0
1
A cursory inspection of the docs renders no help; however, if you decrease the field of view (FOV) while also increasing the distance, you will approximate an orthographic projection to arbitrary precision as you vary those two parameters.
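A rough sketch of that idea with pyqtgraph's GL view; treat the exact attribute names (opts['fov'], setCameraPosition) as an assumption to verify against your pyqtgraph version rather than a confirmed recipe:

```python
import pyqtgraph.opengl as gl
from pyqtgraph.Qt import QtGui

app = QtGui.QApplication([])
view = gl.GLViewWidget()
view.opts['fov'] = 1                    # very narrow field of view...
view.setCameraPosition(distance=2000)   # ...compensated by a large distance -> near-orthographic
view.addItem(gl.GLGridItem())           # something to look at
view.show()
app.exec_()
```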
2
2
0
I'm using PyQtGraph to plot mesh surfaces. I would like to see the 3D world with perspective turned off. Is this possible in pyQtGraph? I have searched through the documentation and the google groups and can't find any reference to this. I think it is possible in principle with openGL so is there a way to bring this out and control perspective on/off in pyQtGraph?
PyQtGraph - Turn off perspective in 3d view
0
0
0
1,275
34,801,254
2016-01-14T22:49:00.000
0
0
0
0
python-3.x,opengl,pyqtgraph
37,876,896
4
false
0
1
I would recommend trying out the newer vispy; it is much more flexible with regards to the types of cameras and mouse interactions supported. In particular, orthographic projection happens to be the default for the 'arcball' camera type, and probably the others too; it's set by setting camera.fov to be 0. As a bonus, ergonomics with ipython are also much improved, i.e. your ipython shell stays responsive at the same time the scene is active, and you can kill the scene without killing your ipython instance, and launch another.
2
2
0
I'm using PyQtGraph to plot mesh surfaces. I would like to see the 3D world with perspective turned off. Is this possible in pyQtGraph? I have searched through the documentation and the google groups and can't find any reference to this. I think it is possible in principle with openGL so is there a way to bring this out and control perspective on/off in pyQtGraph?
PyQtGraph - Turn off perspective in 3d view
0
0
0
1,275
34,801,338
2016-01-14T22:55:00.000
2
0
1
0
python,python-2.7,machine-learning,xgboost
34,802,205
1
false
0
0
It is highly implementation specific, but in general randomized algorithms run in parallel may behave differently when working with a different number of cores (unless one forces synchronization of the random number generators, which would slow down the process). So it is something that one should expect - the same applies to Random Forest models etc.
1
3
0
I've installed the same exact version of XGBoost (0.4) on two machines. The only difference between the two machines is the RAM and number of cores (8 vs 16). Using the exact same data, I cannot reproduce the same results. They are slightly different (fourth, fifth decimal). The seed is left to the default.
Does XGBoost produce the same results if I use different number of cores?
0.379949
0
0
758
34,801,342
2016-01-14T22:56:00.000
0
0
0
0
python,tensorflow
53,392,066
8
false
0
0
For rotating an image or a batch of images counter-clockwise by multiples of 90 degrees, you can use tf.image.rot90(image,k=1,name=None). k denotes the number of 90 degrees rotations you want to make. In case of a single image, image is a 3-D Tensor of shape [height, width, channels] and in case of a batch of images, image is a 4-D Tensor of shape [batch, height, width, channels]
1
15
1
In TensorFlow, I would like to rotate an image by a random angle, for data augmentation. But I can't find this transformation in the tf.image module.
tensorflow: how to rotate an image for data augmentation?
0
0
0
28,208
34,803,369
2016-01-15T02:20:00.000
2
0
0
0
python,qt,pyqt
34,803,713
1
true
0
1
How about QtGui.QWidget.setVisible(visible)? PySide.QtGui.QWidget.setVisible(visible) Parameters: visible – PySide.QtCore.bool This property holds whether the widget is visible. Calling setVisible(true) or PySide.QtGui.QWidget.show() sets the widget to visible status if all its parent widgets up to the window are visible. If an ancestor is not visible, the widget won’t become visible until all its ancestors are shown. If its size or position has changed, Qt guarantees that a widget gets move and resize events just before it is shown. If the widget has not been resized yet, Qt will adjust the widget’s size to a useful default using PySide.QtGui.QWidget.adjustSize(). Calling setVisible(false) or PySide.QtGui.QWidget.hide() hides a widget explicitly. An explicitly hidden widget will never become visible, even if all its ancestors become visible, unless you show it. A widget receives show and hide events when its visibility status changes. Between a hide and a show event, there is no need to waste CPU cycles preparing or displaying information to the user. A video application, for example, might simply stop generating new frames. A widget that happens to be obscured by other windows on the screen is considered to be visible. The same applies to iconified windows and windows that exist on another virtual desktop (on platforms that support this concept). A widget receives spontaneous show and hide events when its mapping status is changed by the window system, e.g. a spontaneous hide event when the user minimizes the window, and a spontaneous show event when the window is restored again. You almost never have to reimplement the PySide.QtGui.QWidget.setVisible() function. If you need to change some settings before a widget is shown, use PySide.QtGui.QWidget.showEvent() instead. If you need to do some delayed initialization use the Polish event delivered to the PySide.QtGui.QWidget.event() function.
1
0
0
In a PyQt application is it possible to kill only the GUI (Qt) part? Any Python commands running should be unaffected, only the graphics should disappear.
Kill only Qt in PyQt
1.2
0
0
237
34,804,604
2016-01-15T04:52:00.000
0
0
0
0
python-2.7,openerp,odoo-8,odoo-9
34,806,653
1
false
1
0
You can easily achieve this by creating a field on the 'hr.holidays.status' model indicating whether the leave type is visible only to the manager or not. Then override the onchange of holiday_status_id and return a domain based on the logged-in user, checking whether they are a manager or not.
1
0
0
The user (employee) should see only 2 leave types, while the admin (HR officer) should see all leave types.
How to filter Odoo Leave Types (user should only see 2 leave types, admin needs to see all)?
0
0
0
218
34,804,656
2016-01-15T04:57:00.000
0
0
1
0
python,django,python-3.x,installation,installation-path
34,805,744
1
false
1
0
For you and possible future users asking a similar question: only the pip command runs the 2.7 Python interpreter. You are using the 3.4 version, so instead of pip you have to use the pip3.4 command. Why? Python 2.7 is not compatible with the 3.x versions. In your case Django is installed only for the 2.7 version, so if you run the python3.4 command, Django is not installed ("no module named django").
1
0
0
I tried to install django after python installation (3.4.0 version), the problem began when i tried to run the simple command: "pip install django" via the cmd - it did nothing (descending line and writes nothing). I forced it to apply the installation using the command: "python -m pip install django". Although it was declared that the installation was successful, when I run, for example, the command: "django-admin --version" it did nothing as well, but when i run the command: "python -m django-admin --version", it says that: "python.exe: no module named django-admin". In general, each command associated to pip or to django does not work, such as: pip help, pip X ot django X Ps. I added the paths in 'Path' of the User Varuables and System Variables: C:\Python34; C:\Python34\Scripts
Unclear issue after installing django
0
0
0
100
34,806,022
2016-01-15T07:16:00.000
0
1
0
0
python,openerp,odoo-8
34,806,721
1
false
1
0
Go to Settings -> Technical -> Email -> Outgoing Mail Servers and set the SMTP server, SMTP port and other credentials, e.g. SMTP Server: smtp.gmail.com, SMTP port: 587, connection security: TLS (STARTTLS). Once done, verify that the connection is set up properly by clicking the Test Connection button. You can then send mail by calling send_mail().
1
0
0
I'm a new Odoo developer and I need to send an automatic email when I confirm a form request, and I want to be able to manually input the sender and receiver of the email. Does anyone have a sample or tutorial, or can anyone help me? I don't know the steps or how to configure the mail server because I use localhost. Thank you.
How i can send automatic email when i confirm a form request on Odoo 8?
0
0
0
720
34,807,044
2016-01-15T08:34:00.000
8
0
0
0
python,apache-spark,pyspark
34,807,217
1
true
0
0
If you use IPython terminal you can paste using %paste / %cpaste magic. The first one executes code from the clipboard automatically, the second one is closer to Scala REPL :paste and requires termination by -- or Ctrl-D. It is also possible to use %edit magic which opens external editor and executes code on exit. Standard Python shell doesn't provide similar functionality.
1
4
0
How to enter into paste mode in "pyspark" Spark Shell? Actually I am looking for an equivalent ":paste" command (works in Scala Spark Shell) in "pyspark". By using this mode, I would like to paste entire code snippet in shell rather than executing line by line.
How to use paste mode in pyspark shell?
1.2
0
0
5,126
34,808,553
2016-01-15T10:01:00.000
1
0
0
1
python,google-app-engine,social-networking
34,808,818
1
false
1
0
I'm guessing you have two entities in your model: User and Content. Your queries seem to aggregate over multiple Content objects. What about keeping these aggregated values on the User object? This way, you don't need to do any queries, but rather only look up the data stored in the User object. At some point, though, you might consider not using the datastore, but looking at SQL storage instead. It has a higher constant cost, but I'm guessing at some point (more content/users) it might be worth considering both in terms of cost and performance.
1
0
0
I have a Python server running on Google app engine and implements a social network. I am trying to find the best way (best=fast and cheap) to implement interactions on items. Just like any other social network I have the stream items ("Content") and users can "like" these items. As for queries, I want to be able to: Get the list of users who liked the content Get a total count of the likers. Get an intersection of the likers with any other users list. My Current implementation includes: 1. IntegerProperty on the content item which holds the total likers count 2. InteractionModel - a NdbModel with a key id qual to the content id (fast fetch) and a JsonPropery the holds the likers usernames Each time a user likes a content I need to update the counter and the list of users. This requires me to run and pay for 4 datastore operations (2 reads, 2 writes). On top of that, items with lots of likers results in an InteractionModel with a huge json that takes time to serialize and deserialize when reading/writing (Still faster then RepeatedProperty). None of the updated fields are indexed (built-in index) nor included in combined index (index.yaml) Looking for a more efficient and cost effective way to implement the same requirements.
App Engine social platform - Content interactions modeling strategy
0.197375
0
0
19
34,813,877
2016-01-15T14:58:00.000
0
0
1
0
python,kivy
35,005,349
1
false
0
1
Why are you trying to use an old version of Kivy? Use the latest version, 1.9.1, and follow the documentation. Ensure you have the latest pip, wheel and setuptools: python -m pip install --upgrade pip wheel setuptools. Install the dependencies (skip gstreamer (~90MB) if not needed, see Kivy’s dependencies): python -m pip install docutils pygments pypiwin32 kivy.deps.sdl2 kivy.deps.glew kivy.deps.gstreamer --extra-index-url https://kivy.org/downloads/packages/simple/. Install Kivy: python -m pip install kivy. That’s it. You should now be able to import kivy in Python.
1
0
0
I've installed Kivy some time ago, and since then I've tried many ways to run Kivy, but till today I'm not successful. I am able to "import kivy" in python but while importing app module by "from kivy.app import app" It shows error 'no module named app', while there is a folder in Kivy named "app". The location of this folder is "D:\utils\Kivy\Kivy-1.9.0-py2.7-win32-x64\kivy27\kivy" Here's my environment path which I have setup for Kivy: KIVY_DATA_DIR : D:\utils\Kivy\Kivy-1.9.0-py2.7-win32-x64\kivy27\kivy\data KIVY_EXTS_DIR : D:\utils\Kivy\Kivy-1.9.0-py2.7-win32-x64\kivy27\kivy\tools\extensions KIVY_HOME : D:.kivy; D:\utils\Kivy\Kivy-1.9.0-py2.7-win32-x64\kivy27\kivy KIVY_MODULES_DIR : D:\utils\Kivy\Kivy-1.9.0-py2.7-win32-x64\kivy27\kivy; D:\utils\Kivy\Kivy-1.9.0-py2.7-win32-x64\kivy27\kivy\modules Can anyone please help me....
I'm getting error while importing app from kivy
0
0
0
983
34,814,587
2016-01-15T15:34:00.000
0
0
1
0
python-2.7,request,python-requests,importerror,pyinstaller
34,814,921
1
false
0
0
I am an idiot. I was piping the wrong python install. I use a separate version for other projects, and I sandbox my kivy one to prevent conflicts... too bad I forgot about that :)
1
0
0
I have encountered this issue and tried all previous solutions to no avail. Have tried rolling back requests to older versions, have tried updating pyinstaller. Please guys, if you know a configuration that works, let me know. I am compiling some python 2.7 code that uses Kivy
pyinstaller issue with requests 'ImportError: No module named 'requests.packages.chardet.sys'
0
0
0
487
34,814,867
2016-01-15T15:50:00.000
2
0
1
0
python,r,pycharm
34,817,688
2
true
0
0
In the PyCharm editor go to Settings > Keymap > Other and change the key map for "Execute selection in console": double-click it and select "Add keyboard shortcut". I think the default is set to Alt+Shift+E. I was also from an R background before PyCharm and was used to the Ctrl+R shortcut to run selected code. I think Ctrl+R might already be taken in PyCharm, because I decided a long while back to map mine to Alt+R. Once this is done, you can highlight a section and use your new shortcut to run it in a console. You can also just have the cursor on a line, and using the shortcut will run the line and move to the next.
1
3
0
In R 3 * 2 typed on the editor can be executed in the console as [1] 6 by having the cursor on the line where the code is typed; clicking on Run if using RStudio, or through Ctrl + Enter. Very convenient. New to Python, I am coming to realize the if I want to see 6, I may need to type print(3 * 2), unless I type the expression directly on the Python console. Or, is there a shortcut? Incidentally, I am using Pycharm as IDE.
Is there a Python shortcut to circumvent need for `print()`
1.2
0
0
2,187
34,814,891
2016-01-15T15:51:00.000
5
0
0
0
python-2.7,machine-learning,svm,logistic-regression
34,815,680
1
true
0
0
Faster is a bit of a weird question, in part because it is hard to compare apples to apples on this, and it depends on context. LR and SVM are very similar in the linear case. The TLDR for the linear case is that Logistic Regression and SVMs are both very fast and the speed difference shouldn't normally be too large, and both could be faster/slower in certain cases. From a mathematical perspective, Logistic Regression is strictly convex [its loss is also smoother] whereas SVMs are only convex, so that helps LR be "faster" from an optimization perspective, but that doesn't always translate to faster in terms of how long you wait. Part of this is because, computationally, SVMs are simpler. Logistic Regression requires computing the exp function, which is a good bit more expensive than just the max function used in SVMs, but computing these doesn't make up the majority of the work in most cases. SVMs also have hard zeros in the dual space, so a common optimization is to perform "shrinkage", where you assume (often correctly) that a data point's contribution to the solution won't change in the near future and stop visiting it / checking its optimality. The hard zero of the SVM loss and the C regularization term in the soft margin form allow for this, where LR has no hard zeros to exploit like that. However, when you want something to be fast, you usually don't use an exact solver. In this case, the issues above mostly disappear, and both tend to learn just as quickly as the other in this scenario. In my own experience, I've found Dual Coordinate Descent based solvers to be the fastest for getting exact solutions to both, with Logistic Regression usually being faster in wall clock time than SVMs, but not always (and never by more than a 2x factor). However, if you try and compare different solver methods for LRs and SVMs you may get very different numbers on which is "faster", and those comparisons won't necessarily be fair. For example, the SMO solver for SVMs can be used in the linear case, but will be orders of magnitude slower because it is not exploiting the fact that you only care about linear solutions.
1
4
1
I am doing machine learning with python (scikit-learn) using the same data but with different classifiers. When I use 500k of data, LR and SVM (linear kernel) take about the same time, SVM (with polynomial kernel) takes forever. But using 5 million data, it seems LR is faster than SVM (linear) by a lot, I wonder if this is what people normally find?
Which one is faster? Logistic regression or SVM with linear kernel?
1.2
0
0
4,058
34,816,964
2016-01-15T17:54:00.000
0
1
0
1
python,c++,libsass
39,832,334
1
true
0
0
I did come up with a solution. I created my own packages to install gcc-4.8.2. It was a lot of work and I am not sure if it breaks a bunch of other dependencies down the line. But it worked for the server stack that I needed at the time. I had to create all of the the following packages to get it to work. cpp-4.8.2-8.el6.x86_64.rpm gcc-4.8.2-8.el6.x86_64.rpm gcc-c++-4.8.2-8.el6.x86_64.rpm gcc-gfortran-4.8.2-8.el6.x86_64.rpm libgcc-4.8.2-8.el6.x86_64.rpm libgfortran-4.8.2-8.el6.x86_64.rpm libgomp-4.8.2-8.el6.x86_64.rpm libquadmath-4.8.2-8.el6.x86_64.rpm libquadmath-devel-4.8.2-8.el6.x86_64.rpm libstdc++-4.8.2-8.el6.x86_64.rpm libstdc++-devel-4.8.2-8.el6.x86_64.rpm So again it was a lot of work, but it did work. But after figuring this out a few months later I was able to just upgrade to Centos 7.
1
1
0
Not sure if this is possible but with libsass requiring gcc-c++ >= 4.7 and Centos 6 not having it, I was curious if libsass-python could use the system's libsass instead of compiling it if it exists. I have been able to build a libsass rpm for Centos 6 but python-libsass still tries to compile it itself. I know that I can use devtoolset-1.1 to install python-libsass (that is how I managed to build the libsass rpm) but I am trying to do all of this with puppet. So I thought if the system had libsass then python-libsass wouldn't have to install it. I considered adding an issue in the python-libsass git project but thought I should ask here first.
Get libsass-python to use system libsass library instead of compiling it
1.2
0
0
322
34,817,150
2016-01-15T18:07:00.000
3
1
0
0
python,rabbitmq,rmq
41,491,616
3
false
1
0
If you are using the default exchange for direct routing (exchange = ''), then you don't have to declare any bindings. By default, all queues are bound to the default exchange. As long as the routing key exactly matches a queue name (and the queue exists), the default exchange will route the message to that queue.
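A minimal pika sketch of publishing through the default exchange, with a hypothetical queue name:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.queue_declare(queue='task_queue')        # implicitly bound to the default exchange
channel.basic_publish(exchange='',               # default exchange
                      routing_key='task_queue',  # routing key must equal the queue name
                      body='hello')
connection.close()
```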
3
1
0
I have a rabbit mq server running, with one direct exchange which all my messages go through. The messages are routed to individual non-permanent queues (they may last a couple hours). I just started reading about queue bindings to exchanges and am a bit confused as to if I actually need to bind my queues to the exchange or not. I'm using pika basic_publish and consume functions so maybe this is implied? Not really sure just wanna understand a bit more. Thanks
Do I need rabbitmq bindings for direct exchange?
0.197375
0
0
943
34,817,150
2016-01-15T18:07:00.000
1
1
0
0
python,rabbitmq,rmq
34,817,271
3
true
1
0
Always. In fact, even though queues are strictly a consumer-side entity, they should be declared & bound to the direct exchange by the producer(s) at the time they create the exchange.
3
1
0
I have a rabbit mq server running, with one direct exchange which all my messages go through. The messages are routed to individual non-permanent queues (they may last a couple hours). I just started reading about queue bindings to exchanges and am a bit confused as to if I actually need to bind my queues to the exchange or not. I'm using pika basic_publish and consume functions so maybe this is implied? Not really sure just wanna understand a bit more. Thanks
Do I need rabbitmq bindings for direct exchange?
1.2
0
0
943
34,817,150
2016-01-15T18:07:00.000
1
1
0
0
python,rabbitmq,rmq
34,846,505
3
false
1
0
You have to bind a queue to an exchange with some binding key, or else messages will be discarded. This is how any AMQP broker works: the publisher publishes a message to an exchange with some key, and the AMQP broker (RabbitMQ) routes this message from the exchange to those queue(s) which are bound to the exchange with the given key. However, it's not mandatory to declare and bind a queue in the publisher. You can do that in the subscriber, but make sure you run your subscriber before starting your publisher. If you think your messages are getting routed to a queue without bindings, then you are missing something.
3
1
0
I have a rabbit mq server running, with one direct exchange which all my messages go through. The messages are routed to individual non-permanent queues (they may last a couple hours). I just started reading about queue bindings to exchanges and am a bit confused as to if I actually need to bind my queues to the exchange or not. I'm using pika basic_publish and consume functions so maybe this is implied? Not really sure just wanna understand a bit more. Thanks
Do I need rabbitmq bindings for direct exchange?
0.066568
0
0
943
34,819,948
2016-01-15T21:15:00.000
0
0
1
0
python,ipython,importerror,ipdb
34,821,265
3
false
0
0
I've had problems getting pip installs to work properly for me. Usually I just end up dropping the file/folder with the rest of the libraries. You can just drop it here: C:\Python27\Lib\site-packages and then just import it in your python script and should be good to go.
2
3
0
I've run pip install ipdb but when I run import ipdb in iPython I still get the error: ImportError: No module named 'ipdb' What does this mean? Similarly, when I'm importing files (with .py extension) in iPython, I'm also getting this error (ImportError: No module named Chapter_1_Python_Syntax) though I've checked the path to the directory and it's correct.
Trouble Importing in Ipython: ImportError: No module named 'ipdb'
0
0
0
2,797
34,819,948
2016-01-15T21:15:00.000
2
0
1
0
python,ipython,importerror,ipdb
34,820,986
3
false
0
0
When I get this error after using 'pip install', closing and restarting the terminal usually solves the problem.
2
3
0
I've run pip install ipdb but when I run import ipdb in iPython I still get the error: ImportError: No module named 'ipdb' What does this mean? Similarly, when I'm importing files (with .py extension) in iPython, I'm also getting this error (ImportError: No module named Chapter_1_Python_Syntax) though I've checked the path to the directory and it's correct.
Trouble Importing in Ipython: ImportError: No module named 'ipdb'
0.132549
0
0
2,797
34,820,966
2016-01-15T22:31:00.000
1
0
0
1
python,google-app-engine,google-cloud-datastore,app-engine-ndb
34,824,272
1
false
1
0
Create an email entity and use the email address as the entity's key. This will immediately prevent duplicates. Fetching all of the email addresses can be very efficient, as you only need to query by kind with a keys-only query, and you can use map_async to process the emails. In addition, you could use these entities to store the progress of each email and maybe provide an audit trail. To increase speed at the time of emailing, you could periodically build cached lists of the emails, either in the datastore or stored in blob storage.
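A minimal ndb sketch of that idea; the Subscriber model name is hypothetical:

```python
from google.appengine.ext import ndb

class Subscriber(ndb.Model):
    """Empty model: the email address itself is the key, so duplicates collapse."""
    pass

def add_email(address):
    # Writing the same address twice just overwrites the same entity.
    Subscriber(id=address.lower()).put()

def all_emails():
    # Keys-only query: cheap way to pull every stored address back out.
    return [key.id() for key in Subscriber.query().fetch(keys_only=True)]
```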
1
0
0
I have a landing page set up and have a html text box (with error checking for valid emails) put together with a submit button. I am currently using NDB to store different entities. What I'm looking for is the best way to store just the email that a person enters. So likely hundreds or thousands of emails will be entered, there shouldn't be duplicates, and eventually we will want to use all of those emails to send a large news update to everyone who entered in their emails. What is the best way to store this email data with these contraints: Fast duplicate checking Quick callback for sending emails en masse
Best way to store emails from a landing page on google app engine?
0.197375
0
0
97
34,821,065
2016-01-15T22:39:00.000
2
0
1
0
python,list-comprehension
34,821,190
2
true
0
0
No, this is impossible with just the length of the inputs. You can use math to determine the length by computing common prime factors, but the work involved would not improve upon just computing the results and taking the len of that, and it requires knowledge of the set contents, not just their length. After all, with just the length, {2, 3} multiplied with {2, 3} (producing {4, 6, 9}) couldn't be distinguished from {2, 3} multiplied with {10, 11}, which would produce entirely unique outputs (four total). Makes for a simple proof by contradiction; knowing the input lengths alone is insufficient to determine the length of the output, no single operation on (2, 2) can possibly produce both 3 and 4 without additional inputs.
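The counterexample can be checked directly:

```python
# Same input lengths, different output lengths:
print(len({x * y for x in {2, 3} for y in {2, 3}}))    # 3 -> {4, 6, 9}
print(len({x * y for x in {2, 3} for y in {10, 11}}))  # 4 -> {20, 22, 30, 33}
```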
1
1
1
New at Python, so please... Just came across comprehensions and I understand that they are soon going to possibly ramify into perhaps dot products or matrix multiplications (although the fact that the result is a set makes them more interesting), but I at this point I want to ask whether there is any formula to determine the length of a comprehension such as: {x * y for x in {3, 4, 5} for y in {4, 5, 6}}. Evidently I don't mean for this particular one: len({x * y for x in {3, 4, 5} for y in {4, 5, 6}}) = 8, but of any general operation of this type with an element-wise multiplication of two sets, and taking as the result the set of the resultant integers (no repetitions), for any given length of x and y, consecutive integers, and known x[1] and y[1]. I understand that this question is at the crossroads of coding and math, but I am asking it here on the off chance that it happened to be a somewhat common, or well-known computational issue, since I have read that comprehensions are very widely used. It is only in this sense that I am interested in the question. Base on the comments so far, my sense is that this is not the case. EDIT: For instance, here is a pattern: If x = {1, 2, 3} the len(x * y) comprehensions is equal to 9 provided y[1] = or > 3. For example, len({x * y for x in {1, 2, 3} for y in {1111, 1112, 1113}}) = 9. So tentatively, length = length(x) * length(y), provided there is no overlap in the elements of x and y. Does it work with 4-element sets? Sure: len({x * y for x in {1, 2, 3, 4} for y in {1111, 1112, 1113, 1114}}) = 16. In fact, the integers don't need to be consecutive, just not overlap: len({x*y for x in {11,2,39} for y in {3,4,5}}) = 9. And, yes, it doesn't work... Check this out: {x * y for x in {0, 1, 3} for y in {36, 12, 4}} = {0, 4, 12, 36, 108}
Length of comprehensions in Python
1.2
0
0
95
34,822,500
2016-01-16T01:20:00.000
1
0
0
0
python,class,django-models,field
34,822,770
1
true
1
0
I think what you want to do is change the view that the user sees. What you have above is the underlying DB model which is the wrong place for this sort of feature. In addition (assuming this is a web application), you will probably need to do it in Javascript, so you can change the set of allowed names as soon as the user changes the nationality field.
1
0
0
I have a field in a class that depends on another field in the same class but I have a problem to code: class myclass(models.Model): nation = [('sp', 'spain'), ('fr', 'france')] nationality = models.CharField(max_length=2, choices=nation) first_name = models.CharField(max_length=2, choices=name) I want to put name = [('ro', 'rodrigo'), ('ra', 'raquel')] if nation = spain and name = [('lu', 'luis'), ('ch', 'chantal')] if nation = france. How I can do that? Thanks!
Using a class field in another fields
1.2
0
0
51
34,824,495
2016-01-16T07:09:00.000
0
0
0
0
python-2.7,database-design,sqlite
34,827,171
1
false
0
0
Thanks to CL.'s comment, I figured out that the best way is to think in terms of rows in a two-column table, where the first column is id INT and the second column contains person_names. This way, there will be no issue with varying lengths of the PERSONS list. Of course, to link the main table with the persons table, the id field has to REFERENCE (foreign key) the story_id of the main table.
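A small sqlite3 sketch of that layout; the table and column names are illustrative, not taken from the original schema:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE stories (story_id INTEGER PRIMARY KEY, title TEXT)")
cur.execute("CREATE TABLE persons (story_id INTEGER REFERENCES stories(story_id), name TEXT)")

cur.execute("INSERT INTO stories VALUES (1, 'some article')")
persons = ['Alice', 'Bob', 'Carol']  # the list length can vary per story, even zero
cur.executemany("INSERT INTO persons VALUES (1, ?)", [(p,) for p in persons])
conn.commit()

print(cur.execute("SELECT name FROM persons WHERE story_id = 1").fetchall())
```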
1
0
0
Coming off a NLTK NER problem, I have PERSONS and ORGANIZATIONS, which I need to store in a sqlite3 db. The obtained wisdom is that I need to create separate TABLEs to hold these sets. How can i create a TABLE when len(PERSONs) could vary for each id. It can even be zero. The normal use of: insert into table_name values (?),(t[0]) will return a fail.
insert data in sqlite3 when array could be of different lengths
0
1
0
129
34,825,214
2016-01-16T08:56:00.000
1
0
0
1
python,freebsd,ports,unison
36,164,028
1
false
0
0
I think the message is pretty clear: unison-fsmonitor can't be run on freebsd10 because it's not supported, so you can't use Unison with the -repeat option. Since it's just written in Python, though, I don't see why it shouldn't be supported. Maybe message the developer.
1
1
0
After installing unison from /usr/ports/net/unison with X11 disabled via make config, running the command unison -repeat watch /dir/mirror/1 /dir/mirror/2 Yields the message: Fatal error: No file monitoring helper program found From here I decided to try using pkg to install unison-nox11 and this yields the same error message. I've also tried copying the fsmonitor.py file from unison-2.48.3.tar.gz to /usr/bin/unison-fsmonitor and I got the following error: Fatal error: Unexpected response 'Usage: unison-fsmonitor [options] root [path] [path]...' from the filesystem watcher (expected VERSION) Running the command unison-fsmonitor version shows the message unsupported platform freebsd10 Anyone have any ideas on how to fix this?
Using Unison "-repeat watch" in FreeBSD (10.2) after installing from ports yields error
0.197375
0
0
847
34,826,533
2016-01-16T11:41:00.000
0
1
1
0
python,performance-testing,trace,python-asyncio
34,839,535
2
false
0
0
If you only want to measure the performance of "your" code, you could use an approach similar to unit testing - just monkey-patch (even patch + Mock) the nearest IO coroutine with a Future holding the expected result. The main drawback is that e.g. an http client is fairly simple, but take, say, momoko (a pg client)... it could be hard to do without knowing its internals, and it won't include library overhead. The pros are just like in ordinary testing: it's easy to implement, it measures something ;) - mostly one's own implementation without the overhead of third-party libraries - performance tests are isolated and easy to re-run, and it's easy to run with many payloads.
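A small sketch of that idea; fetch and my_code are made-up stand-ins for a real IO coroutine and the code under test, and the patch target assumes everything lives in the script being run:

```python
import asyncio
import time
from unittest import mock

async def fetch(url):
    await asyncio.sleep(0.5)   # stands in for real network IO
    return 'payload'

async def my_code():
    data = await fetch('http://example.com')
    return data.upper()        # the part whose cost we actually want to measure

def fake_fetch(url):
    fut = asyncio.Future()
    fut.set_result('payload')  # already-completed Future: no IO latency at all
    return fut

loop = asyncio.get_event_loop()
with mock.patch(__name__ + '.fetch', fake_fetch):
    start = time.perf_counter()
    loop.run_until_complete(my_code())
    print('own code took', time.perf_counter() - start, 'seconds')
```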
1
16
0
I can't use normal tools and techniques to measure the performance of a coroutine because the time it spends at await should not be taken into consideration (or it should just count the overhead of reading from the awaitable, but not the IO latency). So how do I measure the time a coroutine takes? How do I compare 2 implementations and find the more efficient one? What tools do I use?
How to measure Python's asyncio code performance?
0
0
0
5,793
34,828,545
2016-01-16T15:13:00.000
0
0
0
0
python,matplotlib,latex,pgf
34,841,121
1
true
0
0
Yes. The .pgf backend does support transparency. If the *.png and *.pdf files come out as transparent but the *.pgf does not, then it may be a problem with your viewer or your TeX packages. For me it was the package "transparent", which enables transparent text on pictures but which I wasn't actually using, that clashed with pgf.
1
1
1
I'm currently creating graphics using the pgf backend for matplotlib. It works very well for integrating graphs generated in Python into LaTeX. However, transparency does not seem to be supported, even though I believe this should be possible in pgf. I am currently using version 1.5.1 of matplotlib.
Does the Matplotlib pgf backend support transparency?
1.2
0
0
1,196
34,830,522
2016-01-16T18:24:00.000
9
1
1
0
python,intellij-idea
39,794,104
2
true
0
0
The Python profiler does not show up in IntelliJ IDEA Ultimate, if the UML plugin is not enabled. At least this worked for me. I had the same issue and asked JetBrains directly.
2
3
0
My intellij version is 15.0.2. But in the run context menu, there is no option regarding profiling a piece of code. Anyone knows what goes wrong?
python profiler not available in Intellij 15.0.2
1.2
0
0
730
34,830,522
2016-01-16T18:24:00.000
1
1
1
0
python,intellij-idea
34,836,262
2
false
0
0
The Python profiling is only available in PyCharm Professional and in the version of the Python plugin for IntelliJ IDEA Ultimate. It's not available in IntelliJ IDEA Community Edition.
2
3
0
My intellij version is 15.0.2. But in the run context menu, there is no option regarding profiling a piece of code. Anyone knows what goes wrong?
python profiler not available in Intellij 15.0.2
0.099668
0
0
730
34,830,533
2016-01-16T18:25:00.000
0
0
0
0
python,mpi4py,communicator
35,227,051
2
true
0
0
We solved the issue by simply letting every process do the same initialisation, i.e. every process creates every group and communicator and assigns the processes to these groups according to the same schema. This way, the processes know their corresponding communicators. Interestingly, we found out that although every process creates all the groups and communicators, each one only knows the communicators (and groups) it belongs to. If, for instance, process 4, which belongs to communicator 1 but not 2, wants to use communicator 2, it will crash. According to the error message, this is because it does not know the communicator, although it initialised it at the beginning.
2
0
0
We are currently working on an mpi4py project where we want to group processes into different groups. We then assign these groups to their own communicators. These steps are done by process 0. Now the question is: how can the other processes find out what communicator they belong to? Please note that the groups are of different sizes, e.g. group one contains 5 processes and group two contains 3. So, how can process 4 (in group one) get the communicator of group one?
mpi4py - Get process's own communicator
1.2
0
0
537
34,830,533
2016-01-16T18:25:00.000
0
0
0
0
python,mpi4py,communicator
34,909,743
2
false
0
0
MPI does this for you. Take a look at MPI_COMM_SPLIT, or in mpi4py it would be COMM.Split(). The important parameters are the 'color' (which group processes will end up in) and 'key' (what order a process will be in that group). It sounds like you already know how you want to 'color' your processes. COMM.Split() is collective over the parent communicator, so you'll compute on each node what the color should be, then split the communicator. You can likely leave the key alone, in which case the processes will be sorted according to their rank in the parent communicator.
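A minimal mpi4py sketch of Split (run under mpiexec with, say, 8 processes; the 5/3 split mirrors the example in the question):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

color = 0 if rank < 5 else 1       # first 5 ranks form group one, the rest group two
newcomm = comm.Split(color, rank)  # every process gets back the communicator of its own group

print("world rank", rank, "-> group", color, "local rank", newcomm.Get_rank())
```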
2
0
0
We are currently working on an mpi4py project where we want to group processes into different groups. We then assign these groups to their own communicators. These steps are done by process 0. Now the question is: how can the other processes find out what communicator they belong to? Please note that the groups are of different sizes, e.g. group one contains 5 processes and group two contains 3. So, how can process 4 (in group one) get the communicator of group one?
mpi4py - Get process's own communicator
0
0
0
537
34,833,546
2016-01-17T00:01:00.000
0
0
1
0
python,import,anaconda
35,528,124
1
false
0
0
It looks like the Sketcher module has not been installed for the Python version you are using in your IDE. Once you install it for the interpreter configured in the IDE, the Sketcher module will be available for you to use. Hope it helps.
1
0
0
I'm using python 2.7.10 through Anaconda 2.3.0 (64-bit) with opencv 2.4.11 on windows. I ran the opencv sample watershed.py in the command line by typing "python watershed.py" and it works just fine. Strangely (or maybe not?), when I run the same code in the PyCharm (community edition 4.5.2), it does not run and throws an error: "ImportError: cannot import name Sketcher". Note that I've been developing in PyCharm for a while and everything else seems to work fine (although admittedly I don't use that many different imported modules). I've cross-checked to make sure that I don't have multiple Python installs. I don't. I only have Anaconda, and only one install of that. Any ideas?? Thanks. UncleMeh
Python PyCharm not finding module common
0
0
0
747
34,833,974
2016-01-17T01:05:00.000
0
0
0
0
python,html,web
34,834,095
1
false
0
0
Maybe you should try a Python web framework like Django or Flask, etc. Make a simple website that offers a webpage containing a form to input text; when people visit the URL, they put their text in the form and submit it, and your code can handle it and return a webpage to show the result.
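A minimal Flask sketch of that idea; the uppercasing stands in for whatever the real script does with the text:

```python
from flask import Flask, request

app = Flask(__name__)

FORM = '<form method="post"><input name="text"><input type="submit"></form>'

@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        user_text = request.form['text']
        return 'You said: ' + user_text.upper()  # stand-in for the real script's logic
    return FORM

if __name__ == '__main__':
    app.run()  # the built-in dev server handles each request independently
```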
1
0
0
I have a Python script that accepts text from a user, interprets that text and then produces a text response for that user. I want to create a simple web interface to this Python script that is accessible to multiple people at once. By this I mean that person A can go to the website for the script and begin interacting with the script and, at the same time, person B can do the same. This would mean that the script is running in as many processes/sessions as desired. What would be a good way to approach this?
How can one create an open web interface to a Python script that accepts and returns text?
0
0
1
44