Dataset schema (column name: dtype, observed range or string length):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
46,101,394
2017-09-07T16:39:00.000
0
1
0
0
python,python-3.x,chat,bots,telegram
46,103,183
3
false
0
0
I am not sure I understand your question; can you explain in more detail what you are trying to do? You have a few options, such as creating a group and adding the bot to it. In a private chat the bot can only talk with a single user at a time.
2
1
0
I am going to make a telegram bot in Python 3 which is a random chat bot. As I am new in telegram bots, I don't know how to join two different people in a chat bot. Is there a guide available for this?
how can i join two users in a telegram chat bot?
0
0
1
2,352
46,101,394
2017-09-07T16:39:00.000
0
1
0
0
python,python-3.x,chat,bots,telegram
46,113,831
3
true
0
0
You need to make a database with chatID as the primary column and another column, partner, which stores that user's chat partner's chatID. When a user sends a message to your bot, you just look up that user in the database and forward the message to their chat partner. After the chat is done you should empty the partner fields of both users. As for the pairing part: when a user wants to find a new partner, choose a random row from your database WHERE partnerChatID IS NULL, set it to the first user's ID, and vice versa.
2
1
0
I am going to make a telegram bot in Python 3 which is a random chat bot. As I am new in telegram bots, I don't know how to join two different people in a chat bot. Is there a guide available for this?
how can i join two users in a telegram chat bot?
1.2
0
1
2,352
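Following up on the accepted answer in the row above, here is a minimal sketch of the pairing logic it describes, using sqlite3 from the standard library. The table name, column names, and the send_message callback are assumptions made for illustration, not details from the original answer.
    import random
    import sqlite3

    conn = sqlite3.connect("chat_pairs.db")
    conn.execute("CREATE TABLE IF NOT EXISTS users (chat_id INTEGER PRIMARY KEY, partner_id INTEGER)")

    def find_partner(chat_id):
        # Register the user, then pick a random waiting user (partner_id IS NULL) other than ourselves.
        conn.execute("INSERT OR IGNORE INTO users (chat_id) VALUES (?)", (chat_id,))
        rows = conn.execute(
            "SELECT chat_id FROM users WHERE partner_id IS NULL AND chat_id != ?",
            (chat_id,)).fetchall()
        if not rows:
            return None
        partner_id = random.choice(rows)[0]
        # Link both users to each other, as the answer suggests.
        conn.execute("UPDATE users SET partner_id = ? WHERE chat_id = ?", (partner_id, chat_id))
        conn.execute("UPDATE users SET partner_id = ? WHERE chat_id = ?", (chat_id, partner_id))
        conn.commit()
        return partner_id

    def relay(chat_id, text, send_message):
        # Forward an incoming message to the sender's current partner, if any.
        row = conn.execute("SELECT partner_id FROM users WHERE chat_id = ?", (chat_id,)).fetchone()
        if row and row[0] is not None:
            send_message(row[0], text)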
46,102,627
2017-09-07T18:07:00.000
3
1
1
0
python,python-3.x
46,102,720
2
true
0
0
In some cases, especially for very small projects, python script.py and python -m script will be pretty much the same. The biggest difference is when your module lives in a package and has relative imports. If you have a script that imports something like from .module import some_name, you will most likely get a ModuleNotFoundError when you run it with python package/script.py. On the other hand, python -m package.script will produce whatever output you expected.
1
2
0
It seems to me python -m myscript and python myscript do the same thing: running a script. What is the purpose of using -m? Thanks.
What is the purpose of using `-m`?
1.2
0
0
64
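A small illustration of the point made in the accepted answer in the row above. The package name pkg and the module helper are hypothetical, invented only for this example.
    # Assumed layout:
    #   pkg/__init__.py
    #   pkg/helper.py    (defines: some_name = 42)
    #   pkg/script.py    (this file)
    from .helper import some_name  # relative import, only valid when run as part of the package

    print(some_name)

    # python pkg/script.py   -> fails with an ImportError/ModuleNotFoundError on the relative import
    # python -m pkg.script   -> prints 42, because -m sets up the package context first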
46,105,086
2017-09-07T21:05:00.000
4
0
1
0
python
46,105,122
3
false
0
0
There could be other references to this object. If you replace your reference by a new object, the other references will still point to the original dictionary/set.
2
2
0
I just discovered that in Python dictionaries and sets both have a clear method. The method literally just removes all the entries from the object. Is there a good reason, or even situation, where it makes sense to call foo.clear() rather than foo = {}, or foo = set()? I can imagine it might work more efficiently for garbage collection, but it seems to violate "There should be one-- and preferably only one --obvious way to do it."
Is there a performance advantage to using the clear method of a dictionary or set?
0.26052
0
0
78
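A minimal demonstration of the aliasing point made in the answer in the row above:
    shared = {"a": 1}
    alias = shared       # a second reference to the same dict

    shared.clear()       # mutates the single object both names point to
    print(alias)         # {}

    shared = {"b": 2}
    other = shared
    shared = {}          # rebinds the name only; the old dict survives
    print(other)         # {'b': 2}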
46,105,086
2017-09-07T21:05:00.000
0
0
1
0
python
46,105,143
3
false
0
0
foo = {} does not delete the object; it just rebinds foo to a new empty object (the old one still remains if other variables reference it). So foo.clear() is memory efficient, and this is the preferable way to do it.
2
2
0
I just discovered that in Python dictionaries and sets both have a clear method. The method literally just removes all the entries from the object. Is there a good reason, or even situation, where it makes sense to call foo.clear() rather than foo = {}, or foo = set()? I can imagine it might work more efficiently for garbage collection, but it seems to violate "There should be one-- and preferably only one --obvious way to do it."
Is there a performance advantage to using the clear method of a dictionary or set?
0
0
0
78
46,107,451
2017-09-08T01:51:00.000
0
0
1
1
python,ubuntu
46,107,476
1
false
1
0
You need to use pip3 as the command: pip3 install coolModule. Be sure to add an alias to your bash profile: alias pip3="python3 -m pip".
1
1
0
I have fresh ubuntu 16.04 setup for production. Initially if when i type python --version gives me python 2.7 and python3 --version gives me python 3.5 but i want python points to python3 by default, so in my ~/.bashrc alias python=python3 and source ~/.bashrc, After that i install pip using sudo apt-get install python-pip and when i type pip --version it prints pip 8.1.1 from /usr/lib/python2.7/dist-packages (python 2.7) instead that i want packages to be installed into and get from /usr/local/lib/python3.5/dist-packages. I have django application which is written with python3 compatible code. Update: I want to install other packages which have to load from python3 dist-packages not just pip. I don't want to remove python 2.7 from ubuntu it will break other programs, i thought alias python=python3 would install packages into python3.5 dist-packages as well.
python3 loading dist-packages from python2 on ubuntu
0
0
0
421
46,107,591
2017-09-08T02:07:00.000
0
0
1
0
python,nltk,spyder
46,107,776
2
false
0
0
I don't know what you want, actually. If you just need a corpus from nltk, you don't have to put nltk.download() in your code; run nltk.download() once in the shell and download the corpus you need. Note that there is also a function called nltk.download_gui(). You can try it in Spyder, or maybe you should change the graphics backend to Qt5 in your Spyder settings if that is the problem.
1
0
0
For some reason, when I put nltk.download() in my .py file after import nltk, it doesn't run correctly in Spyder. It does run with the anaconda prompt though. Should I include it in my .py file? If so, how do I get Spyder to be ok with that? Thanks!
Should I put nltk.download() in my .py file?
0
0
0
120
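As the answer in the row above suggests, the download only needs to happen once, for example from an interactive shell rather than inside the .py file. A sketch, with 'punkt' chosen only as an example corpus:
    import nltk

    # Run once interactively; the data is cached on disk afterwards,
    # so scripts can use the tokenizers without calling download() again.
    nltk.download("punkt")

    print(nltk.sent_tokenize("One sentence. Another sentence."))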
46,109,131
2017-09-08T05:19:00.000
0
1
1
0
python,python-3.x,importerror,gspread
63,958,311
4
false
0
0
Make sure you are using the correct interpreter. You may have one or more python interpreters installed so it may be installing to a different one.
2
3
0
When I run Gspread with Python3, I get this error: ImportError: No module named 'gspread' When I run with just Python, I get no errors. I installed gspread with pip install gspread --user. I really need to use Python 3, and I expect I should be able to, but I just did something wrong.
Python 3 and Gspread ImportError
0
0
0
2,307
46,109,131
2017-09-08T05:19:00.000
0
1
1
0
python,python-3.x,importerror,gspread
66,132,206
4
false
0
0
You have to install pip for python3 using apt install python3-pip. Then you can install gspread using pip3 install gspread --user.
2
3
0
When I run Gspread with Python3, I get this error: ImportError: No module named 'gspread' When I run with just Python, I get no errors. I installed gspread with pip install gspread --user. I really need to use Python 3, and I expect I should be able to, but I just did something wrong.
Python 3 and Gspread ImportError
0
0
0
2,307
46,110,363
2017-09-08T06:54:00.000
0
0
1
0
python,python-3.x,math,modulo
46,110,580
1
true
0
0
I am not sure about the formula, but you can add x to the negative number, where x is chosen so that (x + negative number) >= 0 and x is a multiple of the modulus. This works because x % k == (x + y*k) % k.
1
4
0
How come -20 % 3 = 1? Just confused with the formulae used for negative number % positive number. (I have seen many question related in quora but still not clear with formula used)
Python Negative Number modulo positive number
1.2
0
0
1,447
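A quick check of the identity used in the accepted answer in the row above:
    # Python's % always returns a result with the same sign as the divisor.
    print(-20 % 3)          # 1, because -20 == 3 * (-7) + 1

    # The identity x % k == (x + y*k) % k lets you shift a negative x
    # into non-negative territory first without changing the result.
    x, k = -20, 3
    y = 7                   # any y large enough that x + y*k >= 0
    print((x + y * k) % k)  # 1 as well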
46,110,613
2017-09-08T07:09:00.000
0
0
0
0
python-2.7,vpython
46,111,204
2
false
0
1
I was able to solve the issue. The problem was with the way I installed VPython: I must have accidentally selected "custom installation" instead of "full installation". Also, the version of numpy that came with the default setup did not work for me, so I used pip to update it, and now everything is up and running; I am able to get the example programs to work. The 64-bit version is still not working, so it is safer to stick with the 32-bit version even if your machine is 64-bit.
2
0
0
I am a new to python. I am have installed Python27 and Vpython on my windows 64-bit W8.1 laptop. The python version was Py27 32-bits and Vpython 32-bits. After installation I thought I could directly run an example program from the VIDLE (File -> Open -> bounce). But I realized there is lot more to install to get this working. So I googled the errors and found that I has to install Numpy and WxPython which I was able to complete successfully. But now I have this error shown below "The Polygon module is not installed, so the text and extrusion objects are unavailable. The ttfquery and/or FontTools modules are not installed, so the text object is unavailable." I googled for this but was not able to arrive at anything. Should I install Polygon module, FontTools and ttfquery module? I was not able to fond a proper link to do any of the above. Kindly help me out. I have a hit a wall. Thanks!!
Cannot get started with Vpython
0
0
0
347
46,110,613
2017-09-08T07:09:00.000
0
0
0
0
python-2.7,vpython
47,043,208
2
false
0
1
You're working with an older version of VPython that is no longer supported. See the first page of vpython.org.
2
0
0
I am a new to python. I am have installed Python27 and Vpython on my windows 64-bit W8.1 laptop. The python version was Py27 32-bits and Vpython 32-bits. After installation I thought I could directly run an example program from the VIDLE (File -> Open -> bounce). But I realized there is lot more to install to get this working. So I googled the errors and found that I has to install Numpy and WxPython which I was able to complete successfully. But now I have this error shown below "The Polygon module is not installed, so the text and extrusion objects are unavailable. The ttfquery and/or FontTools modules are not installed, so the text object is unavailable." I googled for this but was not able to arrive at anything. Should I install Polygon module, FontTools and ttfquery module? I was not able to fond a proper link to do any of the above. Kindly help me out. I have a hit a wall. Thanks!!
Cannot get started with Vpython
0
0
0
347
46,111,984
2017-09-08T08:27:00.000
1
0
1
1
python,python-2.7,python-3.x,shell,command-line
46,112,425
4
false
0
0
You can simply add your script's directory to the PATH variable in order to launch it from anywhere. In Linux distros you can do this with the bash command PATH=$PATH:/path/to/your/script; make sure you don't have spaces around the "=" operator. The second thing is that you don't want your script to be invoked as pythonProgram.py. Rename the file to pythonProgram (dropping the .py extension), make it executable with chmod +x, and add a single line at the very beginning of your script: #!/usr/bin/python. This line is called a shebang and tells the shell which interpreter to use to run the script. If everything went right, you will be able to run your script as pythonProgram arg1.
1
1
0
Excuse the awkward question wording. I've made a script. I would like for others to download it from github, and run it by typing programName argument1 argument2, similar to any other popular app used through the terminal such as Jupyter or even opening Atom/Sublime/etc. (ex:jupyter notebook, atom .). However, unlike Jupyter or sublime, my script isn't launching another app, it's a small app meant to be used in the shell. Currently, to use my script, one must type into the command line python programName.py arg1 etc from within the file's directory. How do I allow others to dl it and use it from anywhere (not having to be within the directory), without having to type out the whole python programName.py part, and only having to type programName arg1?
How would I allow others to use my script from anywhere in the shell without having to type out the file extension?
0.049958
0
0
78
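A minimal sketch of the setup described in the answer in the row above (file and directory names are hypothetical, and /usr/bin/env is used instead of a hard-coded interpreter path):
    #!/usr/bin/env python
    # Save this file as ~/bin/pythonProgram (no .py extension), then:
    #   chmod +x ~/bin/pythonProgram
    #   export PATH="$PATH:$HOME/bin"   # e.g. in ~/.bashrc
    # After that it can be run from anywhere as:  pythonProgram arg1 arg2
    import sys

    def main():
        print("arguments:", sys.argv[1:])

    if __name__ == "__main__":
        main()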
46,112,130
2017-09-08T08:34:00.000
1
0
1
0
python
46,112,193
1
true
0
0
Install the ClearConsole package in Sublime, then type alt+k to clear the console.
1
0
0
I am trying to customize sublime text 3 for Python development. Is there a way to set my buildsystem to clear the console before running a script? Also, I would like to have my console opened by default when opening a new session. I'm using Windows 10.
How to clear the console in sublime text 3 when running a new script?
1.2
0
0
600
46,118,937
2017-09-08T14:36:00.000
0
0
1
0
python-3.x,twisted,autobahn
46,118,938
2
false
0
0
pip install --upgrade pyopenssl solved this issue for me on Ubuntu.
1
1
0
I have loaded twisted using pip pip install twisted. Then I tried to import from autobahn.twisted.websocket import WebSocketClientProtocol, I get error when I import 'twisted' has no attribute '__version__'.
'twisted' has no attribute '__version__'
0
0
1
227
46,124,591
2017-09-08T21:10:00.000
2
0
0
0
python-3.x,localhost,port,httpserver
46,124,616
1
true
0
0
There is always a port; the default for HTTP is 80, so just run the server on port 80 and you will reach it with plain localhost.
1
1
0
I usually start http.server by typing python3 -m http.server port and I can access the server by going to localhost:port My goal is to access the srver by simply typing localhost.
How can you run http.server without a port and simply on localhost?
1.2
0
1
1,128
46,124,681
2017-09-08T21:18:00.000
0
0
1
0
beautifulsoup,python-3.6
46,126,544
1
true
0
0
When used with find_all or find, text=True matches the text nodes themselves and returns the strings found inside tags, while get_text() is called on a tag you have already found and returns its concatenated text.
1
1
0
I am learning the bs4 library. Can someone please explain the difference between text=True and get_text(), and when to use each?
Difference and when to use text=true and get_text()
1.2
0
0
479
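A short illustration of the difference described in the accepted answer in the row above; the HTML snippet is made up for the example:
    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<div><p>Hello</p><p>World</p></div>", "html.parser")

    # text=True used with find_all matches the text nodes themselves:
    print(soup.find_all(text=True))     # ['Hello', 'World']

    # get_text() is called on a tag you already found and concatenates its text:
    print(soup.find("div").get_text())  # 'HelloWorld'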
46,126,827
2017-09-09T03:20:00.000
1
0
0
0
python,django,mongodb,mongoengine
46,126,865
1
true
1
0
Django-mongoengine is a Django extension that provides integration with MongoEngine; it is basically like other Django extensions that provide added features. MongoEngine itself is a Document-Object Mapper (think ORM, but for document databases) for working with MongoDB from Python. It uses a simple declarative API, similar to the Django ORM, and django-mongoengine makes it work with Django; MongoEngine on its own is exclusively for working with MongoDB from Python. Note: if you use only MongoDB, you can't use the admin functionality of Django. You can try using django-nonrel, but I would not suggest going with that, because it works with Django 1.3, which is quite old. If you want to use both the admin functionality and MongoDB, you can use two databases: a relational one for the admin functionality and MongoDB for other purposes.
1
4
0
What's the difference between django mongoengine and mongoengine Can i use django default/build-in form mongodb
Difference between django mongoengine vs mongoengine
1.2
0
0
1,335
46,127,941
2017-09-09T06:47:00.000
1
0
1
0
python
46,129,112
3
false
0
0
Up to now, I still don't get an answer expected. Initially, when I saw this way of expression open(name[, mode[, buffering]]), I really want to know what does that mean. It means optional parameters obviously. At that moment, I found it may be a different way(different from normal way like f(a,b,c=None,d='balabala')) to define a function with optional parameters but not only tell us it's optional parameters. The benefit of this writing can help us use optional parameters but no default value, so I think it's a more clear and more simple way to define optional parameters. What I really want to know is about 2 things: 1. if we can define optional parameters using this way(no at present) 2. It will be nice if someone could explain what does the module-level function mean? I am really appreciated for the above answers and comments! THANKS A LOT
2
3
0
I often find some functions defined like open(name[, mode[, buffering]]) and I know it means optional parameters. Python document says it's module-level function. When I try to define a function with this style, it always failed. For example def f([a[,b]]): print('123') does not work. Can someone tell me what the module-level means and how can I define a function with this style?
python how to define function with optional parameters by square brackets?
0.066568
0
0
1,720
46,127,941
2017-09-09T06:47:00.000
1
0
1
0
python
46,131,685
3
true
0
0
"1. if we can define optional parameters using this way(no at present)" The square bracket notation not python syntax, it is Backus-Naur form - it is a documentation standard only. A module-level function is a function defined in a module (including __main__) - this is in contrast to a function defined within a class (a method).
2
3
0
I often find some functions defined like open(name[, mode[, buffering]]) and I know it means optional parameters. Python document says it's module-level function. When I try to define a function with this style, it always failed. For example def f([a[,b]]): print('123') does not work. Can someone tell me what the module-level means and how can I define a function with this style?
python how to define function with optional parameters by square brackets?
1.2
0
0
1,720
46,130,703
2017-09-09T12:27:00.000
0
0
0
0
python,tkinter,listbox
46,130,765
1
true
0
1
In the call of your widget, insert command=lambda parameter1=value, parameter2=value, ...: name(parameters). For example, if you want a button to execute a function do_this(a) with parameter a set to 5, it's command=lambda a=5: do_this(a).
1
0
0
Having a horrible time learning Tkinter, and it seems to me as though when you make a button execute a function ...command=do_this), that command cannot have any parameters, it can only execute a function. I would like to pass a parameter to do_this() to give it functionality depending on the input, like do_this(parameter). However the command functionality of a button does not use the brackets at the end of the function name and doesn't seem to support parameters. How do I get around this? The intended use of the program is to generate a frequency histogram based on different groups of data from a csv file, where the groups are selected via a listbox, then the histogram is generated by pressing a button.
Pass a parameter from a listbox/button to a function in Tkinter
1.2
0
0
340
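A minimal runnable sketch of the lambda-with-default-argument pattern from the accepted answer in the row above:
    import tkinter as tk

    def do_this(a):
        print("called with", a)

    root = tk.Tk()
    # The default argument freezes the value at the time the button is created,
    # which is what lets a parameterless command= call a function with a parameter.
    tk.Button(root, text="Run", command=lambda a=5: do_this(a)).pack()
    root.mainloop()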
46,132,556
2017-09-09T16:08:00.000
0
0
1
0
python-3.5,ubuntu-16.04
46,135,252
2
false
0
0
python3-tk does not have any problem with Python 3.5 or higher; you may have an outdated source list. Run sudo apt-get update and check whether your source list is outdated or not; you can use the following link to regenerate your source list: https://repogen.simplylinux.ch/
1
0
0
When I try to install python3-tk for python3.5 on ubuntu 16.04 I get the following error, what should I do? python3-tk : Depends: python3 (< 3.5) but 3.5.1-3 is to be installed
install python3-tk on python3.5 or higher
0
0
0
605
46,133,715
2017-09-09T18:13:00.000
1
0
1
0
python,pip
46,133,986
2
false
1
0
You need root/Administrator privileges. On linux use sudo before command and on Windows you can open a command prompt as administrator by right clicking it and selecting run as administrator.
1
0
0
When I try to install pyrebase i get this error message. PermissionError: [Errno 13] Permission denied: /Users/myname/anaconda/lib/python3.6/site-packages/google/api what can I do?
how can I fix permission denied error while trying to install pyrebase
0.099668
0
0
351
46,136,334
2017-09-10T00:45:00.000
2
1
0
0
python,arduino,raspberry-pi,raspberry-pi3,serial-communication
50,027,574
1
false
0
0
What I would do is set up the scanners in such a way that each one has a prefix, so whatever code is read will always carry a prefix, i.e. A000001, A000002, B00001, B00002. Then all you have to do is use a string function to know that all codes beginning with "A" come from scanner A and all codes beginning with "B" come from scanner B, regardless of what programming language you use. This works perfectly with Motorola/Zebra/Honeywell scanners.
1
2
0
I am working on a project where I will have several Raspberry Pi 3's set up, each having two barcode scanners, two passive buzzers, and two Adafruit NeoPixel Ring lights. Each time a barcode is scanned, an API request is sent to see if the barcode is valid or not. If the barcode is valid, the Adafruit NeoPixel Ring will be green and a success tone is played on the buzzer, and it the barcode is invalid, the light will be blue and a failure tone is played on the buzzer. My question is: Is there a way in Python on the Raspberry Pi to detect which barcode scanner is sending the barcode? I realize that barcode scanners are HID devices and act like a keyboard, so I would like to know if there is a way in Python to treat the scanner different and not have an input() call to receive the scanner's input. It is especially important to know which barcode scanner the incoming data came from so that I know which light to make green or blue and which buzzer to play the sound. In other words, if scanner 1 had a barcode that was valid and scanner 2 had a barcode that was invalid, I want NeoPixel Ring 1 to be green and NeoPixel Ring 2 to be blue. As it stands now, I am considering using two Arduinos and hook up each scanner, buzzer, and NeoPixel Ring to them, and then use serial communication to communicate with the Raspberry Pi from each Arduino. What are your thoughts/suggestions? Thank you in advance!
Raspberry Pi: Detect Multiple Barcode Scanners in Python
0.379949
0
0
624
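A sketch of the prefix-routing idea from the answer in the row above; the prefixes and handler callbacks are assumptions for illustration:
    def route_scan(code, handle_scanner_a, handle_scanner_b):
        # Scanner A is configured to prepend "A", scanner B to prepend "B".
        if code.startswith("A"):
            handle_scanner_a(code[1:])
        elif code.startswith("B"):
            handle_scanner_b(code[1:])
        else:
            raise ValueError("unknown scanner prefix: %r" % code)

    route_scan("A000123",
               lambda c: print("ring 1 green/blue for", c),
               lambda c: print("ring 2 green/blue for", c))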
46,136,821
2017-09-10T02:43:00.000
0
0
1
0
python-2.7
60,171,559
5
false
0
0
Try this instead: if you are using Windows, type ".\pip" in the command line followed by whatever command you want to use after it, e.g. install, uninstall, etc.
1
3
0
I am attempting to install a module (requests) in python 2.7.4 but am unable to do so because apparently I don't have pip installed? I tried to run "python pip --version" in CMD to check for it and got nothing in return except that pip is not a recognized command. Have been googling the past 20 minutes and have tried each suggestion to no avail. Sorry for the stupid question but this is quite infuriating.
Missing pip in python 2.7?
0
0
0
5,455
46,138,715
2017-09-10T08:21:00.000
6
0
1
0
python
46,138,749
2
true
0
0
It will show '\u200c' because that is what the __repr__ method gives you. However, try printing it using print() and you should get what you want, as print() uses the __str__ magic method.
2
2
0
I am scraping some HTML pages with python. The text in some spaces has Half space character (\u200c). When i use the text in a variable, every things is OK. The problem is when i add the text to a list, it shows '\u200c' instead of real Half space. what is the problem?
\u200c instead of real Half space
1.2
0
0
6,498
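A quick demonstration of the repr-versus-str point from the accepted answer in the row above:
    text = "half\u200cspace"

    items = [text]
    print(items)      # ['half\u200cspace']  (a list shows the repr of its elements)
    print(items[0])   # prints the actual (invisible) zero-width non-joiner character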
46,138,715
2017-09-10T08:21:00.000
0
0
1
0
python
46,138,732
2
false
0
0
My guess is that you're using python 2.7. Start using python 3 instead and these issues will go away. Python 2.7 needs to represent unicode characters that way to know that they are unicode characters. Whereas python 3 handles all string as unicode characters so you don't really have to worry about it as much.
2
2
0
I am scraping some HTML pages with python. The text in some spaces has Half space character (\u200c). When i use the text in a variable, every things is OK. The problem is when i add the text to a list, it shows '\u200c' instead of real Half space. what is the problem?
\u200c instead of real Half space
0
0
0
6,498
46,139,185
2017-09-10T09:27:00.000
2
0
0
1
python,sockets,ipc,zeromq
46,140,815
2
false
0
0
Using a separate socket for signalling and messaging is always better While a Poller-instance will help a bit, the cardinal step is to use separate socket for signalling and another one for data-streaming. Always. The point is, that in such setup, both the Poller.poll() and the event-loop can remain socket-specific and spent not more than a predefined amount of time, during a real-time controlled code-execution. So, do not hesitate to setup a bit richer signalling/messaging infrastructure as an environment where you will only enjoy the increased simplicity of control, separation of concerns and clarity of intents. ZeroMQ is an excellent tool for doing this - including per-socket IO-thread affinity, so indeed a fine-grain performance tuning is available at your fingertips.
1
3
0
I have a server process which receives requests from a web clients. The server has to call an external worker process ( another .py ) which streams data to the server and the server streams back to the client. The server has to monitor these worker processes and send messages to them ( basically kill them or send messages to control which kind of data gets streamed ). These messages are asynchronous ( e.g. depend on the web client ) I thought in using ZeroMQ sockets over an ipc://-transport-class , but the call for socket.recv() method is blocking. Should I use two sockets ( one for streaming data to the server and another to receive control messages from server )?
ZeroMQ bidirectional async communication with subprocesses
0.197375
0
1
563
46,142,451
2017-09-10T15:34:00.000
3
0
1
0
python,future,python-asyncio,concurrent.futures
46,143,898
1
false
0
0
Why is there a need to have two different Future classes in the standard library (in asyncio and in concurrent)? While these classes looks similar, they are using for two different paradigms of concurrent programming and have different implementations and interfaces. For example, concurrent.futures.Future used for thread/process based concurrent programming and shouldn't know nothing about event loop because there isn't one in this case. It's result method just blocks thread's/process's execution flow till timeout or future is done. asyncio.Future used for coroutines based concurrent programming and should know about event loop, coroutine-functions and other related stuff. It's result method wouldn't block execution flow since execution flow shouldn't be block at all in this case. Instead you should await future till it's done allowing execution flow to be returned and managed by event loop. There are no benefits in mixing them, while splitting classes makes theirs implementations easier and interfaces clearer.
1
1
0
From the asyncio documentation it states that asyncio has: "a Future class that mimics the one in the concurrent.futures module, but adapted for use with the event loop;" Why is there a need to have two different Future classes in the standard library (in asyncio and in concurrent)? And why is there the necessity to adapt it for the event loop? What am I missing here, or what made them decide it that way?
Why I can't use concurrent.futures in asyncio event loop?
0.53705
0
0
569
46,142,910
2017-09-10T16:25:00.000
0
0
0
0
python,pywin32
46,145,842
1
false
0
1
I don't know what items might be included in that class. If you have PythonWin, which is a good REPL for working with Python on Windows, you can get extensive lists of GUI components from its help files. For instance, you could select 'Python for Win32 Extensions Help' then 'Objects' under 'Pythonwin and win32ui' to see the list of available GUI objects in this category. By right-clicking within the list you'd get the HTML which you could process for a list. I hope somebody has a better way!
1
0
0
Thanks in advance. How can I list all the components like the buttons labels textboxs everything. I dont know if this is possible with pywin32. I am using python 3.5, windows 10 x64.
List windows gui components using pywin32
0
0
0
702
46,143,290
2017-09-10T17:01:00.000
2
0
0
0
python,django,wagtail
46,143,467
2
true
1
0
It should be possible to use {% canonical_url entry as url %} to get the desired URL as the variable url, rather than outputting it directly from the tag. After that, you can perform the slicing on the variable using {{ url|slice:":-1" }}.
1
0
0
I´m trying to make a whatsapp button but I´m having problems with the trailing slash at the end on the href. Whatsapp renders wrongly with the trailing slash. I´m using wagtail and puput. I´d like to do it on template only because wagtail and puput are addons on divio. If I install them separatedly, I would have to remake my website, so I can´t change models.py. I´m using {% canonical_url entry %} for the href. What I´d like to have would be something like {% canonical_url|slice:":-1" entry %} They provide full_url placeholder, but it doesn´t add date to link. It gives foo.com/slug instead of foo.com/2017/09/01/slug so everything gets rendered wrong too. Any suggestions? Thanks!
Is it possible to apply slicing to django function on template?
1.2
0
0
192
46,143,492
2017-09-10T17:22:00.000
0
0
0
0
python,tensorflow,size,object-detection,region
47,131,499
3
false
0
0
I want to know what your min_dimension is; it should be larger than 4000 in your case, otherwise the image will be scaled down. See object_detection -> core -> preprocessor.py:
    def _compute_new_dynamic_size(image, min_dimension, max_dimension):
      """Compute new dynamic shape for resize_to_range method."""
      image_shape = tf.shape(image)
      orig_height = tf.to_float(image_shape[0])
      orig_width = tf.to_float(image_shape[1])
      orig_min_dim = tf.minimum(orig_height, orig_width)
      # Calculates the larger of the possible sizes
      min_dimension = tf.constant(min_dimension, dtype=tf.float32)
      large_scale_factor = min_dimension / orig_min_dim
      # Scaling orig_(height|width) by large_scale_factor will make the smaller
      # dimension equal to min_dimension, save for floating point rounding errors.
      # For reasonably-sized images, taking the nearest integer will reliably
      # eliminate this error.
      large_height = tf.to_int32(tf.round(orig_height * large_scale_factor))
      large_width = tf.to_int32(tf.round(orig_width * large_scale_factor))
      large_size = tf.stack([large_height, large_width])
      if max_dimension:
        # Calculates the smaller of the possible sizes, use that if the larger
        # is too big.
        orig_max_dim = tf.maximum(orig_height, orig_width)
        max_dimension = tf.constant(max_dimension, dtype=tf.float32)
        small_scale_factor = max_dimension / orig_max_dim
        # Scaling orig_(height|width) by small_scale_factor will make the larger
        # dimension equal to max_dimension, save for floating point rounding
        # errors. For reasonably-sized images, taking the nearest integer will
        # reliably eliminate this error.
        small_height = tf.to_int32(tf.round(orig_height * small_scale_factor))
        small_width = tf.to_int32(tf.round(orig_width * small_scale_factor))
        small_size = tf.stack([small_height, small_width])
        new_size = tf.cond(
            tf.to_float(tf.reduce_max(large_size)) > max_dimension,
            lambda: small_size,
            lambda: large_size)
      else:
        new_size = large_size
      return new_size
3
2
1
I have images of a big size (6000x4000). I want to train FasterRCNN to detect quite small object (tipycally between 50 150 pixels). So for memory purpose I crop the images to 1000x1000. The training is ok. When I test the model on the 1000x1000 the results are really good. When I test the model on images of 6000x4000 the result are really bad... I guess it is the region proposal step, but I don't know what I am doing wrong (the keep_aspect_ratio_resizer max_dimension is fix to 12000)... Thanks for your help !
Faster RCNN tensorflow object detection API : dealing with big images
0
0
0
2,633
46,143,492
2017-09-10T17:22:00.000
3
0
0
0
python,tensorflow,size,object-detection,region
46,399,506
3
true
0
0
You need to keep training images and images to test on of roughly same dimension. If you are using random resizing as data augmentation, you can vary the test images by roughly that factor. Best way to deal with this problem is to crop large image into images of same dimension as used in training and then use Non-maximum suppression on crops to merge the prediction. That way, If your smallest object to detect is of size 50px, you can have training images of size ~500px.
3
2
1
I have images of a big size (6000x4000). I want to train FasterRCNN to detect quite small object (tipycally between 50 150 pixels). So for memory purpose I crop the images to 1000x1000. The training is ok. When I test the model on the 1000x1000 the results are really good. When I test the model on images of 6000x4000 the result are really bad... I guess it is the region proposal step, but I don't know what I am doing wrong (the keep_aspect_ratio_resizer max_dimension is fix to 12000)... Thanks for your help !
Faster RCNN tensorflow object detection API : dealing with big images
1.2
0
0
2,633
46,143,492
2017-09-10T17:22:00.000
1
0
0
0
python,tensorflow,size,object-detection,region
46,143,853
3
false
0
0
It looks to me like you are training on images with a different aspect ratio than what you are testing on (square vs not square) --- this could lead to a significant degradation in quality. Though to be honest I'm a bit surprised that the results could be really bad, if you are just visually evaluating, maybe you also have to turn down the score thresholds for visualization.
3
2
1
I have images of a big size (6000x4000). I want to train FasterRCNN to detect quite small object (tipycally between 50 150 pixels). So for memory purpose I crop the images to 1000x1000. The training is ok. When I test the model on the 1000x1000 the results are really good. When I test the model on images of 6000x4000 the result are really bad... I guess it is the region proposal step, but I don't know what I am doing wrong (the keep_aspect_ratio_resizer max_dimension is fix to 12000)... Thanks for your help !
Faster RCNN tensorflow object detection API : dealing with big images
0.066568
0
0
2,633
46,144,952
2017-09-10T19:51:00.000
0
1
0
1
python,linux
46,145,014
2
false
0
0
It is hard to imagine that you will find a significantly faster way to traverse a directory than os.walk() and du. Parallelizing the search might help a bit in some setups (e.g. SSD), but it won't make a dramatic difference. A simple approach to make things faster is by automatically running the script in the background every hour or so, and having your actual script just pick up the results. This won't help if the results need to be current, but might work for many monitoring setups.
1
0
0
I am trying to use Python to find a faster way to sift through a large directory(approx 1.1TB) containing around 9 other directories and finding files larger than, say, 200GB or something like that on multiple linux servers, and it has to be Python. I have tried many things like calling du -h with the script but du is just way too slow to go through a directory as large as 1TB. I've also tried the find command like find ./ +200G but that is also going to take foreeeever. I have also tried os.walk() and doing .getsize() but it's the same problem- too slow. All of these methods take hours and hours and I need help finding another solution if anyone is able to help me. Because not only do I have to do this search for large files on one server, but I will have to ssh through almost 300 servers and output a giant list of all the files > 200GB, and the three methods that i have tried will not be able to get that done. Any help is appreciated, thank you!
Faster way to find large files with Python?
0
0
0
2,409
46,145,906
2017-09-10T21:51:00.000
1
0
1
0
python,server,libraries,python-import,pymysql
46,145,929
3
false
0
0
There are many ways to do this. The easiest way is probably to use pip freeze > requirements.txt to get a list (requirements.txt) of the dependencies that you have installed for your project (which, if you're running under a virtualenv, is only those installed for your project). If you have installed libraries for your interpreter that you don't need, you can remove them from the list, or create the list manually. You can make pip install all the libraries again on your host by doing pip install -r requirements.txt.
1
1
0
I'm currently launching my website and I want my Python script to be run. However, it does not work when I run it because it depends on 5 libraries. How do you link them to your code? For example, I use pymysql to write data to my host's database. However, it does not recognise pymysql. Is there a way to download all of the packages once on the server and then being able to access them? How to link to them in the code? Many thanks !
How to run a Python script on your server and import libraries
0.066568
0
0
3,494
46,147,245
2017-09-11T01:58:00.000
1
1
0
0
python,cplex,quadratic
46,158,131
2
true
0
0
After asking my question in the IBM forum, I received and answer and it works. The fastest way to create the quadratic objective function is to use objective.set_quadratic() with only a list that contains the coefficient values (they can vary and don't need to be all equal to 1.0)
1
0
1
I am solving a large sparse quadratic problem. My objective function has only quadratic terms and the coefficients of all terms are the same and equal to 1 and it includes all of the variables. I use objective.set_quadratic_coefficients function in python to create my objective function. For small problems (10000 variables), the objective function is generated quickly but it gets much slower for larger problems (100000 variables) and does return anything for my main problem that has 1000000 variables. Is there an alternative to objective.set_quadratic_coefficients to speed up the creating the problem?
Performance issue with setting quadratic objective in CPLEX
1.2
0
0
119
46,147,441
2017-09-11T02:31:00.000
3
0
1
0
python,encryption,client-server,aes,hmac
46,150,701
1
true
0
0
What you should be doing is using the Encrypt-Then-MAC paradigm. That means, you are first using the first key to encrypt the message with AES. Afterwards, you use the second key as an authentication key for the HMAC-SHA256 function to authenticate the ciphertext (the output of the encryption function). Then you concatenate the ciphertext and the HMAC output (called a tag) and transmit it over the wire. Upon receiving such a message you recalulate the tag from the ciphertext and compare it to the one tranferred. If it is valid, you may decrypt the ciphertext. Final note: You do not really need to use to seperate keys as an input parameter. If would be fine to just supply one master key and than use that to derive to seperate encryption- and authenctication-keys using a key derivation function like HKDF.
1
0
0
So I am trying to write a small encrypted command line IM messenger which accepts two keys as args. I am not sure how the two keys are meant to work together or which is for what. I am trying to use HMAC and AES together but am unsure of how. Please note this is for educational purposes only, I do not intend to try to use this anywhere else. I am having a hard time understanding and any advice or explanations would be so greatly appreciated. I have a working IM messenger already up and running, I need only to figure out the HMAC/AES/keys. Thank you all, ~Maddie
How to combine HMAC with AES for (python) client-server messenger?
1.2
0
0
520
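A compact sketch of the Encrypt-then-MAC construction described in the accepted answer in the row above. It assumes the PyCryptodome package for AES-CBC; the helper names are made up, and this is an illustration of the idea rather than a vetted protocol.
    import hashlib
    import hmac

    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes
    from Crypto.Util.Padding import pad, unpad

    def encrypt_then_mac(plaintext, enc_key, mac_key):
        iv = get_random_bytes(16)
        cipher = AES.new(enc_key, AES.MODE_CBC, iv)
        ciphertext = iv + cipher.encrypt(pad(plaintext, AES.block_size))
        tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()  # MAC over the ciphertext
        return ciphertext + tag

    def verify_then_decrypt(blob, enc_key, mac_key):
        ciphertext, tag = blob[:-32], blob[-32:]
        expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("MAC check failed, refusing to decrypt")
        iv, body = ciphertext[:16], ciphertext[16:]
        return unpad(AES.new(enc_key, AES.MODE_CBC, iv).decrypt(body), AES.block_size)

    keys = get_random_bytes(32), get_random_bytes(32)   # separate encryption and MAC keys
    message = encrypt_then_mac(b"hello", *keys)
    print(verify_then_decrypt(message, *keys))          # b'hello'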
46,150,511
2017-09-11T07:38:00.000
2
0
1
0
python,qt,user-interface,pyqt,pyuic
66,570,292
1
false
0
1
Try to run pip install pyuic5-tool in your terminal.
1
0
0
I have a study assignment due that requires me to convert a *.ui file from Qt into a *.py file using the command prompt. However I have been struggling because I can not find the Pyuic5/4 module used to convert a *.ui file into a *.py file. {I have literally dug through and searched for it in my drive and can not find it} I have the mots recent Qt and Python download and running well. Any help or alternative method to converting *.ui to *.py would really be appreciated!
PyQt5 file missing (PyQt noob)
0.379949
0
0
184
46,151,461
2017-09-11T08:40:00.000
3
0
1
0
python,infinity
46,151,523
2
false
0
0
Numpy has an infinity object; you can access it as np.inf.
1
4
1
How to initialize a matrix to a very large number, say to infinity. Similar to initializing all elements to zero: sample = np.matrix((np.zeros(50,50)) I want to initalize to infinity How to do it in python?
Initializing a matrix to infinity in python
0.291313
0
0
14,126
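Two equivalent ways to do what the answer in the row above suggests:
    import numpy as np

    a = np.full((50, 50), np.inf)   # create an array already filled with +inf

    b = np.zeros((50, 50))
    b[:] = np.inf                   # or overwrite an existing array in place

    print(a[0, 0], b[0, 0])         # inf inf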
46,152,636
2017-09-11T09:44:00.000
1
0
0
0
python,pymc,markov-chains,mcmc
46,322,688
2
false
0
0
Perhaps, assuming each user behaves the same way in a particular time interval, at each interval t we can get the matrix [Pr 0->0, Pr 0->1; Pr 1->0, Pr 1->1], where Pr x->y = (the number of people in interval t+1 who are in state y AND who were in state x in interval t) divided by (the number of people who were in state x in interval t), i.e. the probability, based on the sample, that someone who is in state x (0 or 1) in the given time interval will transition to state y (0 or 1) in the next time interval.
1
1
1
I'm trying to build a MCMC model to simulate a changing beavior over time. I have to simulate one day with a time interval of 10-minutes. I have several observations of one day from N users in 144 intervals. So I have U_k=U_1,...,U_N U users with k ranging from 1 to N and for each user I have X_i=X_1,...X_t samples. Each user has two possible states, 1 and 0. I have understood that I have to build a transition probability matrix for each time step and then run the MCMC model. Is it right? But I did not understood how to build it in pyMC can anybody provided me suggestion?
Monte Carlo Marcov Chain with pymc
0.099668
0
0
245
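A sketch of the counting estimate described in the answer in the row above, assuming the observations are held as a users-by-intervals array of 0/1 states; the variable names and the random sample data are invented for the example.
    import numpy as np

    # states[u, t] = state (0 or 1) of user u during 10-minute interval t
    states = np.random.randint(0, 2, size=(100, 144))

    def transition_matrix(states, t):
        """Estimate P[x -> y] between interval t and t+1 from the sample."""
        P = np.zeros((2, 2))
        for x in (0, 1):
            in_x = states[:, t] == x
            if in_x.sum() == 0:
                continue  # no one observed in state x at interval t
            for y in (0, 1):
                P[x, y] = np.mean(states[in_x, t + 1] == y)
        return P

    print(transition_matrix(states, 0))  # each observed row sums to 1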
46,154,966
2017-09-11T11:44:00.000
3
0
1
1
python,airflow,apache-airflow
46,170,951
1
true
0
0
Check your environment variable "AIRFLOW_HOME". If it is not set, Airflow defaults to ~/airflow, and DAG scripts are normally placed in $AIRFLOW_HOME/dags. You can place the Python script and its dependencies there, but I strongly recommend creating a package for the dependencies and installing it in your Python environment alongside airflow, to avoid unnecessary clutter of files in your dags folder.
1
1
0
I would like to import some python files as dependency in airflow and use an function in python_callable in PythonOperator. I tried placing the dependency python file in the dags folder, but doesn't seem to work. I'm assuming the DAG is being moved to some other folder, before being executed. Help appreciated!!
Airflow: External python in python_callable of PythonOperator
1.2
0
0
996
46,156,123
2017-09-11T12:44:00.000
1
0
1
0
python,python-2.7,plotly
48,228,042
3
false
0
0
I had the same exact problem, but the current answer did not resolve my issues. If that's the case, here is an alternative (IDE dependent): I was not having success with the "pip" stuff. I am using PyCharm, and the following simple steps took care of my problems in less than 30 seconds. Settings Click Project: "MyProjectHere" in left hand nav menu Select Project Interpreter from the above drop down Find the green 'plus' sign near the upper right edge of the window that comes up. Type Plotly in the search bar Click install. Maybe one day I won't be a dumb monkey who doesn't know how to use a command line like all the cool kids, but for now this worked for me.
2
2
0
I am working on project and getting this error ImportError: No module named plotly.plotly I tried: pip install plotly pip install --upgrade plotly But import plotly.plotly as py didn't work.
Import error : No module named plotly.plotly
0.066568
0
0
6,361
46,156,123
2017-09-11T12:44:00.000
0
0
1
0
python,python-2.7,plotly
50,879,899
3
false
0
0
I had the same problem installing plotly with pip and then import not being able to find it, but it worked when I used conda, instead.
2
2
0
I am working on project and getting this error ImportError: No module named plotly.plotly I tried: pip install plotly pip install --upgrade plotly But import plotly.plotly as py didn't work.
Import error : No module named plotly.plotly
0
0
0
6,361
46,157,465
2017-09-11T13:56:00.000
0
1
0
0
raspberry-pi,python-3.4,mod-wsgi
47,499,801
2
true
1
0
For anyone looking at this in 2020: I changed mod_wsgi to single thread mode. I'm not sure if it's related to Python, mod_wsgi, or bad juju, but it still would not last long term. After a few hours the PWM would stop at full off. I tried rolling my own PWM daemon, but ultimately went with the pigpio module (is Joan on SE?). It's been working perfect for me.
1
0
0
I've been developing a web interface for a simple raspberry pi project. It's only turning lights on and off, but I've been trying to add a dimming feature with PWM. I'm using modWSGI with Apache, and RPi.GPIO for GPIO access. For my prototype I'm using (3) SN74HC595's in series for the LED outputs, and am trying to PWM the OE line to dim the lights. Operating the shift registers is easy, because they hold the outputs in between updates. However, for PWM to work the GPIO.PWM instance must stay active between WSGI sessions. This is what I'm having trouble with. I've been working on this for a few days, and saw a couple similar questions here. But nothing for active objects like PWM, only simple counters and such. My two thoughts are: 1) Use the global scope to hold the PWM object, and use PWM.ChangeDutyCycle() in the WSGI function to change brightness. This approach has worked before, but it seems like it might not here. Or 2) Create a system level daemon (or something) and make calls to that from within my WSGI function.
Object persistence in WSGI
1.2
0
0
130
46,159,517
2017-09-11T15:43:00.000
0
0
0
0
python,encoding,nltk,spyder,miniconda
46,161,737
1
false
0
0
I was able to work around this issue, but had to uninstall Miniconda and Python. I reinstalled Anaconda, launched Spyder from Anaconda-Navigator and its all working fine now. But I still don't understand the cause of this issue. It will be great if someone is able to explain.
1
0
1
I am trying to execute a simple nltk code: nltk.sent_tokenize(text) and am getting error LookupError: unknown encoding: cp0. I tried typing in chcp in my IPython Console and I am getting the same error. I am working on Windows10 desktop, executing Python code over Miniconda > Spyder IDE. I have Python 2.7 installed.
Python: LookupError: unknown encoding: cp0
0
0
0
2,069
46,161,694
2017-09-11T17:56:00.000
1
0
0
1
python,amazon-web-services,push-notification,amazon-sns
46,165,313
1
true
1
0
I would store the device token (and I do). I was able to use it when I needed to transparently migrate a few million endpoints from a US region to one in Asia. Might also come in handy if you also wanted to migrate off of AWS at some point. The only reason I wouldn't store it is because of GDPR, but if you're not worried about that then it's not like it's a lot of data. Also you only need to call create_platform_endpoint() once, storing the result ARN. Watch out for a change to the device token. If it does, you'll need to contact your server and notify it that it's changed and call create_platform_endpoint() again. I've never actually seen this happen, however.
1
0
0
I am designing a backend service that sends to some mobile app user messages from my server. Having retrieved their device token using a webhook, should I store these tokens in DB and call create_platform_endpoint() every time I need to send a message? Or storing device token on backend is needless and excessive and once having obtained ARN from create_platform_endpoint(), there is no need to store mobile device tokens on backend?
Storing endpoint ARN vs. Device Token at backend
1.2
0
0
359
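For reference, a sketch of the call the accepted answer in the row above refers to, using boto3; the ARN and token values are placeholders.
    import boto3

    sns = boto3.client("sns")

    def register_device(platform_application_arn, device_token):
        # Called when a device token arrives via the webhook. Store the returned
        # EndpointArn (and, per the answer above, optionally the raw token) in the DB.
        response = sns.create_platform_endpoint(
            PlatformApplicationArn=platform_application_arn,
            Token=device_token,
        )
        return response["EndpointArn"]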
46,162,326
2017-09-11T18:38:00.000
0
0
1
0
python,dictionary
46,162,606
1
true
0
0
The analogy of a 2-column table might work to some degree but it doesn't cover some important aspects of how and why dictionaries are used in practice. The comment by @Sayse is more conceptually useful. Think of the dictionary as a physical language dictionary, where the key is the word itself and the value is the word's definition. Two items in the dictionary cannot have the same key but could have the same value. In the analogy of a language dictionary, if two words had the same spelling then they are the same word. However, synonyms can exist where two words which are spelled differently could have the same definition. The table analogy also doesn't cover the behaviour of a dictionary where the order is not preserved or reliable. In a dictionary, the order does not matter and the item is retrieved by its key. Perhaps another useful analogy is to think of the key as a person's name and the value is the person themselves (and maybe lots of information about them as well). The people are identified by their names but they may be in any given order or location...it doesn't matter, since we know their names we can identify them. While the order of items in a dictionary may not be preserved, a dictionary has the advantage of having very fast retrieval for a single item. This becomes especially significant as the number of items to lookup grows larger (on the order of thousands or more). Finally, I would also add that dictionaries can often improve the readability of code. For example, if you wanted create a lookup table of HTML color codes, an API using a dictionary of HTML color names is much more readable and usable than using a list and relying on documentation of indices to retrieve the values. So if it helps you to conceptualize a dictionary as a table of 2 columns, that is fine, as long as you also keep in mind the rules for their use and the scenarios where they provide some benefit: Duplicate keys are not allowed The order of keys is not preserved and therefore not reliable Retrieving a single item is fast (esp. for many items) Improved readability of lookup tables
1
1
0
I've been noodling around with Python for quite a while in my spare time, and while I have sort of understood and definitely used dictionaries, they've always seemed somewhat foreign to me, like I wasn't quite getting them. Maybe it's the name "dictionary" throwing me off, of the fact I started way back when with Basic (I know) which had arrays, but they were quite different. Can I simply think of a dictionary in Python as nothing more or less than a two-column table where we name the contents of the first column "keys" and the contents of the second column "values"? Is this conceptualization extremely accurate and useful, or problematic? If the former, I think I can finally swallow the concept in such a way to finally make it more natural to my thinking.
Is it 100% accurate to think of a Python dictionary as merely a two column table?
1.2
0
0
193
46,164,287
2017-09-11T20:58:00.000
3
0
0
0
python,django
46,164,587
2
false
1
0
Personally, I would modify your existing classes to extend models.Model and maintain separate versions of these classes for use outside of Django. This will keep your classes lean and maintainable within their respective environments. You could also create a new class that extends both models.Model and your python model through multiple inheritance. However this will result in duplicate fields for the same data. If you would like, post an example Model as a new question and tag me in a link to it here, and I can help you convert it.
2
0
0
I have a small program with a command line interface that uses a number of python classes with thorough implementations. I want to scrap the command line interface and wrap the app within a Django app, but I'm just learning Django and I'm unfamiliar with the conventions. I have a number of classes, in-memory storage structures, getters/setters etc and I'd like to convert them into Django models so that I can persist them to the database and interact with them around the django app. Is there a general approach for doing something like this? Should I just inherit the django.db.models.Model class in my existing classes and set them up for direct interaction? Or is there a better, more general/conventional way to do this? I would like to be able to use all of this code in other apps, not necesarilly Django ones, so I don't really want to modify my existing classes in a way that would make them only work with Django. I thought of creating the models separately and then a sort of middle-man class to manage interaction of the actual in-memory class with the django model class, but that just seems like more places I have to make changes when I extend/modify the code. Thanks for any help ahead of time...
Converting existing python classes into Django Models
0.291313
0
0
386
46,164,287
2017-09-11T20:58:00.000
1
0
0
0
python,django
46,164,518
2
false
1
0
One of Django's greatest strengths is its ORM; if you want to import your existing classes I recommend you use it. Yes, you would probably need to rewrite the part that interacts with the database, but if you have already isolated those functions in a models folder/classes, the modification won't be hard. Although in your case I would recommend checking out Tornado/Aiohttp, since it looks like you are just trying to create an interface for your functions.
2
0
0
I have a small program with a command line interface that uses a number of python classes with thorough implementations. I want to scrap the command line interface and wrap the app within a Django app, but I'm just learning Django and I'm unfamiliar with the conventions. I have a number of classes, in-memory storage structures, getters/setters etc and I'd like to convert them into Django models so that I can persist them to the database and interact with them around the django app. Is there a general approach for doing something like this? Should I just inherit the django.db.models.Model class in my existing classes and set them up for direct interaction? Or is there a better, more general/conventional way to do this? I would like to be able to use all of this code in other apps, not necesarilly Django ones, so I don't really want to modify my existing classes in a way that would make them only work with Django. I thought of creating the models separately and then a sort of middle-man class to manage interaction of the actual in-memory class with the django model class, but that just seems like more places I have to make changes when I extend/modify the code. Thanks for any help ahead of time...
Converting existing python classes into Django Models
0.099668
0
0
386
46,166,480
2017-09-12T01:22:00.000
0
0
0
1
python,python-3.x,dsl
46,199,573
2
false
0
0
I recommend explicitly activating a python env before you run your script in your jenkinsfile to ensure you are in an environment which has nose installed. Please check out virtualenv, tox, or conda for information on how to do so.
1
2
0
Code: sh 'python ./selenium/xy_python/run_tests.py' Error: Traceback (most recent call last): File "./selenium/xy_python/run_tests.py", line 6, in import nose ImportError: No module named nose
Calling a Python Script from Jenkins Pipeline DSL causing import error
0
0
1
5,801
46,166,696
2017-09-12T01:48:00.000
1
0
1
0
python
53,291,691
2
false
0
0
If you read the code here, in theory there is this: meaning(term, disable_errors=False), so you should be able to pass True to avoid printing the error in case the word is not in the dictionary. I tried, but I guess the version I installed via pip does not contain that code...
1
0
0
Very new to the PyDictionary library, and have had some trouble finding proper documentation for it. So, I've come here to ask: A) Does anybody know how to check if a word (in english) exists, using PyDictionary? B) Does anybody know of some more full documentation for PyDictionary?
Using PyDictionary to check if a word exists
0.099668
0
0
1,845
46,167,324
2017-09-12T03:13:00.000
2
0
1
0
python,pycharm
46,167,446
1
true
0
0
Try removing the .py file association: right-click any .py file > Properties > General tab > Opens with > Change.
1
2
0
I recently installed PyCharm. Now when I run a *.py file from the command line, PyCharm opens up and tries to create a project - every time! I close PyCharm, try to run the file from cmd, PyCharm opens back up and interferes. Is there any way to prevent this? Thanks.
PyCharm opens automatically
1.2
0
0
126
46,168,305
2017-09-12T05:07:00.000
0
0
1
0
python,macos,logging,python-3.6
46,168,935
1
true
0
0
When I run the program, a new file named cover is created in the documents folder! And the logging information is copied to that file and not to the file that I have made. The program is working correctly. You don't create the logging file first, the application will create it automatically. This is because the logging module in Python is very comprehensive, it can log to files, network locations, the console, email addresses, other servers and when it comes to logging to files - it can automatically rotate the log files. For example, you can configure the logging module to create a new log file every day, or after the log file reaches a certain size, create a new log file. That's why the program is creating files for you. If the file is called cover this is because by default Windows will hide extensions of files. You can disable this option from the "View" menu of Windows Explorer.
1
0
0
My book states that to copy the logging information to a file (assume its name is cover.txt) you add the following code in your program: logging.basicConfig(filename='cover.txt',level=logging.DEBUG,format=' %(asctime)s - %(levelname)s - %(message)s') When I run the program, a new file named cover is created in the documents folder! And the logging information is copied to that file and not to the file that I have made. Does this always happen? If not, how can I copy the logging info to my original file cover.txt?
How to copy the logging information of a program to an existing file?
1.2
0
0
31
46,173,343
2017-09-12T09:55:00.000
0
0
0
0
python,xml,odoo-9,odoo-10
46,175,051
3
false
1
0
Each and every model has an active field (by default active=True), so you can create a separate view that shows the products which are not active. You can use the active field as the reference for this.
2
0
0
I want to hide all products that are not checked. For example: I have a checkbox field in the product.template model and I want to hide all those products that are unchecked.
Checkbox condition in odoo
0
0
0
846
46,173,343
2017-09-12T09:55:00.000
0
0
0
0
python,xml,odoo-9,odoo-10
46,176,387
3
false
1
0
You can archive the products that you don't want to show.
2
0
0
I want to hide all products that are not checked. For example: I have a checkbox field in the product.template model and I want to hide all those products that are unchecked.
Checkbox condition in odoo
0
0
0
846
46,173,428
2017-09-12T09:58:00.000
4
0
0
0
python,opencv,image-processing,computer-vision
46,173,790
1
true
0
0
Do not know if this works in general, but given your sample images I'd find the middle point of the short edges of the bounding box and get two rectangles for the two halves of the big BBox. I would then compute the sum of the mask pixels in the two separate half-BBoxes, assuming white is 1 and black is 0. Since the white area is bigger on the half of the rectangle where the "front" of the turbine is, pick the direction according to which of the two half-BBoxes has a higher sum
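A minimal NumPy sketch of that idea, assuming mask is a binary (0/1) image already cropped to the bounding box and that the turbine's long axis runs along the columns (the function and variable names are illustrative only):

```python
import numpy as np

def facing_side(mask):
    """Return 'left' or 'right' depending on which half of the cropped
    bounding box contains more white (1) pixels."""
    h, w = mask.shape
    left_sum = mask[:, : w // 2].sum()
    right_sum = mask[:, w // 2 :].sum()
    return 'left' if left_sum > right_sum else 'right'

# Toy example where the right half is denser.
toy = np.zeros((4, 6), dtype=np.uint8)
toy[:, 3:] = 1
print(facing_side(toy))  # right
```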
1
6
1
I am supposed to determine the direction a windmill is facing from aerial images (with respect to True North - 0 to 359 degrees). My question is, how can I determine the correct direction of the windmill and calculate its angle relative to the y-axis? Thanks!
How to determine object orientation in binary image? (Python, OpenCV)
1.2
0
0
1,499
46,174,679
2017-09-12T10:59:00.000
0
0
1
1
python,multiprocessing,python-multiprocessing
46,175,030
1
false
0
0
The most portable solution I can suggest (although this will still involve further research for you), is to have a long-running process that manages the "background worker" processes. This shouldn't ever be killed off, as it handles the logic for piping messages to each sub process. Manager.py can then implement logic to create communication to that long-running process (whether that's via pipes, sockets, HTTP or any other method you like). So manager.py effectively just passes on a message to the 'server' process "hey please stop all the child processes" or "please send a message to process 10" etc. There is a lot of work involved in this, and a lot to research. But the main thing you'll want to look up is how to handle IPC (Inter-Process Communication). This will allow your Manager.py script to interact with an existing/long-running process that can better manage each background worker. The alternative is to rely fully on your operating system's process management APIs. But I'd suggest from experience that this is a much more error prone and troublesome solution.
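One possible, hedged sketch of that IPC setup uses the standard library's multiprocessing.connection (the address ('localhost', 6000), the authkey and the command format are placeholders):

```python
# server.py -- the long-running manager process (illustrative sketch)
from multiprocessing.connection import Listener

listener = Listener(('localhost', 6000), authkey=b'secret')
while True:
    conn = listener.accept()          # a short-lived manager.py connects here
    command = conn.recv()             # e.g. {'action': 'stop', 'worker': 2}
    print('received command:', command)
    conn.send('ok')                   # acknowledge and let the client exit
    conn.close()
```

```python
# manager.py -- the short-lived command-line tool (illustrative sketch)
from multiprocessing.connection import Client

conn = Client(('localhost', 6000), authkey=b'secret')
conn.send({'action': 'sendmessagetoall', 'message': 'hello'})
print(conn.recv())                    # 'ok'
conn.close()
```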
1
0
0
I see a lot of examples of how to use multiprocessing, but they all talk about spawning workers and controlling them while the main process is alive. My question is how to control background workers in the following way: start 5 workers from the command line: manager.py --start 5 After that, I will be able to list and stop workers on demand from the command line: manager.py --start 1 #will add 1 more worker manager.py --list manager.py --stop 2 manager.py --sendmessagetoall "hello" manager.py --stopall The important point is that manager.py should exit after every run. What I don't understand is how to get a list of already running workers from a newly created manager.py program and communicate with them. edit: Bilkokuya suggested that I have (1) a manager process that manages a list of workers... and will also listen to incoming commands; and (2) a small command line tool that will send messages to the first manager process... Actually it sounds like a good solution. But still, the question remains the same - how do I communicate with another process from a newly created command line program (process 2)? All the examples I see (of Queue, for example) work only when both processes are running all the time
using python multiprocessing to control independent background workers after the spawning process has been closed
0
0
0
223
46,176,656
2017-09-12T12:40:00.000
2
0
0
0
python,pandas,pandas-loc
46,176,863
2
false
0
0
Underneath the covers, both are using the __setitem__ and __getitem__ functions.
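A tiny sketch of how any class can opt into the same bracket syntax (illustrative only, not pandas' actual implementation):

```python
class Indexer:
    def __getitem__(self, key):
        # obj[key] calls this method; key can be anything, including a tuple
        # such as obj[1, 'a'] or a slice such as obj[1:5].
        return "you asked for {!r}".format(key)

    def __setitem__(self, key, value):
        print("setting {!r} to {!r}".format(key, value))

idx = Indexer()
print(idx[3])           # you asked for 3
print(idx[1:5, 'col'])  # you asked for (slice(1, 5, None), 'col')
idx['row'] = 42         # setting 'row' to 42
```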
1
19
1
So .loc and .iloc are not your typical functions. They somehow use [ and ] to surround the arguments so that it is comparable to normal array indexing. However, I have never seen this in another library (that I can think of; maybe numpy has something like this that I'm blanking on), and I have no idea how it technically works/is defined in the python code. Are the brackets in this case just syntactic sugar for a function call? If so, how then would one make an arbitrary function use brackets instead of parentheses? Otherwise, what is special about their use/definition in Pandas?
Why/How does Pandas use square brackets with .loc and .iloc?
0.197375
0
0
3,012
46,176,899
2017-09-12T12:51:00.000
2
0
1
0
python,debugging
46,177,166
3
false
0
0
Try this: pip install -r requirements.txt --force-reinstall --upgrade
1
3
0
I'm debugging an issue on a staging, and I've added a bunch of logging statements to a 3rd party package. Once I'm done with that, I'd like to get them back to their original state. In ruby, I could do a gem pristine lib_name and that would restore the lib to it's original source code. It might be relevant to mention that I'm modifying code that was installed with sudo pip install some_pkg. What's the usual way of reverting any changes done to a lib?
How can I revert changes done to python packages?
0.132549
0
0
4,669
46,178,062
2017-09-12T13:42:00.000
3
0
0
1
python,postgresql,google-cloud-platform,google-cloud-storage,google-cloud-sql
64,040,093
3
false
1
0
Hostname is the Public IP address.
1
3
0
I'm trying to connect to a PostgreSQL database on Google Cloud using SQLAlchemy. Making a connection to the database requires specifying a database URL of the form: dialect+driver://username:password@host:port/database I know what the dialect + driver is (postgresql), I know my username and password, and I know the database name. But I don't know how to find the host and port on the Google Cloud console. I've tried using the instance connection name, but that doesn't seem to work. Anyone know where I can find this info on Google Cloud?
What is the hostname for a Google Cloud PostgreSQL instance?
0.197375
1
0
6,507
46,179,209
2017-09-12T14:34:00.000
0
0
0
0
python-3.x,theano,lasagne
46,185,735
1
false
0
0
SOLVED: in previous versions, the input parameter train_split was a number that was used by the same-named method. In nolearn 0.6.0, it's a callable object that can implement its own logic to split the data. So instead of providing a float number to the input parameter train_split, I have to provide a callable instance (the default one is TrainSplit), which will be executed in each training epoch.
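A hedged sketch of what the fix looks like; the layer setup here is a made-up minimal network and only the train_split line is the point, so treat everything else as an assumption:

```python
# Illustrative sketch only: pass a TrainSplit instance instead of a float.
from lasagne import layers, nonlinearities
from nolearn.lasagne import NeuralNet, TrainSplit

net1 = NeuralNet(
    layers=[('input', layers.InputLayer),
            ('output', layers.DenseLayer)],
    input_shape=(None, 3),
    output_num_units=2,
    output_nonlinearity=nonlinearities.softmax,
    update_learning_rate=0.01,
    train_split=TrainSplit(eval_size=0.2),   # callable object, not a number
    max_epochs=5,
)
# net1.fit(X, y) now calls TrainSplit to split X, y each time it runs.
```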
1
1
1
I am learning to deal with python and lasagne. I have following installed on my pc: python 3.4.3 theano 0.9.0 lasagne 0.2.dev1 and also six, scipy and numpy. I call net.fit(), and the stacktrace tries to call train_split(X, y, self), which, I guess, should split the samples into training set and validation set (both the inputs X as well as the outputs Y). But there is no method like train_split(X, y, self) , there is only a float field train_split - I assume, the ratio between training and validation set sizes. Then I get following error: Traceback (most recent call last): File "...\workspaces\python\cnn\dl_tutorial\lasagne\Test.py", line 72, in net = net1.fit(X[0:10,:,:,:],y[0:10]) File "...\Python34\lib\site-packages\nolearn\lasagne\base.py", line 544, in fit self.train_loop(X, y, epochs=epochs) File "...\Python34\lib\site-packages\nolearn\lasagne\base.py", line 554, in train_loop X_train, X_valid, y_train, y_valid = self.train_split(X, y, self) TypeError: 'float' object is not callable What could be wrong or missing? Any suggestions? Thank you very much.
Missing method NeuralNet.train_split() in lasagne
0
0
0
47
46,180,292
2017-09-12T15:27:00.000
1
0
1
0
python,multiprocessing,mpi4py
55,276,764
2
false
0
0
MPI-3 has a shared memory facility for precisely your sort of scenario. And you can use MPI through mpi4py.... Use MPI_Comm_split_type to split your communicator into groups that live on a node. Use MPI_Win_allocate_shared for a window on the node; specify nonzero size only on one rank. Use MPI_Win_shared_query to get pointers to that window.
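A rough mpi4py sketch of those three steps (the array length and dtype are placeholders, and this assumes mpi4py built against an MPI-3 implementation):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
# 1) one communicator per node
node_comm = comm.Split_type(MPI.COMM_TYPE_SHARED)

n_items = 1000                       # placeholder for the real array length
itemsize = MPI.DOUBLE.Get_size()

# 2) only rank 0 on each node allocates the shared window
size = n_items * itemsize if node_comm.rank == 0 else 0
win = MPI.Win.Allocate_shared(size, itemsize, comm=node_comm)

# 3) everyone queries rank 0's segment and wraps it as a numpy array
buf, itemsize = win.Shared_query(0)
shared = np.ndarray(buffer=buf, dtype='d', shape=(n_items,))

if node_comm.rank == 0:
    shared[:] = np.arange(n_items)   # e.g. load the big array from disk here
node_comm.Barrier()
print(comm.rank, shared[:3])
```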
1
2
1
I have a Python application that needs to load the same large array (~4 GB) and do a perfectly parallel function on chunks of this array. The array starts off saved to disk. I typically run this application on a cluster computer with something like, say, 10 nodes, each node of which has 8 compute cores and a total RAM of around 32GB. The easiest approach (which doesn't work) is to do n=80 mpi4py. The reason it doesn't work is that each MPI core will load the 4GB map, and this will exhaust the 32GB of RAM resulting in a MemoryError. An alternative is that rank=0 is the only process that loads the 4GB array, and it farms out chunks of the array to the rest of the MPI cores -- but this approach is slow because of network bandwidth issues. The best approach would be if only 1 core in each node loads the 4GB array and this array is made available as shared memory (through multiprocessing?) for the remaining 7 cores on each node. How can I achieve this? How can I have MPI be aware of nodes and make it coordinate with multiprocessing?
Python hybrid multiprocessing / MPI with shared memory in the same node
0.099668
0
0
981
46,180,651
2017-09-12T15:44:00.000
0
0
1
0
java,python,flat-file
46,181,307
2
false
0
0
Ensure that you normalize your data with an ID, to avoid touching so many different data columns with even a single change. Like the file2 you mentioned above, you can reduce the columns to two by having just the propertyId and the property columns. Rather than having 1 propertyId associated with 2 properties in a single row, you'd have 1 propertyId associated with 1 property, per your example above. You need another file to correlate your two main data tables. Normalizing your data like this keeps your updates very small when a change occurs.
file1:
owner_id | name    | position
1        | Jack Ma | CEO
file2:
property_id | property
101         | Hollywood Mansion
102         | Miami Beach House
file3:
OwnerId | PropertyId
1       | 101
1       | 102
1
0
0
I have two data files in some weird format. I need to parse them into some decent format for future use. After parsing I end up with two formats, where one has an ID and the information pertaining to that ID comes from the other file. For example: from file 1 I get Name, Position, PropertyID, and from file 2 PropertyId, Property1, Property2; like this I have more columns from both files. What is the ideal way to store this information in a flat file to serve as a database? I don't want to use a database (MySQL, MSSQL) for some reason. Initially I thought of using a single comma-separated file, but I'll end up with so many columns that it will create problems when I update the information. I'll be using the parsed data in other applications written in Java and Python. Can anyone suggest a better way to handle this? Thanks
Storing data in flat files
0
1
0
1,067
46,182,724
2017-09-12T17:54:00.000
-1
0
1
0
python,sql
46,182,861
1
false
0
0
sh = book.sheet_by_index(0)
a1 = sh.cell_value(rowx=0, colx=1) - 1
b = xlrd.xldate_as_datetime(a1, 0)
print(b)
That's it.
1
1
0
I'm getting an error trying to convert a date (1500-12-31) to a tuple in Python. If a date is before 1900, the Excel serial number is a negative one; I need to get human-readable text.
PYTHON Convert dates under 1900 (Negatives) to text date xlrd EXCEL
-0.197375
0
0
72
46,183,843
2017-09-12T19:15:00.000
2
0
0
0
python,web-scraping,beautifulsoup,scrapy
46,185,425
2
true
1
0
Scrapy uses a link follower to traverse through a site, until the list of available links is gone. Once a page is visited, it's removed from the list and Scrapy makes sure that link is not visited again. Assuming all the websites pages have links on other pages, Scrapy would be able to visit every page of a website. I've used Scrapy to traverse thousands of websites, mainly small businesses, and have had no problems. It's able to walk through the whole site.
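A minimal, hedged sketch of a spider that keeps following internal links until none are left (the domain, selectors and item fields are placeholders):

```python
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class SiteSpider(CrawlSpider):
    name = 'site'
    allowed_domains = ['example.com']          # placeholder domain
    start_urls = ['http://example.com/']

    # follow=True keeps extracting and visiting links; duplicate URLs are
    # filtered automatically, so the crawl terminates when the site is covered.
    rules = (Rule(LinkExtractor(), callback='parse_item', follow=True),)

    def parse_item(self, response):
        yield {
            'url': response.url,
            'title': response.css('title::text').extract_first(),
        }
```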
1
1
0
I have used Beautiful Soup with great success when crawling single pages of a site, but I have a new project in which I have to check a large list of sites to see if they contain a mention or a link to my site. Therefore, I need to check the entire site of each site. With BS I just don't know yet how to tell my scraper that it is done with a site, so I'm hitting recursion limits. Is that something Scrapy handles out of the box?
Does Scrapy 'know' when it has crawled an entire site?
1.2
0
1
344
46,184,423
2017-09-12T19:57:00.000
0
0
1
0
python,python-3.x,package,bundle
46,187,452
1
false
0
0
Couldn't you just use a virtual environment (virtualenv folder_name) and then activate it, unless you are looking for something else? Once it's activated, install all of your libraries and download them there using pip install.
1
0
0
I have my Python script and my requirements.txt ready. What I want to do is to get all the packages listed in the "requirements.txt" into a folder. In the bundle, I'd for example have the full packages of "pymysql", "bs4" as well as all their dependencies. I have absolutely no idea how to do this. Could you help me please? I am stuck and I am really struggling with this. I am using Python 3.6 I am using "pip download -r requirements.txt" but it's not downloading the dependencies and outputs me only.whl files whereas I'm looking for "proper" folders..
Bundle all packages required from a Python script into a folder
0
0
0
244
46,185,297
2017-09-12T21:05:00.000
0
1
1
0
python,amazon-web-services,numpy,aws-lambda
61,375,373
6
false
0
0
1.) Do a Pip install of numpy to a folder on your local machine. 2.) once complete, zip the entire folder and create a zip file. 3.) Go to AWS lambda console, create a layer and upload zip file created in step 2 there and save the layer. 4.) After you create your lambda function, click add layer and add the layer you created. That's it, import numpy will start working.
2
8
0
I'm looking for a work around to use numpy in AWS lambda. I am not using EC2 just lambda for this so if anyone has a suggestion that'd be appreciated. Currently getting the error: cannot import name 'multiarray' Using grunt lambda to create the zip file and upload the function code. All the modules that I use are installed into a folder called python_modules inside the root of the lambda function which includes numpy using pip install and a requirements.txt file.
Using numpy in AWS Lambda
0
0
0
8,835
46,185,297
2017-09-12T21:05:00.000
6
1
1
0
python,amazon-web-services,numpy,aws-lambda
69,191,367
6
false
0
0
An easy way to make your lambda function support the numpy library for python 3.7: Go to your lambda function page Find the Layers section at the bottom of the page. Click on Add a layer. Choose AWS layers as layer source. Select AWSLambda-Python37-Scipy1x as AWS layers. Select 37 for version. And finally click on Add. Now your lambda function is ready to support numpy.
2
8
0
I'm looking for a work around to use numpy in AWS lambda. I am not using EC2 just lambda for this so if anyone has a suggestion that'd be appreciated. Currently getting the error: cannot import name 'multiarray' Using grunt lambda to create the zip file and upload the function code. All the modules that I use are installed into a folder called python_modules inside the root of the lambda function which includes numpy using pip install and a requirements.txt file.
Using numpy in AWS Lambda
1
0
0
8,835
46,187,056
2017-09-13T00:25:00.000
2
0
0
0
python,machine-learning,tensorflow,inference
46,187,418
2
true
0
0
How do you set up a server? If you are setting up a server using a Python framework like Django, Flask or Tornado, you just need to preload your model and keep it as a global variable, and then use this global variable to predict. If you are using some other server, you can also turn the entire Python script you use to predict into a local server, and pass the request or response between the Python server and the web server.
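A hedged Flask sketch of the "load once, predict many times" pattern; the route name, model loading and predict call are placeholders for whatever your own code does:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Heavy work happens once, at startup: e.g. import tensorflow and restore
# the trained model here, then keep it in a module-level variable.
MODEL = None  # placeholder for the loaded model object

@app.route('/predict', methods=['POST'])
def predict():
    payload = request.get_json()
    # result = MODEL.predict(payload)   # hypothetical call on your model
    result = {'received': payload}      # stand-in so this sketch actually runs
    return jsonify(result)

if __name__ == '__main__':
    app.run(port=5000)
```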
1
2
1
Is there a way to compile the entire Python script with my trained model for faster inference? Seems like loading the Python interpreter, all of Tensorflow, numpy, etc. takes a non-trivial amount of time. When this has to happen at a server responding to a non-trivial frequency of requests, it seems slow. Edit I know I can use Tensorflow serving, but don't want to because of the costs associated with it.
Compiling model as executable for faster inference?
1.2
0
0
372
46,187,988
2017-09-13T02:39:00.000
0
0
0
0
r,python-2.7,arcgis,r-raster
46,215,900
1
false
0
0
This is for display/mapping purposes only? Use a DEM or TIN and display your arrow lines in ArcScene. EDIT: given your update about your data and the software not working, try this: 1) Make a raster surface covering the extent of your data with a cell size of 100m (or smaller or larger if that doesn't suit). 2) Convert that raster to a polygon layer, e.g. 'area_grid100m'. 3) Do a spatial join and assign all points a polygon cell id from one of the unique id fields in 'area_grid100m'. 4) Use Summarize to get the mean lat/long of the start points and the mean lat/long of the end points for each polygon. Summarize on the polygon id field and select mean for both the lat and long fields. 5) Add the summary table to ArcMap, right-click it and select Display XY Data (set X Field as longitude and Y Field as latitude). Right-click the result and select Data > Export Data to make it permanent. You will now have two points per 'area_grid100m' cell. 6) Recreate your lines using this new file, which will give you one line per cell. If the resolution is not small enough, make the 'area_grid' cells smaller.
1
0
1
I would like to generate vector arrows that conform to the topography/slope of a raster dataset of a river catchment area. I have created a Fishnet grid of points in ArcGIS and I would like to create a single arrow for each point of a set length that will follow the shape of the slope i.e. follow the path of least resistance, the line will follow progressively small numbers in a 3 x 3 grid. I think I can generate the vector arrows using vector plot. Is it possible to achieve the lines conforming to the raster? UPDATE: I have ~200,000 lines that I generated from a grid of points. I am going to turn these into a raster using R and set it to the same resolution as my slope raster. Any ideas on how to layer the raster lines on the slope so I can get the lines to follow the lowest values of the slope?
How to generate vector arrows that conforms to a raster slope layer?
0
0
0
322
46,188,842
2017-09-13T04:25:00.000
0
0
0
1
python,django,elasticsearch
46,230,007
1
false
1
0
It appears that the localhost value you are trying to connect to is a Unicode string, host=u'localhost'. Not sure how you are getting/assigning that value into a variable, but you should try to encode/convert it to ASCII so that it can be properly interpreted during the HTTP connection routine.
1
0
0
In local system elastic search works perfectly but when i'm trying to search in server system it shows in console : "ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host=u'localhost', port=9200): Read timed out. (read timeout=10))"
ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host=u'localhost', port=9200): Read timed out. (read timeout=10))
0
0
1
3,169
46,189,067
2017-09-13T04:50:00.000
0
0
1
0
python,python-3.x,directory
46,190,862
2
false
0
0
Some Unix/Linux systems are open source, so you could in principle modify the OS behavior to do that; I don't believe Windows offers this feature, so you would probably need to create an app for it. The best solution would be a Python script that runs periodically (for example every hour), checks whether the folder has been modified, and does something if it has; it can also be set to run once each time you turn on the computer.
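A hedged sketch of that polling idea using only the standard library (the folder path and interval are placeholders; note this detects modification of the folder's contents, not "selection"):

```python
import os
import time

WATCHED = r"C:\Users\me\Family Photos"   # placeholder path
INTERVAL = 60 * 60                       # check once an hour

def run_my_program():
    print("folder changed - run the real program here")

last_mtime = os.path.getmtime(WATCHED)
while True:
    time.sleep(INTERVAL)
    mtime = os.path.getmtime(WATCHED)
    if mtime != last_mtime:              # entries were added/removed/renamed
        last_mtime = mtime
        run_my_program()
```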
1
0
0
Say I have a folder called "Family Photos" and I want to automatically run a python program if that folder is selected. How would I go about doing that? Would I just put the code in the folder and it runs automatically? Edit: I'm on Windows 10
How can I run a Python 3 program if a folder is selected
0
0
0
54
46,189,569
2017-09-13T05:35:00.000
0
0
1
0
python,time
46,189,722
1
false
0
0
I never realised this (the inability to subtract datetime.time objects). The best I can think of is to subclass datetime.time and add the __add__ and __sub__ methods there to return something meaningful. And I agree that this doesn't feel very "batteries included".
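A hedged sketch of that subclass idea: it converts to seconds internally and returns a new instance (only hours/minutes/seconds are handled, and rolling below zero or past midnight is ignored for brevity):

```python
import datetime

class Duration(datetime.time):
    def _to_seconds(self):
        return self.hour * 3600 + self.minute * 60 + self.second

    def __sub__(self, other):
        if isinstance(other, datetime.time):
            other = other.hour * 3600 + other.minute * 60 + other.second
        total = self._to_seconds() - int(other)   # other may be plain seconds
        return Duration(total // 3600, (total % 3600) // 60, total % 60)

song = Duration(0, 4, 50)   # a 4:50 track stored as h=0, m=4, s=50
print(song - 7)             # 00:04:43
```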
1
0
0
I struggled to come up with the title, but to explain it is probably easier to start with my actual issue. Concrete problem: I am wrapping a commandline program dealing with audio and need to manipulate song lengths, ex: a 4 minute and 50 second sample 4:50 needs 7 seconds chopped off, to become 4:43. The time module doesn't do subtraction and datetime does but gives timedelta, which is the total number of seconds. Is there a builtin python way or library to do calculations with abstract time, as in not a date? I can write the code properly formatting the timedelta but this does not feel "batteries included" and really gross using datetime when I'm not actually referencing dates. All I can find are things that make datetime easier, but not abstract time. If there is nothing, and this is the best way, that's fine, but I want to make sure it doesn't exist vs not being able to find it.
Is there a pythonic way to subtract abstract times?
0
0
0
50
46,190,589
2017-09-13T06:44:00.000
0
0
0
0
python,linux,django
46,199,931
2
false
1
0
I actually solved the problem with a timeout, like timeout 30 yes | python manage.py makemigrations, so if an infinite loop results from a wrong option selection on a non-nullable field input, it will automatically exit after 30 seconds. That gives me a way to proceed with my CI/CD without pushing my migrations.
2
0
0
I want to automate the python manage.py makemigrations as in if a user encounters Did you rename game.last to game.las (a CharField)? [y/N] then the input will always be y but if the user encounters You are trying to add a non-nullable field 'las' to game without a default then it will automatically and continuously enter 1. I tried yes | python manage.py makemigrations as researched however this will just throw an infinite loop of Please select a valid option if the default input is asked My desire is the automation between 1 and y value as mentioned on my first paragraph or just throw an error if I input a wrong option on the default input
Automated answer in django's makemigrations
0
0
0
481
46,190,589
2017-09-13T06:44:00.000
1
0
0
0
python,linux,django
54,245,154
2
false
1
0
First, run manage.py makemigrations locally. It makes a script to modify the database schema (that is part of development) and places it in the project/migrations directory. Build the container such that the generated migrations are included in your Docker container (they should be included automatically, as they are part of the Django project source tree), and run manage.py migrate --noinput on start-up of the container. It will automatically apply any migrations that haven't been applied. Your migrations build up over time, but you can squash them if you REALLY need to; I would recommend against it except in rare circumstances. This way, you will keep your development and production database schema in sync, and won't run into strange bugs because you generated new migrations on production that didn't exist in your development environment. I would recommend purposefully designing your migrations so that they don't require any input on manage.py migrate. That is not always possible; for instance, stale models will sometimes require input and just be left there unless manually deleted. Since all of the Django source is available to you, it's trivial to make a version of the migrate command that assumes yes and place it as a management command. Then run that management command instead of migrate, done. I would show an example of this but it is specific to the version of Django you are running.
2
0
0
I want to automate the python manage.py makemigrations as in if a user encounters Did you rename game.last to game.las (a CharField)? [y/N] then the input will always be y but if the user encounters You are trying to add a non-nullable field 'las' to game without a default then it will automatically and continuously enter 1. I tried yes | python manage.py makemigrations as researched however this will just throw an infinite loop of Please select a valid option if the default input is asked My desire is the automation between 1 and y value as mentioned on my first paragraph or just throw an error if I input a wrong option on the default input
Automated answer in django's makemigrations
0.099668
0
0
481
46,194,025
2017-09-13T09:39:00.000
0
0
0
0
python,machine-learning,cluster-analysis,k-means
46,212,289
2
false
0
0
There are "soft" variants of k-means that allow this. In particular, fuzzy-c-means (don't ask me why they use c instead of k...) But beware that the resulting soft assignment is far from a statistical probability. It's just a number that gives some relative weight based on the squared distance, without any strong statistical model.
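To make the idea concrete, here is a small hedged sketch that turns squared distances to two cluster centers into such relative weights (the centers and point are made-up numbers; the result is a relative weight, not a statistical probability):

```python
import numpy as np

point = np.array([1.0, 2.0])
centers = np.array([[0.0, 0.0],      # cluster 1
                    [2.0, 3.0]])     # cluster 2

# squared Euclidean distance to each center
d2 = ((centers - point) ** 2).sum(axis=1)

# fuzzy-c-means-style membership (fuzzifier m=2): inverse squared distance, normalised
weights = (1.0 / d2) / (1.0 / d2).sum()
print(weights)   # approx [0.29 0.71] -> "0.29 in cluster 1, 0.71 in cluster 2"
```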
1
0
1
How to find degree of fit in K-means++ clustering such that it shows how much percentage the inputs are aligned to each clusters. For instance, input A is in cluster 1 for 0.4 and in cluster 2 for 0.6.
How to find degree of fit in Kmeans++ clustering in python
0
0
0
121
46,194,582
2017-09-13T10:03:00.000
0
0
0
0
python,django,django-models,django-admin
46,196,573
2
false
1
0
There is no such thing as a clickable link in the database. You can store the link as text and, when rendering it in your HTML, wrap it in an <a> tag. For images there are two options: either you store the image path in the database, or you change the TextField in your model to a FileField.
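A hedged model sketch of those two options (the model and field names are examples only; ImageField needs Pillow installed and MEDIA_ROOT configured):

```python
from django.db import models

class Post(models.Model):
    content = models.TextField()

    # Option 1: store the link as plain text and render it in the template,
    # e.g. <a href="{{ post.link }}">{{ post.link }}</a>.
    link = models.URLField(blank=True)

    # Option 2: store an uploaded image instead of a path in a TextField.
    image = models.ImageField(upload_to='posts/', blank=True)
```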
1
0
0
I am using django to create a blog. In my model I am using a text field for my blog content. But I am unable to insert any image or a clickable link. How to add links(clickable) and insert images?
how to add links and images in text field of django admin
0
0
0
814
46,196,953
2017-09-13T11:58:00.000
3
0
0
0
python,selenium,robotframework
46,242,993
9
true
1
0
Try pinning your library to robotframework-selenium2library==1.8.0; then the issue should disappear. With the latest version it doesn't work for me.
5
1
0
I've installed PyCharm with the robotframework support plugin. The .robot files are identified successfully and I was able to create a simple script and run it in pyCharm. However, my problem is that no keywords nor even the robotframework libraries (selenium2library) are recognized by pycharm in order to be autocompleted when typing them. I also have the intellibot plugin installed. Is there something that I'm missing? Is there another configuration file somewhere? Thanks,
Robot Framework with Pycharm -- Autocomplete doesn't work
1.2
0
0
11,370
46,196,953
2017-09-13T11:58:00.000
1
0
0
0
python,selenium,robotframework
61,906,962
9
false
1
0
You can try installing the "Robot Framework support" plugin. It is working for me.
5
1
0
I've installed PyCharm with the robotframework support plugin. The .robot files are identified successfully and I was able to create a simple script and run it in pyCharm. However, my problem is that no keywords nor even the robotframework libraries (selenium2library) are recognized by pycharm in order to be autocompleted when typing them. I also have the intellibot plugin installed. Is there something that I'm missing? Is there another configuration file somewhere? Thanks,
Robot Framework with Pycharm -- Autocomplete doesn't work
0.022219
0
0
11,370
46,196,953
2017-09-13T11:58:00.000
1
0
0
0
python,selenium,robotframework
54,354,095
9
false
1
0
There is a bug in the intellibot plugin. To resolve you need to 1. Uninstall your current intellibot plugin 2. Search for "IntelliBot @SeleniumLibrary Patched" in the plugins repository. 3. Install this patched plugin. This worked for me.
5
1
0
I've installed PyCharm with the robotframework support plugin. The .robot files are identified successfully and I was able to create a simple script and run it in pyCharm. However, my problem is that no keywords nor even the robotframework libraries (selenium2library) are recognized by pycharm in order to be autocompleted when typing them. I also have the intellibot plugin installed. Is there something that I'm missing? Is there another configuration file somewhere? Thanks,
Robot Framework with Pycharm -- Autocomplete doesn't work
0.022219
0
0
11,370
46,196,953
2017-09-13T11:58:00.000
2
0
0
0
python,selenium,robotframework
63,884,543
9
false
1
0
My solution: Uninstall the 'Robot framework support' (PyCharm/File/Settings/Plugins) Uninstall 'Intellibot' (PyCharm/File/Settings/Plugins) (Uninstall all similar plugins!) Exit PyCharm Uninstall robotframework-seleniumlibrary (Open command prompt with administration mode: pip uninstall robotframework-seleniumlibrary) Install robotframework-seleniumlibrary 3.3.1 (Command prompt: pip install robotframework-seleniumlibrary==3.3.1) Open PyCharm Install 'IntelliBot @SeleniumLibrary Patched' (PyCharm/File/Settings/Plugins) (If it isn't working then try the following: PyCharm/File/Invalidate Caches/Restart... and click the 'Invalidate and Restart')
5
1
0
I've installed PyCharm with the robotframework support plugin. The .robot files are identified successfully and I was able to create a simple script and run it in pyCharm. However, my problem is that no keywords nor even the robotframework libraries (selenium2library) are recognized by pycharm in order to be autocompleted when typing them. I also have the intellibot plugin installed. Is there something that I'm missing? Is there another configuration file somewhere? Thanks,
Robot Framework with Pycharm -- Autocomplete doesn't work
0.044415
0
0
11,370
46,196,953
2017-09-13T11:58:00.000
0
0
0
0
python,selenium,robotframework
62,624,002
9
false
1
0
Use the "IntelliBot @SeleniumLibrary Patched" plugin and robotframework-seleniumlibrary version 3.3.1. It works for me, after a lot of research on the internet.
5
1
0
I've installed PyCharm with the robotframework support plugin. The .robot files are identified successfully and I was able to create a simple script and run it in pyCharm. However, my problem is that no keywords nor even the robotframework libraries (selenium2library) are recognized by pycharm in order to be autocompleted when typing them. I also have the intellibot plugin installed. Is there something that I'm missing? Is there another configuration file somewhere? Thanks,
Robot Framework with Pycharm -- Autocomplete doesn't work
0
0
0
11,370
46,198,398
2017-09-13T13:05:00.000
1
0
0
0
python,ubuntu-16.04,robotframework
46,198,490
3
false
0
0
Try using the full syntax for calling the file: python ride.py.
2
1
0
I installed Robot Framework successfully using pip. When I use the command robot --version, I get this: Robot Framework 3.0.2 (Python 3.5.2 on linux) But when I try to use the command ride.py, I get this error: ride.py: command not found
ride.py: command not found
0.066568
0
0
2,606
46,198,398
2017-09-13T13:05:00.000
0
0
0
0
python,ubuntu-16.04,robotframework
54,742,296
3
false
0
0
First check the Robot Framework installation version through "pip freeze" at the cmd prompt. If you find any version like "robotframework==3.1.1", then uninstall it through this command: pip uninstall robotframework. Then check again through the pip freeze command for confirmation. Then install through this command: pip install robotframework-ride. After a successful install you can check that ride.py is available at C:\Python27\Scripts. To start RIDE, type ride.py in the command prompt.
2
1
0
I installed Robot Framework successfully using pip. When I use the command robot --version, I get this: Robot Framework 3.0.2 (Python 3.5.2 on linux) But when I try to use the command ride.py, I get this error: ride.py: command not found
ride.py: command not found
0
0
0
2,606
46,201,294
2017-09-13T15:22:00.000
0
0
0
0
python,calendar,applescript,icalendar
46,231,290
1
false
0
0
To 'copy' the events, normally one would 'import' the ics url, instead of (or as well as) subscribing, but it's a snapshot - no updates.
1
0
0
I have all my trips organised on TripIt and they offer a nice ics stream. My problem is that my company calendar is shared with others (within O365) and they obviously don't see all the stuff from the TripIt ics stream, and this sometimes causes confusion. Can anyone think of any way (Mac Automator, AppleScript, Python, etc.) to read the TripIt ics stream and copy the events into the O365 calendar? Thanks
Copy Events from ICS Stream into a different calendar (O365)
0
0
0
64
46,201,898
2017-09-13T15:49:00.000
0
0
1
1
anaconda,atom-editor,path-variables,sublime-anaconda,python-install
64,047,273
1
false
0
0
If you want to run Python scripts inside Atom, install the script package from Atom's packages. Then open the Anaconda prompt, cd into your preferred directory and run atom . (with the trailing dot). Now when you press Ctrl+Shift+B to run your Python scripts, the script package will run them with Anaconda's Python.
1
1
0
I have installed Anaconda on my Windows 10 system. Now I want to use python in newly installed Atom IDE. Atom cannot find python directory as it's not added to the environment variable path. I installed python 3.6 separately and added it to path variables to overcome this issue. However, I still run into issues like missing .dll files. I found that this will continue as long as there Anaconda is installed on the system. Is there a way I can add Anaconda python path to Atom or should I just add Anaconda library to path variables (which is not recommended by Anaconda)?
Using Anaconda python directory in Atom IDE
0
0
0
1,003
46,202,639
2017-09-13T16:31:00.000
0
0
1
1
python,multiprocessing,buffer
46,202,744
2
false
0
0
If you need to use different processes (as opposed to multiple functions in a single process), perhaps a messaging queue would work well for you: your first process would do whatever it does and put the results in a message queue, which your second process is listening to. There are obviously a lot of options available, but based on your description this sounds like a reasonable approach.
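A minimal hedged sketch with the standard library's multiprocessing.Queue; the "algorithm" functions are placeholders for your real processing, and if the two programs must be started completely independently, a broker (e.g. Redis) or multiprocessing.connection would play the same buffering role:

```python
import multiprocessing as mp

def producer(q):
    for i in range(5):
        result = {'packet': i, 'ts': i * 0.1}   # placeholder: algorithm 1's output
        q.put(result)
    q.put(None)                                  # sentinel: nothing more to send

def consumer(q):
    while True:
        item = q.get()                           # blocks until something appears
        if item is None:
            break
        print('second algorithm got:', item)     # placeholder: algorithm 2

if __name__ == '__main__':
    q = mp.Queue()
    p1 = mp.Process(target=producer, args=(q,))
    p2 = mp.Process(target=consumer, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()
```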
1
0
0
I have the following scenario. Data (packets in this case) are received processed by a Python function in real-time as each datum streams by. So each datum is received and translated into a python object. There is a light-weight algorithm done on that object which returns an output (small dictionary). Then the object is discarded and the next one is handled. I have that program running. Now, for each object the algorithm will produce a small dictionary of output data. This dictionary needs to be processed (also in real time) by a separate, second algorithm. I envision my code running two processes. I need to have the second process "listen" for the outputs of the first. So how do I write this second algorithm in python so it can listen for and accept the data that is produced by the first? for a concrete example, suppose the first algorithm applies the timestamp, then passes to a buffer, and the second algorithm listens-- it grabs from the buffer and processes it. If there is nothing in the buffer, then as soon as something appears it processes it.
How to send, buffer, and receive Python objects between two python programs?
0
0
0
395
46,203,909
2017-09-13T17:50:00.000
0
0
1
0
python,multithreading,performance
46,381,057
2
false
0
0
I am going by these assumptions: You already determined that it is your logging that is bottlenecking your program. You have a good reason why you are logging what you are logging. The perceived slowness is most likely due to the success or failure acknowledgement from the logging action. To avoid this "command queuing", make the calls to a separate process asynchronously and skip the callback. This may end up consuming more resources, but it will alleviate the backlog in your main program. Node.js handles this naturally, or you can roll your own Python listener. Since this will be a separate process, you can redirect the logging of your other programs to this one. You can even have a separate machine handle this workload.
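For the "roll your own Python listener" option, the standard library already provides most of it. A hedged sketch with logging.handlers.QueueHandler/QueueListener (Python 3.2+); note this uses a background thread rather than a separate process, which is usually enough for I/O-bound logging, and a multiprocessing.Queue can be swapped in if a separate process is really needed:

```python
import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)                      # unbounded in-memory queue

# The handler just enqueues records (cheap); the listener does the slow
# file I/O off the calling thread, so log calls return almost immediately.
queue_handler = logging.handlers.QueueHandler(log_queue)
file_handler = logging.FileHandler('app.log')    # placeholder file name
listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

logger = logging.getLogger('myapp')
logger.setLevel(logging.INFO)
logger.addHandler(queue_handler)

logger.info('this call returns almost immediately')
listener.stop()                                  # flush and stop on shutdown
```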
1
7
0
In order to get a speed boost for my python program, should I spawn a separate thread or a separate process for logging? My program uses a lot of logging and I am not sure if threading is suitable because of GIL. A lot of resources seem to suggest that it should be fine for I/O. I think that logging is I/O but I am not sure what does "should be fine" mean for most of the resources out there. I just need speed.
In order to get a speed boost for my python program, should I spawn a separate thread or a separate process for logging?
0
0
0
198
46,205,279
2017-09-13T19:17:00.000
1
0
0
0
python,multithreading,web-scraping,multiprocessing
46,205,430
1
false
1
0
"evaluating js on the webpage is a requirement" <- I think this is your problem right here. Simply downloading 50 web pages is fairly trivially parallelized and should only take as long as the slowest server takes to respond. Now, spawning 50 javascript engines in parallel (which is essentially what I guess it is you are doing) to run the scripts on every page is a different matter. Imagine firing up 50 chrome browsers at the same time. Anyway: profile and measure the parts of your application to find where the bottleneck lies. Only then you can see if you're dealing with an I/O bottleneck (sounds unlikely), a CPU bottleneck (more likely) or a global lock somewhere that serializes stuff (also likely but impossible to say without any code posted)
1
0
0
I am trying to do some python based web scraping where execution time is pretty critical. I've tried phantomjs, selenium, and pyqt4 now, and all three libraries have given me similar response times. I'd post example code, but my problem affects all three, so I believe the problem either lies in a shared dependency or outside of my code. At around 50 concurrent requests, we see a huge degradation in response time. It takes about 40 seconds to get back all 50 pages, and that time gets exponentially slower with greater page demands. Ideally I'm looking for ~200+ requests in about 10 seconds. I used multiprocessing to spawn each instance of phantomjs/pyqt4/selenium, so each url request gets its own instance so that I'm not blocked by single threading. I don't believe it's a hardware bottleneck; it's running on 32 dedicated cpu cores, totaling 64 threads, and cpu usage doesn't typically spike to over 10-12%. Bandwidth as well sits reasonably comfortably at around 40-50% of my total throughput. I've read about the GIL, which I believe I've addressed with using multiprocessing. Is webscraping just an inherently slow thing? Should I stop expecting to pull 200ish webpages in ~10 seconds? My overall question is, what is the best approach to high performance web scraping, where evaluating js on the webpage is a requirement?
Python & web scraping performance
0.197375
0
1
598
46,206,424
2017-09-13T20:37:00.000
0
0
0
1
python-2.7,google-api-python-client
46,245,436
1
false
0
0
I found my answer. In the optional "Start in" field of the Windows Scheduled Task Action dialog, I added the path to the Python Scripts folder and the script now runs perfectly.
1
0
0
I had a similar issue when a Python script called from a scheduled task on a windows server tried to access a network shared drive. It would run from the IDLE on the server but not from the task. I switched to using a local drive it worked fine. This script works if run from console or IDLE on the server and partially executes when run as a scheduled task. It pulls data from a MSSQL database and creates a local csv. That works called from the task but the part to upload the file to a Google Drive does not. I have, as I did, before try other methods of calling outside of the scheduled task ex Powershell, bat file... but same results. I am using google-api-python-client (1.6.2) and can't find anything. Thanks in advance!
Python27 to upload a file to google drive does not work when run as a windows scheduled task. Why?
0
0
1
33
46,209,466
2017-09-14T02:16:00.000
1
0
0
0
python,matplotlib
46,209,673
1
false
0
0
Assign or use numeric values to the x axis then label the x axis ticks with the non-numeric information. Then you can play around with x-axis scaling and limits to move the plot around.
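A small hedged sketch of that trick (the category names, positions and offsets are examples):

```python
import matplotlib.pyplot as plt

groups = ['A', 'B', 'C']       # the non-numeric x values
x = [0, 0.5, 1.0]              # compressed numeric positions: smaller gaps = tighter groups
y = [3, 7, 5]

fig, ax = plt.subplots()
ax.scatter(x, y)
ax.set_xticks(x)
ax.set_xticklabels(groups)     # label the numeric positions with the categories
ax.set_xlim(-0.5, 1.5)         # extra margin keeps neighbouring subplots apart
plt.show()
```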
1
1
1
I am currently working on a 2x2 subplot figure. In each subplot, I have 3 groups on the X axis. I want to decrease the spacing between each of these groups on the X-axis. Currently, the last value on 221 subplot is very close to the first value of 222 subplot. I have changed the spacing between subplots but I would like each of the subplots to be more compact by decreasing the spacing between the X-axis values. The variable on the X-axis is non-numeric.
Scatter plot: Decreasing spacing between scatter points/x-axis ticks
0.197375
0
0
452
46,210,040
2017-09-14T03:27:00.000
2
0
0
0
python,django,elasticsearch,django-models,django-signals
46,210,212
1
false
1
0
This decision really depends on what you are trying to address with Elasticsearch. Recently I worked on a project that used Elasticsearch. The reason was to speed up searching and to provide better search results; however, the details of the selected product were queried from the database (MSSQL). The decision to map all or only a few fields depends on what you want out of Elasticsearch. If it's for better search results (relevance + speed), then I suggest you only map those fields that help the user search.
1
1
0
Hello all, I am working on a Django project with a PostgreSQL server as the backend database, and I have chosen Elasticsearch as the search engine for my project. I have used elasticsearch-dsl-py to create a mapping between Django models and Elasticsearch doc types, and I use Django signals to capture update and delete events. By the way, I haven't mapped all the fields from the Django model to Elasticsearch. When a user searches, he/she gets a list of items for the homepage from the Elasticsearch server. When the user clicks an item in the list, where should I query the detail data of that item: in the Elasticsearch server or in the Postgres server? If I put all the details of every object in the Elasticsearch server, it will be a pain for me, as there are nested relations in the Django models. If I don't put all the details in the Elasticsearch server, I need to query the database to get the detail of an item with the required field id, which is going to be slow compared to an Elasticsearch query. Which approach should I go with? 1. Index all the properties along with the nested relations in the Elasticsearch server and do all the querying against Elasticsearch. OR 2. Index only the necessary fields in the Elasticsearch server, and for the detail view, do a query to the database with the required field id. Does anyone have this kind of experience?
indexing db content to elastic search
0.379949
0
0
296
46,210,226
2017-09-14T03:50:00.000
3
0
0
0
python,django,visual-studio
47,166,877
1
true
1
0
At Solution Properties -> Debug tab, in the Run section, change the Launch URL to the host and port that you need, e.g.: http://localhost:5555
1
0
0
Has anyone found out a way to have visual studio run a Django app on a static port rather than it changing ports every run? I have a separate program sending requests to the server but it becomes a hassle having to change the port every time I re run the server. Thanks in advance for any direction on the subject.
Static Ports In Django w/ Visual Studio
1.2
0
0
912
46,210,282
2017-09-14T03:57:00.000
0
0
1
0
python,text,nltk
51,272,932
1
false
0
0
Yes, but you need to specify the aspects yourself, by choosing the most essential specifications for the product.
1
0
1
I have a data set of consumer reviews. From these reviews I will like to extract most frequently occurring aspects. The process I am applying includes - Step 1: Tokenizing reviews into sentences - Step 2: Tokenizing sentences into words after basic NLP pre-processing. Pre-processing removes punctuation and English stop words. - Step 3: Pos_tagging and extracting all words with pos tag of 'NN','NNP','NNS','NNPS' - Step 4: Combining all the words across all reviews to find the most frequently occuring words - Step 5: Using top 40 terms as my aspects Is this a good approach or do you recommend doing something different?
Python, extracting key aspects from consumer reviews
0
0
0
696
46,212,117
2017-09-14T06:36:00.000
0
0
1
0
python,concurrency,io
46,212,271
2
false
0
0
You can use the steps below: the Python script which runs in cron will check whether the file is opened by any other process. In Linux, this can be done using lsof. If the file is open when cron runs, then it will not process the file's data. The same logic can be added to the script which appends data to the file, in case the file is being used by some other script.
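A hedged sketch of that lsof check from Python (Linux only; the path is a placeholder, and Python 3.5+ is assumed for subprocess.run). lsof exits with a non-zero status when no process has the file open:

```python
import subprocess

def file_in_use(path):
    # returncode 0 means at least one process has the file open;
    # non-zero means none were found (or lsof reported an error).
    result = subprocess.run(['lsof', path],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

if not file_in_use('/var/data/inbox.txt'):
    print('safe to process the file')
```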
1
0
0
I am trying to use a file (csv, json, txt, haven't decided format) that I can drop a few lines of data in. A python script will be on cron to run every 5 minutes and check the file if there is anything new and if there is, process it and delete each line as it is processed. I am trying to prevent a situation where I open the file, make some changes and save it while the process came by grabbed the data and emptied the file but my save writes it back in. I thought the only way to make this safe is to have it process a folder and just look for new files, all changes would be dropped in a new file. So there is never the risk of this happening. Is there a better way, or is this the best approach?
Edit file being processed by python script
0
0
0
72
46,212,711
2017-09-14T07:11:00.000
0
0
1
0
python
46,212,923
2
false
0
0
Short answer: user-space threads. Long answer: from my knowledge of systems, a process (or thread) is not user level or kernel level. Some critical tasks are not directly accessible by the user, e.g. memory and IO. To use these resources the kernel exposes APIs. These APIs are better referred to as system calls. So your thread might be using those system calls in your program. But you cannot just spawn kernel threads.
2
3
0
I was trying to use the threading module in Python. Now I have this query as to the type of threads that this module supports. That is whether these threads are user space threads or kernel space threads
Python threading module creates user space threads or kernel spece threads
0
0
0
724
46,212,711
2017-09-14T07:11:00.000
4
0
1
0
python
46,213,061
2
false
0
0
The correct term is not kernel-space threads (because Python doesn't have access to kernel memory space), but kernel-level threads. The threading module uses system-provided mechanisms (such as pthreads on POSIX systems), which usually rely on kernel interfaces (creating a task via clone(CLONE_THREAD) on Linux). Python supports user-level threads (those that are implemented purely in the interpreter and occupy only one kernel-level thread) via generators, greenlets and similar libraries.
2
3
0
I was trying to use the threading module in Python. Now I have this query as to the type of threads that this module supports. That is whether these threads are user space threads or kernel space threads
Python threading module creates user space threads or kernel spece threads
0.379949
0
0
724
46,214,501
2017-09-14T08:47:00.000
1
0
0
0
python,jira,robotframework,testrail,robotframework-ide
46,220,408
1
false
1
0
If you want to use a python library in a robot test, you will need to create your own library that provides keywords that use the library. You can't just import any random python library and expect it to work like a robot keyword library.
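A hedged sketch of what such a wrapper library can look like: a plain Python file whose methods become Robot keywords. This assumes the official testrail.py client with its APIClient/send_post interface; the endpoint string and keyword name are illustrative, so check testrail.py and the TestRail API docs for the exact calls you need:

```python
# TestRailLibrary.py -- import in a .robot file with:  Library    TestRailLibrary.py
from testrail import APIClient   # the testrail.py module you downloaded

class TestRailLibrary(object):

    def __init__(self, url, user, password):
        self.client = APIClient(url)
        self.client.user = user
        self.client.password = password

    def add_test_result(self, run_id, case_id, status_id):
        """Becomes the Robot keyword 'Add Test Result'."""
        return self.client.send_post(
            'add_result_for_case/{}/{}'.format(run_id, case_id),
            {'status_id': int(status_id)})
```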
1
0
0
We are using RIDE IDE and are trying to integrate TestRail and JIRA. We have downloaded the TestRail API python file (testrail.py), but we are not able to import it in our project in RIDE. Can we know how to implement the same. Is there any steps or tutorial video for integrating TestRail and JIRA in RIDE ? We are using RIDE 1.5.2.1 running on Python 2.7.12 Thanks
TestRail and JIRA integration with Robot Framework (RIDE IDE)
0.197375
0
0
1,211
46,216,147
2017-09-14T10:02:00.000
0
0
0
0
python,django
46,231,942
1
false
1
0
The fields you define will be prefixed with HTTP_.
1
2
0
When I use .\curl.exe -v -H 'HTTP_TEST:A17041708' http://127.0.0.1:8000/api/test to open the URL and then print request.META, I can't find my custom header in it.
Why I can't get the custom header from Django request.META
0
0
0
590