| Column | Dtype | Min | Max |
|---|---|---|---|
| Q_Id | int64 | 337 | 49.3M |
| CreationDate | stringlengths | 23 | 23 |
| Users Score | int64 | -42 | 1.15k |
| Other | int64 | 0 | 1 |
| Python Basics and Environment | int64 | 0 | 1 |
| System Administration and DevOps | int64 | 0 | 1 |
| Tags | stringlengths | 6 | 105 |
| A_Id | int64 | 518 | 72.5M |
| AnswerCount | int64 | 1 | 64 |
| is_accepted | bool | 2 classes | |
| Web Development | int64 | 0 | 1 |
| GUI and Desktop Applications | int64 | 0 | 1 |
| Answer | stringlengths | 6 | 11.6k |
| Available Count | int64 | 1 | 31 |
| Q_Score | int64 | 0 | 6.79k |
| Data Science and Machine Learning | int64 | 0 | 1 |
| Question | stringlengths | 15 | 29k |
| Title | stringlengths | 11 | 150 |
| Score | float64 | -1 | 1.2 |
| Database and SQL | int64 | 0 | 1 |
| Networking and APIs | int64 | 0 | 1 |
| ViewCount | int64 | 8 | 6.81M |
34,046,345
2015-12-02T15:19:00.000
1
0
1
1
python,eclipse,pydev,configuration-files,anaconda
34,048,699
2
false
0
0
Well, usually the default way of operating would be not to commit files with a named interpreter, but rather to leave it empty and let it use the one that's configured for the user. Having said that, there are scenarios where it may be useful to commit a named interpreter, but that's usually when you're within a company that has standardized on, say, a Python 2 and a Python 3 interpreter and a given project depends on only one of those (then it may make sense to standardize the name). Usually, though, the default is leaving it empty and letting each user configure their own Python interpreter. On a side note, if you wanted to have the same interpreter for everyone, it's possible to have a plugin which would do that inside PyDev, although that'd require creating a plugin inside Eclipse (it should be relatively straightforward, though).
1
0
0
I'm using Eclipse Luna Service Release 2 (4.4.2), with PyDev 4.0.0, on Windows 7. I've recently installed Anaconda 2.4.0 to use as my Python interpreter. I've configured a new "Anaconda2" Python interpreter in Eclipse, and modified my project settings to use this new interpreter. I'd like to commit my modified project file to source control, so that colleagues can take advantage of the update. I can see that .pydevproject has been modified, but when I look at the changes, it simply specified that "Anaconda2" is the interpreter to be used with the project. For this to be useful to others, they'll presumably also need my definition of what the "Anaconda2" interpreter actually is (i.e. the path to the Python executable). However, I can't find where this definition is stored. I've looked in my project directory, in the Eclipse installation directory (C:\eclipse) and in the Windows Registry, with no success. Where is this information stored, so that I can share the updated file with colleagues, rather than leaving them needing to manually set up the interpreter themselves? (Assume that we have a standard development environment, so that everyone will have Anaconda installed to the same location on their hard drive.)
Where is the definition for a new Python interpreter in Eclipse Pydev stored?
0.099668
0
0
328
34,048,431
2015-12-02T16:53:00.000
-1
0
1
1
python,windows,opencv,dll,64-bit
49,609,215
4
false
0
0
In this case, I just copied the file 'python3.dll' from my Python 3 installation folder to my virtualenv lib folder, and then it worked.
1
1
0
ImportError: DLL load failed: %1 is not a valid Win32 application. Does anyone know how to fix this? This problem occurs when I try to import cv2. My laptop is 64-bit and I installed 64-bit Python; I also put the cv2.pyd file in the site-packages folder of Python. My PYTHONPATH value = C:\Python35;C:\Python35\DLLs;C:\Python35\Lib;C:\Python35\libs;C:\Users\CV\OpenCV\opencv\build\python\2.7\x64;%OPENCV_DIR%\bin; My OPENCV_DIR value = C:\Users\CV\OpenCV\opencv\build\x64\vc12 I also added references to my PYTHONPATH and my OPENCV_DIR to the PATH by putting %PYTHONPATH%;%PYTHONPATH%\Scripts\;%OPENCV_DIR%; there. I also installed opencv_python-3.0.0+contrib-cp35-none-win_amd64 through pip install and the command line. None of this solved my problem.
Import CV2: DLL load failed (Python in Windows 64bit)
-0.049958
0
0
10,949
34,048,666
2015-12-02T17:03:00.000
2
0
0
0
python,django,celery,task-queue
34,048,733
1
true
1
0
- user submits data
- start a celery job with the data
- the celery job posts text updates to a database
- the django web app queries the database periodically and displays the text update to the user (see the sketch below)
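A minimal sketch of that flow, assuming a hypothetical TaskProgress model and a placeholder process() step (neither name comes from the question):

```python
# tasks.py -- sketch only; TaskProgress is a hypothetical Django model
# with task_id, text and percent fields.
from celery import shared_task

def process(item):
    """Placeholder for the real work done on one piece of data."""
    pass

@shared_task(bind=True)
def long_job(self, items):
    from myapp.models import TaskProgress  # hypothetical import path
    progress = TaskProgress.objects.create(
        task_id=self.request.id, text="starting", percent=0)
    for i, item in enumerate(items, 1):
        process(item)
        progress.text = "processed %d of %d items" % (i, len(items))
        progress.percent = 100 * i // len(items)
        progress.save()  # the Django view polls this row for updates
```

The view side then just reads the TaskProgress row for the task id and renders both the text and the percentage.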
1
0
0
With a Django web app what would be the easiest way of having a long running task run in the background but be able to provide the user with progress updates in text and percentage done/ETA? I've looked at Celery and I couldn't see a way to do regular text updates, only a progress update with percentage.
Long running Django task with text updates
1.2
0
0
173
34,052,714
2015-12-02T20:50:00.000
0
0
1
0
python,python-2.7,pip,importerror,moviepy
56,680,296
3
false
0
0
Try the following version of moviepy: pip install moviepy==0.2.3.5
1
3
0
I am very new to the world of coding, so I will try to provide as much information as I can regarding my question. Essentially, I wanted to install a module (moviepy) for Python 3. The site where I found the module suggested I use pip to unpack and install it, so I did. In my terminal, I entered pip install moviepy and pip proceeded to unpack and install my module, yay! I then went over to my IDLE to see if the module would import, import moviepy, but received this error: ImportError: No module named 'moviepy' Huh? I thought I had just installed moviepy? Upon further investigation, the module appears to have been written to my Python 2.7 site-packages folder and not to my Python 3 site-packages folder. So my question is: how can I get my module to install for Python 3? The module's website says that it is compatible with Python 3. I'm assuming this is a file path issue of some kind, but I don't know where to begin. I'm currently using OS X Yosemite version 10.10.2, Python 2.7.6, Python 3.5.0. Any help or comments are greatly appreciated here! Help the n00b!
Installing & running modules in Python 3 (Beginner)
0
0
0
4,073
34,056,181
2015-12-03T01:14:00.000
0
0
1
0
python,timer,counter
34,056,570
5
false
0
0
Maybe you should look into the Linux tool cron to schedule the execution of your script.
1
3
0
This may be simpler than I think but I'd like to create timer that, upon reaching a limit (say 15 mins), some code is executed. Meanwhile every second, I'd like to test for a condition. If the condition is met, then the timer is reset and the process begins again, otherwise the countdown continues. If the condition is met after the countdown has reached the end, some code is executed and the timer starts counting down again. Does this involve threading or can it be achieved with a simple time.sleep() function?
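For reference, the countdown-and-reset loop described here can be done in pure Python with time.sleep(), no threading required. A minimal sketch, where check_condition and on_expiry are hypothetical stand-ins for the real test and the real action:

```python
import time

def check_condition():
    """Hypothetical: the test performed every second."""
    return False

def on_expiry():
    """Hypothetical: the code to execute when the countdown runs out."""
    print("countdown expired")

LIMIT = 15 * 60  # 15 minutes, in seconds

remaining = LIMIT
while True:
    time.sleep(1)
    if check_condition():
        remaining = LIMIT   # condition met: reset the countdown
        continue
    remaining -= 1
    if remaining <= 0:
        on_expiry()
        remaining = LIMIT   # begin the next countdown
```

Note that sleeping one second per tick drifts slightly; for better precision, track a deadline with time.monotonic() instead.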
Python: Run code every n seconds and restart timer on condition
0
0
0
10,682
34,057,188
2015-12-03T03:09:00.000
0
0
1
0
python,ide,pycharm,rstudio
52,779,695
2
false
0
0
Maybe a bit too late, but for those who come across this: PyCharm now supports a scientific mode, which for me is even better compared to RStudio (R).
1
4
0
I have been working with R in RStudio since 2013, but now I have decided to move to Python and have been using the PyCharm IDE. This IDE is very stable and friendly, but I can't see the objects and the results of the code processing. My question is: how can I see the global environment (like in RStudio)? It's important to see what my code has been doing. Any idea how to solve this problem?
How can I see the global environment in pycharm similar to Rstudio?
0
0
0
3,207
34,057,401
2015-12-03T03:31:00.000
0
0
0
0
python,qt,user-interface,stream,pyqt
34,061,396
2
false
0
0
An alternative might be a plotting buffer being somewhat bigger than what you need to display. If it runs short of values you fill it up. If you do this in a thread you can ensure data availability without the need of timers or read ready signals. You would just have to keep track of which data has already been read and the direction in time.
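A minimal sketch of that read-ahead buffer, with a background thread keeping a deque topped up; the file name and thresholds are hypothetical:

```python
import collections
import threading
import time

buffer = collections.deque()  # consumed by the plotting code

def filler(path, chunk=4096, low_water=50000):
    """Keep the buffer filled so the GUI never waits on file I/O."""
    with open(path, "rb") as f:
        while True:
            if len(buffer) < low_water:   # running short of values
                data = f.read(chunk)
                if not data:              # end of file
                    break
                buffer.extend(data)
            else:
                time.sleep(0.01)          # buffer full enough; idle briefly

threading.Thread(target=filler, args=("data.bin",), daemon=True).start()
```

The GUI side pops from the left of the deque at the playback rate; deque's append and popleft are atomic, which is enough for this single-producer/single-consumer use.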
2
0
0
I'll start with saying that I have working code...but that doesn't make it ideal code which is why I want to run my approach by the community. I'm trying to do this the "QT way" which is forcing me down roads I don't normally go. I have some large 400+mb binary files containing raw time variant data. I need to plot this data to the user so that the data playback matches the time duration of the recording. I have a working approach using a QTimer.timeout to trigger a file read. I read x amount of bytes, and when the read is complete I emit a signal to trigger the plotting operation. By adjusting my timeout duration I can control the rate of plotting without blocking my interface(not blocking the GUI is key). This seems to work, but it feels overly complicated for something as simple as a file.read. When I receive data in a stream over TCP I can use the socket.readReady signal to tell me when to process data. Since the data is arriving serially in time, it naturally looks right over the TCP stream. I have essentially duplicated the readReady of a socket by using fread and emitting a signal. Does this sound like a reasonable approach?
Using QTimer for Streaming Large Data File in Python
0
0
0
94
34,057,401
2015-12-03T03:31:00.000
0
0
0
0
python,qt,user-interface,stream,pyqt
34,061,402
2
true
0
0
An alternative could be using the QFile::map() function to map the right slice of data to display into memory for direct access without any file reading. I guess that should be fast enough for fast displaying depending on the slice size. This approach may be combined with the buffer approach above to avoid excessive mapping. This would mean to map a slice larger that currently needed.
2
0
0
I'll start with saying that I have working code...but that doesn't make it ideal code which is why I want to run my approach by the community. I'm trying to do this the "QT way" which is forcing me down roads I don't normally go. I have some large 400+mb binary files containing raw time variant data. I need to plot this data to the user so that the data playback matches the time duration of the recording. I have a working approach using a QTimer.timeout to trigger a file read. I read x amount of bytes, and when the read is complete I emit a signal to trigger the plotting operation. By adjusting my timeout duration I can control the rate of plotting without blocking my interface(not blocking the GUI is key). This seems to work, but it feels overly complicated for something as simple as a file.read. When I receive data in a stream over TCP I can use the socket.readReady signal to tell me when to process data. Since the data is arriving serially in time, it naturally looks right over the TCP stream. I have essentially duplicated the readReady of a socket by using fread and emitting a signal. Does this sound like a reasonable approach?
Using QTimer for Streaming Large Data File in Python
1.2
0
0
94
34,061,651
2015-12-03T09:01:00.000
0
0
0
0
python,django,gis,geodjango
34,397,621
1
false
1
0
Yes, it is relatively easy to manage geometry data in the Django Admin, and it's all included. You can do any of the CRUD tasks relatively simply using the Geo Model manager in much the same way as any Django model or you can use the map interface you get in the admin. From time to time I find I want to investigate my data in more detail, and then I simply connect to my PostGIS database using QGIS and have a panoply of GIS tools at my disposal. I would strongly recommend using PostGIS from the start. If there is any 'mission creep' towards more geo-functionality in the future then it will save you oodles of time. It sounds like the sort of project where spatial queries might be very useful at some point.
1
0
0
I'm going to write a web system to add and manage the positions of drivers and shops. GEO searching is not required, so it would be easier to use SQLite instead of PostgreSQL. The core question here is: is there any easy way to manage GIS points using the Django admin? I know Django has GeoModelAdmin to manage maps based on MapBox, but I could not find out how to use it just to save, delete, and update these points.
How to use Django to manage GIS points easily?
0
0
0
119
34,067,809
2015-12-03T13:52:00.000
1
0
1
0
python
34,067,855
3
false
0
0
Yes, it can. From the collections module, use OrderedDict to retain the sequence in which keys are added.
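A minimal demonstration (as an aside, plain dicts also preserve insertion order from Python 3.7 onward):

```python
from collections import OrderedDict

a = OrderedDict()
a[4] = "Hello"   # added first
a[1] = "World"   # added second

for key, value in a.items():
    print(key, value)
# 4 Hello
# 1 World   <- keys come back in insertion order, not sorted order
```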
1
0
0
Is it possible to append key-value pairs to a dictionary? For example, I want a dictionary like this: a = {4:"Hello", 1:"World"}. Can that be done using dictionaries? If it can't, is there an alternate data structure that can do that? I want it ordered in the order I add them; in this case, I added the key-value pair 4:"Hello" first.
Append key value pair to a dictionary
0.066568
0
0
2,968
34,069,428
2015-12-03T15:11:00.000
0
0
0
0
android,python,kivy
34,069,875
1
false
1
1
You can probably render LaTeX in Kivy fairly easily using a png exporter (such as presumably exists for web export tools, and modes like emacs' preview mode). If you need to run python as part of a java app, probably a practical way to do it is to use kivy's python-for-android tools with your own java frontend, invoking the python interpreter via JNI. This would require some thinking and experimentation but should be possible. There are also other projects for building python for android, which might be able to do the same things.
1
0
0
I'm writing an Android app. The problem is that it should execute some calculations, and the library for this is written in Python. What is the best way to invoke Python from Android/Java? I heard about Kivy and even managed to run an application, but the Python code returns LaTeX formulae that can't be rendered within a Kivy app.
Run Python Code from Android
0
0
0
231
34,070,704
2015-12-03T16:05:00.000
0
0
1
0
python
34,078,315
1
false
0
0
I found out why it did that: the option "run console afterwards" was checked. Just a configuration issue on my side... Thanks a lot anyway :)
1
0
0
I'm (very) new to Python and after installing PyCharm, I noticed that even with the simplest instructions, the execution of the program in the console doesn't let me enter anything when prompted by an input(); it just skips everything and ends the program. When I use debug and set breakpoints at the input, however, it works normally. Has anyone encountered this issue before? Thanks in advance, cheers
input skipped during program execution Python/Pycharm
0
0
0
116
34,075,288
2015-12-03T20:12:00.000
0
0
1
0
python
34,075,641
1
false
0
0
It's tough to tell without a bit more information about the problem that you are trying to solve, the scope of your code, and your code's architecture. Generally speaking:
- If you're writing a script that's reasonably small in scope, there really is nothing wrong with declaring variables in the global namespace of the script.
- If you're writing a larger system, something that goes beyond one module or file but is not running in a multi-process/multi-thread environment, then you can create a module (as a separate file) that handles storage of your re-used data. Anytime you need to read or write that data you can import that module. The module could just expose simple variables, or it can wrap the data in classes and expose methods for creation/reading/updating/deletion (see the sketch below).
- If you are running in a multi-process/multi-thread environment, then any sort of global in-memory variable storage is going to be problematic. You'll need to use an external store of some sort: Redis or the like for temporary storage, or a database of some sort for permanent storage.
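A minimal sketch of the storage-module idea from the second bullet; the module name and record fields are hypothetical:

```python
# records_store.py -- module-level storage; every importer sees the
# same dict because Python caches modules after the first import.
_records = {}

def add(record_id, **fields):
    _records[record_id] = fields

def get(record_id):
    return _records[record_id]

def count():
    return len(_records)

def all_records():
    return _records.items()
```

Anywhere else in the code base, import records_store and call records_store.add(...) or iterate records_store.all_records(); no global declarations are needed.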
1
0
0
Old habits being what they are, I would declare global variables, and probably use lists to store records. I appreciate this is not the best way of doing this these days, and that Python actively discourages you from doing this by having to constantly declare 'global' throughout. So what should I be doing? I'm thinking I should maybe use instances, but I know of no way to create a unique instance name based on an identifier (all the records will have a unique ID) and then find out how many instances I have. I could use dictionaries maybe? The most important thing is that the values are accessible anywhere in my code, and that I can list the number of records and easily refer to / change the values.
Best way of storing records and then iterating through them?
0
0
0
45
34,075,427
2015-12-03T20:20:00.000
3
0
1
1
python,pycharm
39,897,548
6
false
0
0
Ctrl-Shift-F4 closes just one tab. Right-click on the run-bar next to Run: > Close All
4
13
0
PyDev has a feature to terminate all running processes. Does PyCharm have something similar? I only see a "stop" button in the menu.
Pycharm: terminate all running processes
0.099668
0
0
30,996
34,075,427
2015-12-03T20:20:00.000
1
0
1
1
python,pycharm
44,539,508
6
false
0
0
If you want to force all running processes to stop at once, just kill the python process. On Windows this can easily be done by clicking 'End Process' in the Task Manager (on the Processes tab). This is quite useful if you end up stuck with some running ghost processes of your Python app in the background, as I did (even when PyCharm was closed).
4
13
0
PyDev has a feature to terminate all running processes. Does PyCharm have something similar? I only see a "stop" button in the menu.
Pycharm: terminate all running processes
0.033321
0
0
30,996
34,075,427
2015-12-03T20:20:00.000
1
0
1
1
python,pycharm
46,736,152
6
false
0
0
In PyCharm, if you click on the bottom right where it says "Running x processes", then x windows pop up (here, x is the number of processes running). Each has an X button to kill its process.
4
13
0
PyDev has a feature to terminate all running processes. Does PyCharm have something similar? I only see a "stop" button in the menu.
Pycharm: terminate all running processes
0.033321
0
0
30,996
34,075,427
2015-12-03T20:20:00.000
0
0
1
1
python,pycharm
71,991,268
6
false
0
0
To kill a Python program, when the program is called from within a Terminal opened in PyCharm, either right-click in the terminal window and select "Close Session" from the dropdown menu, or press Ctrl+Shift+W.
4
13
0
PyDev has a feature to terminate all running processes. Does PyCharm have something similar? I only see a "stop" button in the menu.
Pycharm: terminate all running processes
0
0
0
30,996
34,076,020
2015-12-03T21:00:00.000
0
0
1
1
python,python-3.x,pip
34,076,437
2
false
0
0
I just fixed it. The solution is to call pip as a python module:
1. Remove pip.exe, pip3.exe and pip3.5.exe from PYTHON_PATH/Scripts
2. Create a file pip.bat inside the folder described above
3. Open pip.bat in a text editor and copy the lines below into it:
@echo off
call "python" -m pip %*
1
0
0
When I run pip install or just pip from the Windows Command Line, I think it causes a deadlock and it's impossible to exit the running process by pressing CTRL+C. When I run it from Git Bash, it gives me these errors:
0 [sig] bash 9796 get_proc_lock: Couldn't acquire sync_proc_subproc for(5,1), last 7, Win32 error 0
1040 [sig] bash 9796 proc_subproc: couldn't get proc lock. what 5, val 1
Running pip causes deadlock
0
0
0
245
34,077,340
2015-12-03T22:21:00.000
0
0
1
0
python,bash,command-line,escaping
34,077,922
2
false
0
0
cat file | grep -F "\t" Substitute whatever you are searching for in place of "\t". -F interprets "PATTERN" as a fixed string.
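Since the question allows Python too, a minimal sketch that makes invisible characters visible via repr(); the file name is hypothetical:

```python
# Print each line with tabs, newlines and spaces shown explicitly.
with open("input.txt") as f:        # hypothetical file name
    for lineno, line in enumerate(f, 1):
        print(lineno, repr(line))   # e.g.  3 'a\tb  \n'
```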
1
1
0
I have a bug that is causing return characters in my output file, but I am not sure if they are already in my input file or if they are caused by my code. I have already tried stripping my lines. Can anyone point me to how to see the "\t", "\n" or spaces (" ") that might be there? Any help would be great. Thank you! Bash or Python would be great.
Is there a way to see invisible characters that are in my file using Python or Bash?
0
0
0
708
34,078,431
2015-12-03T23:44:00.000
0
0
1
1
python
34,078,504
2
false
0
0
Ah the classic halting problem: is it really still running? There is no way to do this if you've already started the program, unless you've written in some debugging lines that check an external configuration for a debug flag (and I assume you haven't since you're asking this question). You could look to the output or log of the script (if it exists), checking for signs of specific places in the data that the script has processed and thereby estimate the progress of the data processing. Worst case: stop the thing, add some logging, and start it tonight just before bed.
1
5
0
Let's say I've written a script in python, program.py. I decide to run this in the Terminal using python program.py. This code runs through an exceptional amount of data, and takes several hours to run. Is there any way I can check on the status of this code without stoping the program?
Is there any way to check the progress on a Python script without interrupting the program?
0
0
0
3,248
34,078,498
2015-12-03T23:50:00.000
0
1
1
0
python,python-2.7,zip,bzip2
34,079,563
2
true
0
0
Turns out the zipfile module in Python 2.7 doesn't support the later version of PKZIP that added bzip2 support. Switching to Python 3.3 and using its zipfile module works fine.
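A minimal check on Python 3.3+, where zipfile understands bzip2-compressed members (compression type 12); the archive name is hypothetical:

```python
import zipfile

with zipfile.ZipFile("archive.zip") as zf:   # hypothetical file name
    for info in zf.infolist():
        is_bzip2 = info.compress_type == zipfile.ZIP_BZIP2  # type 12
        print(info.filename, info.compress_type, is_bzip2)
```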
1
0
0
I'm working on an application that needs to scan files from 3rd parties and process them. Sometimes these are compressed, so I've created a function that checks the file extension (tar.gz, gz, zip) and uncompresses accordingly. Some of the .zip files return this error: NotImplementedError: compression type 12 (bzip2). Is there a better way for me to identify the compression type other than the file extension?
Python Bzip2 File Hiding as a Zip file
1.2
0
0
288
34,083,761
2015-12-04T08:19:00.000
2
0
1
0
python,gil
34,083,799
2
true
0
0
"I guess that the thread will release the GIL when it is blocked." Yes, exactly. In principle, that's all that's needed in an answer :)
2
0
0
I have known that the GIL affects multiple threads when they execute CPU-intensive tasks, so they cannot take advantage of multiple cores. But I feel very confused that it works well when threads execute IO-intensive tasks. I guess that the thread will release the GIL when it is blocked. Is that right?
Why does the GIL have little influence on IO-intensive multi-threading?
1.2
0
0
86
34,083,761
2015-12-04T08:19:00.000
1
0
1
0
python,gil
34,083,800
2
false
0
0
Yes, IO operations typically release the GIL.
2
0
0
I have known that the GIL affects multiple threads when they execute CPU-intensive tasks, so they cannot take advantage of multiple cores. But I feel very confused that it works well when threads execute IO-intensive tasks. I guess that the thread will release the GIL when it is blocked. Is that right?
Why does the GIL have little influence on IO-intensive multi-threading?
0.099668
0
0
86
34,091,129
2015-12-04T14:57:00.000
1
0
1
1
python-2.7,python-3.x,macports
34,103,248
2
true
0
0
Your problems appear to be a generic Macports download problem. Resetting the download process via sudo port clean <portname> should help. As to the general question of using multiple versions: Macports allows you to install an arbitrary number of different versions in parallel. You switch between them using port select --set <application> <portname>, for example sudo port select --set python python34. For easier access, you can define your own shell alias (e.g. python3 or python34), pointing to /opt/local/bin/python34.
1
1
0
I've been using Python 3.4 to complete certain tasks, though I still use Python 2.7 as the default. I think I should be able to begin downloading py34 ports using sudo port install py34-whatever into the same location as my Python 2.7 ports. However, I am running into significant downloading errors doing this. Is it possible to download both py27 and py34 ports into the same location? Will there be problems doing this?
Can I install using Macports both py27 and py34 ports in the same location?
1.2
0
0
156
34,092,384
2015-12-04T16:00:00.000
0
0
0
0
python,python-requests
34,152,881
1
false
0
0
So I decided to tackle the problem a different way and run the script multiple times, but use if not os.path.isfile(FileNameToWrite): (with import os.path) to check whether I had already processed the webpage and saved it to a file. If not, I submitted the requests and eventually got all the data I needed. Incidentally, I ran the same original code on a faster machine and it encountered the error sooner, having still processed the same number of files, which incorporated roughly over 1000 requests.
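A minimal sketch of that resume-by-file-check pattern; the URL list and file-naming scheme are hypothetical:

```python
import os.path
import requests

urls = ["https://example.com/page1", "https://example.com/page2"]

with requests.Session() as s:
    for i, url in enumerate(urls):
        out = "page_%d.html" % i
        if os.path.isfile(out):      # already fetched on a previous run
            continue
        resp = s.get(url, timeout=30)
        with open(out, "w", encoding="utf-8") as f:
            f.write(resp.text)
```

Re-running the script after a connection error then skips everything already on disk and picks up where it left off.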
1
0
0
I have a script that worked away for 1 hr to 1 hr 18 mins getting data from a web server, until I got the error NewConnectionError(': Failed to establish a new connection: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',). I'm using sessions, with requests.Session() as s:, to reduce some of the overhead in my requests, but is there more I can do? I still have more content to get, but I also don't want to get permanently blocked.
Is there a maximum period I should submit requests using python-requests to avoid blocking by the web server
0
0
1
65
34,094,063
2015-12-04T17:27:00.000
0
1
0
0
php,python,node.js,apache
34,103,974
1
false
1
0
Write your processing code as completely independent software, not tied to the web server at all. The web server application will only add tasks to some database and return immediately. Your process will run as a service, polling the database for new tasks, executing them, and pushing in-progress updates and final results back to the database. The web server application can see that processing has started and can display in-progress and final results just by looking up the database, which is fast.
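A minimal sketch of such a polling worker using sqlite3 from the standard library; the table layout and the upper() stand-in for the real computation are hypothetical:

```python
import sqlite3
import time

db = sqlite3.connect("tasks.db")
db.execute("""CREATE TABLE IF NOT EXISTS tasks
              (id INTEGER PRIMARY KEY, input TEXT,
               status TEXT DEFAULT 'new', result TEXT)""")
db.commit()

while True:
    row = db.execute("SELECT id, input FROM tasks "
                     "WHERE status = 'new' LIMIT 1").fetchone()
    if row is None:
        time.sleep(5)     # no pending work; poll again in a few seconds
        continue
    task_id, data = row
    db.execute("UPDATE tasks SET status = 'running' WHERE id = ?",
               (task_id,))
    db.commit()           # the web app now shows the task as in progress
    result = data.upper() # stand-in for the hours-long computation
    db.execute("UPDATE tasks SET status = 'done', result = ? WHERE id = ?",
               (result, task_id))
    db.commit()
```

The PHP (or any other) web side only ever INSERTs new rows and SELECTs status/result, so it returns immediately regardless of how long the job takes.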
1
0
0
I am working on an algorithm in Python for a problem which takes multiple hours to finish. I want to accept some details from the user using HTML/PHP and then use those to run the algorithm in Python. Even when the user closes the browser, I want the Python script to keep running on the server side, and when the user logs in again, it should display the result. Is this possible using an Apache server and PHP? Could a server created using Node.js be a solution? Any help would be appreciated.
Running python script on the apache server end
0
0
0
84
34,094,108
2015-12-04T17:30:00.000
2
0
1
0
python,c++,operator-overloading,magic-methods
34,094,161
1
true
0
0
Yes and no. While implementing/overriding __eq__, __div__, etc. is the same as operator overloading in other languages, some __ methods don't necessarily resemble anything from other languages.
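A minimal illustration of the overlap: __eq__ plays the role C++ gives to operator==:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):          # like operator== in C++
        return (self.x, self.y) == (other.x, other.y)

print(Point(1, 2) == Point(1, 2))     # True  -- dispatches to __eq__
print(Point(1, 2) == Point(3, 4))     # False
```

By contrast, dunders such as __init__, __len__ or __getattr__ hook into protocols that have no operator-overloading counterpart in C++.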
1
0
0
For example, is using __eq__ in Python the same as operator== in C++? Do magic methods have any other function in Python?
Are magic methods (dunders) in Python same as operator overloading in C++?
1.2
0
0
386
34,097,020
2015-12-04T20:39:00.000
0
0
0
0
python,numpy
34,097,320
3
false
0
0
Split the array based on the condition and use the lengths of the remaining pieces and the condition state of the first and last element in the array.
1
1
1
Seemingly straightforward problem: I want to create an array that gives the count since the last occurrence of a given condition. Here, let the condition be a > 0: in: [0, 0, 5, 0, 0, 2, 1, 0, 0] out: [0, 0, 0, 1, 2, 0, 0, 1, 2] I assume step one would be something like np.cumsum(a > 0), but I'm not sure where to go from there. Edit: I should clarify that I want to do this without iteration.
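For reference, one iteration-free way to produce the output above, using np.maximum.accumulate (a different route than the split-based idea in the answer):

```python
import numpy as np

a = np.array([0, 0, 5, 0, 0, 2, 1, 0, 0])
mask = a > 0
idx = np.where(mask, np.arange(a.size), 0)
np.maximum.accumulate(idx, out=idx)  # index of the latest occurrence so far
out = np.arange(a.size) - idx        # distance since that occurrence
if mask.any():
    out[:mask.argmax()] = 0          # before the first occurrence: count 0
print(out)                           # [0 0 0 1 2 0 0 1 2]
```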
Count since last occurence in NumPy
0
0
0
79
34,097,042
2015-12-04T20:40:00.000
2
1
1
0
python,visual-studio,deployment,raspberry-pi,windows-10-iot-core
34,219,439
2
false
0
0
Try changing Authentication to None in the App Properties while deploying/debugging. It can be found under Debug -> Properties -> Authentication (select 'None').
2
1
0
I am trying to deploy my first Python project onto a Raspberry Pi 2 B via Visual Studio 2015 Community Edition. The Pi runs Windows 10 Core IoT and I can connect to it via SSH and the web interface with no problem. My issue is that I get Error DEP6200 during deployment. VS asks for a PIN during deployment that I cannot provide. It's not the OS's login password nor any standard PIN you might expect (eg 0000 or 1234). Any hint is appreciated.
PIN required - Cannot deploy Python code out of Visual Studio 2015 onto Raspberry Pi 2 B
0.197375
0
0
476
34,097,042
2015-12-04T20:40:00.000
0
1
1
0
python,visual-studio,deployment,raspberry-pi,windows-10-iot-core
42,187,331
2
false
0
0
I was still getting errors after switching the authentication to "None". I had to add the port number after the remote machine name. MyPi3:8116 I also noticed that msvcmon.exe was running as administrator on the pi. I switched it to run as the DefaultAccount. I'm not sure if this was necessary. I'll switch it back to administrator later. You can shut it down from the processes link then on the debugging link, start it back up and check the box for DefaultAccount.
2
1
0
I am trying to deploy my first Python project onto a Raspberry Pi 2 B via Visual Studio 2015 Community Edition. The Pi runs Windows 10 Core IoT and I can connect to it via SSH and the web interface with no problem. My issue is that I get Error DEP6200 during deployment. VS asks for a PIN during deployment that I cannot provide. It's not the OS's login password nor any standard PIN you might expect (eg 0000 or 1234). Any hint is appreciated.
PIN required - Cannot deploy Python code out of Visual Studio 2015 onto Raspberry Pi 2 B
0
0
0
476
34,097,281
2015-12-04T20:55:00.000
84
0
0
0
python,numpy,tensorflow
34,097,344
12
false
0
0
To convert back from tensor to numpy array you can simply run .eval() on the transformed tensor.
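A minimal sketch, assuming the TensorFlow 1.x graph API where eval() needs an active session:

```python
import tensorflow as tf

t = tf.constant([[1, 2], [3, 4]])
with tf.Session() as sess:   # TF 1.x; the default session makes eval work
    arr = t.eval()           # returns a numpy.ndarray
    print(type(arr))         # <class 'numpy.ndarray'>
    print(arr)
```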
3
274
1
How to convert a tensor into a numpy array when using Tensorflow with Python bindings?
Convert a tensor to numpy array in Tensorflow?
1
0
0
668,313
34,097,281
2015-12-04T20:55:00.000
4
0
0
0
python,numpy,tensorflow
65,860,219
12
false
0
0
You can convert a tensor in TensorFlow to a numpy array in the following ways. First: use np.array(your_tensor). Second: use your_tensor.numpy() (note the parentheses: it is a method call).
3
274
1
How to convert a tensor into a numpy array when using Tensorflow with Python bindings?
Convert a tensor to numpy array in Tensorflow?
0.066568
0
0
668,313
34,097,281
2015-12-04T20:55:00.000
2
0
0
0
python,numpy,tensorflow
63,803,837
12
false
0
0
If you see there is a method _numpy(), e.g. for an EagerTensor, simply call the above method and you will get an ndarray.
3
274
1
How to convert a tensor into a numpy array when using Tensorflow with Python bindings?
Convert a tensor to numpy array in Tensorflow?
0.033321
0
0
668,313
34,097,386
2015-12-04T21:02:00.000
0
0
0
0
python,selenium,web-scraping,ip
39,429,048
2
false
0
0
Your ISP will assign you your IP address. If you sign up for something like hidemyass.com, they will probably provide you with an app that changes your proxy, although I don't know how they do it. But, if they have an app that cycles you through various proxies, then all your internet traffic will go through that proxy - including your scraper. There's no need for the scraper to know about these proxies or how hide my ass works - it'll connect through the proxies just like your browser or FTP client or ....
1
6
0
I am writing a web scraper using Selenium for Python. The scraper is visiting the same sites many times per hour, therefore I was hoping to find a way to alter my IP every few searches. What is the best strategy for this (I am using firefox)? Is there any prewritten code/a csv of IP addresses I can switch through? I am completely new to masking IP, proxies, etc. so please go easy on me!
Selenium Python Changing IP
0
0
1
7,693
34,098,567
2015-12-04T22:30:00.000
-3
0
0
0
python-sphinx
39,006,565
2
false
1
0
So, we were able to make it work by adjusting the HTML template and the globaltoc setting.
1
1
0
When I build HTML output using Sphinx, it is possible to display h1 and h2 on separate pages; however, h3 is always displayed on the same page as h2. Does anyone know how to make Sphinx display the content of h3 on a separate page, the way traditional online help systems do? For example:
Section
    Sub-section
    Sub-section
        Sub-sub-section
        Sub-sub-section
    Sub-section
So, when I click on a sub-sub-section I want to see the content only under that sub-sub-section, and not from the sub-section above or the sub-sub-section below. Thanks in advance!
Display each section (h1, h2, h3) in a new page in Sphinx
-0.291313
0
0
1,308
34,099,639
2015-12-05T00:16:00.000
0
0
1
0
python,tkinter,pyinstaller
34,137,551
1
false
0
1
If you do a bunch of work before calling mainloop, you won't see anything appear until you call mainloop or update. The only way a window will appear is in response to an event that asks for the window to be displayed, and tkinter can't process events if mainloop isn't running.
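A minimal illustration of the point, using Python 3 module names:

```python
import time
import tkinter as tk

root = tk.Tk()
tk.Label(root, text="hello").pack()
time.sleep(3)     # stands in for heavy startup work: no window shows yet
root.mainloop()   # the window only appears once events start processing
```

The same applies to a PyInstaller build: anything slow that happens before mainloop (imports, unpacking, data loading) delays the first paint of the window.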
1
0
0
I have code which runs immediately when I run it using python. The code uses the tkinter module and a bunch of if statements. I created a standalone executable and it takes about 8 minutes to produce the GUI output. I was wondering why it takes so much time to run? Thanks in advance.
pyinstaller executable takes more than 8 minutes to print out?
0
0
0
275
34,102,541
2015-12-05T07:42:00.000
0
0
0
0
python,django,web
34,102,645
1
false
1
0
You should separate your games. I would do this by making a Game model that has its own board. If you are planning to make this a multi-player app, I would also give the model player1 and player2 attributes so that you can determine which board to show to a particular user. I don't have a great way to keep the games in sync across multiple tabs other than to have some javascript that refreshes the board at a certain interval.
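A minimal sketch of that Game model; all field names are hypothetical:

```python
from django.conf import settings
from django.db import models

class Game(models.Model):
    player1 = models.ForeignKey(settings.AUTH_USER_MODEL,
                                related_name="games_as_player1",
                                on_delete=models.CASCADE)
    player2 = models.ForeignKey(settings.AUTH_USER_MODEL,
                                related_name="games_as_player2",
                                on_delete=models.CASCADE)
    # nine cells in row-major order; " " means empty
    board = models.CharField(max_length=9, default=" " * 9)
```

Each concurrent match is then just a row, so any number of users can play at once without their boards interfering.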
1
0
0
I am building a tic-tac-toe game using Django. Different users will be simultaneously playing the game in different places. So how does the server store the state of the game board for the different players?
Configuring my web app to be used simultaneously
0
0
0
24
34,103,210
2015-12-05T09:15:00.000
0
1
0
0
python,unit-testing,integration-testing,pytest-django
34,551,957
1
true
1
0
The main difference between unit tests and integration tests is that integration tests deal with the interactions between two or more "units". A unit test doesn't particularly care what happens with the code surrounding it, just as long as the code within the unit operates as it's designed to. As for your second question: if you feel the database and fixtures in your unit test suite are taking too long to run, mocking is a great solution.
1
1
0
In my project, I use pytest to write unit test cases for my program. But later I found there are many DB operations and ORM stuff in my program. I know unit testing should run fast, but what is the difference between unit testing and automated integration testing, apart from speed? Should I just use the database fixture instead of mocking?
Django testing method, fixture or mock?
1.2
0
0
653
34,107,021
2015-12-05T15:09:00.000
1
0
1
1
windows,installation,python-docx
34,111,456
1
false
0
0
For WinPython, you may try this: click on the "WinPython Command Prompt.exe" icon, then type the following three words in the opened console: pip install python-docx
1
0
0
I installed Python34 and Python32 on my Win10 machine. I also downloaded WinPython and tried to add the package 'python-docx' with their control panel. This failed: file naming not recognized (tar.gz). Then I tried to install it myself from the cmd. The error was lxml not found; that installation failed because it didn't find Python on my computer. I'm running out of ideas... Is it really that hard to install python-docx?
Install Python-Docx on Win 10
0.197375
0
0
737
34,108,753
2015-12-05T17:45:00.000
1
1
0
1
python,total-commander
34,111,757
2
false
0
0
You can add a new button to the button bar. Right-click on an existing icon and copy it by choosing "copy" from the drop-down menu. Paste it into the button bar by right-clicking on it and choosing "paste" from the menu. Right-click on this copied icon and choose "modify" (or similar). This opens a window that allows you to choose a program and a parameter. Note: my version is set to a different language, so the names of the menu items might be a bit different.
1
2
0
I've got a script that accepts a path as an external argument and I would like to attach it to Total Commander. So I need a script/plugin for TC that would pass the path of the opened directory as an argument and run a Python script which is located, for example, at C:/Temp. How can I achieve this? Best Regards, Marek
Run Python script with path as argument from total commander
0.099668
0
0
1,275
34,110,807
2015-12-05T21:00:00.000
1
0
0
0
python,django
34,116,556
1
false
1
0
If all you're looking for is to have some hints about whether some data or some apps exist at template-render time, you could use a template context processor, as this is what they're for: loading something into every template. I definitely wouldn't recommend implementing template tags to retrieve data; this would break the MVC rules for one, but you might also get in trouble while trying to debug slow DB queries and other things like that. If you're doing some DB queries in the context processor, bear in mind that those will be executed every time a template is rendered, even if it doesn't need that data. To shave some time off that processing, you could use some sort of manual caching with an appropriate invalidation scheme. An alternative route, if you are using class-based views, is to implement a mixin that will just add the data you need to the context (in the get_context_data method). If you're doing this, make sure to call super to also get the context of the class-based view you're normally extending.
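A minimal sketch of such a context processor; the app and model names are hypothetical, and it has to be registered under TEMPLATES -> OPTIONS -> context_processors in settings.py:

```python
# context_processors.py -- runs on every template render once registered.
def navbar_flags(request):
    from appA.models import Project   # hypothetical third-party model
    return {"has_projects": Project.objects.exists()}
```

Every template, including those rendered by the third-party blog app, then sees has_projects without each view having to pass it explicitly.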
1
1
0
I have a thirdparty app (let's call it app A) that, in its views.py, it uses Context processors for sending data to specific urls. The data it sends is used in its templates to determine how the nav-bar is like. For example if there exists an A.project entry in the db, it will show the <i> Projects </i> in it's template. Now I'd like to extend that app, and use the nav-bar it uses but add an extra parameter blog to it where the blog app is a thirdparty app. The problem is that now whenever you go to the url associated with the blog app, e.g. (/blog), Any items from app A in the nav-bar will be missing because the context sent from the blog app is different and missing data from app A. I can probably create custom template tags to check if A.project, etc exist, but I'm not sure if that's really the best way to do it. Is there any better way of doing it?
Is it bad practice to use template tags to retrieve data in Django?
0.197375
0
0
198
34,113,812
2015-12-06T03:50:00.000
1
1
0
0
python,psychopy
34,122,112
2
true
1
0
Clicks in the beginning and end of sounds often occur because the sound is stopped mid-way so that the wave abruptly goes from some value to zero. This waveform can only be made using high-amplitude high-frequency waves superimposed on the signal, i.e. a click. So the solution is to make the wave stop while on zero. Are you using an old version of psychopy? If yes, then upgrade. Newer versions add a Hamming window (fade in/out) to self-generated tones which should avoid the click. For the .wav files, try adding (extra) silence in the end, e.g. 50 ms. It might be that psychopy stops the sound prematurely.
1
2
0
Pure tones in PsychoPy are ending with clicks. How can I remove these clicks? Tones generated within PsychoPy and tones imported as .wav both have the same problem. I tried adding 0.025 ms of fade-out to the .wav tones that I generated using Audacity, but the tones still end with a click sound when played in PsychoPy. Now I am not sure how to go ahead with this. I need to perform a psychoacoustic experiment and it cannot proceed with tone presentation like that.
Pure tones in Psychopy end with unwanted clicks
1.2
0
0
481
34,115,098
2015-12-06T07:26:00.000
0
0
0
1
python,subprocess,octave,message-queue,oct2py
34,116,773
2
false
0
0
All three options are reasonable depending on your particular case. As for "I don't want to rely on maintenance of external libraries such as oct2py, I am in favor of option 3": oct2py is implemented using option 3. You can reinvent what it already does or use it directly. oct2py is pure Python and has a permissive license: if its development were to stop tomorrow, you could include its code alongside yours.
1
2
1
I have a pretty complex computation code written in Octave and a python script which receives user input, and needs to run the Octave code based on the user inputs. As I see it, I have these options: Port the Octave code to python. Use external libraries (i.e. oct2py) which enable you to run the Octave/Matlab engine from python. Communicate between a python process and an octave process. One such possibility would be to use subprocess from the python code and wait for the answer. Since I'm pretty reluctant to port my code to python and I don't want to rely on maintenance of external libraries such as oct2py, I am in favor of option 3. However, since the system should scale well, I do not want to spawn a new octave process for every request, and a tasks queue system seems more reasonable. Is there any (recommended) tasks queue system to enqueue tasks in python and have an octave worker on the other end process it?
Running Octave tasks from Python
0
0
0
1,133
34,116,561
2015-12-06T10:50:00.000
0
0
1
0
python,canopy,mrjob
34,122,614
1
false
0
0
The "j" in MRjob should be capitalized. Try from mrjob.job import MRJob
1
0
0
In the Canopy editor, while executing "from mrjob.job import MRjob" I am getting "ImportError: cannot import name MRjob"; I'm not sure what's wrong here. Anybody, please suggest. Thanks much in advance. Thanks & Regards, DP
python: ImportError: cannot import name MRjob
0
0
0
724
34,119,693
2015-12-06T16:20:00.000
0
0
0
0
macos,python-3.5,xlwings
34,120,936
1
false
0
0
Something seems to have gone wrong with the installation of the appscript package, a dependency of xlwings. Try to reinstall that. But generally speaking, I would suggest to open an issue on GitHub for these kind of questions since there is usually something very specific going on. For ease of installation, I recommend to use the Anaconda distribution.
1
0
0
When importing xlwings 0.6.1 in Python 3.5 on a Mac, I'm getting the following error message:

Python 3.5.0 (default, Nov 8 2015, 20:38:08)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import xlwings
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xlwings/__init__.py", line 22, in <module>
    from . import _xlmac as xlplatform
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/xlwings/_xlmac.py", line 5, in <module>
    from appscript import app, mactypes
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/aeosa/appscript/__init__.py", line 8, in <module>
    from aem.findapp import ApplicationNotFoundError
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/aeosa/aem/__init__.py", line 5, in <module>
    import ae, kae, findapp, mactypes, aemconnect
ImportError: No module named 'ae'

Has anyone encountered this error with the recent release of xlwings 0.6.1?
xlwings 0.6.1 Import error on Mac OS X with python 3.5
0
0
0
812
34,119,746
2015-12-06T16:25:00.000
1
0
1
0
python-2.7,sorting
34,119,821
2
false
0
0
Try this: b = sorted(b, key = lambda i: (i[0], i[1]))
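The one-liner in context, with the question's data (for tuples this matches the default sort order, so plain sorted(b) gives the same first result):

```python
b = [(1, 1), (1, 2), (2, 1), (2, 2), (1, 3)]
b = sorted(b, key=lambda i: (i[0], i[1]))   # first element, then second
print(b)   # [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2)]
```

For the second desired ordering (second element, then first), swap the key: key=lambda i: (i[1], i[0]).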
1
1
1
My code: b = [((1,1)), ((1,2)), ((2,1)), ((2,2)), ((1,3))]; for i in range(len(b)): print b[i]. Obtained output: (1, 1) (1, 2) (2, 1) (2, 2) (1, 3). How do I sort this list by the first element and/or second element of each pair to get the output: (1, 1) (1, 2) (1, 3) (2, 1) (2, 2) OR (1, 1) (2, 1) (1, 2) (2, 2) (1, 3)? It would be nice if both columns were sorted as shown in the desired output; however, if either of the output columns is sorted it will suffice.
how to sort list in python which has two numbers per index value?
0.099668
0
0
48
34,121,400
2015-12-06T19:01:00.000
0
0
1
0
javascript,ecmascript-6,ipython-notebook,jupyter
34,126,206
1
true
0
0
Support for Ecmascript 6 is planned for the Jupyter 6.0 major release. So until then, you can use a transpiler or perhaps Typescript to write widgets in the newer Ecmascript syntax.
1
0
0
IPython notebooks now support JavaScript widgets. However, I was not sure if I could write a widget in ECMAScript 6 and make it work through the IPython notebook. I did not see any mention of version support in the ipython widget repo on GitHub. I just wanted to know what to expect when I tried this.
Can I write ipython notebook(Jupyter) widgets in Ecmascript 6 or do they need to be transpiled?
1.2
0
0
132
34,122,417
2015-12-06T20:36:00.000
1
0
0
0
python,machine-learning,nltk,naivebayes
34,122,819
1
false
0
0
If you know that the priors change, you should refit them periodically (by gathering a new training set representative of the new priors). In general, every ML method will fail in terms of accuracy if the priors change and you do not give this information to your classifier. You need at least some kind of feedback for the classifier. Then, if you for example have a closed loop where you get info on whether the classification is right or not, and you assume that only the priors change, you can simply learn the changing priors online (through any optimization, as it is rather easy to fit new priors). In general you should look at the concept drift phenomenon.
1
1
1
I've come across an issue using Naive Bayes for document classification into various classes. I was wondering about the fact that P(C), the prior probability of classes that we have at hand initially, will keep changing over the course of time. For instance, for the classes [music, sports, news] the initial probabilities are [.25, .25, .50]. Now suppose that during a certain month we had a deluge of sports-related documents (e.g. 80% sports); then our Naive Bayes will fail, as it is based on a prior probability factor which says only 25% are sports. How do we deal with such a situation?
Multiclass NaiveBayes classification on a text dataset with changing prior probabilities
0.197375
0
0
211
34,123,018
2015-12-06T21:30:00.000
0
0
1
0
python,installation,pygame,livewires
66,070,964
3
false
0
1
For python3 this works for me:
1. First install the pip module (it will help to install livewires in the last step): sudo apt install python3-pip
2. Install the pygame module, which is required for livewires: sudo apt install python3-pygame
3. At last, install the needed module: pip3 install livewires
But most likely livewires will not work well, as it isn't working for me, so you can use the updated version, superwires. Installation is almost the same: pip3 install superwires. Just use superwires instead of livewires in your instructions. Note that sudo apt install python3-livewires did not work for me; it fails with the error: E: Unable to locate package python3-livewires
1
1
0
I am working on an assignment for school, and didn't realize that I needed to install the pygame/livewires package to use the program I am writing, because I have been using a school laptop while in class and haven't used my personal laptop. It is a really simple program, but I still can't run it because the pygame setup I downloaded isn't working. When I type "import pygame" into the shell it works, but when I try something like "from livewires import games" it says that the module livewires cannot be found. I am using Python3.1.1 but I also have Python3.4.3 installed. PLEASE HELP and thank you in advance.
Installing livewires for python
0
0
0
3,136
34,124,600
2015-12-07T00:23:00.000
2
0
1
0
c#,python
34,124,609
1
true
0
0
and in Python is the same as && in other languages
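A minimal demonstration of the short-circuiting, mirroring the C# behaviour described in the question:

```python
def right_side():
    print("right side evaluated")
    return True

x = False
if x and right_side():    # right_side() never runs while x is falsy
    pass                  # nothing is printed

x = True
if x and right_side():    # now the right side does run
    pass                  # prints: right side evaluated
```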
1
1
0
Is there a Python equivalent of the && operator in C#? A quick clarification: in C#, with if (x && y), if x is not true then y is not even evaluated.
C# && in python
1.2
0
0
498
34,126,296
2015-12-07T04:09:00.000
7
0
1
0
html,ipython,ipython-notebook,jupyter
43,498,018
3
true
1
0
You can use Jupyter.keyboard_manager.disable() to disable the shortcuts temporarily, and use Jupyter.keyboard_manager.enable() to activate again.
1
8
0
One of my Jupyter notebooks uses an html <input> tag that expects typed user input, but whenever I type in the text box, command mode keyboard shortcuts activate. Is it possible to turn off keyboard shortcuts for a single cell or notebook?
Disable Jupyter Keyboard Shortcuts
1.2
0
0
2,360
34,127,714
2015-12-07T06:30:00.000
0
0
0
0
django,python-2.7,google-app-engine
34,309,480
1
false
1
0
You have likely configured something wrong. Could you post your project setup somewhere?
1
0
0
I am successfully able to install django-fobi in my virtual environment, but when I hit localhost:8080/admin it gives me the following error: ImportError: No module named fobi.contrib.plugins.form_handlers.mail. I get this error when I run my Django project on Google App Engine.
Django-fobi on google app engine
0
0
0
139
34,129,016
2015-12-07T08:11:00.000
0
0
0
0
python,django,django-allauth
34,129,711
1
false
1
0
You can of course subclass the views, as long as you change your URLs to point to the overridden versions. However, there is no need to do this just to use your own templates; Django's template loader is specifically written with this use case in mind. Simply create your own directory inside your templates folder to match the one allauth is using, and create your own template files inside it; Django will find yours first and use them.
1
0
0
I'm building a website using django-allauth for its authentication and social authentication functions. The forms that come bundled with the app are hardly great to look at, and hence I decided to create my own views. The problem is: how do I create them while ensuring that the backend of django-allauth is still available to me? I've dug into the source code and found that it uses class-based views for rendering and performing CRUD operations. I want to know if I can subclass those views in my own app/views.py and just change their template_name field to my own templates. Any advice would be most helpful. Thanks.
Subclassing and overriding Django Class based views
0
0
0
138
34,129,887
2015-12-07T09:07:00.000
0
0
0
0
python,mysql,ruby-on-rails,ruby,database
34,133,045
2
false
1
0
Read a good book on software development methodologies before you get into this. Then read a simple tutorial online on MySQL. It will then be a lot easier to do this.
1
0
0
I'm self-teaching programming through the plethora of online resources to build a startup idea I've had for a while now. Currently, I'm using the SaaS platform at sharetribe.com for my business, but I'm trying to build my own platform, as Sharetribe does not cater to the many options I'd like to have available for my users. I'm setting up the database at this time and I'm currently working on the architecture. I plan to use MySQL for my database. The website will feature an online inventory management system where users can track all their items, update availability, pricing, delivery, payments, analytical tools, etc. This is so the user can easily monitor their current items, create new listings, etc., so it creates more of a "business" feel for the users. Here is a simple explanation of the workflow: users will create their profile, having access to rent or rent out their items. Once their account is created they can search listings based on category, subcategory, location, price, etc. When a rental is placed, the user will request the rental at a specified time; once approved, the rental process will begin. My question is: how should I set up the infrastructure/architecture for the database? I have this as my general starting point, but I know I'm missing a lot of fields and criteria to suit the application.
User fields:
- user_ID
- name
- email
- username
- encrypted_password
- location
- social_media
- age
- photo
Product fields:
- item_ID
- user_ID
- category_ID
- subcategory_ID
- price
- description
- availability
- delivery_option
As you can see, I'm new to this, but as many of the resources I've used for my research have said, the best way to learn is to do. I'm probably taking on a bigger project than I should for my beginning stages, but there will be plenty of mistakes made that will assist my learning. Any and all recommendations and assistance are appreciated. For general knowledge, I intend to utilize Rails as my server language. If you recommend Python/Django over Ruby/Rails, could you please explain why this would be more beneficial to me? Thanks.
How do I build the database for my P2P rental marketplace?
0
1
0
657
34,132,184
2015-12-07T11:12:00.000
1
0
0
0
python,debugging,pandas,ide
34,132,511
2
false
0
0
I don't believe that something like that exists, but you can always use df.info().
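For example:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
df.info()          # column names, dtypes, non-null counts, memory usage
print(df.dtypes)   # just the per-column types
```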
1
0
1
I'm doing some development in Python, mostly using a simple text editor (Sublime Text). I'm mostly dealing with databases that I fit into pandas DataFrames. My issue is that I often lose track of the column names, and occasionally the column types as well. Is there some IDE / plug-in / debug tool that would allow me to look into each DataFrame and see how it's defined, a little bit like Eclipse can do for Java classes? Thank you,
Python Pandas IDE that would "know" columns and types
0.099668
0
0
308
34,132,203
2015-12-07T11:14:00.000
-1
0
0
0
python,python-2.7,tkinter,tk
68,737,194
2
false
0
1
You can use the Clmage module to increase the resolution of your GUI render.
1
9
0
I use Linux Mint 17.3 Cinnamon in VirtualBox; a 1920*1080 resolution is used in this machine and the Hi-DPI option is turned on. The host machine is Windows 10, with 3840*2160 resolution. Despite turning on the Hi-DPI option in Linux Mint, where some applications now look good for comfortable work in terms of scaling, the python-tk GUI (python2) hasn't changed: the font size is tiny, and changing the font options in Cinnamon doesn't change the fonts in tk. Is there any way to scale correctly already-written tk GUI applications?
Scaling of Tkinter GUI in 4k (3840*2160) resolution?
-0.099668
0
0
15,979
34,133,095
2015-12-07T12:02:00.000
1
0
1
0
python,linux,multiprocessing
34,135,765
3
false
0
0
One way to approach this, as has already been suggested, is to partition your input set in the application and process it in parallel using the multiprocessing module. Alternatively you can partition the input up-front and run multiple copies of your program on the inputs using GNU parallel or good old xargs (look at the -n and -P options). Then there is the problem of merging the results back together, if needed. All of this hinges on being able to split the inputs into parts that can be processed independently, without coordination or shared memory. Otherwise it gets more complex.
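A minimal in-Python sketch of the partitioned run, treating the executable as a black box; "./mytool" and the parameter list are hypothetical:

```python
import subprocess
from multiprocessing import Pool

PARAMS = ["a", "b", "c", "d", "e"]   # one run per parameter set

def run_one(param):
    # each call occupies one worker process (and hence one core)
    return subprocess.run(["./mytool", param]).returncode

if __name__ == "__main__":
    with Pool(processes=4) as pool:  # 4 runs in flight at a time
        print(pool.map(run_one, PARAMS))
```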
3
1
0
So I have an executable which I need to run a number of times with different input parameters. It takes around an hour per run, and I've noticed during the entire hour CPU utilisation of only 1 core out of 8 is at 100%, rest of them are idling. Is it possible to spawn 4-5 processes, each using a different core and working on different parameters? Is this common practice? This is the first time I'm bothering about multiple cores so if there are any other things I need to be aware of please let me know. Currently I'm using Python to run and wait for this executable to finish.
Force-utilizing multiple cores
0.066568
0
0
60
34,133,095
2015-12-07T12:02:00.000
2
0
1
0
python,linux,multiprocessing
34,133,555
3
false
0
0
What you describe is routinely done when compiling big programs - multiple compiler processes are spawned, working on different files. If your program is CPU bound as it seems, the input data is easily partitionable and the various instances wouldn't stomp on each other's feet when writing the results, you can try and see if you obtain the expected speedup.
3
1
0
So I have an executable which I need to run a number of times with different input parameters. It takes around an hour per run, and I've noticed during the entire hour CPU utilisation of only 1 core out of 8 is at 100%, rest of them are idling. Is it possible to spawn 4-5 processes, each using a different core and working on different parameters? Is this common practice? This is the first time I'm bothering about multiple cores so if there are any other things I need to be aware of please let me know. Currently I'm using Python to run and wait for this executable to finish.
Force-utilizing multiple cores
0.132549
0
0
60
34,133,095
2015-12-07T12:02:00.000
1
0
1
0
python,linux,multiprocessing
34,133,433
3
false
0
0
When running parallel processes, shared resources should be taken into consideration; plus, depending on the load profile, it may or may not be faster than a single process (for example, if the bottleneck is not the CPU). Common problems are usually related to "race conditions" and deadlocks; the former is the case when two processes work with the same data without knowing about each other, so that the data gets corrupted due to overwrites, for example. Not knowing more details about the task, it's impossible to answer exactly.
3
1
0
So I have an executable which I need to run a number of times with different input parameters. It takes around an hour per run, and I've noticed during the entire hour CPU utilisation of only 1 core out of 8 is at 100%, rest of them are idling. Is it possible to spawn 4-5 processes, each using a different core and working on different parameters? Is this common practice? This is the first time I'm bothering about multiple cores so if there are any other things I need to be aware of please let me know. Currently I'm using Python to run and wait for this executable to finish.
Force-utilizing multiple cores
0.066568
0
0
60
34,135,672
2015-12-07T14:14:00.000
0
0
1
0
python,debugging,pycharm
34,135,775
2
false
0
0
Set a breakpoint at the next line of code after the comprehension and then hit play again.
2
5
0
I am new to Python and PyCharm. I am trying to step over a line with a list comprehension, but instead of moving me to the next line, PyCharm advances the loop by 1 iteration. Any ideas how to move to the next line without pushing F8 3000 times? Thanks!
how to step over list comprehension in pycharm?
0
0
0
941
34,135,672
2015-12-07T14:14:00.000
2
0
1
0
python,debugging,pycharm
34,135,795
2
false
0
0
PyCharm has a 'Run to Cursor' option: just move your cursor one line down and hit it.
2
5
0
I am new to Python and PyCharm. I am trying to step over a line with a list comprehension, but instead of moving me to the next line, PyCharm advances the loop by 1 iteration. Any ideas how to move to the next line without pushing F8 3000 times? Thanks!
how to step over list comprehension in pycharm?
0.197375
0
0
941
34,135,856
2015-12-07T14:23:00.000
5
0
1
1
python,emacs,spacemacs
45,569,548
2
true
0
0
The variable that needed to be set was flycheck-python-pycompile-executable, set to "python3". To get support for async, emacs25 must be used (note that Debian will install emacs24 and emacs25 side by side, and use emacs24 by default).
1
16
0
I would like to use spacemacs for python development, but I see a syntax error on Python 3 constructs, like print(*(i + 1 for i in range(n)) or async def foo():. Adding a shebang to my file (#!/usr/bin/python3 or #!/usr/bin/env python3) does not help. What configuration changes do I need to make to use a specific python version? Ideally per-project or per-file, but global is better than nothing. I have 2.7 and 3.4 installed system-wide, and 3.5 in ~/local (~/local/bin is in my $PATH).
How do I configure spacemacs for python 3?
1.2
0
0
10,781
34,135,973
2015-12-07T14:30:00.000
0
0
0
0
python,xml,openerp,xml-rpc
34,148,043
1
false
1
0
Chandu, you can call the on_change method through xml-rpc, which will give you the desired data, and you can pass that data back to the server to store the correct values. Best
1
0
0
I have used xml-rpc in my Odoo ERP so that whenever a user inputs data on an external website, it comes into my ERP. Everything is working fine, i.e. I am getting the data users input on the website, such as personal details. But the problem is that I have some onchange selection fields in a custom model, and that data is not getting updated on my side. I would like to know how to resolve this issue, or at least hear someone's approach. Thanks in advance.
Can't retrieve data from webpage for onchange fields in Odoo?
0
0
0
257
34,138,415
2015-12-07T16:28:00.000
0
0
0
0
python,django
34,143,416
1
true
1
0
Nginx will solve this problem. Set up a static folder in Nginx.
1
1
0
Currently, I am building the Django app locally. I developed the program to allow people to download and retrieve some files, but for now I am using a static address. If I deploy the Django app to a website, what should I do? Do I need to set up my base URL, and how? Do I need to set the address in views.py and urls.py?
Django how to allow people to connect my file in the application
1.2
0
0
46
34,141,600
2015-12-07T19:29:00.000
0
0
0
1
python,redis,celery
34,169,982
1
false
0
0
Don't. Your Redis command latency with over 10,000 connections will suffer, usually heavily. Even the basic Redis ping command shows this. Step one: re-evaluate the 10k worker requirement. Chances are very high it is heavily inflated. What data supports it? Most of the time people are used to slow servers where concurrency is higher because each request takes orders of magnitude more time than Redis does. Consider this, a decently tuned Redis single instance can handle over a million requests per second. Do the math and you'll see that it is unlikely you will have the traffic workload to keep those workers busy without slamming into other limits such as the speed of light and Redis capacity. If you truly do need that level of concurrency perhaps try integrating Twemproxy to handle the connections in a safer manner, though you will likely see the latency effects anyway if you really have the workload necessary to justify 10k concurrent connections. Your other options are Codis and partitioning your data across multiple Redis instances, or some combination of the above.
1
1
0
Currently, Redis has a maxclients limit of 10k, so I can't spawn more than 10k Celery workers (workers with 200 prefork processes across 50 machines). Without changing the Redis maxclients limit, what are some of the things I can do to accommodate more than 10k Celery workers? I was thinking of setting up a master-slave Redis cluster, but how would a Celery daemon know to connect to the different slaves?
Celery w/ Redis broker: Is it possible to have more 10k connection
0
0
1
599
34,143,037
2015-12-07T20:54:00.000
1
0
0
1
python,linux,amazon-web-services,lxml,amazon-elastic-beanstalk
34,144,641
1
false
0
0
Answer to my question, the package name is: glibc-devel.i686
1
0
0
I need to install the glibc-devel package for my EBS Python 2.7 64-bit environment at AWS. Unlike in other solutions, I have to install python27-devel instead of python-devel and postgresql93-devel instead of postgresql-devel, so I was wondering about the correct name for the glibc-devel package, because with that name the yum package installation seems to be skipped (.ebextensions/config file). The main goal is to install lxml from pip. I have already successfully installed libxslt-devel and libxml2-devel on that server, as well as gcc and patch.
correct package name for glibc in Amazon Linux EBS
0.197375
0
0
217
34,145,637
2015-12-07T23:56:00.000
0
0
1
0
python,python-2.7,steam,steam-web-api
34,146,839
1
false
0
0
I found the answer. The "01" represents the hour in UTC, and the "12" represents the volume sold in that hour at that price.
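Assuming those field meanings (hour in UTC, median price, volume), a small parsing sketch for one entry of that JSON might look like this:

    from datetime import datetime

    # One entry from the market-history JSON, as quoted in the question
    entry = ["Sep 26 2014 01: +0", 36.548, "12"]

    timestamp_str, median_price, volume = entry
    # Strip the trailing ": +0" so strptime can parse "Sep 26 2014 01"
    when = datetime.strptime(timestamp_str.replace(": +0", ""), "%b %d %Y %H")
    print(when, median_price, int(volume))   # 2014-09-26 01:00:00 36.548 12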
1
0
0
So I have written a short script in python that retrieves an item's market history from steam, and I get a response containing the json data. However, I am having some trouble understanding it. For example, one of the pieces of data I receive is ["Sep 26 2014 01: +0",36.548,"12"]. I originally thought that the "01" might represent the hour, but in the file, there are multiple elements with the same first part of "Sep 26 2014 01: +0". I am also confused as to the meaning of the "12" after. Any help would be much appreciated.
Steam Market History Data
0
0
0
406
34,146,996
2015-12-08T02:33:00.000
0
0
0
0
python,machine-learning,nlp,sentiment-analysis
34,163,258
2
false
0
0
When you annotate the sentiment, don't annotate 'Positive', 'Negative', and 'Neutral'. Instead, annotate them as either "has negative" or "doesn't have negative". Then your sentiment classification will only be concerned with how strongly the features indicate negative sentiment, which appears to be what you want.
1
0
1
I am trying to do sentiment analysis on a review dataset. Since I care more about identifying (extracting) negative sentiment in reviews (unlabeled for now, but I may manually label a few hundred or use the Alchemy API), if a review is overall neutral or positive but a part of it has negative sentiment, I'd like my model to lean toward treating it as a negative review. Could someone give me advice on how to do this? I'm thinking about using bag-of-words/word2vec with supervised (random forest, SVM) or unsupervised learning models (K-means).
Review data sentiment analysis, focusing on extracting negative sentiment?
0
0
0
268
34,148,739
2015-12-08T05:34:00.000
0
1
0
1
python,websocket,server
34,150,245
1
false
0
0
It is hard to give a definite yes or no answer, because there are a million ways in which your server may expose the .py file. The crucial point is though, that your server needs to actively expose the file to the outside world. A computer with no network-enabled services running does not expose anything on the network, period. Only physical access to the computer would allow you access to the file. From this absolute point, it's a slow erosion of security with every additional service that offers a network component. Your Python server itself (presumably) doesn't expose its own source code; it only offers the services it's programmed to offer. However, you may have other servers running on the machine which actively do offer the file for download, or perhaps can be tricked into doing so. That's where an absolute "No" is hard to give, because one would need to run a full audit of your machine to be able to give a definitive answer. Suffice it to say that a properly configured server without gaping security holes will not enable users to download the underlying source code through the network.
1
1
0
I am working on a Python WebSocket server. I initiate it by running the python server.py command in Terminal. After this, the server runs fine and actually pretty well for what I'm using it for. The server runs on port 8000. My question is, if I keep the server.py file outside of my localhost directory or any sub-directory, can the Python file be read and the code viewed by anyone else? Thanks.
Can Python server code be read?
0
0
0
123
34,149,013
2015-12-08T05:58:00.000
4
0
1
0
python
34,149,996
2
false
0
0
Everything isn't black and white. The meaning of "stack based" shouldn't be taken as excluding the possibility of data being stored on the heap. First of all, it's only references to objects that live on the stack in the VM, and still not all references live there (there are references to other objects inside the objects that live on the heap too). The "stack" here means that temporary data is held on a stack instead of in registers. This means that if you are to add two numbers, you put them on the stack and then execute an addition instruction, as opposed to a register machine, where you put them into registers and add them together. Now, since the lifetime of objects is not limited in a way that makes it possible (or at least feasible) to have them live on a stack, you have to complete this model with a heap for storing the actual objects. In addition, it's rather impractical to be a slave to the stack model. For example, the VM must be able to access the global scope (which by definition is a dict-like object). While this could be solved by always having a reference to the global scope lying around on the stack, it's rather impractical; instead, there are instructions to do lookups directly in the global scope. A similar reasoning applies to the local scope: you could have all local variables lying around on the stack, but instead you have them in an array, which is more like what would be the case in a register machine; that's where LOAD_FAST comes into the picture. So the fact is that the CPython VM is more of a mix of a register machine and a stack machine. Then, regarding Stackless Python, the name is also somewhat confusing. The stack in that name doesn't refer to the same stack. Instead, it refers to the C stack, and it doesn't mean that it's free of a C stack, but rather that the Python interpreter doesn't use the C call stack to keep track of the Python call stack. What this actually means is that Stackless Python has more of a stack in the VM than CPython, rather than less. The difference is in what happens when you call a function. In CPython, the VM will simply call itself to execute the function, which means that the information on how to return to the caller is maintained on the C stack. In Stackless Python, on the other hand, the return address is pushed onto the Python stack and the VM continues executing the function directly; the information on how to return to the caller is maintained on the VM stack. One advantage of Stackless Python is that each Python thread does not have to be executed by a separate thread in C (which means that Stackless Python can have threads even on platforms that don't support multithreading). There are implementations of Python that use a register machine instead, but then again, it's not black and white there either. As you probably realized, calling a function requires the return information to be stored somewhere, and it's basically required that it be stored on a stack. So that too is a kind of mix of a stack machine and register machine. Of course, it could use the C stack for saving the information needed to return to the caller, which would make that stack inaccessible to the VM.
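You can watch the value stack in action with the standard dis module (no assumptions here, this is plain CPython):

    import dis

    def add(a, b):
        return a + b

    dis.dis(add)
    # Typical CPython output:
    #   LOAD_FAST     0 (a)    <- push a reference to a onto the value stack
    #   LOAD_FAST     1 (b)    <- push a reference to b
    #   BINARY_ADD             <- pop both, push a reference to the result
    #   RETURN_VALUE           <- pop the result and return it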
1
3
0
I have read how CPython is stack based. What does it mean? When I use the dis module, I see operations like LOAD_FAST etc, where the values are placed on a stack. But I have read that all values in python are objects and hence go in the heap. I guess I am confusing two different things here. I have also read that there is something called stackless python. Can someone clarify this?
What does it mean that python is stack based?
0.379949
0
0
1,922
34,150,069
2015-12-08T07:15:00.000
2
0
1
1
c#,python,os.walk
34,150,899
1
true
0
0
It should be fine for the external app to create and write to a file. If the Python app is reading a file, the .NET app may not be able to write to it while Python is reading it, without both processes opening the file in a shareable way, however. Likewise, if the Python app is going to start reading the newly created file, it may either find that it can't do so until the .NET app has finished writing to it, or it may read incomplete data. Again, changes would quite possibly be required in both processes to allow reading at all. It's worth thoroughly testing all the possibilities you're concerned about, possibly involving the creation of a "fake" external app which writes to a file very slowly, but opens it in the same way that the real one does.
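One common mitigation on the writer's side is the write-then-rename pattern, sketched below; this assumes the writer can be changed, and note that os.replace needs Python 3.3+ for atomic overwrite on Windows:

    import os
    import tempfile

    def write_atomically(directory, final_name, data):
        # Write to a temp file in the same directory, then rename it into
        # place. A reader scanning the directory never sees a half-written
        # file under its final name.
        fd, tmp_path = tempfile.mkstemp(dir=directory)
        try:
            with os.fdopen(fd, "wb") as tmp:
                tmp.write(data)
            os.replace(tmp_path, os.path.join(directory, final_name))
        except Exception:
            os.remove(tmp_path)
            raise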
1
0
0
We have a Python application that checks a directory (C:\sample\folder) every 5 seconds; there's also an external application (a .NET app) that puts files into that same directory (C:\sample\folder). Will there be any conflict when the two applications access the same folder at the same time (accidentally)? Conflicts like: the external app won't be able to place a file because the Python app is currently walking through that same directory?
Multiple executables accessing the same folder at the same time
1.2
0
0
814
34,152,619
2015-12-08T09:46:00.000
0
0
1
0
python,json,validation,configuration,build
34,166,171
1
false
0
0
JSON has its own schema validation language (which is specified in JSON). And the results of a Google search for "json schema validator python" indicate there are Python implementations of the JSON schema validation language. Will the JSON schema validation language be sufficient for your needs? Or is there a compelling reason for you to invent your own schema validation language?
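For example, with the third-party jsonschema package, the structural check plus the cross-file rule from the question could look roughly like this (a sketch, assuming the file names given in the question):

    import json
    from jsonschema import validate  # pip install jsonschema

    schema = {
        "type": "object",
        "properties": {
            "rulesToUse": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["rulesToUse"],
    }

    def validate_configs():
        with open("globalConfig.json") as f:
            config = json.load(f)
        validate(config, schema)      # raises ValidationError on failure

        # The cross-file rule still needs a few lines of plain Python:
        with open("ruleSets.json") as f:
            rule_sets = json.load(f)
        missing = set(config["rulesToUse"]) - set(rule_sets)
        if missing:
            raise ValueError("rules referenced but not defined: %s" % sorted(missing))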
1
0
0
So I have a few configuration files (JSON) which I need to deploy along with my project. I'd like to set them up in a Python package which, when built, checks them against a few basic validation rules (written in Python). For example: if globalConfig.json contains {"rulesToUse":["rule1","rule2"]}, then ruleSets.json must contain {"rule1":"<somerule>", "rule2":"<otherrule>"}. What's the best way to go about this? I could create a regular validation_cfg.py file at the same level as my setup.py file, then import validation_cfg; validation_cfg.validate(), which looks for the configs (at pre-specified locations) and runs my validation logic. Would this even work? What's the best practice in such cases?
Validating Configs according to specified rules
0
0
0
33
34,153,844
2015-12-08T10:46:00.000
0
0
0
0
python,loops,selenium
34,154,600
1
false
0
0
Your attempts variable is always less than 5 because it is never incremented, so your loop is infinite.
1
0
1
I'm trying to study customer behavior. Basically, I have information on customers' loyalty point activities (e.g. how many points they have earned, how many points they have used, how recently they have used/earned points, etc.). I'm using R to conduct this analysis. I'm just wondering how I should go about segmenting customers based on the above information? I'm trying to apply the RFM concept and then use K-means to segment my customers (although I have a few more variables than just R, F, M, as I have recency, frequency and monetary value on both points earned and used, as well as other ratios and metrics). Is this a good way to do this? Essentially I have two objectives: 1. To segment customers. 2. Via segmenting customers, identify customer behavior (e.g. customers who spend all of their points before churning), provided that segmentation is the right method for such a task. Clustering <- kmeans(RFM_Values4, centers = 10) Please enlighten me; I need some guidance on the best methods to tackle such problems.
Python Selenium infinite loop
0
0
0
436
34,155,452
2015-12-08T12:05:00.000
2
0
1
0
python,regex,python-2.7
34,155,651
4
false
0
0
Regular expressions describe patterns in strings; they cannot do arithmetic on integers/floats, unless there is a very obscure hack to make it work (which I think is unlikely). You can validate whether a given string is a valid mathematical expression, but you cannot evaluate mathematical expressions using regexes alone.
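A minimal sketch of the hybrid approach the question describes -- regex finds the next operation, Python computes it -- is shown below. It only handles simple, well-formed expressions (no parentheses, left-to-right within each precedence level, no scientific notation):

    import re

    NUM = r"[-+]?\d+(?:\.\d+)?"

    def _reduce(expr, op_pattern, apply_op):
        # Repeatedly replace the leftmost "<num> <op> <num>" with its value.
        pattern = re.compile(r"({0})\s*({1})\s*({0})".format(NUM, op_pattern))
        m = pattern.search(expr)
        while m:
            a, op, b = float(m.group(1)), m.group(2), float(m.group(3))
            expr = expr[:m.start()] + repr(apply_op(a, op, b)) + expr[m.end():]
            m = pattern.search(expr)
        return expr

    def evaluate(expr):
        expr = _reduce(expr, r"\^", lambda a, op, b: a ** b)
        expr = _reduce(expr, r"[*/]", lambda a, op, b: a * b if op == "*" else a / b)
        expr = _reduce(expr, r"[-+]", lambda a, op, b: a + b if op == "+" else a - b)
        return float(expr)

    print(evaluate("3.5+4^2"))       # 19.5
    print(evaluate("-4+5.5*4/2"))    # 7.0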
1
0
0
I was wondering if it was possible to use regex to solve simple mathematical expressions with real numbers and the operators +, -, *, / and ^. For example, the input would be a string like '3.5+4^2' (this could also be written as '+3.5+4^2') and the output 19.5. My idea was to have the regex first recognise ^ as the operation to perform first, so it would take 4^2 and return 16, making the expression '3.5+16'. Then it would recognise + and return 19.5. Another input example would be -4+5.5*4/2 --> -4+22/2 --> -4+11 --> 7
Python: use regex to solve mathematical expression
0.099668
0
0
1,609
34,155,481
2015-12-08T12:07:00.000
1
0
0
0
python,django,admin
34,156,302
2
false
1
0
The admin pages (as the name indicates) should be reserved for admins. They are designed to give access to the 'raw' data stored in the database. For your users, you should create views, templates and forms to log in and view/change their information. This way you can choose how their info is displayed and how they are allowed to use it (validation, permissions...).
2
0
0
I am building my first Django application. I have set up a custom user and profile, and I would like the users to be able to edit some of their own content and view their own pages of analytics data. Currently my users are being created and logged into the admin area; I am using a custom back end to allow them to see/edit the content. My question: should I allow my users to log into the Django admin area, or should I build a separate login form that authenticates them and build authenticated pages? I would then end up with two admin areas: the main area, where I can control users, billing, etc., and the other, where the customer can view and edit profile information and interact with the application.
Django app with logged in users
0.099668
0
0
74
34,155,481
2015-12-08T12:07:00.000
1
0
0
0
python,django,admin
34,155,953
2
true
1
0
Of course it's better to create another page for the users to get control from, where you set up the authentication and all of the custom permissions that you want to give them. By giving them permissions that you set explicitly, you make sure the users don't tamper with anything that you don't want them to touch. So the best thing to do is to create a custom admin panel for them: a more controlled environment for you and your users.
2
0
0
I am building my first Django application. I have set up a custom user and profile, and I would like the users to be able to edit some of their own content and view their own pages of analytics data. Currently my users are being created and logged into the admin area; I am using a custom back end to allow them to see/edit the content. My question: should I allow my users to log into the Django admin area, or should I build a separate login form that authenticates them and build authenticated pages? I would then end up with two admin areas: the main area, where I can control users, billing, etc., and the other, where the customer can view and edit profile information and interact with the application.
Django app with logged in users
1.2
0
0
74
34,155,609
2015-12-08T12:13:00.000
0
0
0
0
python,dom,data-science
34,160,545
1
false
1
0
First you would need to identify which elements in the page actually uniquely identify a page as being of a specific webpage-class. Then you could use a library like BeautifulSoup to actually look through the document to see if those elements exist. Then you would just need a series of if/elifs to determine if a page has the qualifying elements, if so classify it as the appropriate webpage-class.
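A rough sketch with BeautifulSoup; the CSS selectors here are made up -- you'd replace them with whatever elements actually distinguish the page classes on your target site:

    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    def classify(html):
        soup = BeautifulSoup(html, "html.parser")
        # Hypothetical selectors -- substitute the real distinguishing elements.
        if soup.select_one("div.product-detail"):
            return "product page"
        if soup.select_one("ul.related-items"):
            return "product-related items page"
        if soup.select_one("nav.category-index"):
            return "index page"
        return "unknown"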
1
1
0
I want to classify a given set of web pages into different classes, mainly 3 classes (product page, index page, and product-related items page). I think it can be done by analyzing their structure. I am looking to compare the web pages based on their DOM (Document Object Model) structure. I want to know whether there is a library in Python for solving this problem. Thanks in advance.
Web page structure comparison using python
0
0
1
192
34,160,995
2015-12-08T16:26:00.000
2
0
1
0
python,colors,turtle-graphics
34,161,243
1
false
0
1
You are ending fill after every new coordinate. You need to call t.begin_fill() before your for loop and call t.end_fill() after the last coordinate, otherwise you are just filling in your single line with each iteration.
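A minimal sketch with made-up coordinates (yours would come from the text file):

    import turtle

    t = turtle.Turtle()
    t.fillcolor("red")

    coords = [(0, 0), (100, 0), (50, 80)]   # placeholder triangle corners

    t.begin_fill()              # start the fill BEFORE drawing the outline
    for x, y in coords:
        t.goto(x, y)
    t.goto(coords[0])           # close the triangle
    t.end_fill()                # fill once the whole outline is drawn

    turtle.done()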
1
2
0
I am currently reading the turtle.goto coordinates from a text file. I have the triangle drawn and everything, but I don't know how to fill it.
Python Turtle fill the triangle with color?
0.379949
0
0
812
34,161,318
2015-12-08T16:42:00.000
1
0
1
1
python,windows,installation
49,567,169
3
false
0
0
On Windows with Python 3, conda install imutils is not available. Run pip3 install imutils and the newest version will be installed.
1
0
0
I want to install the imutils 0.2 package for Python, and I have the Windows 7 operating system. I only found the .gz file and would like to know how to install from .gz files. Otherwise, if there are any .exe files available, please let me know.
How to install imutils 0.2 for Python on Windows 7
0.066568
0
0
25,716
34,165,743
2015-12-08T20:48:00.000
0
1
0
0
python-2.7,time,raspberry-pi2
34,198,704
1
false
0
0
I don't know what the problem was, but I switched to Python's datetime.now() and everything seems to work fine; no weird times now.
1
0
0
I have the following system setup: a client app running on my computer, a server app running on my computer, a publisher Raspberry Pi unit, and a subscriber Raspberry Pi unit. The client app sends a message to the server, which then sends a message to the publisher, which forwards this message to the subscriber, which then returns the message back to the server app. I am trying to measure the elapsed time in seconds using time.time() or timeit.default_timer(); however, both returned the same results. I measure time at 4 points: the message arriving from the client at the server app; the message arriving at the publisher from the server; the message arriving at the subscriber from the publisher; and the message arriving at the server app from the publisher. What happens is that the first and last times make sense; however, both timestamps on the publisher and subscriber happen before the first timestamp on the server, which makes no sense, unless that Raspberry Pi traveled back in time. These are the times measured: [1449606796.36039, 1449606784.0, 1449606784.0, 1449606804.49233] When I measure time.time() on the different machines manually, everything seems to be in sync. Any idea what's going wrong here?
Raspberry Pi (Python) Measuring time between units
0
0
0
401
34,166,369
2015-12-08T21:28:00.000
3
0
1
0
python,gensim,word2vec
34,166,580
4
false
0
0
It seems gensim throws a misleading error message. Gensim wants to iterate over your data multiple times. Most libraries just build a list from the input, so the user doesn't have to care about supplying a multiply-iterable sequence. Of course, generating an in-memory list can be very resource-consuming, while iterating over a file, for example, can be done without storing the whole file in memory. In your case, just changing the generator to a list comprehension should solve the problem.
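If the data doesn't fit in memory, the usual alternative is a small restartable-iterable class: each call to __iter__ creates a fresh generator, so gensim can pass over the data as many times as it needs. A sketch, assuming one whitespace-tokenized sentence per line in the file:

    class SentenceCorpus(object):
        """Restartable iterable: every __iter__ call starts a new pass."""

        def __init__(self, path):
            self.path = path

        def __iter__(self):
            with open(self.path) as f:
                for line in f:
                    yield line.split()

    # model = gensim.models.Word2Vec(SentenceCorpus("corpus.txt"))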
1
25
0
I have an generator (a function that yields stuff), but when trying to pass it to gensim.Word2Vec I get the following error: TypeError: You can't pass a generator as the sentences argument. Try an iterator. Isn't a generator a kind of iterator? If not, how do I make an iterator from it? Looking at the library code, it seems to simply iterate over sentences like for x in enumerate(sentences), which works just fine with my generator. What is causing the error then?
Generator is not an iterator?
0.148885
0
0
6,813
34,167,557
2015-12-08T22:48:00.000
2
1
0
1
python,uwsgi,pyenv
34,168,578
3
true
0
0
I had the same (or better: a similar) problem with uwsgi when upgrading Python from 2.7.3 to 2.7.10: the module that I tried to import was socket (socket.py), which in turn tried to import _socket (_socket.so) - and the unresolved symbol was _PyInt_AsInt. The problem is a mismatch of some functions between Python minor-minor releases (which doesn't break any backward compatibility, BTW). Let me detail. Build time: when your uwsgi was built, the build was against Python 2.7.10 (as you specified). Python could have been compiled/built either statically - most likely; the PYTHON LIBRARY (from now on, I am going to refer to it as PYTHONCORE, as it's named by its creators), in this case libpython2.7.a, is a static lib included in the python executable, resulting in a huge ~6MB executable - or dynamically - PYTHONCORE (libpython2.7.so) is a dynamic library which the python executable (~10KB this time) uses at runtime. Run time: the above uwsgi must run in a Python 2.7.11 environment. Regardless of how Python is compiled, the following thing happened: between 2.7.10 and 2.7.11 some internal functions were added/removed (in our case added) from both PYTHONCORE and the dynamic (or extension) modules (written in C) - the .so files located in ${PYTHON_LIB_DIR}/lib-dynload (e.g. /home/user/.pyenv/versions/2.7.11/envs/master2/lib/python2.7/lib-dynload); any dynamic module (.so) is a client of PYTHONCORE. So, basically, it's a version mismatch (encountered at runtime). For 2.7.10 (which uwsgi was compiled against): PYTHONCORE doesn't export _PyCodecInfo_GetIncrementalEncoder, and _io.so (obviously) doesn't use the exported function (so, no complaints at import time). For 2.7.11 (which uwsgi is run against): PYTHONCORE - still 2.7.10, as it was "embedded" in uwsgi at compile (build) time - doesn't export _PyCodecInfo_GetIncrementalEncoder, but _io.so uses/needs it. The result is a situation where a Python 2.7.11 dynamic module is used against a Python 2.7.10 runtime, which is unsupported. In conclusion, make sure that your uwsgi build machine is in sync (from a Python PoV) with the run machine - or, in other words, build uwsgi with the same Python version you intend to run it with!
1
1
0
When I start uwsgi 2.0.11.2 under pyenv 2.7.11 I get: ImportError: /home/user/.pyenv/versions/2.7.11/envs/master2/lib/python2.7/lib-dynload/_io.so: undefined symbol: _PyCodecInfo_GetIncrementalEncoder Also, uwsgi prints Python version: 2.7.10 (default, May 30 2015, 13:57:08) [GCC 4.8.2] Not sure how to fix it.
uwsgi fails under pyenv/2.7.11 with _io.so: undefined symbol: _PyCodecInfo_GetIncrementalEncoder
1.2
0
0
23,361
34,168,019
2015-12-08T23:23:00.000
0
0
1
0
python,python-3.x
52,257,045
4
false
0
0
On macOS, I have the Anaconda package manager installed, so after pip install 3to2 I found the executable at /Users/<username>/anaconda3/bin/3to2. Run ./3to2 to convert stdin (-), or files or directories given as arguments. By default, the tool outputs a unified diff-formatted patch on standard output and a "what was changed" summary on standard error, but the -w option can be given to write back converted files, creating .bak-named backup files. On Windows, it's in C:\Python27\Scripts\ as a file named 3to2. Run it by invoking python 3to2 <filetoconvert> to display the diff on the console, or with the -w option to write the converted code back to the same file.
2
9
0
I have to convert some of my Python 3 files to Python 2 for class, but I can't figure out how to use 3to2. I did pip install 3to2 and it said it was successful. It installed 2 folders: 3to2-1.1.1.dist-info and lib3to2. I have tried doing python 3to2 file_name and python lib3to2 file_name. I also tried renaming the folder to 3to2.py, like I saw in an answer to someone else's question; it still didn't work. What is the correct way to use this?
How to use 3to2
0
0
0
16,706
34,168,019
2015-12-08T23:23:00.000
11
0
1
0
python,python-3.x
38,457,022
4
false
0
0
Had the same question and here's how I solved it:
1. pip install 3to2
2. Rename 3to2 to 3to2.py (found in the Scripts folder of the Python directory)
3. Open a terminal window and run 3to2.py -w [file]
NB: You will either have to be in the same folder as 3to2.py or provide the full path to it when you try to run it. The same goes for the path to the file you want to convert. The easy way around this is to copy 3to2.py into the folder your .py file is in and just run the command inside that folder. Use 3to2.py --help for info on how the script works.
2
9
0
I have to convert some of my Python 3 files to Python 2 for class, but I can't figure out how to use 3to2. I did pip install 3to2 and it said it was successful. It installed 2 folders: 3to2-1.1.1.dist-info and lib3to2. I have tried doing python 3to2 file_name and python lib3to2 file_name. I also tried renaming the folder to 3to2.py, like I saw in an answer to someone else's question; it still didn't work. What is the correct way to use this?
How to use 3to2
1
0
0
16,706
34,168,281
2015-12-08T23:47:00.000
0
0
0
1
python,service,supervisord
34,168,319
1
true
0
0
Run it with nohup. You should detach the process from your current terminal, or it will terminate as soon as you exit.
1
0
0
I'm running a supervisor instance on a VPS, but it seems to exit when I exit the terminal. Why is that happening?
Supervisor instance on VPS (digitalocean) exits when exiting the terminal
1.2
0
0
47
34,169,111
2015-12-09T01:16:00.000
1
0
0
0
python,tcp,wireshark,pcap
34,496,093
1
false
0
0
I did it in C, but the general idea is that you need to keep track of TCP sequence numbers (there are two streams for each TCP session: one from client to server, the other from server to client). This is a little complex. For each stream, you keep a pointer tracking the consecutive sequence numbers sent so far, and use a linked list to keep track of the pairs (sequence number + data length) that have a gap to the pointer. Each time you see a new data packet in the stream, you either update the pointer or add to the linked list. Note that after you update the pointer, you should check the linked list to see if some of the gaps have closed. You can tell retransmitted data packets apart this way. Hope it helps. Good luck.
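As a rough Python counterpart, here is a sketch using the dpkt library with a simpler heuristic: a data-bearing segment that doesn't advance the highest sequence number seen in its direction is counted as a likely retransmission. It assumes an Ethernet-framed pcap, ignores sequence-number wraparound, and can't tell congestion from loss (a pcap alone generally can't):

    import dpkt  # pip install dpkt

    def count_retransmissions(pcap_path):
        highest_end = {}   # flow -> highest (seq + payload length) seen
        retrans = 0
        with open(pcap_path, "rb") as f:
            for ts, buf in dpkt.pcap.Reader(f):
                eth = dpkt.ethernet.Ethernet(buf)
                if not isinstance(eth.data, dpkt.ip.IP):
                    continue
                ip = eth.data
                if not isinstance(ip.data, dpkt.tcp.TCP):
                    continue
                tcp = ip.data
                if not tcp.data:            # ignore pure ACKs etc.
                    continue
                flow = (ip.src, tcp.sport, ip.dst, tcp.dport)
                end = tcp.seq + len(tcp.data)
                if flow in highest_end and end <= highest_end[flow]:
                    retrans += 1            # doesn't advance the stream: likely a resend
                highest_end[flow] = max(highest_end.get(flow, 0), end)
        return retrans

    print(count_retransmissions("capture.pcap"))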
1
0
0
The data was captured from an LTE network. I don't know how to recognize and count TCP retransmissions of a single TCP flow using Python. Could I recognize the type of retransmission, i.e. whether it's due to congestion or packet loss? Thanks.
How could I find TCP retransmission and packet loss from pcap file?
0.197375
0
1
1,227
34,169,547
2015-12-09T02:02:00.000
2
0
1
0
python,python-2.7
34,169,656
3
false
0
0
There are lots of ways. You can make a variable part of a class - not a member of the object, but of the class itself. It is initialized when the class is defined. Similarly, you can put a variable at the outer level of a module. It will belong to the module and will be initialized when the module is imported the first time. Finally, there's the hack of defining an object as a default parameter to a function. The variable will be initialized when the function is defined and will belong to the function. You will only be able to access it with the parameter name, and it can be overridden by the caller.
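Sketches of all three, with a hypothetical load_metadata() and "meta.txt" standing in for your real list_from_file() logic:

    def load_metadata(path):
        # stand-in for your real list_from_file() logic
        with open(path) as f:
            return [line.strip() for line in f]

    # 1. Module level: runs once, when the module is first imported
    META = load_metadata("meta.txt")

    # 2. Class attribute: belongs to the class itself, not to instances
    class Matcher(object):
        metadata = None

        @classmethod
        def init_once(cls, path):
            if cls.metadata is None:        # only loads on the first call
                cls.metadata = load_metadata(path)

    # 3. Mutable default argument: evaluated once, at function definition
    def create_match(cache=[]):
        if not cache:                       # empty only on the first call
            cache.extend(load_metadata("meta.txt"))
        # ... perform matching against cache ...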
1
4
0
I want to match my time-series data to metadata from a given file. In my code, the main function calls the "create_match()" function every 1 minute. Inside "create_match()", there is a "list_from_file()" function to read data from a file and store it in lists to perform matching. The problem is that my code is inefficient, since every minute it re-reads the file and rewrites the same lists. I want to read the file only one time (to initialize the lists only once), and after that skip the "list_from_file()" function. I do not want to just move this task to the main function and pass the lists through function arguments. Does Python have a special variable like a static variable in C programming?
Python: is there any way to initialize a variable only one time?
0.132549
0
0
18,130
34,172,598
2015-12-09T07:03:00.000
0
0
0
0
python-3.x,selenium-webdriver
40,609,613
1
false
0
0
Which driver are you using for your script? I've run into this problem with Chromedriver and assumed it was due to a buffer/latency issue when posting data to a text field. By breaking this data up into smaller chunks, I resolved the issue. I did this manually, as it was just one section, but you might want to create a function that takes larger data and splits it into smaller chunks if you do this a lot. I've also heard it is specific to Chrome, but I have not had a chance to test this in other browsers.
1
1
0
Rarely, I see the below message during execution of the script. Can someone tell me the reason for this? Is it due to my script, or because of the application I am working on? [7192:4260:1209/122546:ERROR:latency_info.cc(157)] RenderWidgetHostImpl::OnSwapCompositorFrame, LatencyInfo vector size 323 is too big.
Warning messages during Python Selenium script: LatencyInfo vector size
0
0
1
1,344
34,173,343
2015-12-09T07:46:00.000
1
0
0
0
python,web-scraping
34,173,449
2
false
0
0
You can use Selenium web drivers to actually use browsers to make the requests for you. In such cases, I usually inspect the request made by Chrome in my dev tools' "Network" tab. Then I right-click on the request and copy it as cURL to run on the command line, to see if it works perfectly. If it does, then I can be certain it can be achieved using Python's requests package.
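A sketch of that workflow's final step with the requests package; the URL, headers and form fields below are placeholders for values you'd copy by hand from the dev tools:

    import requests

    session = requests.Session()
    session.headers.update({
        # Values lifted from Chrome's "Network" tab for the real request
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
        "Referer": "https://example-bank.invalid/login",
        "Accept-Language": "en-US,en;q=0.9",
    })

    resp = session.post(
        "https://example-bank.invalid/login",
        data={"username": "me", "password": "secret"},
    )
    print(resp.status_code)
    # The Session keeps cookies, so follow-up requests look like the same browser.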
1
1
0
I am currently trying to write a small bot for a banking site that doesn't supply an API. Nevertheless, the security of the login page seems a little more ingenious than I'd have expected: even though I don't see any significant difference between Chrome and Python, it doesn't let requests made by Python through (I accounted for things such as headers and cookies). I've been wondering if there is a tool to record requests in Firefox/Chrome/any browser and replicate them in Python (or any other language)? Think Selenium, but without the overhead of Selenium :p
Easily replicate browser requests with python?
0.099668
0
1
779
34,173,840
2015-12-09T08:17:00.000
1
0
1
0
python,plot,graphing,python-ggplot
34,457,756
1
true
0
0
Yes. They are currently doing a major rewrite.
1
2
1
Python ggplot is great, but is missing many customization options. The commit history on GitHub for the past year does not look very promising... Does anyone know if it is still being developed?
Is python ggplot still being developed?
1.2
0
0
154
34,177,156
2015-12-09T11:02:00.000
1
1
0
1
python,firebase,backend,iot,real-time-data
34,178,035
2
false
1
0
You're comparing apples to oranges here in your options. The first three are entirely under your control, because, well, you own the server. There are many ways to get this wrong and many ways to get this right, depending on your experience and what you're trying to build. The last three would fall under Backend-as-a-Service (BaaS). These let you quickly build out the backend of an application without worrying about all the plumbing. Your backend is operated and maintained by a third party, so you lose some control compared to your own server. As for "at the best price": AWS, Azure, GAE, Firebase and PubNub all have free quotas. If your application becomes popular and you need to scale, at some point the BaaS options might end up being more expensive.
1
0
0
I'm working on an IoT app which will do the majority of the basic IoT operations, like reading from and writing to "things". Naturally, it only makes sense to have an event-driven server rather than a polling server for real-time updates. I have looked into many options that are available and read many articles/discussions, but couldn't reach a conclusion about the technology stack to use for the backend. Here are the options that I came across: Meteor; Python + Tornado; Node.js + Socket.io; Firebase; PubNub; Python + Channel API (Google App Engine). I want to have as much control over the server as possible, and of course at the best price. What options do I have? Am I missing something? Personally, I prefer having a backend in Python, from my prior experience.
Real-time backend for IoT App
0.099668
0
0
586
34,179,566
2015-12-09T13:04:00.000
2
0
0
0
python,django,amazon-web-services,pypy,aws-cli
34,180,615
1
true
1
0
The best way to run PyPy (also on AWS) is to install it (pypy is bundled these days with the default AWS distribution) and use virtualenv to manage python dependencies.
1
3
0
I have a Django application that does some computationally intensive tasks. To make its execution faster, I run it with PyPy (the alternative Python implementation that runs scripts faster). I have to deploy it on Amazon AWS (Elastic Beanstalk). I want to deploy it such that it runs on PyPy on AWS (and not on the conventional/default Python).
Run a Django app on PyPy on Amazon AWS
1.2
0
0
1,018
34,184,735
2015-12-09T17:05:00.000
0
0
1
0
python,django,dictionary,global-variables
34,184,791
2
false
1
0
You can put the dictionary in your static directory and put the path in your settings.py file. Then when you try to use it, you load the dictionary in your views.py.
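A sketch of that idea; WORD_DICT_PATH is an assumed custom setting, and the module-level load runs once per process at import time:

    # myapp/dictionary.py
    import json

    from django.conf import settings

    with open(settings.WORD_DICT_PATH) as f:   # path configured in settings.py
        WORDS = json.load(f)

    # myapp/views.py
    # from myapp.dictionary import WORDS   # read-only lookups from any view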
1
1
0
I want to use a word (English) dictionary in my Django application. However, Django does not recommend using global variables because of its threading model. This dictionary does not have thread-safety issues: I want to load it at startup, and after that it is constant (it will be read from different Django views). Is there any way to achieve this?
How to use global variables in Django?
0
0
0
273
34,188,594
2015-12-09T20:43:00.000
1
0
1
0
python,c++,qt,pyqt,qlistview
34,188,907
1
true
0
1
self.listWidget.setAlternatingRowColors(True) will give you alternating colors for the rows.
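setAlternatingRowColors() comes from QAbstractItemView, so it works on QListView too; a minimal runnable sketch using QListWidget:

    import sys

    from PyQt5.QtWidgets import QApplication, QListWidget

    app = QApplication(sys.argv)
    lw = QListWidget()
    lw.addItems(["first", "second", "third", "fourth"])
    lw.setAlternatingRowColors(True)   # rows alternate between two palette colors
    lw.show()
    sys.exit(app.exec_())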
1
0
0
I use QListView (in PyQt5) to display strings. I want the background color of the items in that list to alternate between two colors, to make it easier to read. I tried Qt.DecorationRole, but that only creates an "icon" on the left side of each item.
QListView with row background in two different colors
1.2
0
0
963
34,190,298
2015-12-09T22:33:00.000
3
0
0
0
python,tensorflow,tensorboard
61,137,400
3
false
0
0
I advise always starting TensorBoard with --reload_multifile True to force it to reload all event files.
2
19
1
What is the best way to quickly see the updated graph in the most recent event file in an open TensorBoard session? Re-running my Python app results in a new log file being created with potentially new events/graph. However, TensorBoard does not seem to notice those differences, unless restarted.
What's the best way to refresh TensorBoard after new events/logs were added?
0.197375
0
0
21,777
34,190,298
2015-12-09T22:33:00.000
0
0
0
0
python,tensorflow,tensorboard
44,359,908
3
false
0
0
My issue is different. Each time I refreshed 0.0.0.0:6006, it seemed the new graph kept getting appended to the old one, which was quite annoying. After killing the process and deleting the old logs several times, I realized the issue came from writer.add_graph(sess.graph), because I hadn't reset the graph in my Jupyter notebook. After resetting, TensorBoard showed the newest graph.
2
19
1
What is the best way to quickly see the updated graph in the most recent event file in an open TensorBoard session? Re-running my Python app results in a new log file being created with potentially new events/graph. However, TensorBoard does not seem to notice those differences, unless restarted.
What's the best way to refresh TensorBoard after new events/logs were added?
0
0
0
21,777
34,192,290
2015-12-10T01:41:00.000
43
0
1
0
python,ipython-notebook
34,192,474
3
true
0
0
Using a Jupyter notebook, you can click on a cell, press Esc and then r. That converts it to a "raw" cell. A similar thing can be done to convert it back: Esc + y. No comments needed, just key presses. Within the Jupyter notebook, go to Help -> Keyboard Shortcuts for more. Here's a snippet of Command Mode (press Esc to enable):
↩ : enter edit mode
⇧↩ : run cell, select below
⌃↩ : run cell
⌥↩ : run cell, insert below
y : to code
m : to markdown
r : to raw
1
32
0
In my IPython notebook, there is a set of cells that serve as a preliminary inspection. Now I want to turn them off, since after running them I know the status of the dataset, but I also want to keep them, so other people using this notebook can have this functionality. How can I do it? Is there any example of doing it? I could comment out these cells, but then switching between on and off would be quite laborious, and may not be very convenient for other people. I could abstract them into a function, but that itself has some methods, so the code would be quite convoluted and might be hard to read.
Ipython Notebook: Elegant way of turning off part of cells?
1.2
0
0
21,607
34,197,011
2015-12-10T08:32:00.000
1
0
0
0
python,django,django-migrations
34,878,492
2
false
1
0
First, I'd really look (very hard) for a way to launch a script on the client side that does as masnun suggests. Second, if that does not work, then I'd try the following:
1. Configure all client databases on your local machine in the settings variable DATABASES.
2. Make sure you can connect to all the client databases; this may need some fiddling.
3. Run the "manage.py migrate" process with the extra flag --database=mydatabase (where "mydatabase" is the alias provided in the configuration) for EACH client database, as sketched below.
I have not tried this, but I don't see why it wouldn't work...
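Step 3 can be collapsed into one command with a small script; this assumes every client database has an alias in DATABASES on the machine you run it from:

    # migrate_all.py -- e.g. run it via "python manage.py shell",
    # or wrap it in a custom management command
    from django.conf import settings
    from django.core.management import call_command

    for alias in settings.DATABASES:
        print("migrating %s ..." % alias)
        call_command("migrate", database=alias)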
2
1
0
We are developing a b2b application with django. For each client, we launch a new virtual server machine and a database. So each client has a separate installation of our application. (We do so because by the nature of our application, one client may require high use of resources at certain times, and we do not want one client's state to affect the others) Each of these installations are binded to a central repository. If we update the application code, when we push to the master branch, all installations detect this, pull the latest version of the code and restart the application. If we update the database schema on the other hand, currently, we need to run migrations manually by connecting to each db instance one by one (settings.py file reads the database settings from an external file which is not in the repo, we add this file manually upon installation). Can we automate this process? i.e. given a list of databases, is it possible to run migrations on these databases with a single command?
Running django migrations on multiple databases simultaneously
0.099668
1
0
510
34,197,011
2015-12-10T08:32:00.000
3
0
0
0
python,django,django-migrations
34,197,250
2
false
1
0
If we update the application code, when we push to the master branch, all installations detect this, pull the latest version of the code and restart the application. I assume that you have some sort of automation to pull the codes and restart the web server. You can just add the migration to this automation process. Each of the server's settings.py would read the database details from the external file and run the migration for you. So the flow should be something like: Pull the codes Migrate Collect Static Restart the web server
2
1
0
We are developing a b2b application with django. For each client, we launch a new virtual server machine and a database. So each client has a separate installation of our application. (We do so because by the nature of our application, one client may require high use of resources at certain times, and we do not want one client's state to affect the others) Each of these installations are binded to a central repository. If we update the application code, when we push to the master branch, all installations detect this, pull the latest version of the code and restart the application. If we update the database schema on the other hand, currently, we need to run migrations manually by connecting to each db instance one by one (settings.py file reads the database settings from an external file which is not in the repo, we add this file manually upon installation). Can we automate this process? i.e. given a list of databases, is it possible to run migrations on these databases with a single command?
Running django migrations on multiple databases simultaneously
0.291313
1
0
510
34,198,892
2015-12-10T10:04:00.000
-3
0
1
1
python,ubuntu
55,326,327
6
false
0
0
It's simple, just try: sudo apt-get remove python3.7 (or whichever versions you want to remove).
4
13
0
I have recently gotten hold of a RackSpace Ubuntu server, and it has Pythons all over the place: IPython in 3.5, pandas in 3.4 & 2.7, and modules I need, like pyodbc etc., only in 2.7. Therefore, I am keen to clean up the box and, as a 2.7 user, keep everything in 2.7. So the key question is: is there a way to remove both 3.4 and 3.5 efficiently at the same time while keeping Python 2.7?
Ubuntu, how do you remove all Python 3 but not 2
-0.099668
0
0
127,783