Dataset schema (column · dtype · range):
Q_Id · int64 · 337 to 49.3M
CreationDate · stringlengths · 23 to 23
Users Score · int64 · -42 to 1.15k
Other · int64 · 0 to 1
Python Basics and Environment · int64 · 0 to 1
System Administration and DevOps · int64 · 0 to 1
Tags · stringlengths · 6 to 105
A_Id · int64 · 518 to 72.5M
AnswerCount · int64 · 1 to 64
is_accepted · bool · 2 classes
Web Development · int64 · 0 to 1
GUI and Desktop Applications · int64 · 0 to 1
Answer · stringlengths · 6 to 11.6k
Available Count · int64 · 1 to 31
Q_Score · int64 · 0 to 6.79k
Data Science and Machine Learning · int64 · 0 to 1
Question · stringlengths · 15 to 29k
Title · stringlengths · 11 to 150
Score · float64 · -1 to 1.2
Database and SQL · int64 · 0 to 1
Networking and APIs · int64 · 0 to 1
ViewCount · int64 · 8 to 6.81M
43,931,149
2017-05-12T06:53:00.000
0
0
1
0
python,vba
63,744,401
2
false
0
0
I know this is from years ago, but I just stumbled across it and thought I would share my two cents. One possibility is to translate it into VBScript, which is very similar to VBA and does not require Excel etc. in order to run. Whether that works would, of course, depend on what was in the original function.
2
0
0
I bought an API/reference for VBA Excel. Is it possible to use this API in Python 2.x? Put another way: is it possible to import a VBA reference into Python? This is just an idea; I have no clue whether it is even possible. If it is not, is there some nice alternative? Do you have any experience with this? Thanks.
VBA API in Python
0
0
0
498
43,935,258
2017-05-12T10:17:00.000
0
0
1
0
python,multithreading
43,951,426
1
true
0
0
I solved this problem by creating two threads that start at the same time: Thread0 (DoSomething) and Thread1 (TimeManager). Both run as daemons with infinite loops: Thread0 prints the value of a global variable, while Thread1 sleeps for the time interval and then changes the value that Thread0 prints. When you want to change the value, you don't write to the printed variable directly; you change a temporary one, and Thread1 assigns the temporary value to the printed one at the next tick.
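A minimal sketch of that pattern (class and attribute names are illustrative, not from the original post): writers set a pending value at any time, and a manager publishes it only at interval boundaries.

```python
import threading
import time

class IntervalValue:
    """Holds a live value that only updates at interval boundaries.
    Writers set .pending whenever they like; a manager thread copies
    it into .current once per tick, as described in the answer above."""

    def __init__(self, value, interval=1.0):
        self.current = value   # what the worker thread reads/prints
        self.pending = value   # what writers are allowed to change
        self.interval = interval

    def tick(self, n=1):
        # The manager thread's loop body: sleep, then publish.
        for _ in range(n):
            time.sleep(self.interval)
            self.current = self.pending

    def start(self):
        # Run the manager as a daemon so it dies with the program.
        t = threading.Thread(target=self.tick, kwargs={"n": 10 ** 9},
                             daemon=True)
        t.start()
        return t
```

A change to `pending` made mid-interval only becomes visible in `current` at the next tick, which is exactly the delayed-update behaviour the question asks for.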
1
2
0
I have a thread that prints a variable's value (e.g. var_x = "Some string", where var_x is global) in an infinite loop. If I change the value of var_x, the thread prints the changed value instantly. Now assume there are fixed time intervals, e.g. 15 seconds. The result I want: when I change var_x, say in the 5th second of an interval, the thread should not pick up the change instantly but should wait until the interval ends (the next 10 seconds). What is the best approach in Python for this kind of problem?
Execute function if certain time is reached, Python's best approach
1.2
0
0
49
43,939,316
2017-05-12T13:44:00.000
0
0
1
0
python,file
43,939,525
2
false
0
0
Python's processes, threads, and coroutines offer synchronization primitives such as locks, RLocks, conditions, and semaphores. If your threads access one or more shared variables at arbitrary times, each thread should acquire a lock on the variable so that no other thread can access it concurrently.
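The lock-per-shared-variable advice can be sketched like this (a minimal example; the counter stands in for any shared state such as a file handle):

```python
import threading

counter = 0
counter_lock = threading.Lock()

def increment(n):
    """Each thread must hold the lock while touching the shared
    counter; otherwise concurrent read-modify-write can lose updates."""
    global counter
    for _ in range(n):
        with counter_lock:   # acquired and released automatically
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now exactly 4 * 10_000
```

The same `with lock:` pattern around every write is what protects a shared JSON file from interleaved writes, though for multiple processes on a cluster (rather than threads) you would need file or database-level locking instead.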
1
0
1
I'm doing some research in neuroscience, and I'm using Python's tinydb library to keep track of all my model training runs and the data they generate. One issue I realized might come up is training multiple models on a cluster: two threads might try to write to the tinydb JSON file at the same time. Can someone let me know whether this will be a problem?
File write collisions on parallelized python
0
0
0
678
43,939,873
2017-05-12T14:11:00.000
0
0
1
0
python,windows
43,940,430
1
false
0
1
Just an idea; I'm not sure it would work under your specific conditions (PyQt etc.), but couldn't you run it from the pen drive directly? That is, create a Python virtual environment on the pen drive (for example with venv, including all the dependencies) and then launch your program using the Python interpreter inside that environment, or use the environment's interpreter to install the dependencies.
1
0
0
Here is my issue: I'm trying to create a program with PyQt that runs on Windows from a pen drive. My idea is: I plug in my pen drive and everything needed to run the program is there, including Python 3, PyQt, etc. I don't want the user to install all the requirements; I want one executable that installs everything necessary, plus the executable that opens the program, assuming of course that Python 3 is not installed on that Windows machine. How can I do this? Do you have any ideas? Thanks, Gus.
Creating a "pen drive program" with Python
0
0
0
322
43,941,174
2017-05-12T15:15:00.000
2
0
0
0
python,python-3.x,nginx,gunicorn
43,942,327
1
false
1
0
Not quite. Flask is the web app; it gets loaded once, when gunicorn starts up. From then on the Flask app is up and running, and gunicorn answers requests by handing them to the Flask app inside its own Python processes (no network traffic between them). Nginx sits in front of gunicorn and proxies requests between clients and gunicorn, since gunicorn itself is not meant to be an internet-facing web server. So the chain is nginx -> gunicorn -> flask (loaded by gunicorn itself). Gunicorn loads and initialises the Flask app once at startup; doing that on every request would be very slow. Nginx just proxies to gunicorn's listening port and never loads the Flask app itself, which is really just a WSGI-compliant Python web app.
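A minimal nginx server block for this chain might look like the following sketch (the server name is a placeholder; the upstream address assumes gunicorn's default bind of 127.0.0.1:8000):

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder

    location / {
        # nginx only proxies; gunicorn's workers run the Flask app
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Gunicorn would be started separately, e.g. `gunicorn -w 4 myapp:app` (module and app names are placeholders), and loads the Flask app once per worker at startup, not per request.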
1
1
0
I was wondering how exactly a request is handled. I think it goes something like this: nginx receives the request, does initial handling based on its configuration, and passes it to gunicorn; gunicorn receives it and initiates an instance of the Flask app with the request data; the Flask app receives the request data and does the work it was programmed to do. Is that right? Does a new instance of the Flask app get initiated on each request?
How does the Flask-Gunicorn-Nginx setup work under the hood?
0.379949
0
0
538
43,942,850
2017-05-12T16:52:00.000
0
0
1
0
python,datetime
43,943,119
1
false
0
0
datetime.strptime('19950129000000', "%Y%m%d%H%M%S")
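The answer's one-liner handles the parsing step; combined with the strftime step from the question, a full working converter (the function name is illustrative) might be:

```python
from datetime import datetime

def dt_conversion(date):
    # strptime takes %-directives, not the "YYYYMMddhhmmss" spelling
    # used in the question's attempt.
    od = datetime.strptime(date, "%Y%m%d%H%M%S")
    return od.strftime("%d/%m/%Y")
```

Note that `%d` and `%m` are zero-padded, so "19950129000000" comes out as "29/01/1995" rather than "29/1/1995".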
1
0
0
I have a series of tables that I am using to create a map in ArcGIS desktop. In the attribute table there is a date column in the format "19950129000000" and I would like to convert this format to something more meaningful such as "29/1/1995". The column says it is in a string format, but the metadata says it is in a date format. I have done something similar before but I am having trouble getting it to work. I've tried: def dtConversion(date): from datetime import datetime od = datetime.strptime(date, "YYYYMMddhhmmss") nd = datetime.strftime(od, "%d/%m/%Y") return nd esri_field_calculator_splitter dtConversion(!CMPLDT!)
Changing date formats with python from one to another
0
0
0
360
43,942,997
2017-05-12T17:01:00.000
0
1
0
0
python,unit-testing,tdd
56,775,200
3
false
0
0
Just run the tests with the --last-failed option (this requires pytest rather than plain unittest; pytest can run unittest-style tests). The related --failed-first option runs the previously failing tests first and then the rest of the suite.
1
5
0
I have a large python test file using unittest that I run from the command line. Some tests take a while to run. This is a mild pain point because I'm often only concerned with the last test I added. What I want is this: add test. run tests (one fails because I haven't written the code to make it pass) implement the behaviour run only the test that failed last time fix the silly error I made when implementing the code run only the failing test, which passes this time run all the tests to find out what I broke. Is it possible to do this from the command line?
Python unittest, running only the tests that failed
0
0
0
1,455
43,944,313
2017-05-12T18:25:00.000
0
0
1
0
python
43,944,353
1
true
0
0
The flow is quite simple: you create your library; you define a setup.py file (versioning is important here); you build your library; you upload it to a PyPI server (either the public one or a private one). Other applications then simply bump the version and pip install from that server (pip has a flag for pointing at a different index). Start by learning either distutils or setuptools (my favourite) for the packaging and build steps. For a private PyPI server you have to set one up yourself; I've never done it, but I assume it can't be hard. Search for "pypi server".
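A minimal setup.py for the packaging step might look like this sketch (the package name is a placeholder):

```python
# setup.py -- minimal packaging metadata; bump `version` per release
from setuptools import setup, find_packages

setup(
    name="company-common-utils",   # hypothetical package name
    version="0.1.0",
    packages=find_packages(),
    install_requires=[],           # runtime dependencies go here
)
```

A common workflow is to build a source distribution with `python setup.py sdist` and upload it with twine; consumers then install with `pip install company-common-utils --extra-index-url <private index URL>`.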
1
0
0
In Python we run pip install to install external libraries and modules. If I have to create a reusable Python module that can be used by several different APIs across an enterprise, is there a standard way to create and ship it so that consuming applications just install it and import the module, rather than taking the source code from a common repository, building a local module out of it, and then importing that? Can someone educate me on the best practices in Python for this use case?
What are different ways I can ship python module as a reusable module
1.2
0
0
44
43,944,404
2017-05-12T18:31:00.000
0
0
0
0
python,amazon-web-services,aws-lambda,amazon-rds
56,725,577
2
false
0
0
AWS recommends making a global connection (before your handler function definition) to improve performance: a new connection does not have to be established, and the previous connection to the DB is reused, even when multiple invocations of the Lambda run in quick succession. But if your use case involves reading MySQL tables through Lambda, especially tables that are regularly updated, I'd recommend creating the connection object locally (inside the handler function) and closing it after you run your queries. This is much in line with @dnevins' response and was the only way it worked for me as well. Hope this helps!
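The handler-local pattern described above can be sketched as follows. To keep the sketch self-contained, sqlite3 stands in for pymysql (the connect/cursor/close calls are analogous), and the database path is a placeholder for the RDS endpoint:

```python
import os
import sqlite3
import tempfile

# Stand-in for the RDS endpoint / pymysql.connect(...) parameters.
DB_PATH = os.path.join(tempfile.gettempdir(), "lambda_demo.db")

def handler(event, context):
    """Open a fresh connection per invocation so every run sees rows
    committed by other clients (e.g. MySQL Workbench) in the meantime."""
    conn = sqlite3.connect(DB_PATH)
    try:
        cur = conn.cursor()
        cur.execute("SELECT COUNT(*) FROM items")
        (count,) = cur.fetchone()
        return count
    finally:
        conn.close()   # always release, even if the query raises
```

Because the connection is created inside the handler, no stale snapshot from a frozen execution environment can mask newly committed rows.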
1
3
0
I have an AWS Lambda implemented in Python/pymysql with an AWS RDS MySQL instance as the backend. It connects and works well, and I can also call the Lambda from my Android app. The problem: after I successfully insert a value into the RDS MySQL tables from MySQL Workbench on my local machine and then run the Lambda from the AWS console, it does not show the newly inserted value. In the Lambda code I am not closing the connection or cursor. But if I edit the Lambda function in the console (just insert a space) and run it again, it fetches the newly inserted value. How do I configure or code the Lambda so it fetches DB values in real time?
Python Aws lambda function not fetching the rds Mysql table value in realtime
0
1
0
878
43,945,538
2017-05-12T19:54:00.000
0
0
0
0
python,django,base64,microsoft-graph-api
43,986,333
1
true
1
0
Well, I got confused between the Microsoft API manual and the base64 information everywhere. I just needed to write the raw binary from the Microsoft Graph API to a .jpg file opened in binary mode, and that was it.
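In code, the fix boils down to this sketch (the function name and default path are illustrative; with the requests library, `photo_bytes` would be `resp.content`, not `resp.text`):

```python
def save_photo(photo_bytes, path="photo.jpg"):
    """Write the raw bytes returned by the Graph photo endpoint
    straight to disk; the body is binary JPEG data, not base64."""
    with open(path, "wb") as f:   # "wb": binary mode is the key detail
        f.write(photo_bytes)
    return path
```

No base64 decoding step is involved; decoding the raw bytes as if they were base64 is exactly the confusion the answer describes.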
1
0
0
OK, I am connecting my Django app to the Microsoft Graph API to get my user photo. The problem is that I can't find how to turn the binary data (which is not base64-encoded) into a file I can use. I have been reading and searching for about two hours with no luck. Thanks for your help.
Microsoft Graph binary photo to file in Django
1.2
0
0
116
43,947,405
2017-05-12T22:42:00.000
5
0
1
0
python,multithreading,coroutine,eventlet
43,962,169
1
true
0
1
You wrote the answer yourself; I can only rephrase it. For Eventlet, Gevent, Twisted, asyncio, and other cooperative multitasking libraries, we use the term "blocking" to mean that a call blocks everything: an unpatched time.sleep(1) blocks all coroutines/greenthreads, as opposed to OS thread semantics where it would block only the calling OS thread and let the others continue. To distinguish things that block an OS thread from things that block a coroutine/greenthread, we use the term "yielding": a yielding function hands execution to the rest of the coroutines while blocking (due to Python execution semantics) only the calling coroutine. Armed with that powerful terminology: tpool.execute() turns a blocking call into a yielding one. Combined as eventlet.spawn(tpool.execute, fun, ...), it would not block even the calling coroutine; maybe you'll find that a helpful combination. And patches are always welcome. Eventlet is a great library because it contains the combined effort of many great people.
1
2
0
I am trying to understand what eventlet.tpool is useful for. The docs say that tpool.execute() lets you take a blocking function and run it in a new thread. However, the tpool.execute() method itself blocks until the thread is complete! So how is this possibly useful? If I have some blocking/long running function myfunc() and call it directly, it will block. If I call it inside tpool.execute(myfunc) then the tpool.execute(myfunc) call will block. What exactly is the difference? The only thing I can guess is that when myfunc() is called directly, it not only blocks this coroutine but also prevents other coroutines from running, while calling tpool.execute() will block the current coroutine but somehow yields so that other coroutines can run. Is this the case? Otherwise I don't see how tpool can be useful.
How is eventlet tpool useful?
1.2
0
0
1,179
43,954,187
2017-05-13T14:16:00.000
-2
0
0
0
python,opencv,image-processing,distance
43,954,917
1
false
0
0
I am sorry, but finding a distance is a metrology problem, so you need to calibrate your camera. Calibration is a relatively easy process and is necessary for any measurement. Assuming you have just one calibrated camera whose orientation/position is fixed relative to the ground plane, it is possible to calculate the distance between the camera and somebody's feet (assuming the feet are visible).
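Once the camera is calibrated, the simplest version of such a computation is the pinhole relation. This sketch assumes a known real-world object height (e.g. an average person) and a focal length in pixels obtained from calibration:

```python
def distance_m(focal_px, real_height_m, pixel_height):
    """Pinhole model: pixel_height / focal_px = real_height_m / distance,
    so distance = focal_px * real_height_m / pixel_height."""
    return focal_px * real_height_m / pixel_height
```

For example, with a calibrated focal length of 1000 px, a 1.7 m tall person whose HOG bounding box is 170 px tall is roughly 10 m away. Without calibration (or a known reference size), no such estimate is possible from a single camera.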
1
0
1
I want to find the distance between the camera and the people (detected using the HOG descriptor) in front of it. I'm looking for a more subtle approach than calibrating the camera, without knowing any distances beforehand. This falls under the same scenario as an autonomous car finding the distance to the car in front. Can someone help me out with sample code or an explanation of how to do this?
How to find the distance between the camera and a detected object using OpenCV in Python?
-0.379949
0
0
2,354
43,954,548
2017-05-13T14:56:00.000
3
0
1
0
python,numpy
61,691,668
2
false
0
0
math is a standard library, yet it still does not work with Foobar.
1
5
1
I am working on a problem (doomsday_fuel) in Python and I need matrices, so I would like to import numpy. I have solved the problem and it runs perfectly on my own computer, but Google returns the error: ImportError: No module named numpy [line 3]. The beginning of my code looks like: import fractions / from fractions import Fraction / import numpy as np. I have checked constraints.txt and it does not seem to restrict numpy: "Your code will run inside a Python 2.7.6 sandbox. Standard libraries are supported except for bz2, crypt, fcntl, mmap, pwd, pyexpat, select, signal, termios, thread, time, unicodedata, zipimport, zlib." Does anyone have an idea how or why this would happen, or what steps I could take to ask Google about it?
Google foo.bar Challenge Issue: Can't import libraries
0.291313
0
0
4,751
43,956,485
2017-05-13T18:16:00.000
0
0
1
0
python-3.x,anaconda
43,956,833
1
false
0
0
Anaconda is certainly the best way. Do you start it using the Anaconda Navigator? What is the error message you get? Jupyter starts in a virtual environment, so you shouldn't see that problem.
1
0
0
I have installed Python 3.x. If I then install Anaconda on top of it, it does not work; but when I uninstall Python 3.x and then install Anaconda, it works. What is the reason for this? I want to use the machine-learning library scikit-learn. Is there a way to install scikit-learn with all its dependencies?
Anaconda is not working in windows
0
0
0
608
43,956,820
2017-05-13T18:49:00.000
0
1
0
1
python,python-3.x,emacs
43,966,958
2
false
0
0
Refer to @Ehvince's comment. Make sure that pylint is, in fact, installed, using the command line.
2
0
0
I'm using Emacs 24 and elpy to run some Python 3 code. However, after I open a shell with C-U-C-C-C-Z and then run my code with C-U-C-C-C-C, I get the error in my command line: Cannot open load file: no such file or directory, pylint This is odd, as I've made no recent changes to Emacs, but it always tends to be finicky about if it wants to run any code. The python shell works fine, so that shouldn't be the issue. Thanks.
Emacs fails to run Python code
0
0
0
277
43,956,820
2017-05-13T18:49:00.000
1
1
0
1
python,python-3.x,emacs
43,969,466
2
true
0
0
Do you mean that it used to work? To me this error means that the elisp pylint package was not installed or not "required". Try installing it with M-x package-install (and be sure to have the PyPI pylint package installed in the current virtual environment too, if you use one inside Emacs; this is a current elpy installation shortcoming). (Made an answer of my comment.)
2
0
0
I'm using Emacs 24 and elpy to run some Python 3 code. However, after I open a shell with C-U-C-C-C-Z and then run my code with C-U-C-C-C-C, I get the error in my command line: Cannot open load file: no such file or directory, pylint This is odd, as I've made no recent changes to Emacs, but it always tends to be finicky about if it wants to run any code. The python shell works fine, so that shouldn't be the issue. Thanks.
Emacs fails to run Python code
1.2
0
0
277
43,962,568
2017-05-14T09:53:00.000
1
0
1
0
python,spyder
49,761,138
2
false
0
0
You need to hold the fn key and then hit F5. To run: fn + F5.
2
2
0
I've checked the shortcuts for Spyder on Mac: F5 runs the file, F9 runs the selection or current line. However, those shortcuts are not working in my Spyder! Every time, I have to select the lines I want to run (or select all my code) and press cmd+enter, which is not efficient. I tried the keyboard-shortcut settings but couldn't change the defaults. How can I make F5 run the file and F9 run the current line just by placing my cursor on it? PS: it worked fine in Windows 10 when I hit F5 or F9; I didn't even need to highlight the line I wanted to run, only put the cursor on it.
python spyder 3.1.4 shortcut (macOS Sierra 10.12.4)
0.099668
0
0
6,100
43,962,568
2017-05-14T09:53:00.000
0
0
1
0
python,spyder
61,897,463
2
false
0
0
Alt + Enter runs the code; alternatively, select all and then run.
2
2
0
I've checked the shortcuts for Spyder on Mac: F5 runs the file, F9 runs the selection or current line. However, those shortcuts are not working in my Spyder! Every time, I have to select the lines I want to run (or select all my code) and press cmd+enter, which is not efficient. I tried the keyboard-shortcut settings but couldn't change the defaults. How can I make F5 run the file and F9 run the current line just by placing my cursor on it? PS: it worked fine in Windows 10 when I hit F5 or F9; I didn't even need to highlight the line I wanted to run, only put the cursor on it.
python spyder 3.1.4 shortcut (macOS Sierra 10.12.4)
0
0
0
6,100
43,964,318
2017-05-14T13:15:00.000
0
0
0
0
python,openerp
44,001,967
1
true
1
0
It's really a false problem: the "Application" field in the form corresponds to an application category, not to an application (module) itself. A user group (from the menu Settings -> Users -> Groups) is applied to a set of applications (modules), not to a single one. If you want a group for a single module, name the category attribute in the manifest file the same as the application (module name); then look up your application's category and choose it from the list.
1
0
0
I've created a new module in Odoo 10 with different menus. Now I want to create a user who has access to only some menus of this module. I created the user, but when I tried to create the group (in order to associate the menus with it), I couldn't find my module in the Application field. I have set the application field to true in the manifest file and checked in the database (ir_module) that the field is true. Can someone help me, please?
odoo 10 creating a group and associate it to a custom module
1.2
0
0
413
43,965,401
2017-05-14T15:01:00.000
0
0
0
0
python-3.x,pyqt5,pyuic
43,965,546
2
false
0
1
PyQt5 is not compatible with PyQt4, hence pyuic5 is also not backward compatible. You can install the pyqt4-dev-tools package on a Debian-based system, which includes the pyuic4 utility.
1
0
0
Is there some sort of backward compatibility with the pyuic5 shell command? I updated to PyQt5 a while ago, but I have a few projects running with PyQt4 in a separate Python 3.4 environment. Unfortunately the pyuic4 shell command is now unavailable. How can I convert .ui files to PyQt4-compatible code?
pyuic5 backward compatibility
0
0
0
350
43,967,808
2017-05-14T19:07:00.000
3
0
1
0
python,algorithm,sorting,theory
43,968,199
1
true
0
0
Let's try the following. Suppose we have an m-by-n grid with p different colors. First we work row by row. Column reduction: drag the piece at (1,1) to (1,2), then (1,2) to (1,3), and so on until you reach (1,n); then drag the piece now at (1,1) the same way to (1,n-1); continue until you reach (1,n-p) with the moved piece. The first sweep is guaranteed to move the color originally at (1,1) to (1,n) and to collect all pieces of the same color on its way; the succeeding sweeps collect the remaining colors. After this part, only the last p columns of the row are filled, each with a stack of a different color. Repeat for the remaining m-1 rows; afterwards columns 1 to n-p are guaranteed to be empty. Row reduction: now repeat the same process with the columns, i.e. drag (1,j) down to (m,j) for all j > n-p, then drag (1,j) down to (m-1,j), and so on. After this part, only a p-by-p subgrid is filled. Full grid search: now collect each color by brute force, snaking through the subgrid: traverse its first row left to right, drop down one row, traverse right to left, and so on until the last cell. Repeat this snake p times, each time stopping one field short of the previous endpoint. As a result, only p fields remain filled, each containing a stack of a single color. Problem solved. To estimate the complexity: the column-reduction part requires n + (n-1) + ... + (n-p) = n + np - p(p+1)/2 = O(np) = O(n^2) moves per row, hence O(mn^2) in total. The row-reduction part similarly requires O(nm^2) moves. The final snake requires O(p^2) moves per color, i.e. O(p^3). With q = max(n, m, p), the overall complexity is O(q^3). Note: if we do not know p we can start directly with the full grid search and still remain within O(q^3).
If, however, p << n or p << m, the column and row reduction will reduce the practical cost greatly.
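The per-row move count can be sanity-checked numerically. This sketch uses one plausible counting (each sweep of a piece costs one drag per cell travelled, so sweeps of length n-1 down to n-p); the answer's tally differs by a constant per sweep, but the O(n·p) growth is the same:

```python
def row_sweep_moves(n, p):
    """Drags to gather one row's p colors into its last p cells:
    sweeps of length n-1, n-2, ..., n-p."""
    return sum(range(n - p, n))

def closed_form(n, p):
    # (n-1) + (n-2) + ... + (n-p) simplifies to n*p - p*(p+1)/2
    return n * p - p * (p + 1) // 2
```

With n = m = p = q this per-row cost is O(q^2), matching the O(q^3) total over m rows.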
1
0
1
(Tagged as Python because it's the most pseudocode-like language, in my opinion. I'll explain graphically, and the answer can be graphical/theoretical too; maybe this is the wrong site to post on?) Say I want to write an algorithm that solves a simple digital game for infants (this is not the actual context, which is much more complex). The rules: there is a square grid seen from above, with a colored Lego piece in each spot. You can drag pieces to try to stack them on top of each other. If their colors match, they stack, leaving the spot of the dragged piece empty. If you move a piece to an empty spot, it moves there. If the colors don't match and you drag one onto the other, they switch spots. The number of pieces of each color is randomly generated when a new grid is started. The goal of the game is, obviously, to drag pieces of the same color together until you have only one stack of each color. Now the question: I want to write a script that solves the game, but it is "blind": it cannot see colors or detect when a match occurs. It has to traverse the grid in a way that guarantees it has tried every possible drag. The main obstacle for me is that pieces swap positions when the script guesses the color wrong, and there is no feedback to tell you that you failed. Also, is the complexity of this calculable, or is it too insane?
An algorithm for grouping by trying without feedback
1.2
0
0
58
43,967,864
2017-05-14T19:13:00.000
0
0
0
0
python,nginx,streaming,rtmp
44,003,701
1
false
0
0
First, I would check your firewall: TCP port 1935 needs to be open.
1
0
0
We created an RTMP server using nginx and have a camera streaming video to it. We have a Python program that should connect to the RTMP server and display the video on the computer. When we run the program we keep getting this error: RTMP_Connect0, failed to connect socket. 110 (Connection timed out). I found an RTMP URL online that was used for testing the code, and it works, but our own RTMP server doesn't. Does anyone know of any settings we need to change to get past this error?
RTMP Connection Timeout in Python
0
0
1
972
43,968,046
2017-05-14T19:33:00.000
0
0
0
0
wxpython
43,990,763
2
false
0
1
Yes, you can: pass None as the parent. This has no impact on sizer behavior. Just be sure to destroy the dialog after it closes, to prevent an orphaned dialog.
1
0
0
I would like to create a dialog with wx.Dialog, and I have two questions. Do I have to set a wx.Frame as the parent window, or can I use my wx.Dialog as the main window? And are sizers usable in a wx.Dialog without a parent? Thank you for your answers.
Basic questions about wx.Dialog
0
0
0
137
43,969,594
2017-05-14T23:05:00.000
4
1
1
0
python,parallel-processing,apache-flink
44,073,887
2
false
0
0
There are several interesting points in your question. First, the slots in Flink are the processing capabilities each taskmanager brings to the cluster; they limit both the number of applications that can execute on it and the number of operators executable at the same time. Tentatively, a machine should not offer more processing power than the CPU units present in it. Of course, this holds if every task running on it is CPU-intensive with little IO; if your application's operators block heavily on IO, there is no problem configuring more slots than CPU cores in your taskmanager, as @Till_Rohrmann said. On the other hand, the default parallelism is the number of CPU cores available to your application in the Flink cluster, although you can also set it manually as a parameter when you run your application or specify it in your code. Note that a Flink cluster can run multiple applications simultaneously, and it is not desirable for a single one to block the entire cluster (unless that is the goal), so the default parallelism is usually less than the number of slots available in your cluster (the sum of the slots contributed by your taskmanagers). However, an application with parallelism 4 means, tentatively, that if it contains a stream input().Map().Reduce().Sink(), there should be 4 instances of each operator, so the total number of cores used by the application is greater than 4. But this is something the Flink developers should explain ;)
2
2
0
I'm trying to understand the logic behind Flink's slot and parallelism configurations in the .yaml document. The official Flink documentation states that for each core in your CPU you should allocate one slot and increase the parallelism level by one at the same time, but I suppose this is just a recommendation. If, for example, I have a powerful CPU (say, the newest i7 at max GHz), that's different from having an old CPU with limited GHz, so running more slots and more parallelism than my system's core count isn't irrational. But is there any way, other than just testing different configurations, to check my system's maximum capabilities with Flink? For the record, I'm using Flink's batch Python API.
Flink Slots/Parallelism vs Max CPU capabilities
0.379949
0
0
1,748
43,969,594
2017-05-14T23:05:00.000
5
1
1
0
python,parallel-processing,apache-flink
43,973,809
2
false
0
0
It is recommended to assign each slot at least one CPU core because each operator is executed by at least one thread. Provided that you don't execute blocking calls in your operators, and the bandwidth is high enough to feed the operators constantly with new data, one slot per CPU core should keep your CPUs busy. If, on the other hand, your operators issue blocking calls (e.g. communicating with an external DB), it can sometimes make sense to configure more slots than you have cores.
2
2
0
I'm trying to understand the logic behind Flink's slot and parallelism configurations in the .yaml document. The official Flink documentation states that for each core in your CPU you should allocate one slot and increase the parallelism level by one at the same time, but I suppose this is just a recommendation. If, for example, I have a powerful CPU (say, the newest i7 at max GHz), that's different from having an old CPU with limited GHz, so running more slots and more parallelism than my system's core count isn't irrational. But is there any way, other than just testing different configurations, to check my system's maximum capabilities with Flink? For the record, I'm using Flink's batch Python API.
Flink Slots/Parallelism vs Max CPU capabilities
0.462117
0
0
1,748
43,971,678
2017-05-15T04:39:00.000
1
0
0
0
python,image-processing,computer-vision,ipython,opencv-python
43,971,827
1
false
0
0
OK, I will give you the steps, but the coding has to be done by you. Assuming you have Python and pip installed on your machine: install Pillow using pip; load the images in your script, calculate each width, and store the widths in a list (how to read the width is in the Pillow documentation); install matplotlib using pip; then pass that list to matplotlib's plotting function (the histogram API is in the matplotlib documentation). Hope it helps.
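Those steps might be sketched like this (the PIL import is deferred so the binning helper works standalone; file paths and the bin size are placeholders):

```python
from collections import Counter

def image_widths(paths):
    """Read each image's width in pixels; Pillow reads the header
    without decoding the whole image."""
    from PIL import Image   # pip install pillow
    return [Image.open(p).size[0] for p in paths]

def width_histogram(widths, bin_size=50):
    """Bucket widths into bin_size-pixel bins: bin start -> count."""
    return Counter((w // bin_size) * bin_size for w in widths)
```

The resulting counts (or the raw widths list) can then be fed to matplotlib, e.g. `plt.hist(widths)` or `plt.bar(hist.keys(), hist.values())`, for the actual plot.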
1
0
1
I have some training images (more than 20, in .tif format) and I want to plot a histogram of their widths in Python. I would be more than happy if anyone can help.
How to plot a histogram of image widths in Python?
0.197375
0
0
194
43,972,059
2017-05-15T05:25:00.000
0
0
0
0
python,machine-learning,scikit-learn,statistical-sampling
43,972,717
1
false
0
0
To evaluate one classifier's accuracy against another's, randomly sample from the dataset for training and test, then use the same test set to evaluate each classifier and compare the accuracies in one go. Given a dataset stored in a dataframe, split it into training and test sets; random sampling is better for an in-depth understanding of how good your classifier is across all cases, since stratified sampling can sometimes mask your true accuracy. Why? An example: if you stratify on a particular category, and that category has an exceptionally large (skewed) amount of data, and the classifier predicts that one category well, you might be led to believe the classifier works well overall, even if it performs worse on categories with less information. Where does stratified sampling work better? When you know the real-world data will be similarly skewed and you will be satisfied if the most important categories are predicted correctly. (This definitely does not mean your classifier will do badly on categories with less data; it can work well; it's just that stratified sampling sometimes fails to present the full picture.) Use the same training set to train all classifiers and the same test set to evaluate them; here, random sampling would be the better choice.
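The "hand every classifier the identical split" point can be made concrete with a stdlib stand-in for train_test_split (in sklearn itself, fixing random_state, and stratify=y if desired, achieves the same thing):

```python
import random

def split_indices(n, test_frac=0.2, seed=0):
    """Deterministic shuffle-and-cut: the same seed always yields the
    same train/test index sets, so every classifier is fitted and
    scored on identical data and the comparison is apples to apples."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    cut = int(n * (1 - test_frac))
    return idx[:cut], idx[cut:]
```

For the 600-song dataset in the question, `split_indices(600)` produces a 480/120 split that is identical on every call, which is exactly what fixing `random_state=0` does in sklearn.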
1
1
1
I'm working on a project to classify 30-second audio samples into 5 genres (rock, electronic, rap, country, jazz). My dataset consists of 600 songs, exactly 120 per genre. The features are a 1D array of 13 MFCCs for each song and the labels are the genres: I take the mean of each set of 13 MFCCs over all frames of the 30-second sample, giving 13 MFCCs per song, and then scale the entire dataset with sklearn's scaling function. My goal is to compare SVM, k-nearest-neighbors, and naive Bayes classifiers (using the sklearn toolset). In my testing so far, results vary depending on whether I use random or stratified sampling. I obtain training and test sets with: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=0, stratify=y). This function has the parameters random_state and stratify: when random_state is omitted it randomly samples from the entire dataset; when set to 0, the training and test sets are guaranteed to be the same each time. My question is how to compare the classifiers appropriately. I assume I should make the identical call to this function before training and testing each classifier; my suspicion is that I should hand the exact same split to each classifier, so no random sampling, and stratified as well. Or should I be stratifying (and randomly sampling)?
Music genre classification with sklearn: how to accurately evaluate different models
0
0
0
894
43,972,351
2017-05-15T05:52:00.000
2
0
0
0
python,turtle-graphics
44,012,869
3
true
0
1
Deleting all my references to objects in the canvas (including, of course, the TurtleWindow) and then destroying the canvas with canvas.destroy() did the trick. Perhaps there are other solutions, but this was the best that I could think of. I appreciate everyone's help, as it will serve me well in the future, at least with objects not created using the turtle API.
1
9
0
I made a small tkinter game that uses turtle for graphics. It's a simulation of the Triangle Peg Game from Cracker Barrel that is able to tell the player the next best move to make at any point in the game, among other features. Pegs are just instances of a subclass of turtle.RawPen, and I keep plenty of plain instances of RawPen around to draw arrows representing moves. I noticed that when I restart the game (which calls turtle.bye()) to kill the turtle window, that memory consumption actually increases, as turtles don't seem to be deleted. Even if I call window.clear() beforehand, which clears _turtles in window.__dict__, there are still references to the turtles. I ensured that all the references that I make to them are deleted during restart, so that's not the issue. Is there any way to truly delete a turtle so it can be garbage collected?
How to fully delete a turtle
1.2
0
0
19,821
43,972,774
2017-05-15T06:29:00.000
1
0
0
0
python,html,django,django-templates
44,286,610
1
false
1
0
Well, you can solve it like this: Install mathfilters: pip install django-mathfilters Add 'mathfilters' to INSTALLED_APPS Replace {{ forloop.counter }} by something like this in your template: {{ page_obj.number|add:-1|mul:page_obj.paginator.per_page|add:forloop.counter }} where: page_obj.number - page number page_obj.paginator.per_page - number of items on page
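The arithmetic behind that template expression can be sketched in plain Python (the function name is mine, not part of django-mathfilters): the serial number of a row is the count of rows on all previous pages plus the loop counter.

```python
def serial_number(page_number, per_page, forloop_counter):
    """Continuous row number across pages: (page - 1) * per_page + counter."""
    return (page_number - 1) * per_page + forloop_counter

# page 2 with 100 rows per page covers serial numbers 101..200
print(serial_number(2, 100, 1))    # 101
print(serial_number(2, 100, 100))  # 200
```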
1
0
0
I am using a normal html table, and I am having more than 1000 records to display. So, I used the pagination concept and made 100 results per page. The problem is, for the serial number column, I am using the forloop counter; for the first page I got serial numbers 1 to 100, and for the second page I am also getting serial numbers 1 to 100. What I need is for it to be 101 to 200 for the second page, and for the third page it has to be 201 to 300, but it is showing 1 to 100. How to make the serial numbers like I am expecting?
Forloop counter with dynamic value range
0.197375
0
0
187
43,973,305
2017-05-15T07:04:00.000
1
1
0
0
python-3.x,unit-testing,django-rest-framework,django-testing,web-api-testing
43,977,762
2
false
1
0
APITestCase in rest_framework.test is for testing the APIs built with the REST framework. It is specific to API operations and API calls. django.test.TestCase is used to test the Django classes.
1
2
0
What is the major difference between these test classes, django.test.TestCase and rest_framework.test.APITestCase? Which is better for testing my views.py? Can you suggest documentation to help me understand these? Thank you in advance. :-)
what is difference between `django.test.APITestCase` and `rest_framework.test.TestCase` in `python-django`
0.099668
0
0
1,881
43,973,509
2017-05-15T07:17:00.000
-4
0
1
0
python,python-import
43,973,936
3
false
0
0
This problem happens because you're writing your Python program like you would write a Swift or Java program. Such an approach never works well: every language is different and has different best practices. If you write unpythonic code, not only does it look ugly and undecipherable to other Python developers, you're also struggling with the language instead of enjoying it. Just structure your code like Python developers structure it. Group related classes in one module. In the rare case where import cycles are hard to avoid (almost never happens), import the offending dependency in the function or method instead of the global scope.
1
2
0
I'm new at Python and previously I've been using languages like Swift, where import is not a big deal: you're just defining a new class and can access it from another part of your program. I can't use this way with Python because here import works in the other way: you can't make cyclic imports, where two files import each other. I understand that I'm facing this problem because of using the language in a wrong way but I don't understand how to avoid it. I mean, in most cases you just can solve this problem by combining two classes into a single file but it doesn't feel right. Also, I've found advice like "move your import statement to the end of file" but it doesn't feel like a good idea too. I would like to understand the Python's philosophy, if you will. How should I organize my project and what should I be guided by, when deciding on making a class in a separate file?
How to avoid cyclic import
-1
0
0
3,598
43,975,795
2017-05-15T09:26:00.000
0
0
0
0
python,openerp,odoo-8,erp,odoo-10
43,982,399
1
true
1
0
In reality it was a false problem, because the "Application" field in the form corresponds to an application category, not an application (module), so a users group (from the menu Settings -> Users -> Groups) is applied to a set of applications (modules), not to a single application. So if anyone wants to make a group for a single module, they can name the category attribute in the manifest file the same as the application (module name).
1
0
0
Greetings; I have created a new module named phpevaluation, and I want to configure security on this module (who uses what), so I have to create a set of users groups to define my types of access. The problem is in the users group form: when I try to choose my created module from the list of applications, I cannot find it. -I named the module phpevaluation (without dashes or underscores) -Inside my manifest.py file the application attribute is set to True -When I access the "ir_module_module" table using pgAdmin, I find the record and the "application" attribute is correctly set to True. I am using Odoo 10.
Odoo security : My Module not found in the applications list when creating new users group
1.2
0
0
288
43,977,734
2017-05-15T10:58:00.000
0
0
0
0
python,r,python-2.7,equation,logarithm
43,977,884
2
false
0
0
This is not a programming question but a mathematics question, and if I read the function in your question correctly, the answer is "wherever the graph hits the x-axis". But I think that was not what you wanted. Maybe you want the rectangle between O(0,0) and P(x, y)? Then you should simply use a CAS and A-level mathematics: A = x*(-15.7log(x)+154.94)
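Working that rectangle area out (assuming the fitted log is the natural log; the original post is ambiguous about the base): the revenue R(x) = x*(-15.7*ln(x) + 154.94) is maximized where R'(x) = -15.7*ln(x) + 154.94 - 15.7 = 0, which gives a closed-form optimum. A small sketch with a numeric sanity check:

```python
import math

A, B = -15.7, 154.94  # fitted coefficients from the question

def revenue(x):
    """Rectangle area under the fitted curve: x * (A*ln(x) + B)."""
    return x * (A * math.log(x) + B)

# dR/dx = A*ln(x) + B + A = 0  =>  x* = exp(-(A + B) / A)
x_star = math.exp(-(A + B) / A)
print(round(x_star))  # the revenue-maximizing price under these assumptions
```

If the fit actually used log base 10, the same derivation applies with ln replaced by log10 (divide the coefficient by ln(10)); the optimal price changes accordingly.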
1
1
1
I'm trying to get the x and y coordinates for which the area under the curve: y=-15.7log(x)+154.94 is maximum. I would like to compute this in R or Python. Can someone please help me to find it? Background: I have data points of sales (y) vs prices (x). I tried fitting a log curve in R: lm(formula = y ~ log(x)) which gave me the above equation. I'm trying to increase the revenue which is the product of sales and prices. Hence the rectangular area under the curve should be maximized.
How to maximize the area under -log(x) curve?
0
0
0
161
43,978,775
2017-05-15T11:50:00.000
1
0
0
0
ironpython,spotfire,rscript
46,858,137
1
false
0
0
This can be done. 1-> Create a function or packaged function which returns a ref-cursor. 1.1-> In that, update your value in the table based on a where clause. 2-> Once you have the function ready, create an information link on that object using a single-value parameter. 3-> Once you do that, import the information link into Spotfire using on-demand values. 4-> Create a document property and use that document property as the parameter for on-demand. 5-> Keep the data table on manual refresh (uncheck Auto Refresh). 6-> Provide the user a text box to enter new values. 7-> Provide a button and use dataTable.Refresh(). 8-> It will pass your document property value to the database and your function will return a ref-cursor, in which you can return the sql%rowcount or a success or failure message.
1
3
1
How do we edit a row in a datatable in spotfire? Can we do it using ironpython or R script? I have a requirement where I want to edit the values in spotfire datatable to see the effect in the respective visuals. The data table is populated using an information link (from a SQL database).
Edit a row value in datatable in spotfire
0.197375
0
0
1,165
43,980,261
2017-05-15T13:01:00.000
1
0
0
1
python,python-3.x,packages,easy-install
43,981,171
1
true
0
0
easy_install must be used as a command in the command prompt; it cannot be opened as an application. Go to the folder where easy_install is and open a command prompt in that folder. Now install any libraries using: >easy_install pandas #example Or you can add this path to your environment variables so you don't have to use the full path to install every time.
1
1
0
I'm trying to get new packages (request for example) and trying to do it through easy_install, but when I try to open it (both easy_install and easy_install-3.6) all I get is a blank terminal screen popping up for a second and than closing with nothing happening. What's wrong with it and how can I get new packages?
easy_install not working with no error
1.2
0
0
347
43,982,543
2017-05-15T14:51:00.000
8
0
1
0
python,windows,pip,cython
44,400,710
7
true
0
0
I reinstalled Cython with conda and installed the Microsoft Visual C++ Build Tools, and it works fine.
4
56
0
I'm trying to do from Cython.Build import cythonize and I get the message ImportError: No module named 'Cython', but I installed Cython with the command pip install Cython. What's wrong? Python 3.5 Cython 0.25.2 Windows 8
ImportError: No module named 'Cython'
1.2
0
0
104,955
43,982,543
2017-05-15T14:51:00.000
1
0
1
0
python,windows,pip,cython
64,713,016
7
false
0
0
It should be a path problem. Go to Windows search and look for Python IDLE. Right-click IDLE -> Open file location. There, right-click on python.exe -> Open file location. If the module is not in that location, type cmd and press Enter in the address bar of that path. Now install the module with pip install cython and it will work fine.
4
56
0
I'm trying to do from Cython.Build import cythonize and I get the message ImportError: No module named 'Cython', but I installed Cython with the command pip install Cython. What's wrong? Python 3.5 Cython 0.25.2 Windows 8
ImportError: No module named 'Cython'
0.028564
0
0
104,955
43,982,543
2017-05-15T14:51:00.000
3
0
1
0
python,windows,pip,cython
69,661,106
7
false
0
0
The problem for me was the pip version. Running python -m pip install pip --upgrade solved the issue.
4
56
0
I'm trying to do from Cython.Build import cythonize and I get the message ImportError: No module named 'Cython', but I installed Cython with the command pip install Cython. What's wrong? Python 3.5 Cython 0.25.2 Windows 8
ImportError: No module named 'Cython'
0.085505
0
0
104,955
43,982,543
2017-05-15T14:51:00.000
1
0
1
0
python,windows,pip,cython
70,331,520
7
false
0
0
I personally ran into this problem when I was trying to set up a new virtual environment. I simply installed Cython with python -m pip install Cython and then proceeded to install all the rest of my stuff I needed with python -m pip install -r requirements.txt. Once it was done, it actually uninstalled Cython for me... Why? Heck do I know... I ain't that kind of expert :<
4
56
0
I'm trying to do from Cython.Build import cythonize and I get the message ImportError: No module named 'Cython', but I installed Cython with the command pip install Cython. What's wrong? Python 3.5 Cython 0.25.2 Windows 8
ImportError: No module named 'Cython'
0.028564
0
0
104,955
43,984,158
2017-05-15T16:07:00.000
3
0
1
0
python,nlp,chatbot
43,984,466
3
false
0
0
I'm guessing you are making an integration for something like Slack. Using an autocorrect feature might be very dangerous, as you could "correct" something from a neutral state to a destructive state. Styling your inputs to be simple but descriptive would be a lot safer. You could also implement a "did you mean" feature with some simple character counting that would let the user see that they messed up, and then offer them the option to correctly input the right key phrase. Input: derete file1.jpg check possible keyword position 0 with existing keyword set...add/remove/delete 0/6 match for add 2/6 match for remove 5/6 match for known keyword 'delete', picking 'delete' as suggestion Output: Did you mean delete file1.jpg ? I think that would be safer and not too painful to code. Just have a function that iterates through each character and increments a counter if the character matches. It's FAR from perfect, but it would be a step in the right direction if you wanted to make it manually.
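A rough sketch of that per-position character-count idea (the function names and keyword set are illustrative; a real bot would likely reach for difflib.get_close_matches from the standard library instead):

```python
def match_score(word, keyword):
    """Count positions where the characters agree (the 5/6 idea above)."""
    return sum(1 for a, b in zip(word, keyword) if a == b)

def suggest(word, keywords=("add", "remove", "delete")):
    """Return the known keyword with the highest positional match."""
    return max(keywords, key=lambda k: match_score(word, k))

print(suggest("derete"))  # delete
```

A production version would also want a minimum-score threshold so wildly different input doesn't get a nonsense suggestion.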
1
1
0
I'm making a chatbot and would like users to spell correctly to make everything on the back end easier. Are there any autocorrect and/or autocomplete libraries out there?
Are there any auto-correct/auto-complete libraries for Python?
0.197375
0
0
11,125
43,984,705
2017-05-15T16:36:00.000
2
0
1
0
postgresql,plpython
44,186,978
1
false
0
0
you can simply run python 2 sudo apt-get install postgresql-contrib postgresql-plpython-9.6 python 3 sudo apt-get install postgresql-contrib postgresql-plpython3-9.6 Then check the extension is installed SELECT * FROM pg_available_extensions WHERE name like '%plpython%'; To apply the extension to the database, use for python 2 CREATE EXTENSION plpython2u; for python 3 CREATE EXTENSION plpython3u;
1
2
0
Ubuntu 14.04.3, PostgreSQL 9.6 Maybe I can get the plpythonu source code from the PostgreSQL 9.6 source code or somewhere else, put it into the /contrib directory, make it and CREATE EXTENSION after that!? Or something like that. Don't want to think that PostgreSQL reinstall is my only way.
Is there a way to install PL/Python after the database has been compiled without "--with-python" parameter?
0.379949
1
0
2,374
43,985,976
2017-05-15T17:59:00.000
0
0
0
0
python,image-processing,optimization,itk,image-registration
43,987,786
1
false
0
0
Similarity metrics in ITK usually give the cost, so the optimizers try to minimize them. Mutual information is an exception to this rule (higher MI is better), so in order to fit into the existing framework it has negative values - bigger negative number is better than small negative number, while still following the logic that it should be minimized. Modified time is used to check whether a certain filter should be updated or not. Generally lower metric means better registration. But it is not comparable between different metrics, or even between different types of images using the same metric. Random sampling will take 10-20% of samples in your RoI. I am not sure whether it picks randomly within RoI, or picks randomly within image and then checks whether it is in RoI.
1
0
1
1. Mattes Mutual Info Doubts In SimpleITK Mattes Mutual information is a similarity metric measure, is this a maximizing function or minimizing function? I have tried a 3D registration(image size : 480*480*60) with Metric Mattes Mutual Info metric and Gradient Descent Optimizer Output numofbins = 30 Optimizer stop condition: RegularStepGradientDescentOptimizerv4: Step too small after 24 iterations. Current step (7.62939e-06) is less than minimum step (1e-05). Iteration: 25 Metric value: -0.871268982129 numofbins = 4096 Optimizer stop condition: RegularStepGradientDescentOptimizerv4: Step too small after 34 iterations. Current step (7.62939e-06) is less than minimum step (1e-05). Iteration: 23 Metric value: -1.7890 If it is a minimization function then the lower one is better, which I suspect. 2. Transformation matrix final Output TranslationTransform (0x44fbd20) RTTI typeinfo: itk::TranslationTransform Reference Count: 2 Modified Time: 5528423 What is Modified Time? 3. Final Metric is a registration accuracy measurement? Is metric a sign of registration accuracy? Is a higher metric value mean better registration? Or it is just a value at optimum point after optimization? 4. Random sampling for registration 10-20% of random sample points suffice for a registration. But the doubt arises whether the samples are taken from the main ROI or outside the ROI? Masking is an option, is there any other option in SimpleITK? Thanks
Mattes Mutual Info basic doubts on 3D image registration
0
0
0
770
43,987,779
2017-05-15T19:56:00.000
3
1
1
0
python,package,pycrypto
62,125,607
3
false
0
0
Using Python 3, I solved it by installing pycryptodome (pip3 install pycryptodome). No need to replace Crypto with Cryptodome.
2
8
0
pycrypto is installed (when I run pip list one of the result is pycrypto (2.6.1)) and it works but when I would like to use the MODE_CCM it returns: module 'Crypto.Cipher.AES' has no attribute 'MODE_CCM' My Python version: Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
python: module 'Crypto.Cipher.AES' has no attribute 'MODE_CCM' even though pycrypto installed
0.197375
0
0
14,474
43,987,779
2017-05-15T19:56:00.000
1
1
1
0
python,package,pycrypto
54,408,843
3
false
0
0
You can use dir(AES) to see the list of supported MODE_xxx.
2
8
0
pycrypto is installed (when I run pip list one of the result is pycrypto (2.6.1)) and it works but when I would like to use the MODE_CCM it returns: module 'Crypto.Cipher.AES' has no attribute 'MODE_CCM' My Python version: Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
python: module 'Crypto.Cipher.AES' has no attribute 'MODE_CCM' even though pycrypto installed
0.066568
0
0
14,474
43,987,799
2017-05-15T19:58:00.000
-1
1
0
1
python,linux,automation,cron,crontab
43,987,885
2
false
0
0
Try with */10 * * * * /path/to/your_script. Then check whether crontab -l includes your script. Then you can check the crontab logs.
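A common reason an entry like the question's never fires is that the script lacks a shebang or execute permission, so spelling out the interpreter and logging output makes failures visible. A sketch of such an entry (interpreter and log paths are placeholders, not from the original post):

```shell
# run every 10 minutes; redirect stdout/stderr so failures show up in the log
*/10 * * * * /usr/bin/python /file/testscripts/test_script.py >> /tmp/test_script.log 2>&1
```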
1
0
0
Currently, I’ve scheduled a python script on Linux by adding the following: */10 * * * * /file/testscripts/test_script.py to crontab -e. It did not run after 10 minutes, so I wrote some code to write the current time to a file, but that wasn’t being updated either. What could be the issue? And how can I determine whether my python script has been scheduled as a cron job properly? Thank you in advance; I will accept/upvote the answer.
Python + Linux: How to determine cron job is scheduled?
-0.099668
0
0
542
43,988,174
2017-05-15T20:22:00.000
1
0
0
0
python,opengl,math,pygame
43,988,562
1
false
0
1
You need to compute the player's forward vector: The forward vector is the vector that points in the forward direction as seen from the player's eyes - it tells you in which direction the player's eyes are looking. The local forward vector (I call it lfw for now) is probably (0,0,1) because you specified the y axis as "up". The world forward vector (called wfv for now) is: (rotationMatrix * lfw); That is the direction the player is looking in world coordinates, because you multiplied it by the player's rotationMatrix. The final lookAt position is: position + wfv (means: take one step from position in the forward direction --> yields the point after you take the step.) Hope this helps a bit
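As a sketch of that computation, assuming mouse movement is accumulated into yaw/pitch angles in degrees and the common OpenGL convention of yaw=0, pitch=0 looking down the -z axis with +y up (the function and variable names are mine, not from the original answer):

```python
import math

def forward_vector(yaw_deg, pitch_deg):
    """World-space forward vector for a camera rotated by yaw (around y)
    and pitch (around x). yaw=0, pitch=0 looks down the -z axis."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    fx = math.cos(pitch) * math.sin(yaw)
    fy = math.sin(pitch)
    fz = -math.cos(pitch) * math.cos(yaw)
    return fx, fy, fz

def look_at_point(position, yaw_deg, pitch_deg):
    """position + forward: the center point to hand to gluLookAt."""
    f = forward_vector(yaw_deg, pitch_deg)
    return tuple(p + c for p, c in zip(position, f))
```

Mouse deltas then just become `yaw += dx * sensitivity; pitch -= dy * sensitivity`, with pitch clamped to avoid flipping over the poles.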
1
0
0
I started to to something with opengl in pygame, but I'm stuck at the point where gluLookAt takes world coordinates to orient the camera, but I want to move the camera with the mouse, so I have values like "mouse moved 12 pixels to the right". At the moment I have gluLookAt(player_object.x, player_object.y, player_object.z, lookat_x, lookat_y, -100, 0, 1, 0) but I don't know how to convert the movement of the mouse to these coordinates. Maybe someone knows the answer or a formula to convert it. (I use python, but I think it's easy to port code or just a formula)
opengl gluLookAt with orientation in degrees instead of coordinates
0.197375
0
0
198
43,988,667
2017-05-15T20:55:00.000
0
0
1
0
python,list,collections,tuples
43,988,763
2
false
0
0
Yes, it is OK. However, depending on the operations you're doing, you might want to consider using the set function in Python. This will convert your input iterable (tuple, list, or other) to a set. Sets are nice for a few reasons, but especially because you get a unique list of items that has constant time lookup for items. There's nothing "un-pythonic" about holding large data sets in memory, though.
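For instance, assuming the common case where the large collection is mostly used for membership checks, converting once to a frozenset gives you immutability and average O(1) lookup instead of an O(n) scan:

```python
items = list(range(2000))    # a large, never-modified collection
frozen = frozenset(items)    # immutable *and* constant-time membership tests

assert 1999 in frozen        # hash lookup, not a linear scan
assert -1 not in frozen
```

This only applies when the elements are hashable and order/duplicates don't matter; otherwise a tuple keeps the original sequence intact.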
2
16
0
I have a quite large list (>1K elements) of objects of the same type in my Python program. The list is never modified - no elements are added, removed or changed. Are there any downsides to putting the objects into a tuple instead of a list? On the one hand, tuples are immutable so that matches my requirements. On the other hand, using such a large tuple just feels wrong. In my mind, tuples have always been for small collections. It's a double, a triple, a quadruple... Not a two-thousand-and-fifty-seven-duple. Is my fear of large tuples somehow justified? Is it bad for performance, unpythonic, or otherwise bad practice?
Is it OK to create very large tuples in Python?
0
0
0
2,269
43,988,667
2017-05-15T20:55:00.000
20
0
1
0
python,list,collections,tuples
43,988,809
2
true
0
0
In CPython, go ahead. Under the covers, the only real difference between the storage of lists and tuples is that the C-level array holding the tuple elements is allocated in the tuple object, while a list object contains a pointer to a C-level array holding the list elements, which is allocated separately from the list object. The list implementation needs to do that because the list may grow, and so the memory containing the C-level vector may need to change its base address. A tuple can't change size, so the memory for it is allocated directly in the tuple object. I've created tuples with millions of elements, and yet I lived to type about it ;-) Obscure In CPython, there can even be "a reason" to prefer giant tuples: the cyclic garbage collection scheme exempts a tuple from periodic scanning if it only contains immutable objects. Then the tuple can never be part of a cycle, so cyclic gc can ignore it. The same optimization cannot be used for lists; just because a list contains only immutable objects during one run of cyclic gc says nothing about whether that will still be the case during the next run. This is almost never highly significant, but it can save a percent or so in a long-running program, and the benefit of exempting giant tuples grows the bigger they are.
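The cyclic-gc exemption described above can be observed directly (behavior specific to CPython; other implementations may differ):

```python
import gc

t = tuple(range(1000))   # a large tuple of immutable ints
l = list(range(1000))

gc.collect()             # gc untracks tuples that can never be in a cycle
gc.collect()
print(gc.is_tracked(t))  # False on CPython: exempt from future gc scans
print(gc.is_tracked(l))  # True: lists stay tracked
```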
2
16
0
I have a quite large list (>1K elements) of objects of the same type in my Python program. The list is never modified - no elements are added, removed or changed. Are there any downsides to putting the objects into a tuple instead of a list? On the one hand, tuples are immutable so that matches my requirements. On the other hand, using such a large tuple just feels wrong. In my mind, tuples have always been for small collections. It's a double, a triple, a quadruple... Not a two-thousand-and-fifty-seven-duple. Is my fear of large tuples somehow justified? Is it bad for performance, unpythonic, or otherwise bad practice?
Is it OK to create very large tuples in Python?
1.2
0
0
2,269
43,988,892
2017-05-15T21:11:00.000
2
0
0
0
python,robotframework,pyodbc,pymssql
44,121,400
1
true
0
0
I was able to connect using @Goralight's approach: Connect To Database Using Custom Params pymssql ${DBConnect}, where ${DBConnect} contained database, user, password, host and port.
1
2
0
I'm having issues connecting to a working SQL\Express database instance using Robot Framework's DatabaseLibrary. If I use either Connect To Database with previously defined variables or Connect To Database Using Custom Params with a connection string, I get the following results: pyodbc: ('08001', '[08001] [Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or access denied. (17) (SQLDriverConnect); [01000] [Microsoft][ODBC SQL Server Driver][DBNETLIB]ConnectionOpen (Connect()). (53)') pymssql:: InterfaceError: Connection to the database failed for an unknown reason. The connection string I'm using is the following: 'DRIVER={SQL Server};SERVER=localhost\SQLExpress;UID=sa;PWD=mypass;DATABASE=MyDb' I copied several examples from guides and tutorials and all of them yield the same result, so my guess is that there is something wrong on my end, but I just can't figure out what. I can access the database using the Microsoft SQL Server Management Studio just fine, so the database is running. Any guidance will be greatly appreciated!
Cannot connect to SQL\Express with pyodbc/pymssql and Robot Framework
1.2
1
0
1,854
44,000,085
2017-05-16T11:20:00.000
0
1
1
0
python,ruby-on-rails,ruby
44,006,799
1
true
1
0
As others have stated, there are serious security implications in letting users blindly run things on your server, but if you really wanted to, you could write the Python to a file and then execute it using system "python #{file_path}". I'm not too familiar with Python, but you could probably redirect stdout and stderr to files and then read those files to get any print statements.
1
0
0
I am trying to make a site like Codecademy, where users can learn Python online, solve problems and master theory. The framework for the server is Ruby on Rails. I am trying to understand how I can send Python code to the server and then execute that code on the server. Are there any Python interpreters written in Ruby? I totally cannot understand how this should work. Python version: 2.7 or 3.5 (not fundamental). Thank you for your attention.
Python embedded in RubyOnRails
1.2
0
0
81
44,000,687
2017-05-16T11:51:00.000
1
0
0
0
javascript,python,sql,sqlite
44,001,568
1
false
1
0
It's not a good idea to allow clients to access the db directly. If you have to do it, be careful not to give the account you use full write/read access to the db, or any malicious client can erase, modify or steal information from the db. An implementation with client identification server-side and a REST API to read or modify the db is safer.
1
0
0
I have sqlite db runing on my server. I want to access it using client side javascript in browser. Is this possible? As of now, I am using python to access the db and calling python scripts for db operations.
Access sqlite in server through client side javascript
0.197375
1
0
158
44,005,182
2017-05-16T15:06:00.000
1
1
1
0
python
44,025,088
2
false
0
0
I have not been able to determine where the problem was, so I just specified the full path using the getcwd command from os. It has worked so far. It means I must have a hidden .pyc or .py~ file somewhere.
1
1
0
I am currently trying to load functions from another .py file. I have in the same folder: algo.py and test_algo.py. I need to import all functions from algo in test_algo, so I use the command: from algo import * The import is successful; however, one function do_sthg() takes 3 arguments in algo but the imported version requires 4 arguments, which was the case in a very old version of the code. I deleted all .py~-related files and there are no other scripts with the name algo on my computer. How is that possible and how can I solve this issue? (I cannot specify the full paths to my script as they may change over time; I am using Python 2.7.) Any help would be appreciated.
Old version of a script is imported using import on Python
0.099668
0
0
875
44,006,560
2017-05-16T16:12:00.000
2
0
1
0
python,pyaudio
44,007,671
1
true
1
0
Surely you can record audio for more than an hour using pyaudio. Try invoking the recording function in a thread and put the main process in a loop or sleep for that period. Note: Make sure you do not run out of memory.
1
0
0
I use pyaudio with Python 2.7.13 to record wav, but my program dies when I record for more than 1 hour. What can I do if I want to record for more than 1 hour with Python 2.7? Thanks for your reply!
I want to use pyaudio to record wav more than hours
1.2
0
0
204
44,008,037
2017-05-16T17:32:00.000
0
0
0
0
python,mysql,django,pip
44,036,652
1
true
1
0
I was able to fix this by running pip install mysql. I do not understand why this worked because I already had MySQL installed on my system and had been using it. I am going to assume it is because Python uses environments and MySQL wasn't installed in the environment but I would like to know for sure.
1
0
0
I am trying to install mysqlclient on mac to use mysql in a django project. I have made sure that setup tools is installed and that mysql connector c is installed as well. I keep getting the error Command "python setup.py egg_info" failed with error code 1 in. This is my first django project since switching from rails. Is there something I am missing? I am using python 3 and I use pip install mysqlclient.
Mysqlclient fails to install
1.2
1
0
599
44,009,244
2017-05-16T18:43:00.000
8
0
0
0
python,arrays,numpy,image-processing,tensorflow
44,009,737
2
true
0
0
If you are looking to create a 1D array, use .reshape(-1), which will create a linear version of your array. If you then use .reshape(32,32,3), this will recreate the 32 x 32 x 3 array in the original format described. Using -1 creates a linear array with the same number of elements as the combined, nested array.
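As a pure-Python analogue of what numpy's reshape does here (flatten in row-major order, then rebuild; the helper names are mine), showing that the round trip puts every value back in its original place:

```python
def flatten(image):
    """Row-major (C-order) flatten of an H x W x C nested list, like reshape(-1)."""
    return [v for row in image for pixel in row for v in pixel]

def unflatten(flat, h, w, c):
    """Inverse of flatten, like reshape(h, w, c)."""
    it = iter(flat)
    return [[[next(it) for _ in range(c)] for _ in range(w)] for _ in range(h)]

# tiny 2 x 2 x 3 "image"
img = [[[0, 1, 2], [3, 4, 5]],
       [[6, 7, 8], [9, 10, 11]]]
assert unflatten(flatten(img), 2, 2, 3) == img
```

numpy's reshape uses the same C-order convention by default, which is why `arr.reshape(-1)` followed by `.reshape(32, 32, 3)` restores each pixel to its original position.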
2
4
1
I have RGB images (32 x 32 x 3) saved as 3D numpy arrays which I use as input for my Neural Net (using tensorflow). In order to use them as an input I reshape them to a 1D np array (1 x 3072) using reshape(1,-1). When I finish training my Net I want to reshape the output back, but using reshape(32,32,3) doesn't seem to provide the desired outcome. Is this the correct way to do it? How I can be sure that each datum will be back to the correct place?
Python: How can I reshape 3D Images (np array) to 1D and then reshape them back correctly to 3D?
1.2
0
0
11,600
44,009,244
2017-05-16T18:43:00.000
2
0
0
0
python,arrays,numpy,image-processing,tensorflow
44,009,566
2
false
0
0
If M is (32 x 32 x 3), then .reshape(1,-1) will produce a 2d array (not 1d), of shape (1, 32*32*3). That can be reshaped back to (32,32,3) with the same sort of reshape statement. But that's reshaping the input to and from But you haven't told us what the output of your Net is like. What shape does it have? How are you trying to reshape the output, and what is wrong with it?
2
4
1
I have RGB images (32 x 32 x 3) saved as 3D numpy arrays which I use as input for my Neural Net (using tensorflow). In order to use them as an input I reshape them to a 1D np array (1 x 3072) using reshape(1,-1). When I finish training my Net I want to reshape the output back, but using reshape(32,32,3) doesn't seem to provide the desired outcome. Is this the correct way to do it? How I can be sure that each datum will be back to the correct place?
Python: How can I reshape 3D Images (np array) to 1D and then reshape them back correctly to 3D?
0.197375
0
0
11,600
44,009,966
2017-05-16T19:25:00.000
33
0
1
0
python,conda,python-twitter
44,021,398
3
true
0
0
You can install pip in your conda env and then run pip install python-twitter. It should work.
1
26
0
I'm using python 3.6 as anaconda, and im trying to install python-twitter package, but there is no package compatible from conda manager. How can i download the package outside conda manager in order to use it later with jupyter notebook?
Installing package not found in conda
1.2
0
0
39,174
44,010,155
2017-05-16T19:37:00.000
1
1
0
0
linux,bash,python-3.x,minecraft,gnu-screen
44,896,034
1
true
0
0
Well, the ideal solution would be to write a bukkit plugin/forge mod to do this, rather than doing this entirely from outside the actual server. That being said, however, your best bet is probably watching the log files, as JNevill says in the comment.
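A minimal sketch of the log-watching approach (the function name and log path are illustrative; the real Minecraft server log path will differ): remember the byte offset you last read, and on each poll return only the lines appended since then. This avoids screen-scraping entirely and works fine from an async task.

```python
def read_new_lines(path, offset):
    """Return (new_lines, new_offset) for lines appended after offset."""
    with open(path, "r") as f:
        f.seek(offset)
        lines = f.readlines()
        return lines, f.tell()

# usage sketch: poll periodically instead of reading the screen session
# lines, pos = read_new_lines("logs/latest.log", pos)
```

Note that log rotation (the server renaming latest.log) would reset the offset; a robust version should detect a shrinking file and restart from 0.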
1
1
0
I'm currently in the process of hacking together a bit of bash and python3 to integrate my Minecraft server with my friends Discord. I managed to power through most of the planned features with nary a hitch, however now I've gotten myself stuck halfway into the chat integration. I can send messages from the Discord to the server no problem, but I have no idea how to read the console output of the server instance, which is running in a screen session. I would appreciate some pointers in the right direction, if you know how this sort of thing is done. Ideally I would like a solution that is capable of running asynchronously, so I don't have to do a whole lot of busy-waiting to check for messages. P.S.: Sorry if this belongs on superuser instead, I wasn't sure where to put it.
How do I grab console output from a program running in a screen session?
1.2
0
0
254
44,011,414
2017-05-16T20:54:00.000
2
0
0
0
python,flask,web-development-server
44,011,562
2
true
1
0
Files you put in the /static folder will be accessible, but flask doesn't do directory listing. So if you put a script.js in /static/js for instance, you should be able to GET /static/js/script.js even though GET /static will 404.
1
0
0
Flask Documentation states: Dynamic web applications also need static files. That’s usually where the CSS and JavaScript files are coming from. Ideally your web server is configured to serve them for you, but during development Flask can do that as well. Just create a folder called static in your package or next to your module and it will be available at /static on the application. But in my webapp, when I try to access localhost:5000/static/, I get a 404 error in the browser. What is weirder is that when I run it in debug mode, I get a 200 (OK) in the terminal and a 404 in the browser. Could you explain what is happening? I want a directory listing of my static directory in the browser.
Flask - GET request to 'static' directory returns Error 404
1.2
0
0
587
44,011,776
2017-05-16T21:21:00.000
0
0
0
0
python,google-app-engine,oauth-2.0
55,574,741
5
false
1
0
Run this: sudo python -m pip install oauth2client
1
49
0
We are receiving an error: ImportError: No module named OAuth2Client We have noticed scores of questions around this topic, many unanswered and at least one answer that describes the solution of copying over files from the Google App Engine SDK. This approach, however, seems tedious because all the dependencies are unclear. If we copy over oauth2client then run, the next error is another module that is missing. Fix that, then another module is missing, etc., etc. What is ironic is that we can see all the files and modules needed, listed from Google App Engine SDK right in PyCharm but they seem inaccessible to the script. Is there no better way to pull in all the files that oauth2client needs for Python to work on App Engine?
How to prevent "ImportError: No module named oauth2client.client" on Google App Engine?
0
0
1
80,755
44,012,748
2017-05-16T22:40:00.000
1
0
1
0
python
44,012,838
4
false
0
0
Yes, you need to store the value in a file and load it back when the program runs again. This is called program state serialization or persistence.
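A minimal sketch of that file-based persistence (the filename run_count.txt is just an illustrative choice):

```python
import os

def increment_run_count(path="run_count.txt"):
    """Read the stored count (0 if the file doesn't exist yet),
    add one, write it back, and return the new value."""
    count = 0
    if os.path.exists(path):
        with open(path) as f:
            count = int(f.read().strip() or 0)
    count += 1
    with open(path, "w") as f:
        f.write(str(count))
    return count
```

Calling increment_run_count() at the top of the script returns 1 on the first run, 2 on the second, and so on.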
1
5
0
I have a Python script, and I want to increment a global variable every time it is run. Is this possible?
How to increment variable every time script is run in Python?
0.049958
0
0
9,184
44,013,107
2017-05-16T23:18:00.000
2
0
0
1
python-3.x,https,cherrypy,bottle
44,016,254
3
false
1
0
You need to put your WSGI server (certainly not wsgiref) behind a reverse proxy with HTTPS support. Nginx is the most common choice.
1
8
0
I'm using Bottle as my web service. Currently, it's running on Bottle's default WSGI server and handles HTTP requests. I want to encrypt my web service and handle HTTPS requests. Can someone suggest a method for this? I tried running on the CherryPy server, but the latest version does not support the pyOpenSSLAdapter.
How to make bottle server HTTPS python
0.132549
0
1
4,247
44,014,764
2017-05-17T02:58:00.000
-1
0
0
0
windows,python-3.x,docker,tensorflow
44,054,062
2
false
0
0
If you're planning to use Python 3, I'd recommend docker run -it gcr.io/tensorflow/tensorflow:latest-devel-py3 (Numpy is installed for python3 in that container). Not sure why Python 3 is partially installed in the latest-devel package.
1
0
1
All of my steps have worked very well up to this point. I am on a Windows machine currently. I am in the root directory after using the command: docker run -it gcr.io/tensorflow/tensorflow:latest-devel followed by cd /tensorflow. I am now in the directory and it is time to train the images, so I used: /tensorflow# python tensorflow/examples/image_retraining/retrain.py \ --bottleneck_dir=/tf_files/bottlenets \ --how_many_training_steps 500 \ --model_dir=/tf_files/retrained_graph.pb \ --output_labels=/tf_files/retrained_labels.txt \ --image_dir /tf_files/ And I get this error: File "tensorflow/examples/image_retraining/retrain.py", line 77, in import numpy as np ImportError: No module named 'numpy' I DO already have numpy installed in my python35 folder and it is up to date. Thanks a lot for any help, I am really stuck on this!
Using Docker for Image training in Python (New to this)
-0.099668
0
0
237
44,015,587
2017-05-17T04:27:00.000
0
0
1
0
python,c,matlab,loops,append
44,015,867
5
false
0
0
On Windows, use copy data*.txt data-all.txt. On Unix, use cat data*.txt >> data-all.txt.
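Since the question also allows Python, here is a hedged stdlib sketch of the same concatenation (the data*.txt pattern follows the naming scheme in the question; note that plain lexicographic sorting puts data10.txt before data2.txt):

```python
import glob

def concat_files(pattern="data*.txt", out_path="data-all.txt"):
    """Append the contents of every file matching `pattern` into one
    output file, skipping the output file itself if the pattern matches it."""
    with open(out_path, "w") as out:
        for name in sorted(glob.glob(pattern)):
            if name == out_path:
                continue
            with open(name) as f:
                out.write(f.read())
```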
1
0
0
I have some data files, say data1.txt, data 2.txt,... and so on. I want to read all these data files using a single loop structure and append the data values into a single file, say data-all.txt. I am fine with any of the following programming languages: c, python, matlab
Read data from text files having same name structure and append all data into a new file
0
0
0
954
44,015,911
2017-05-17T04:53:00.000
3
0
0
0
python,html
44,016,198
1
false
0
0
This can be explained by a couple of reasons: Either the website filters clients by some criterion (like the User-Agent header) so it only sends the contents to "real" clients (i.e. browsers), or the website loads an empty webpage and then populates it with JavaScript, which means that you only get the dummy page with your GET request (this can only be the case if you use Inspect Element and not View source code).
1
0
0
For example, "view source code" on Internet Explorer → <html> aaa(bbb)ccc </html> requests.get(url).text → <html> aaa()ccc </html> Why? How can I get the former HTML text in Python?
What is the difference between html-text by "view source code" on Internet Explorer and by requests.get() method in python?
0.53705
0
1
69
44,017,326
2017-05-17T06:38:00.000
0
0
1
0
python-3.x,tensorflow,pip,installation,jupyter-notebook
44,020,731
1
true
0
0
There is a package called nb_conda that helps manage your anaconda kernels. However, when you launch Jupyter make sure that you have jupyter installed inside your conda environment and that you are launching Jupyter from that activated environment. So: Activate your conda environment that has Tensorflow installed. You can check by doing conda list. If Tensorflow is not installed within your environment then do so. Install jupyter and nb_conda if you haven't already. From your activated environment, run jupyter notebook. You should now be running in the correct kernel. You should see a kernel named Python [conda env:namehere] in the top right. You may also have a choice of kernels thanks to nb_conda if installed. See if that works for you.
1
1
1
I installed TensorFlow via the Python 3.5 pip; it is in the Python 3.5 lib folder and I can use it perfectly in the shell IDLE. I have Anaconda (Jupyter Notebook) on my computer at the same time; however, I couldn't import TensorFlow in the notebook. I guess the notebook was using the Anaconda lib folder, not the Python 3.5 libs. Is there any way to fix that instead of installing again into the Anaconda folder? Thanks
How could I use TensorFlow in jupyter notebook? I install TensorFlow via python 3.5 pip already
1.2
0
0
4,674
44,020,050
2017-05-17T08:54:00.000
3
0
0
1
python,tensorflow,pycharm,cudnn
44,022,536
1
true
0
0
The solution is: Run PyCharm from the console. OR add the environment variable to the IDE settings: LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
1
1
1
I have an issue with TensorFlow in PyCharm. Whenever I import tensorflow in the Linux terminal, it works correctly. However, in PyCharm Community 2017.1, it shows: ImportError: libcudnn.so.5: cannot open shared object file: No such file or directory Any hint on how to tackle the issue? Please note that I am using python 3.5.2, tensorflow 1.1.0, Cuda 8 and CuDnn 5.1 EDIT: when printing sys.path, I get this in PyCharm: ['/home/xxx/pycharm-community-2017.1.2/helpers/pydev', '/home/xxx/pycharm-community-2017.1.2/helpers/pydev', '/usr/lib/python35.zip', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-x86_64-linux-gnu', '/usr/lib/python3.5/lib-dynload', '/usr/local/lib/python3.5/dist-packages', '/usr/lib/python3/dist-packages', '/usr/local/lib/python3.5/dist-packages/IPython/extensions', '/home/xxx/xxx/xxx'] and this in the terminal: ['', '/usr/local/bin', '/usr/lib/python35.zip', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-x86_64-linux-gnu', '/usr/lib/python3.5/lib-dynload', '/usr/local/lib/python3.5/dist-packages', '/usr/lib/python3/dist-packages', '/usr/local/lib/python3.5/dist-packages/IPython/extensions', '/home/xxx/.ipython']
Tensorflow and Pycharm
1.2
0
0
921
44,021,777
2017-05-17T10:06:00.000
0
0
0
0
python,tensorflow,neural-network,pixel,convolution
44,079,737
2
false
0
0
Actually, I'm trying to train a NN that gets corrupted images and, based on the ground truth, removes noise from those images. It must be Network in Network; in other words, pixel-independent.
1
0
1
I am new to tensorflow. I want to write a neural network that gets noisy images from a file and uncorrupted images from another file. Then I want to correct the noisy images based on the other images.
get pixel of image in tensorflow
0
0
0
594
44,022,180
2017-05-17T10:23:00.000
4
0
0
0
python,gensim,word2vec
44,037,339
1
true
0
0
This would normally work, if the file was created by gensim's native .save(). Are you sure the file 'ammendment_vectors.model.bin' is complete and uncorrupted? Was it created using the same Python/gensim versions as in use where you're trying to load() it? Can you try re-creating the file?
1
2
1
I am trying to load a binary file using gensim.Word2Vec.load(fname) but I get the error: File "file.py", line 24, in model = gensim.models.Word2Vec.load('ammendment_vectors.model.bin') File "/home/hp/anaconda3/lib/python3.6/site-packages/gensim/models/word2vec.py", line 1396, in load model = super(Word2Vec, cls).load(*args, **kwargs) File "/home/hp/anaconda3/lib/python3.6/site-packages/gensim/utils.py", line 271, in load obj = unpickle(fname) File "/home/hp/anaconda3/lib/python3.6/site-packages/gensim/utils.py", line 933, in unpickle return _pickle.load(f, encoding='latin1') _pickle.UnpicklingError: could not find MARK I googled but I am unable to figure out why this error is coming up. Please let me know if any other information is required.
Unpickling Error while using Word2Vec.load()
1.2
0
0
5,562
44,023,078
2017-05-17T11:05:00.000
0
0
0
0
python,authentication,jenkins
44,024,658
2
false
0
0
Error 403 is basically issued when the user is not allowed to access the resource. Are you able to access the resource manually using the same credentials? If there are some other admin credentials, then you can try using those. Also, I am not sure but may be you can try running the python script with admin rights.
2
1
0
I am trying to create a job using the python api. I have created my own config, but the authentication fails. It produces an error message: File "/usr/lib/python2.7/dist-packages/jenkins/__init__.py", line 415, in create_job self.server + CREATE_JOB % locals(), config_xml, headers)) File "/usr/lib/python2.7/dist-packages/jenkins/__init__.py", line 236, in jenkins_open 'Possibly authentication failed [%s]' % (e.code) jenkins.JenkinsException: Error in request.Possibly authentication failed [403] The config file I have created was copied from another job config file as it was the easiest way to build it: I am using the import jenkins module. The server instance I create is using these credentials: server = jenkins.Jenkins(jenkins_url, username = 'my_username', password = 'my_APITOKEN') Any help will be greatly appreciated.
Authentication failed with Jenkins using python API
0
0
1
2,119
44,023,078
2017-05-17T11:05:00.000
0
0
0
0
python,authentication,jenkins
44,050,111
2
false
0
0
As far as I know, for security reasons, in Jenkins 2.x only admins are able to create jobs (to be specific, are able to send PUT requests). At least that's what I encountered using Jenkins Job Builder (also Python) and Jenkins 2.x.
2
1
0
I am trying to create a job using the python api. I have created my own config, but the authentication fails. It produces an error message: File "/usr/lib/python2.7/dist-packages/jenkins/__init__.py", line 415, in create_job self.server + CREATE_JOB % locals(), config_xml, headers)) File "/usr/lib/python2.7/dist-packages/jenkins/__init__.py", line 236, in jenkins_open 'Possibly authentication failed [%s]' % (e.code) jenkins.JenkinsException: Error in request.Possibly authentication failed [403] The config file I have created was copied from another job config file as it was the easiest way to build it: I am using the import jenkins module. The server instance I create is using these credentials: server = jenkins.Jenkins(jenkins_url, username = 'my_username', password = 'my_APITOKEN') Any help will be greatly appreciated.
Authentication failed with Jenkins using python API
0
0
1
2,119
44,023,311
2017-05-17T11:16:00.000
0
0
0
1
python,ubuntu,gtk,gtk3
44,053,546
1
true
0
1
Limiting directory changes isn't directly available in FileChooser, but there are a few ways: You can define file filters (Gtk.FileFilter) but those basically filter on the file extension (or mime type). More interesting is that, when changing the folder, a signal is emitted called 'current_folder_changed'. So, you could bind a function to that signal and take action. Mind: if you programmatically change the folder as a result of this signal, the signal will probably be called again, so you have to temporarily block the signal while doing that.
1
0
0
Can the GTK FileChooser be set to be folder-restricted? A normal FileChooser displays the whole folder/file tree starting from / (root); what I need is to allow the FileChooser to display only from the /media folder. So the top visible folder is only /media, not everything else like /home, /usr, etc. Thank you for all your kind help.
Python gtk3 filechooser restrict folder
1.2
0
0
518
44,023,863
2017-05-17T11:41:00.000
0
1
0
1
python,c++,python-3.x,c++11
44,025,218
2
false
0
1
There's POSIX popen and on Windows _popen, which is halfway between exec and system. It offers the required control over stdin and stdout, which system does not. But on the other hand, it's not as complicated as the exec family of functions.
1
0
0
I am trying to make a program in C++, but I can't finish it because in one part of the code I need to run a Python program from C++ and I don't know how to do it. I've tried many ways of doing it, but none of them worked. The code should look something like this: somethingtoruntheprogram("pytestx.py"); or something close to that. I'd prefer doing it without Python.h. I just need to execute the program, because I have redirected the Python program's output and input with sys.stdout and sys.stdin to text files, and then I need to take data from those text files and compare them. I am using Windows.
How to run a python program from c++
0
0
0
362
44,026,930
2017-05-17T13:55:00.000
0
0
1
0
python,python-3.x,python-multithreading,python-sockets
44,027,062
1
false
0
0
Yes, it is possible. Do note however that you will not gain throughput this way unless you are able to process the datagrams using a compiled extension module like NumPy or some custom logic.
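A minimal sketch of that single-thread pattern using only the stdlib: the receive loop hands each datagram to one long-lived worker thread through a queue.Queue, and the worker replies to the client. The upper-casing reply is a placeholder for real processing.

```python
import queue
import socket
import threading

inbox = queue.Queue()

def worker(sock):
    """One long-lived thread: take datagrams off the queue,
    process them, and respond to the originating client."""
    while True:
        data, addr = inbox.get()          # blocks until a datagram arrives
        sock.sendto(data.upper(), addr)   # placeholder processing + reply

def start_server(host="127.0.0.1", port=0):
    """Bind a UDP socket and start the single worker thread."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))               # port 0 = pick a free port
    threading.Thread(target=worker, args=(sock,), daemon=True).start()
    return sock

def receive_loop(sock):
    """Receive datagrams and hand them to the worker; no busy-waiting,
    because both recvfrom() and Queue.get() block."""
    while True:
        inbox.put(sock.recvfrom(4096))
```

Because the queue blocks, neither thread spins; the worker simply sleeps until a datagram of the relevant type arrives.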
1
0
0
Bit of a python noob, but I was wondering … I want to start a thread on start up and pass udp socket data to the thread when it comes in for the thread to process AND then respond to the client accordingly. All the examples I have seen so far create a thread, do something, bin it, repeat. I don’t want thousands of threads to be created, just one to handle message data of a particular type. Is this possible and does anyone know of any examples ? Thanks
Python: how to pass UDP socket data to thread for processing
0
0
1
186
44,026,952
2017-05-17T13:55:00.000
0
0
1
0
printing,jupyter-notebook,ipython-notebook,pdflatex
55,794,790
3
false
0
0
I think I've found a decent solution, as I was stuck with the same problem. For an aesthetically pleasing print-out of the Jupyter notebook (.pdf format) for study and learning (as lecture slides), I recommend taking a print-out using your web browser (Chrome: Print = Ctrl+P). Outcome: an aesthetically pleasing document containing all code and pictures as embedded within the Jupyter notebook. tl;dr: Avoid any conversion option within the Jupyter notebook; print straight out of the web browser.
1
16
0
What is the best way to print an iPython notebook (.ipynb) that contains a lot of figures/plots, photos, and code that would appear with a horizontal scroll bar? I've tried converting them to HTML, slides, PDF, etc., but none has produced a decent output. For example, the slides have super-large font/zoom such that one page has no more than 5 lines of text in it. I've tried GitPrint, but that's only good for markdown (md) files. I've tried converting ipynb to tex and using pdflatex to convert to PDF, but there are a lot of errors and I keep getting stuck with a question mark prompt (?). When I hit enter through them, the output doesn't contain the photos. So what is the best way here? I don't care about the extension, only that it looks good (like the ipynb) on paper.
Best way to print an iPython notebook
0
0
0
22,223
44,027,154
2017-05-17T14:04:00.000
1
1
0
0
testing,automation,automated-tests,appium,python-appium
44,125,346
2
false
0
0
If you use the full reset option as false in desired capabilities, then the app won't be started fresh every time. I used the following in my desired capabilities. capabilities.setCapability("fullReset", false); If you are using appium 1.5.3 GUI, check the box against No reset in iOS/Android settings.
2
0
0
I was trying to execute an Appium test suite I have created, which consists of multiple test files. Can anyone please help? I'm unable to execute the second test script after the execution of the first script. It restarts the app and starts afresh. I need to start from where it left off in the first script. I've tried the session-override flag and also tried launch_app().
How to execute multiple appium test scripts one after the other?
0.099668
0
0
1,731
44,027,154
2017-05-17T14:04:00.000
1
1
0
0
testing,automation,automated-tests,appium,python-appium
44,171,420
2
false
0
0
You need to add the above capability in the main method where you have mentioned the other capabilities, such as device name and app path, so that the main method will install the app on your device and start executing the first test method fresh, followed by the other test methods in the suite without a reset. I am using this in my automation on both iOS and Android. Uninstall the app from the test device if it is already installed before running the test.
2
0
0
I was trying to execute an Appium test suite I have created, which consists of multiple test files. Can anyone please help? I'm unable to execute the second test script after the execution of the first script. It restarts the app and starts afresh. I need to start from where it left off in the first script. I've tried the session-override flag and also tried launch_app().
How to execute multiple appium test scripts one after the other?
0.099668
0
0
1,731
44,028,186
2017-05-17T14:48:00.000
1
0
0
0
python,excel,openpyxl,xlsxwriter
45,254,364
2
true
0
0
As far as I know, there is no such feature in openpyxl at the moment. However, this can be done easily and in an optimized way with the xlsxwriter module.
1
1
0
I am looking for a method for hiding all rows in an Excel sheet using Python's openpyxl module. I would like, for example, to hide all rows from the 10th one to the end. Is it possible in openpyxl? For instance, in xlsxwriter there is a way to hide all unused rows. So I am looking for similar functionality in openpyxl, but I can not find it in the docs or anywhere else, so any help will be much appreciated. I know it can be easily done by iterating over rows, but this approach is awfully slow, so I would like to find something faster.
python openpyxl: hide all rows from to the end
1.2
1
0
413
44,033,533
2017-05-17T19:43:00.000
4
0
0
0
python,numpy
44,035,007
3
true
0
0
You can use the function np.logaddexp() to do such operations. It computes logaddexp(x1, x2) == log(exp(x1) + exp(x2)) without explicitly computing the intermediate exp() values. This avoids the overflow. Since exp(0.0) == 1, you would compute np.logaddexp(0.0, 1000.0) and get the result of 1000.0, as expected.
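A quick demonstration of the identity log(1 + exp(x)) == logaddexp(0, x), showing the naive form overflowing while the stable form stays finite:

```python
import numpy as np

x = 1000.0

# Naive version: exp(1000) overflows float64 to inf, so the log is inf too.
with np.errstate(over="ignore"):
    naive = np.log(1.0 + np.exp(x))

# Stable version: computed without materializing the huge intermediate exp.
stable = np.logaddexp(0.0, x)

print(naive)   # inf
print(stable)  # 1000.0
```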
1
2
1
I am a newbie in Python, sorry for the simple question. In the following code, I want to calculate the exponent and then take the log. Y = numpy.log(1 + numpy.exp(1000)) The problem is that when I take the exponent of 710 or larger numbers, the numpy.exp() function returns 'inf'; even if I print it as float64 it prints 'inf'. Any help regarding the problem will be appreciated.
How to deal with exponent overflow of 64float precision in python?
1.2
0
0
2,564
44,033,590
2017-05-17T19:47:00.000
0
0
0
0
vpython
44,033,675
2
false
1
1
You could change the modified attribute directly on the electron object instead of creating a new one each time. Apply the modifications to electron and put the computation inside the while loop. Is that what you meant?
1
0
0
I'm doing a project for a year 11 physics class and I'm trying to make a battery that generates electrons. This is the code: electron = sphere(radius = 1, color = color.yellow, vel = vec(-1,0,0)); while battery.voltage > 0: eb = electron.clone(pos=vec(0,0,0), vel = vec(-1,0,0)); I'm trying to create "eb" objects constantly, but eb.pos = eb.pos + eb.vel * deltat; only applies to the first electron. Is there any way to do this without making 600 different electron objects?
making objects constantly in a while loop
0
0
0
86
44,034,748
2017-05-17T21:05:00.000
1
0
0
0
python,python-3.x,pygame
44,073,263
1
false
0
1
Okay, so I figured it out. I was going about it the wrong way. pygame.mouse.get_pos() is what I was using, and that only tracks the position of the mouse on the Pygame surface created. So if there's another window there, the mouse isn't actually on the Pygame surface. I ended up installing the module "win32api" and using import win32api mouse = win32api.GetCursorPos() x1 = mouse[0] y1 = mouse[1] for the x and y coordinates of the mouse position on the computer display. I realized afterward that there were questions answering what I needed to know, I just wasn't asking quite the right question.
1
2
0
Okay, so I'm using Pygame, and I created a code that tracks the mouse on my Pygame display. However, when I pull up another window over the Pygame one, it stops tracking the mouse position. I was wondering, is there any way to continue tracking the mouse even when my Pygame window is not the active one?
Continue tracking mouse with Pygame even when Pygame window isn't active
0.197375
0
0
61
44,035,526
2017-05-17T22:06:00.000
2
0
0
0
python,pygame,artificial-intelligence,simulation,pymunk
44,059,670
1
true
0
1
Pymunk itself doesn't depend on any visualization. You move the simulation forward with the space.step method, and you can call it as many times as you want, for example 1000 times with a dt of 0.1 to move the simulation forward 100 units (seconds). If you want to see something, you have the option to read out the state and draw it at that time. The Pygame integration provided with Pymunk is just for those who want a quick and easy way to get something on screen. If you don't want anything drawn you absolutely do not need to use it. Just be aware that it is not the same to call space.step 100 times with a dt of 0.01 as to call it once with a dt of 1 (the latter will give a much less accurate simulation).
1
1
0
The question I have is: is it possible to run a Pymunk simulation without having the screen that visualizes it pop up? I'm working on a research project that involves Pymunk and Pygame. The goal is to develop an agent that can infer certain properties about physics simulations involving objects and agents within the Pymunk space. Part of the project requires a comparison of many different simulations, and the fact that a screen pops up so I can view each simulation makes the process take too much time (as I have to view each sim before being able to collect the data from it). I'd like to basically run each sim in the background as fast as possible to just collect the physical data. There is no need for me to actually visualize the simulations at certain points. Let me know if I've been clear enough or if this is a duplicate. Though I searched for an answer here, I have not found one.
Possible To Run Pymunk Simulation Without Screen (as in without actually seeing it)?
1.2
0
0
269
44,037,702
2017-05-18T02:39:00.000
4
1
0
0
python,version-control,pem
44,037,794
1
true
0
0
In most cases a .pem file contains sensitive information and is environment-specific; it should not be part of the project source code. It should be available on a secured server and downloadable with appropriate authorization.
1
0
0
I was having some SSL connectivity issues while connecting to a secured URL. Later I fixed the problem by providing the .pem file path for verification, i.e., verify="file/path.pem". My doubt is: should this file be stored in a common place on the server, or should it be part of the project source code and hence under version control? Please advise.
Python: Should I version control .pem files?
1.2
0
1
513
44,040,630
2017-05-18T06:54:00.000
18
0
0
0
python,windows,qt,python-3.6,cx-freeze
44,041,557
2
false
0
1
I solved it by copying and pasting the "platforms" folder into the .exe folder. In my case, because I have the Anaconda IDE installed, the path of this folder is Anaconda3/Library/plugins/platforms. Hope this helps you.
2
5
0
I created an .exe file with cx_freeze and copied all the .dll files I could find into the folder that contains the .exe. The problem is I can run the .exe on my computer perfectly but can't run it on another computer using the same folder. I have tried 3 different computers and all pop up the error message "This application failed to start because it could not find or load the Qt platform plugin "windows" in ""." It really confuses me why the problem exists on another computer but doesn't exist on mine.
Could not find or load the Qt platform plugin "windows" -- cx_freeze(.exe)
1
0
0
12,076
44,040,630
2017-05-18T06:54:00.000
3
0
0
0
python,windows,qt,python-3.6,cx-freeze
67,828,390
2
false
0
1
I ran into the same error and solved it with a different method than those mentioned in other posts. Hopefully this will help future readers. BUILD: Windows 10 (64bit) Minicoda (using python 3.9.4) (pkgs are from conda-forge channel) pyqt 5.12.3 VScode 1.56.2 My scenario: I was building a GUI application for some embedded work. I had two machines that were used for development (same OS and architecture), one had zero internet connection. After packaging up my environment and installing on the offline machine, I ran into the error that you got. Solution: locate the qt.conf file in your conda environment. for me: C:\Users"name"\miniconda3\envs"env_name"\qt.conf Make sure the paths are correct. I needed to update the "name" as this was left over from the old machine. Hopefully this helps someone.
2
5
0
I created an .exe file with cx_freeze and copied all the .dll files I could find into the folder that contains the .exe. The problem is I can run the .exe on my computer perfectly but can't run it on another computer using the same folder. I have tried 3 different computers and all pop up the error message "This application failed to start because it could not find or load the Qt platform plugin "windows" in ""." It really confuses me why the problem exists on another computer but doesn't exist on mine.
Could not find or load the Qt platform plugin "windows" -- cx_freeze(.exe)
0.291313
0
0
12,076
44,041,347
2017-05-18T07:31:00.000
1
0
0
0
python,machine-learning,scikit-learn,cluster-analysis,data-mining
44,055,484
2
false
0
0
To be closer to the intuition of DBSCAN you should probably only consider core points. Put the core points into a nearest-neighbor searcher. Then, for each noise point, find the nearest core point and use its cluster label.
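A brute-force numpy sketch of that idea (the names labels and core_indices mirror what scikit-learn's DBSCAN exposes as labels_ and core_sample_indices_, but nothing here depends on sklearn; a KD-tree would replace the inner loop for large data):

```python
import numpy as np

def noise_to_nearest_core(X, labels, core_indices):
    """For each noise point (label -1), return (distance to the nearest
    core point, that core point's cluster label). O(n_noise * n_core)."""
    core_pts = X[core_indices]
    core_labels = labels[core_indices]
    results = []
    for i in np.where(labels == -1)[0]:
        dists = np.linalg.norm(core_pts - X[i], axis=1)
        j = int(np.argmin(dists))
        results.append((float(dists[j]), int(core_labels[j])))
    return results
```

The returned distance is exactly the "how abnormal is this point" metric asked about in the question.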
1
2
1
I'm using clustering algorithms like DBSCAN. It returns a 'cluster' called -1 which are points that are not part of any cluster. For these points I want to determine the distance from it to the nearest cluster to get something like a metric for how abnormal this point is. Is this possible? Or are there any alternatives for this kind of metric?
sklearn: Get Distance from Point to Nearest Cluster
0.099668
0
0
2,660
44,042,709
2017-05-18T08:39:00.000
0
0
1
0
python,ssl
44,043,522
1
false
0
0
The certificate verification can be ignored by setting verify_mode to CERT_NONE (the default mode for a plain SSLContext in Python 3.6's ssl module). In this mode, only encryption of the data is performed; the identities of the server and the client are not verified. Be careful: this is insecure.
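A minimal sketch of that configuration with Python's ssl module; note that check_hostname must be disabled before verify_mode can be dropped to CERT_NONE on a default context:

```python
import ssl

# Start from a default client context, then turn identity checks off.
context = ssl.create_default_context()
context.check_hostname = False          # must be set False before CERT_NONE
context.verify_mode = ssl.CERT_NONE     # encrypt only; do not verify peers

print(context.verify_mode == ssl.CERT_NONE)  # → True
```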
1
1
0
I'm trying to establish a connection using SSL in Python (I'm using Python 3.6.1). I'm very new to SSL, so I read the ssl documentation and saw that there is a function called create_default_context that returns a new SSLContext object with default settings, and I didn't fully understand it. I want to create an SSL server (the client is in JavaScript). My question is whether I can use only the default context that I'm creating, or do I need to create a self-signed certificate and key file for the server as well?
python default context object ssl
0
0
1
1,300
44,043,171
2017-05-18T09:00:00.000
2
0
0
0
python,python-3.x,neo4j
44,043,172
1
true
0
0
This seems to be an issue with neo4j version 3.2.0. Setting cypher.default_language_version to 3.1 in neo4j.conf and restarting the server should fix it.
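For reference, the workaround amounts to one line in the server's conf/neo4j.conf, followed by a restart:

```
# conf/neo4j.conf -- force queries to run with the 3.1 Cypher compiler
cypher.default_language_version=3.1
```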
1
0
0
Getting neo4j.v1.api.CypherError: Internal error - should have used fall back to execute query, but something went horribly wrong when using python neomodel client with neo4j community edition 3.2.0 server. And the neo4j server logs has the below errors: 2017-05-16 12:54:24.187+0000 ERROR [o.n.b.v.r.ErrorReporter] Client triggered an unexpected error [UnknownError]: Internal error - should have used fall back to execute query, but something went horribly wrong, reference 4c32d6e0-a66a-4db4-830c-b8d03ce6f1e3. 2017-05-16 12:54:24.187+0000 ERROR [o.n.b.v.r.ErrorReporter] Client triggered an unexpected error [UnknownError]: Internal error - should have used fall back to execute query, but something went horribly wrong, reference 4c32d6e0-a66a-4db4-830c-b8d03ce6f1e3. Internal error - should have used fall back to execute query, but something went horribly wrong org.neo4j.cypher.internal.ir.v3_2.exception.CantHandleQueryException: Internal error - should have used fall back to execute query, but something went horribly wrong
neo4j.v1.api.CypherError: Internal error - should have used fall back to execute query, but something went horribly wrong
1.2
1
0
213
44,043,579
2017-05-18T09:18:00.000
2
0
0
1
python,celery,celery-task,celeryd
44,094,110
1
true
0
0
Using CELERY_ACKS_LATE = True solved the problem.
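For reference, this is a one-line Celery configuration change (Celery 3.x setting name); with late acks, a task is acknowledged only after it finishes, so a task interrupted by the warm shutdown is redelivered instead of being left in STARTED:

```python
# celeryconfig.py (Celery 3.x name; newer versions call it task_acks_late)
CELERY_ACKS_LATE = True
```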
1
1
0
Setup: Celery 3.1, broker=RabbitMQ, backend=Redis. Scenario: while a task is in state=STARTED (running), my worker is restarted. I'm getting the worker: Warm shutdown (MainProcess) message (stdout). The worker successfully restarted, but the task is stuck in the STARTED state (monitored via Flower) and nothing happens. Desired state: I wish that the stuck task would run again (or fail before the shutdown), and not be ignored and left as 'STARTED' forever.
worker: Warm shutdown (MainProcess) after task started
1.2
0
0
4,467
44,044,773
2017-05-18T10:06:00.000
0
0
0
0
python,tweepy
44,423,577
1
false
0
0
The Twitter API doesn't allow that. You'll have to check, for each returned tweet, whether or not it actually contains one of your exact phrases.
1
0
1
I can't get tweepy filtering to quite work how I want to. stream.filter(track=['one two' , 'three four']) I want to retweet based on a specific two word set i.e. "one two" but I'm getting retweets where the tweet has those two words, but not in order and separated i.e. "three two one" or "one three two" etc. I want tweets which contain my phrase but in order i.e. "one two three" or "three one two" or "one two" etc.
Filtering in tweepy - exact phrase
0
0
0
990
44,045,913
2017-05-18T10:59:00.000
0
0
0
0
python,machine-learning,statistics,regression,data-science
44,049,390
2
false
0
0
Why did customer service calls drop last month? It depends on the type and features of the data you have to analyze and explore. One of the basic things is to look at the correlation between each feature and the target variable, to check whether you can identify any feature that correlates with the drop in calls. So exploring different statistics might answer this question better than prediction models. It is also always good practice to analyze and explore the data before you even start working on prediction models, as it is often necessary to improve the data (scaling, removing outliers, handling missing data, etc.) depending on the prediction model you choose. Should we go with this promotion model or another one? This question can be answered with the regression model (or any other prediction model) you designed for this data. Such a model would help you predict the sales/outcome for each promotion, given that promotion's input features.
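The correlation check suggested for the "why did calls drop" question can be sketched in plain Python (the data below is entirely hypothetical; in practice you would use pandas' DataFrame.corr() on your real features):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly data: support calls fall as self-service adoption rises,
# so we expect a strongly negative correlation between the two columns.
self_service_users = [100, 150, 200, 260, 330]
support_calls = [500, 470, 430, 380, 320]
print(round(pearson(self_service_users, support_calls), 3))
```

A feature with a correlation near -1 or +1 like this is a natural starting point for explaining the drop, whereas features with correlations near 0 can usually be set aside first.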
1
0
1
Thanks for your help on this. This feels like a silly question, and I may be overcomplicating things. Some background information: I just recently learned some machine learning methodologies in Python (scikit-learn and some statsmodels), such as linear regression, logistic regression, KNN, etc. I can work through the steps of prepping the data in pandas data frames and transforming categorical data to 0's and 1's. I can also load those into a model (like logistic regression in scikit-learn). I know how to train and test it (using CV, etc.) and some fine-tuning methods (grid search, etc.). But this is all in the scope of predicting outcomes on new data. I mainly focused on learning to build a model to predict on new X values, and on testing that model to confirm accuracy/precision. However, now I'm having trouble identifying and executing the steps for the OTHER kinds of questions that, say, a regression model can answer, like: Why did customer service calls drop last month? Should we go with this promotion model or another one? Assuming we have all our variables/predictor sets, how would we answer those two questions using any supervised machine learning model, or just a stats model in the statsmodels package? Hope this makes sense. I can certainly go into more detail.
Answering business questions with machine learning models (scikit or statsmodels)
0
0
0
200
44,046,665
2017-05-18T11:36:00.000
1
0
0
1
python,filesystems,inotify,watchdog,python-watchdog
46,293,315
1
false
0
0
Yes it does. Check how the external program is creating the file. In my case, the external program was creating a file whose name started with '.' and ended with '.tmp', and when it was done writing it moved the temporary file to the actual filename ending in '.json' (for which I had set up the watcher). Only the on_moved event is triggered in this case. Overriding the on_moved handler will solve the issue here.
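The check an overridden on_moved handler makes can be sketched as a plain function (names here are illustrative; in watchdog you would subclass FileSystemEventHandler and apply this filter to event.dest_path):

```python
import os

def should_process_moved_file(dest_path, watched_ext=".json"):
    """Decide whether a rename/move event points at a finished file.

    Illustrative helper: programs that write ".foo.json.tmp" and then rename it
    to "foo.json" fire a *moved* event (not a created event) for the final name,
    so an on_moved override would apply exactly this kind of filter to
    event.dest_path before handing the file to the downstream scripts.
    """
    name = os.path.basename(dest_path)
    return name.endswith(watched_ext) and not name.startswith(".")

print(should_process_moved_file("/data/.record.json.tmp"))  # False: still a temp file
print(should_process_moved_file("/data/record.json"))       # True: final name
```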
1
0
0
Does watchdog's "trigger event on file creation" depend on anything specific to how the files are created? I'm finding a discrepancy between when files are saved into a directory by an external program and when they are copied into the directory. I'm using watchdog to monitor a directory, trigger off new files created in that directory, and then it runs a bunch of other scripts for those files. However I'm having a strange problem. I'm monitoring one directory where new files are saved into it by an external program over time. Watchdog does not trigger when these files appear in the directory. However I'm running a separate instance of the program which monitors a 2nd directory, and when I copy the files into this directory, watchdog triggers as expected and runs the code. I'm running this on a Linux machine. Any ideas? Thanks.
Python watchdog issue: not triggering events for files saved by external software
0.197375
0
0
885
44,047,544
2017-05-18T12:17:00.000
0
0
0
0
python
44,048,354
3
false
0
0
I had a similar problem and solved it by installing an older pandas version: pip install pandas==0.19.2
1
1
1
Trying to import Keras 2.0.4 with Tensorflow 1.0.1 as backend on Windows 10, but I got the following message: AttributeError: module 'pandas' has no attribute 'computation' I've recently upgraded my pandas to version 0.20.1; is that the reason I failed to import Keras? There is a lot more information available in the error message; if you want to know about it, just let me know.
Trying to import keras but got an error
0
0
0
1,406
44,048,555
2017-05-18T13:03:00.000
3
0
0
1
python,docker,flask
44,055,755
1
true
1
0
By default Docker does not restrict memory usage by containers. However, on Mac and Windows installs, Docker runs in a VM, and that VM is limited in how much memory it takes from your OS. You can adjust this setting in the Docker preferences for Mac and Windows.
1
0
0
I have a Python Flask app which loads a large file (3.5 GB) into memory. When I run the app in Docker, it doesn't respond to requests, although the container somehow keeps running. When I run my app without loading that large file into memory, it responds to requests.
How to allocate more memory for docker container?
1.2
0
0
808
44,049,685
2017-05-18T13:52:00.000
1
0
0
1
python,process,psutil
55,455,049
1
true
0
0
I had exactly the same problem; running Python as Administrator solved the issue (on Win10 with Python 3.7). Create a Windows shortcut to python and enable "Run as administrator" in the Advanced settings of the shortcut.
1
2
0
How can I set real-time priority with psutil? When I try process.nice(psutil.REALTIME_PRIORITY_CLASS), the priority ends up as HIGH_PRIORITY_CLASS instead of REALTIME_PRIORITY_CLASS.
How to set real time priority with psutil
1.2
0
0
1,253
44,051,051
2017-05-18T14:48:00.000
0
0
0
0
python,nlp,gensim,word2vec
44,051,350
1
false
0
0
EDIT: this was intended as a comment; I don't know how to change it now, sorry. "correlation between the word occurrence-frequency and vector-length" - I don't quite follow: aren't all your vectors the same length? Or are you not referring to the embedding vectors?
1
1
1
I am using gensim version 0.12.4 and have trained two separate word embeddings using the same text and the same parameters. After training I am calculating the Pearson correlation between word occurrence frequency and vector length. One model I trained using save_word2vec_format(fname, binary=True) and then loaded using load_word2vec_format; the other I trained using model.save(fname) and then loaded using Word2Vec.load(). I understand that the word2vec algorithm is non-deterministic so the results will vary, but the difference in the correlation between the two models is quite drastic. Which method should I be using in this instance?
Gensim save_word2vec_format() vs. model.save()
0
0
0
3,611
44,052,893
2017-05-18T16:11:00.000
1
0
1
0
python,arrays,string
44,053,256
2
false
0
0
First Part: @njzk2 is exactly right. Simply changing l.strip().split(' ') to l.strip().split() (splitting on any run of whitespace rather than on a single literal space) will correct the error, and you will see the following output for f_values: [['-91.', '0.444253325'], ['-90.', '0.883581936'], ['-89.', '-0.0912338793']] The output for newarray then shows float values rather than strings: [[-91.0, 0.444253325], [-90.0, 0.883581936], [-89.0, -0.0912338793]] Second Part: For the second part of the question ("if negative, add 512"), a simple loop is clear and straightforward, and I'm a big believer in clear, readable code. For example: for items in newarray: if items[0] < 0: items[0] += 512.00 When we print newarray after the loop, we see the following: [[421.0, 0.444253325], [422.0, 0.883581936], [423.0, -0.0912338793]]
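Putting all the pieces together (reading, float conversion, the +512 adjustment, and the reorder step the asker mentioned) might look like the sketch below, where io.StringIO stands in for the real "binfixtest.composite" file:

```python
import io

# Stand-in for open("binfixtest.composite"); real code would open the file instead.
data = io.StringIO("-91. 0.444253325\n-90. 0.883581936\n-89. -0.0912338793\n")

# split() with no argument handles any run of whitespace between the two columns.
rows = [[float(v) for v in line.split()] for line in data if line.strip()]

# If the first column is negative, add 512.
for row in rows:
    if row[0] < 0:
        row[0] += 512.0

# Reorder the rows numerically by the first column.
rows.sort(key=lambda r: r[0])

print(rows)
```

From here, writing the result back out is a matter of looping over rows and formatting each pair into a line.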
1
1
1
Let me start by saying that I know nothing about Python, but I am trying to learn(mostly through struggling it seems). I've looked around this site and tried to cobble together code to do what I need it to, but I keep running into problems. Firstly, I need to convert a file of 2 columns and 512 rows of strings to floats then put them in a 512x2 array. I check the first column (all rows) for negative values. If negative, add 512. Then I need to reorder the rows in numerical order and write/save the new array. On to my first problem, converting to floats and putting the floats into an array. I have this code, which I made from others' questions: with open("binfixtest.composite") as f: f_values = map(lambda l: l.strip().split(' '), f) print f_values newarray = [map(float, v) for v in f_values] Original format of file: -91. 0.444253325 -90. 0.883581936 -89. -0.0912338793 New format of f_values: ['-91. 0.444253325'], ['-90. 0.883581936'], ['-89. -0.0912338793'] I'm getting the error: Traceback (most recent call last): File "./binfix.py", line 10, in <module> newarray = [map(float, v) for v in f_values] ValueError: invalid literal for float(): -91. 0.444253325 which I can't seem to fix. If I don't convert to float, when I try to add 512.0 to negative rows it gives me the error TypeError: cannot concatenate 'str' and 'float' objects Any help is most definitely appreciated as I am completely clueless here.
Python: Converting string to floats, reading floats into 2D array, if/then, reordering of rows?
0.099668
0
0
352
44,052,970
2017-05-18T16:14:00.000
0
0
1
0
python,dictionary,set,unique
44,053,477
1
true
0
0
Yes. The set datatype is based on (implemented using) the dict datatype. But this has nothing to do with the {} notation. The {} notation for dicts has been part of Python since Python 1.5.2 (probably before that, but that was before my time). The {} notation for sets is very new.
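The difference in the {} notation is easy to demonstrate; note in particular that an empty {} is always a dict, never a set:

```python
empty = {}            # {} alone is a dict literal (and always has been)
d = {"a": 1}          # key: value pairs make a dict
s = {1, 2, 2, 3}      # bare values make a set (newer syntax); duplicates collapse

print(type(empty).__name__)  # dict
print(type(s).__name__)      # set
print(sorted(s))             # [1, 2, 3]
print(set())                 # set() is the only way to write an empty set
```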
1
0
0
Are the dictionary and set datatypes somehow related in Python? Considering that they both have no order, take unique values, and use "{ }" for their representation.
Are dictionary and set datatypes somehow related in python?
1.2
0
0
38
44,055,109
2017-05-18T18:13:00.000
0
0
0
1
python,cmake
44,056,709
1
true
0
0
Any file should be explicitly mentioned in CMakeLists.txt in some way for CMake to track its changes. That is why using the GLOB() command to collect sources is not recommended: CMake will not automatically detect that a new source file has been added.
1
0
0
I have a sub-directory in which I have my Python files for compilation. I added a new file and accidentally forgot to add it to CMakeLists.txt. The problem is that the build was still successful. My question: does CMake automatically detect all the files in the indicated sub-directory, or does it only detect files that have dependencies on other files added to CMakeLists.txt? Thanks.
Cmake files detection
1.2
0
0
145
44,057,174
2017-05-18T20:25:00.000
0
0
1
0
python-2.7,pyinstaller,py2exe,cx-freeze,six
44,057,428
2
false
0
0
When you create a .exe file using cx_freeze, it kind of compiles all the needed libraries into the .exe folder. You probably had to configure a setup file for cx_freeze to be able to create the .exe, right? There you must "tell" cx_freeze which libraries are going to be needed when someone runs the program. Keep in mind that when you create a .exe you don't need to have Python or six installed to run it.
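For example, a cx_Freeze setup script typically names the needed libraries via a build_exe options dict. A sketch of just that dict follows; the "excludes" entry and the exact module list are assumptions about the project, and the real script would pass this dict to cx_Freeze's setup() call:

```python
# Sketch of the options a cx_Freeze setup.py would pass via
# cx_Freeze.setup(..., options=build_options). Listing "six" under
# "packages" forces it to be bundled into the frozen .exe, which is
# what fixes "ImportError: No module named six" at launch time.
build_options = {
    "build_exe": {
        "packages": ["six"],          # force-include six in the frozen app
        "excludes": ["matplotlib"],   # example exclusion (assumption)
    }
}
print("six" in build_options["build_exe"]["packages"])
```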
1
1
0
I've tried making an exe from a program using py2exe, cx_freeze and pyinstaller, all of which give me the error 'ImportError: No module named six' when I go to launch the .exe. The .exe is able to be created. I've looked through the forums and all of them say to pip install six (it's already installed). I've tried uninstalling and re-installing six. One post mentioned uninstalling matplotlib, so I did that. When I installed pyinstaller, one of the requirements was that six be installed! So this is very baffling.
When I try to compile an .exe I get ImportError: No module named six
0
0
0
1,926