Dataset schema (column: type, observed range or string length):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 or 1
Python Basics and Environment: int64, 0 or 1
System Administration and DevOps: int64, 0 or 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 or 1
GUI and Desktop Applications: int64, 0 or 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 or 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 or 1
Networking and APIs: int64, 0 or 1
ViewCount: int64, 8 to 6.81M
29,355,581
2015-03-30T20:22:00.000
0
0
1
0
python,python-3.x,installation,64-bit
44,684,580
6
false
0
0
I had exactly the same problem. In my case I had additionally removed the c:\python27 and c:\python36 directories, so the associated installers got stuck between Install and Uninstall and could not completely repair the installation (the /Scripts subdirectory was missing and python.exe reported an error about a missing encodings module). I found a solution: it seems the PYTHONHOME and PYTHONPATH environment variables (and maybe PATH too) were interfering with the Python installation process. Because I could not run Uninstall from the Windows Uninstall list in the Control Panel, I did this: Clean all Python path occurrences out of the PATH environment variable. Remove the PYTHONHOME and PYTHONPATH environment variables. Restart Windows Explorer if the environment variables are not updated (the console command "set PY" must return an empty list). Run Repair from the python-3.4.4*.exe / python-2.7.11*.exe executables themselves (download them if you have not already). The icons in the Windows Uninstall list in the Control Panel will reset to their original state for a repaired Python installation. Run Uninstall from the same executables or from the Windows Uninstall list in the Control Panel. That did the trick! If you still need both versions of Python installed, try installing the older version before the newer one; that seems to matter too.
3
5
0
Trying to install Python 3.4.3 64-Bit and it gives me the following error: 'There is a problem with this Windows Installer package. A program required for this install to complete could not be run. Contact your support or package vendor' I have no bloody idea what this means so please help. Thanks in advance I'm the admin on my computer and have all permissions My windows is 64bit and is Windows 8.1
python 3.4.3 64bit a program required for this install could not be run
0
0
0
6,196
29,355,581
2015-03-30T20:22:00.000
10
0
1
0
python,python-3.x,installation,64-bit
29,614,069
6
false
0
0
I had the same phenomenon occur when trying to clean up (uninstall various versions of Python and perform a clean install of 3.4.3) on my Windows 7 64-bit laptop. Unfortunately, I cannot tell you which "program required for this install to complete could not be run". Repeated attempts to "install for all users" produced the same "could not be run" (followed by a rollback of the install). Just before getting out Orca and diving into the innards of the MSI file, I attempted an "install just for me", and the install completed. I am, in fact, the only (human) user of this computer. There is another user account that was created during a cygwin setup, and access to some aspect of that user's profile/resources may have been the issue. If you are installing Python for your own use, and not as a "platform-wide" resource for other users as well, you might try installing "just for me".
3
5
0
Trying to install Python 3.4.3 64-Bit and it gives me the following error: 'There is a problem with this Windows Installer package. A program required for this install to complete could not be run. Contact your support or package vendor' I have no bloody idea what this means so please help. Thanks in advance I'm the admin on my computer and have all permissions My windows is 64bit and is Windows 8.1
python 3.4.3 64bit a program required for this install could not be run
1
0
0
6,196
29,356,269
2015-03-30T21:05:00.000
50
0
1
0
python,matplotlib,spyder
46,616,236
5
false
0
0
Go to Tools >> Preferences >> IPython console >> Graphics >> Backend, change "Inline" to "Automatic", and click "OK". Restart the kernel in the console, and the plot will appear in a separate window.
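If you prefer to switch from code rather than the preferences dialog, here is a hedged sketch (assumes an IPython console, as Spyder provides; the "qt" backend name may vary per install):
from IPython import get_ipython
ip = get_ipython()
ip.run_line_magic("matplotlib", "qt")      # figures now open in separate, interactive windows
# ... create the interactive 3D figure here ...
ip.run_line_magic("matplotlib", "inline")  # later figures render inline again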
1
67
1
When I use Matplotlib to plot some graphs, the default inline drawing is usually fine. However, when I draw some 3D graphs, I'd like to have them in a separate window so that interactions like rotation can be enabled. Can I configure in Python code which figure to display inline and which one to display in a new window? I know that in Spyder you can click Tools, Preferences, IPython Console, Graphics, and under Graphics Backend select "automatic" instead of "inline". However, this makes all the figures open in new windows, which can be messy when I have a lot of plots. So I want only the 3D plots in new windows, while all the other 2D plots remain inline. Is it possible at all? Thanks!
Plot inline or a separate window using Matplotlib in Spyder IDE
1
0
0
218,892
29,357,420
2015-03-30T22:24:00.000
0
0
0
1
python,google-cloud-storage,gsutil
59,030,743
6
false
0
0
If you want to act on the result of that check (for example, load a BigQuery table only if there are parquet files in a directory): gsutil -q stat gs://dir/*.parquet; if [ $? == 0 ]; then bq load ... ; fi
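For reference, a rough Python-side sketch of the same existence check (assumes the google-cloud-storage client library is installed and credentials are configured; bucket and object names are taken from the question):
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("main-bucket")
exists = bucket.blob("sub-directory-bucket/object1.gz").exists()
print(exists)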
1
18
0
I have a GCS bucket containing some files in the path gs://main-bucket/sub-directory-bucket/object1.gz I would like to programmatically check if the sub-directory bucket contains one specific file. I would like to do this using gsutil. How could this be done?
Gsutil - How can I check if a file exists in a GCS bucket (a sub-directory) using Gsutil
0
0
0
24,723
29,358,494
2015-03-31T00:14:00.000
0
0
0
0
java,python,hadoop,mapreduce,apache-spark
29,516,550
1
false
0
0
You can use the pairRDD.countByKey() function to count rows according to their keys.
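A minimal PySpark sketch of the idea (assumes a running SparkContext named sc; the data is made up):
pairs = sc.parallelize([("a", 1), ("b", 1), ("a", 1)])
counts = pairs.countByKey()  # returned to the driver as a dict-like object
print(dict(counts))          # {'a': 2, 'b': 1}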
1
0
1
We need to get the count of each key (the keys are not known before executing) and do some computation dynamically in each Mapper. The key count could be global or per Mapper. What is the best way to implement that? In Hadoop this would be similar to an aggregator function. An accumulator in Spark needs to be defined before the Mapper jobs run, but we do not know which keys there are or how many.
Get the count of each key in each Mapper or globally in Spark MapReduce model
0
0
0
110
29,368,155
2015-03-31T12:00:00.000
0
0
1
0
python,python-imaging-library,pillow
29,368,336
2
false
0
1
You can't. PIL deals with image manipulations in memory. There's no way of knowing the size it will have on disk in a specific format. You can save it to a temp file and read the size using os.stat('/tmp/tempfile.jpg').st_size
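A variation of the same idea that stays entirely in memory, using io.BytesIO instead of a temp file (the JPEG format choice and the stand-in image are assumptions):
import io
from PIL import Image

resized_image = Image.new("RGB", (100, 200))  # stand-in for the question's image
buf = io.BytesIO()
resized_image.save(buf, format="JPEG")
print(buf.tell())  # encoded size in bytes, no disk round-trip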
1
2
0
resized_image = Image.resize((100,200)) Image is the Python-Pillow Image class, and I've used the resize function to resize the original image. How do I find the new file size (in bytes) of resized_image without having to save it to disk and read it back?
How to get image size in python-pillow after resize?
0
0
0
1,687
29,371,570
2015-03-31T14:49:00.000
0
0
0
0
python,sql-server,pyodbc
29,679,132
2
true
0
0
Use IronPython. It allows direct access to the .net framework, and therefore you can build a DataTable object and pass it over.
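A minimal IronPython sketch of building such a DataTable (an untested outline; the table and column names are placeholders):
import clr
clr.AddReference("System.Data")
from System.Data import DataTable
from System import Int32

table = DataTable("MyTableParam")
table.Columns.Add("Id", Int32)
row = table.NewRow()
row["Id"] = 1
table.Rows.Add(row)
# `table` can then be passed as a table-valued parameter through ADO.NET.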
1
1
0
MS SQL Server supports passing a table as a stored-procedure parameter. Is there any way to utilize this from Python, using PyODBC or pymssql?
Can you pass table input parameter to SQL Server from Python
1.2
1
0
830
29,372,365
2015-03-31T15:25:00.000
0
0
0
0
python,mysql
29,372,847
2
false
0
0
The error tells you that 'test_user' connecting from machine 'machine02' is not allowed. Probably user 'test_user' is registered in the 'mysql.user' table with 'localhost' as the connection host. Check it with a query like this: select host, user from mysql.user;
2
0
0
I am using db = MySQLdb.connect(host="machine01", user=local_settings.DB_USERNAME, passwd=local_settings.DB_PASSWORD, db=local_settings.DB_NAME) to connect to a DB, but I am doing this from machine02 and I thought this would still work, but it does not. I get _mysql_exceptions.OperationalError: (1045, "Access denied for user 'test_user'@'machine02' (using password: YES)") as a result. However, if I simply ssh over to machine01 and perform the same query, it works just fine. Isn't the point of host to be able to specify where the MySQL db is and be able to query it from any other host instead of having to jump on there to make the query?
Query MySQL db from Python returns "Access Denied"
0
1
0
41
29,372,365
2015-03-31T15:25:00.000
0
0
0
0
python,mysql
29,372,390
2
true
0
0
Make sure your firewall isn't blocking port 3306.
2
0
0
I am using db = MySQLdb.connect(host="machine01", user=local_settings.DB_USERNAME, passwd=local_settings.DB_PASSWORD, db=local_settings.DB_NAME) to connect to a DB, but I am doing this from machine02 and I thought this would still work, but it does not. I get _mysql_exceptions.OperationalError: (1045, "Access denied for user 'test_user'@'machine02' (using password: YES)") as a result. However, if I simply ssh over to machine01 and perform the same query, it works just fine. Isn't the point of host to be able to specify where the MySQL db is and be able to query it from any other host instead of having to jump on there to make the query?
Query MySQL db from Python returns "Access Denied"
1.2
1
0
41
29,378,991
2015-03-31T21:33:00.000
0
1
0
0
python,optimization,routing
29,379,456
2
false
0
0
I used networkx the other day, and it was superb. Very easy to use, and very quick. So you'll need to get your data into some kind of usable format, and then run your algorithms through this. Python is often a good choice for scripting and pulling together pieces of data and analysing them!
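A tiny networkx sketch of the shortest-path piece (node names and weights are made up):
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 4.0), ("B", "C", 1.5), ("A", "C", 7.0)])
print(nx.shortest_path(G, "A", "C", weight="weight"))  # ['A', 'B', 'C']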
1
0
0
I want to create a vehicle routing optimization program. I will have multiple vehicles travelling and delivering items by finding the shortest path from A to B. At first I will simply output results. I may later create a visual representation of the program. It has been suggested to me that I will find it easiest to do this in Python. I have to do this task, but it seems very daunting. I am not the best programmer yet, but not a beginner either, I am good at maths and a quick learner. Any advice on how to break down this task would be really helpful. Should I use Python? Any Python modules that will be particularly suited to this task?
How to create an automated vehicle routing simulation?
0
0
0
1,138
29,381,646
2015-04-01T02:04:00.000
0
0
1
0
python
29,402,494
2
false
0
0
In addition to the answer by @wim, it is worth noting that the object has already been created by the time __init__ is called, i.e. __init__ is not a constructor. Furthermore, __init__ methods are optional: you do not have to define one. Finally, an __init__ method is defined first only by convention, i.e. it could be defined after any of the other methods.
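A small illustration of that point: __new__ creates the instance, and by the time __init__ runs, the object already exists:
class Point:
    def __new__(cls, *args):
        print("__new__: creating the object")
        return super().__new__(cls)

    def __init__(self, x, y):
        print("__init__: object already exists, just initializing")
        self.x, self.y = x, y

p = Point(1, 2)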
1
1
0
Is __init__ in python a constructor or a method? Somewhere it says constructor and somewhere it says method, which is quite confusing.
Is this a constructor or a method
0
0
0
85
29,381,919
2015-04-01T02:34:00.000
13
0
1
0
python,import,file-not-found,win64,enchant
54,417,639
8
false
0
0
Resolved: On Win7-64 I ran pip3 install pyenchant==1.6.6 which seems to be the latest version of PyEnchant that still shipped with Win-64 binaries. Newer versions did not install for me, but this one did.
4
24
0
The question is why I see the error message in the title when trying to import enchant. I am using Win64.
ImportError: The 'enchant' C library was not found. Please install it via your OS package manager, or use a pre-built binary wheel from PyPI
1
0
0
28,732
29,381,919
2015-04-01T02:34:00.000
3
0
1
0
python,import,file-not-found,win64,enchant
33,700,617
8
false
0
0
For me, the problem I ran into was that I had an old version of pip. I installed the latest version and was able to download the pyenchant library. pip install -U pip
4
24
0
The question is why I see the error message in the title when trying to import enchant. I am using Win64.
ImportError: The 'enchant' C library was not found. Please install it via your OS package manager, or use a pre-built binary wheel from PyPI
0.07486
0
0
28,732
29,381,919
2015-04-01T02:34:00.000
0
0
1
0
python,import,file-not-found,win64,enchant
70,978,138
8
false
0
0
I fixed this on Colab with: !apt update !apt install enchant --fix-missing After fixing the missing files, you can import enchant.
4
24
0
The question is why I see the error message in the title when trying to import enchant. I am using Win64.
ImportError: The 'enchant' C library was not found. Please install it via your OS package manager, or use a pre-built binary wheel from PyPI
0
0
0
28,732
29,381,919
2015-04-01T02:34:00.000
24
0
1
0
python,import,file-not-found,win64,enchant
30,007,220
8
false
0
0
On Ubuntu, run sudo apt-get install libenchant1c2a
4
24
0
The question is why I see the error message in the title when trying to import enchant. I am using Win64.
ImportError: The 'enchant' C library was not found. Please install it via your OS package manager, or use a pre-built binary wheel from PyPI
1
0
0
28,732
29,382,745
2015-04-01T04:15:00.000
0
0
1
0
python,multithreading,console
29,382,835
1
false
0
0
You could replace sys.stdout with a custom class that filters output based on thread.
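A minimal sketch of that approach (the thread name "checker" is an assumption; name your background thread accordingly):
import sys
import threading

class ThreadFilteredStdout:
    def __init__(self, real, silenced=("checker",)):
        self.real = real
        self.silenced = silenced

    def write(self, text):
        # drop output coming from silenced threads, pass everything else through
        if threading.current_thread().name not in self.silenced:
            self.real.write(text)

    def flush(self):
        self.real.flush()

sys.stdout = ThreadFilteredStdout(sys.stdout)
# threads created with threading.Thread(..., name="checker") now print nothing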
1
0
0
BLUF: Is there a way to suppress console output just for a single thread? I have a background thread that does system checks which include a bunch of pings to various things. These ping checks take awhile and then the results are flushed to the console. I don't want them to display at all, so I found that you can set stdout to devnull. The problem there is that the system checks are in a while True loop (with a 60 second sleep between loops) and the checks themselves take up a good portion of that 60 seconds so I'm afraid suppressing console output during these functions will result in no console output for a good percentage of total runtime for the entire program -- console output I might later want to see from the main thread.
Python -- suppress console output for a constantly running thread, not the entire program
0
0
0
596
29,383,061
2015-04-01T04:49:00.000
0
0
0
1
python-3.x,root
29,383,623
1
false
0
0
Couldn't you just launch the program with sudo (running as root is not recommended)? I am not sure you can run only part of the code as root. Alternatively, split the program into two parts with some messaging scheme between them.
1
0
0
I want to write a simple program to manage the screen brightness on my laptop, running Python3 under Ubuntu Linux. To directly change the screen brightness levels, I can deal with a single file in the folder /sys/class/backlight/acpi_video0, called brightness. (the maximum brightness is another text file called max_brightness, so it's easy to find) The problem is, however, that I want to grant my program partial access to root permissions, just enough to modify the files in that folder (though, I'd like it to be flexible enough to choose any folder in /sys/class/backlight/, in case it's not named acpi_video0), but not actually run as root, as that may cause problems as it tries to access GTK for a graphical interface. How do I grant a Python3 program partial root permissions?
Root access in Python3
0
0
0
518
29,383,893
2015-04-01T06:05:00.000
0
0
0
0
python-2.7,cron,crontab,cron-task
29,386,729
2
false
0
0
You could use the 'at' command to set a new job for the next time you need to run it. So if your scraper tells you the next update is in 7 minutes you can set the 'at' command to run 'now + 6 minutes'
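A rough sketch of driving that from Python (assumes the at daemon is installed; the script path is a placeholder):
import subprocess

def schedule_next_run(minutes, command="python /path/to/scraper.py\n"):
    # `at` reads the command to run from stdin
    p = subprocess.Popen(["at", "now + %d minutes" % minutes],
                         stdin=subprocess.PIPE)
    p.communicate(command.encode())

schedule_next_run(6)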
1
0
0
Here's the idea: there's a website that I want to scrape. It updates every 10 minutes, but sometimes gets out of sync. It's important that the information I scrape is from just before it updates. Each time I check the site, I can scrape the 'time remaining' until the next update. Is there a way to make a cron job where, after each iteration, I can set the time to wait before running the time (t+1) iteration based on some variable from the time (t) iteration? I'm not particularly familiar with cron jobs; my current super-rough implementation just uses sleep. Not ideal.
Variable time between cron jobs (or similar implementation)
0
0
1
633
29,384,494
2015-04-01T06:46:00.000
0
0
0
0
python,geometry,intersection
29,385,747
3
false
0
0
In general I would recommend first making your algorithm work and then making it faster if you need to. You would be amazed by how fast Python can be in combination with a set of carefully selected libraries. For your problem I would do the following: 1) Install a set of libraries that make your life easier: Matplotlib for 2D plotting of the rectangle, the circles and the trajectory; Numpy for general-purpose array manipulation; optionally Scipy for its KDTree support (nearest-neighbor search). 2) Start implementing your problem: a) create a rectangle and visualize it using Matplotlib; b) create a set of circles and plot them within the rectangular area; c) create a trajectory and plot it within the same area. Now the more difficult part starts. The way forward depends a little on how your trajectory is defined. For example, if your trajectory consists of line segments, you could calculate the intersection point between a circle and a line segment analytically. Three outcomes are possible: no intersection, one intersection (the line touches the circle), and two intersections. If your trajectory is more complex, you could discretize it by generating many points along it and then calculate whether each point is on the edge of one of the circles. You have to be a little clever, though, about how the three possible outcomes can be identified, because the points along the trajectory are finite. Another option would be to also discretize the points on the edges of the circles. This reduces the problem, for a large part, to nearest-neighbor search, for which you can use the Scipy KDTree class.
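A hedged sketch of that KDTree route (random made-up data; one shared radius keeps it simple):
import numpy as np
from scipy.spatial import cKDTree

centers = np.random.rand(50, 2)       # circle centres
radius = 0.05                         # one common radius, for simplicity
tree = cKDTree(centers)

trajectory = np.random.rand(1000, 2)  # discretised trajectory points
dist, idx = tree.query(trajectory)    # distance to the nearest centre per point
on_edge = np.abs(dist - radius) < 1e-3
print(trajectory[on_edge])            # points (approximately) on a circle edge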
2
0
1
I am new to coding. I have an object that keeps moving in a rectangular area, and I also have a lot of circles in this area. I want to get all the intersection points between the trajectory and all the circles. Since the object moves step by step, I was thinking that I could calculate the distance between the position of the object and the centre of each circle and compare that distance with the radius of the circle. But I think this would take a lot of computation, as you need to calculate the distances at every step. Do you have any good idea or reference? By the way, I am working in Python. Thank you. As I do not have enough reputation, I cannot add a picture of the problem.
The intersection between a trajectory and the circles in the same area
0
0
0
1,435
29,384,494
2015-04-01T06:46:00.000
1
0
0
0
python,geometry,intersection
29,388,615
3
true
0
0
Let a be a number somewhere between the radius and the diameter of the larger circles (if they have different radii). Generate a grid of square tiles of side length a, so that grid(i,k) is the square from (i*a, k*a) to ((i+1)*a, (k+1)*a). Each tile of the grid holds a list of pointers to circles, or indices into the circle array. Register each circle with every tile it intersects; that should be fewer than 4 tiles per circle. Now, to test a point (x,y) of the trajectory for circle intersections (or containment inside the corresponding disk), you only need to test it against the list of circles in tile ((int)(x/a), (int)(y/a)).
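A minimal runnable sketch of that tile grid (the circle data is made up; a is one diameter here, consistent with the answer):
from collections import defaultdict

circles = [(0.5, 0.5, 0.2), (1.4, 0.3, 0.25)]  # (cx, cy, r)
a = 2 * max(r for _, _, r in circles)           # tile side length

grid = defaultdict(list)
for cx, cy, r in circles:
    # register the circle with every tile its bounding box touches
    for i in range(int((cx - r) // a), int((cx + r) // a) + 1):
        for k in range(int((cy - r) // a), int((cy + r) // a) + 1):
            grid[(i, k)].append((cx, cy, r))

def circles_near(x, y):
    # only these circles need an exact distance test for point (x, y)
    return grid[(int(x // a), int(y // a))]

print(circles_near(0.6, 0.6))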
2
0
1
I am new to coding. I have an object that keeps moving in a rectangular area, and I also have a lot of circles in this area. I want to get all the intersection points between the trajectory and all the circles. Since the object moves step by step, I was thinking that I could calculate the distance between the position of the object and the centre of each circle and compare that distance with the radius of the circle. But I think this would take a lot of computation, as you need to calculate the distances at every step. Do you have any good idea or reference? By the way, I am working in Python. Thank you. As I do not have enough reputation, I cannot add a picture of the problem.
The intersection between a trajectory and the circles in the same area
1.2
0
0
1,435
29,384,696
2015-04-01T06:58:00.000
26
0
1
0
python,python-2.7,python-3.x
29,384,723
2
false
0
0
Use the date.weekday() method. Digits 0-6 represent the consecutive days of the week, starting from Monday.
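For example (Saturday and Sunday are weekday() values 5 and 6):
from datetime import date

is_weekend = date.today().weekday() >= 5
print("weekend" if is_weekend else "weekday")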
1
56
0
Please advise on the following: how do I find whether a particular day is a weekday or a weekend day in Python?
how to find current day is weekday or weekends in Python?
1
0
0
91,511
29,395,356
2015-04-01T15:52:00.000
1
0
0
0
python,events,widget,gtk,redraw
29,396,431
1
false
0
1
You want the .queue_draw() and related GtkWidget methods. Note that these will mark your widget as needing redraw when you get back to the main loop; I don't think GTK+ has a method for drawing right now (but marking as dirty and letting the system redraw when it's ready is usually better for optimization reasons).
1
0
0
My application has a window which handles key-press events. When the user presses a key I run some long task, and during the task I update a label on the window. To update the label while the task is still running, I call the following code: while gtk.events_pending(): gtk.main_iteration(False) This updates the label, but the problem is that it processes all the events, including key presses. If the user presses a key while a task is running, calling main_iteration starts processing that new task. I want it to only update the label; any other events should not be processed until the task is completed. One way to do this is to remove the key-press handler, or to check within the key-press handler whether a task is running and ignore the key presses, but that way the key presses are lost. I want it to somehow only update the label but leave the other events, so that they are handled after the task is completed and the application becomes idle. Is there a way to do this? Thanks
How to redraw gtk widget in python without calling gtk.main_iteration?
0.197375
0
0
280
29,395,946
2015-04-01T16:23:00.000
1
0
0
0
python,amazon-web-services,amazon-ec2,cron,boto
29,407,280
2
false
1
0
The entire issue turned out to be the HTTP_PROXY environment variable. The variable was set in /etc/bashrc, and all users got it this way, but when the cron jobs ran (as root), /etc/bashrc wasn't read and the variable wasn't set. Adding the variable to the crond configuration (via crontab -e) solved the issue.
1
1
0
I have a very basic python script which uses boto to query the state of my EC2 instances. When I run it from the console, it works fine and I'm happy. The problem is when I want to add some automation and run the script via crond. I noticed that the script hangs and waits indefinitely for the connection. I saw that boto has this problem and that some people suggested adding a timeout value to the boto config file. I couldn't understand how and where, so I manually added an /etc/boto.cfg file with the suggested timeout value (5), but it didn't help. With strace you can see that this configuration file is never being accessed. Any suggestions how to resolve this issue?
Connection with boto to AWS hangs when running with crond
0.099668
0
1
774
29,397,839
2015-04-01T18:11:00.000
1
0
0
1
python
29,398,092
2
false
0
0
It is basically useless if you don't have execute permission on the remote machine. You need to contact your administrator to obtain execute permission. As for SCPing files to the remote server, you may still be able to copy your files over, but you may not be able to execute them.
1
0
0
I am SSHed into a remote machine and I do not have rights to download python packages but I want to use 3rd party applications for my project. I found cx_freeze but I'm not sure if that is what I need. What I want to achieve is to be able to run different parts of my project (will mains everywhere) with command line arguments on the remote machine. My project will be filled with a few 3rd party python packages. Not sure how to get around this as I cannot pip install and am not a sudoer. I can SCP files to the remote machine
Using 3rd party packages on remote machine without download/install rights
0.099668
0
0
33
29,403,497
2015-04-02T01:20:00.000
1
0
0
0
python,django
29,404,146
2
false
1
0
You have to install Django inside the virtualenv. The sudo command uses the global packages, so I guess Django is already installed globally. Activating the virtualenv and then running pip install django will resolve your issue.
2
0
0
new to python and django and getting the ImportError when I run python manage.py runserver. I figured the problem was that django was not installed in the site_packages of the python version running in the virtualenv. I ran the command under sudo "sudo python manage.py runserver" and it works. So all is good. Can someone explain to a noob what I did wrong in installing django or setting up the virtualenv.
ImportError: No module named 'django' when in virtualenv
0.099668
0
0
1,424
29,403,497
2015-04-02T01:20:00.000
1
0
0
0
python,django
29,404,154
2
true
1
0
Did you remember to activate the virtual environment? Virtual environments never use the sudo command because nothing is being installed in the machine's global library. To activate the virtual environment, open a terminal and type source /virtualenv/bin/activate.
2
0
0
new to python and django and getting the ImportError when I run python manage.py runserver. I figured the problem was that django was not installed in the site_packages of the python version running in the virtualenv. I ran the command under sudo "sudo python manage.py runserver" and it works. So all is good. Can someone explain to a noob what I did wrong in installing django or setting up the virtualenv.
ImportError: No module named 'django' when in virtualenv
1.2
0
0
1,424
29,405,412
2015-04-02T05:13:00.000
4
0
0
0
python,google-bigquery
29,405,486
2
false
0
0
Try dragging the grey line just under the big red "Run Query" button.
1
4
0
This is hardly a programming question, so please don't laugh me out of here. On the Google bigquery Web UI, is there a way to make the "New Query" box taller? I have been getting into fairly length queries and I would like to be able to see them all at once. Or should I be graduating to a different mechanism (python?) for writing and running queries? Suggestions appreciated.
Google bigquery Web UI: change height of the "New Query" box?
0.379949
0
1
91
29,406,835
2015-04-02T06:56:00.000
0
0
1
0
python,time,influxdb
29,487,862
2
true
0
0
AFAIK, the InfluxDB API returns timestamps in milliseconds, but the Python InfluxDB API clearly uses seconds. Which version of the InfluxDB Python API are you using?
1
0
0
I wanted to ask in what format the Python InfluxDB API returns time, because it doesn't return a plain timestamp: when I divide it by 100 I get about the year 2200, when I divide it by 1000 I get 2024, and when I divide it by 10000 I get 1980.
InfluxDB time storing
1.2
0
0
626
29,422,093
2015-04-02T20:42:00.000
0
0
0
0
python,machine-learning,scikit-learn,pca,logistic-regression
45,776,219
1
false
0
0
You can segment your data across a few models, where the output of one model becomes the input to the next, which then gives you the result. Basically this is an RNN-style architecture. Putting such massive data into one network is simply not possible due to memory limitations.
1
1
1
I am trying to apply machine learning to a Kaggle.com dataset. The dimension of my dataset is 244768 x 34756. At this size none of the scikit-learn algorithms work. I thought I would apply PCA, but even that doesn't scale to this dataset. Is there any way I can reduce redundant data from my training dataset? Since I am doing document classification, I resampled my dataset to 244768 x 5672 by reducing the word-vector size, but PCA can't be applied even to this dataset. Could I apply PCA this way: suppose my matrix is A, let X = A.T * A (so X becomes a 5672 x 5672 matrix) and run pca(X)? Will this give me wrong answers? Also, when I apply logistic regression, can I train the model incrementally, i.e. if A is 10000 x 500, can I pass 1000 x 500 chunks to logistic.fit and then do the same for the other rows? Is this kind of training wrong?
Machine Learning -Issues with big dataset
0
0
0
110
29,430,979
2015-04-03T11:04:00.000
0
0
0
0
python,sqlite,python-3.x,numpy,hdf5
29,477,966
1
false
0
0
You could create a region-reference dataset where each element relates to one of the ~2000 identifiers. Then the Python code to retrieve the rows for a particular identifier would look like this: reg_ref = reg_ref_dset[identifier]; mysub = data_dset[reg_ref]
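A hedged h5py sketch of that layout (file and dataset names are placeholders; uses h5py's RegionReference dtype):
import numpy as np
import h5py

with h5py.File("table.h5", "w") as f:
    data = f.create_dataset("data", data=np.arange(100.0))
    ref_dtype = h5py.special_dtype(ref=h5py.RegionReference)
    refs = f.create_dataset("refs", (2,), dtype=ref_dtype)
    refs[0] = data.regionref[0:10]    # rows for identifier 0
    refs[1] = data.regionref[10:25]   # rows for identifier 1

with h5py.File("table.h5", "r") as f:
    reg_ref = f["refs"][1]
    mysub = f["data"][reg_ref]        # just identifier 1's rows
    print(mysub.shape)                # (15,)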
1
2
1
I need to store a table on disk, and be able to retrieve a subset of that table into a numpy.ndarray very fast. What's the best way to do that? I don't mind spending the time to preprocess this dataset before storing it on disk, since it won't be changed once it's created. I'd prefer not to write any C code, and instead rely on existing python libraries. I am considering HDF5 (with either pytables or h5py), sqlite, numpy's memmap, or a custom binary file format. For a custom file, I would sort the rows by the identifier, and add to the file a table of contents which, for every identifier, would specify the beginning and ending file offsets that encompass the data related to this identifier. This would probably be very fast in terms of I/O, but at a cost of using python rather than C code (since I don't think there's a library that does precisely that). Details: ~100 million rows, ~5 columns of float and str data. One of the columns contains 100,000 different identifiers (so there are about 1000 rows per identifier). The subset to be retrieved is always specified by a set of identifiers (usually I need to retrieve ~2000 identifiers, so ~2% of the entire dataset). Python 3.4, Linux, SSD drive (so random access is as fast as sequential).
Storing a large table on disk, with fast retrieval of a specified subset to np.ndarray
0
0
0
179
29,431,557
2015-04-03T11:43:00.000
-1
0
1
0
python,sql-server,pyodbc
29,431,913
2
false
0
0
You could override the query function so that None is replaced with "NULL".
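Worth noting alongside that (not part of the answer above): with parameterized queries, pyodbc already maps Python None to SQL NULL, so no string substitution is needed. The connection string and table name below are placeholders:
import pyodbc

conn = pyodbc.connect("DSN=mydsn;UID=user;PWD=secret")
cursor = conn.cursor()
cursor.execute("INSERT INTO my_table (col) VALUES (?)", (None,))  # sent as NULL
conn.commit()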
1
5
0
I am passing the output from a sql query to again insert the data to ms sql db. If my data is null python / pyodbc is returning None instead of NULL. What is the best way to convert None to NULL when I am calling another query using the same data. Or a basic string transformation is the only way out ? Thanks Shakti
How convert None to NULL with Python 2.7 and pyodbc
-0.099668
1
0
16,822
29,435,173
2015-04-03T15:33:00.000
0
1
0
0
python,django,security,twitter-oauth
29,502,136
1
false
1
0
Many other libraries ask you to put your API keys in settings.py; this is also useful if you want to use them in different applications within your project.
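A common pattern is roughly this (the setting names are made up; reading from environment variables keeps the secrets out of version control):
# settings.py
import os
TWITTER_CONSUMER_KEY = os.environ.get("TWITTER_CONSUMER_KEY", "")
TWITTER_CONSUMER_SECRET = os.environ.get("TWITTER_CONSUMER_SECRET", "")

# views.py (or any other app module)
from django.conf import settings
key = settings.TWITTER_CONSUMER_KEY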
1
1
0
I've just started using python-twitter with django hosted on OpenShift and need to use Oauth. At the moment it's just on the dev server. Before I put it live, I was wondering if there's a "best" place to store my token / secret info? Right now I just have them in my views.py file but would it be safer to store them in settings.py and access them from there?
python-twitter - best location for Oauth keys in Django?
0
0
0
136
29,440,056
2015-04-03T21:40:00.000
1
0
1
0
python
29,440,149
2
false
0
0
Here's the general idea: 1. read your lines from the file into a list using readlines() 2. use a for loop to loop over the indices of the lines 3. use if statements within the loop to skip the appropriate lines. MattDMo is correct. It'd help if you'd show us your attempt.
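That said, a minimal sketch of the pattern from the question (read 11 lines, then repeatedly skip 9 and read 2; the filename is a placeholder):
with open("data.txt") as f:
    lines = f.readlines()

wanted = lines[:11]
i = 11
while i + 9 < len(lines):
    i += 9                         # skip 9 lines
    wanted.extend(lines[i:i + 2])  # read the next 2
    i += 2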
1
0
0
I am new to Python. I need to read the first 11 lines, then skip the next 9 lines and read two, then skip the next 9 lines and read two, until the end of the file. I appreciate any help.
python read file and skip n lines and read again and skip
0.099668
0
0
1,681
29,443,218
2015-04-04T05:46:00.000
1
0
1
0
algorithm,python-2.7
29,443,892
1
false
0
0
There are 6,402,373,705,728,000 (18!) permutations of 18 elements, so it would take years to iterate over them. It would be better to look for an analytic solution to this problem.
1
0
0
I have a problem calculating permutations. The program needs to generate permutations(xrange(num), num), and for each permutation I have to count the number of consecutive primes, i.e. the sum of every two adjacent digits in the number should be a prime. The max value of 'num' would be 18. primes = permutations(xrange(1,num+1), num) for val in primes: for x in range(0,len(val-1)): if (prime(val[x] + val[x+1])): num_primes += 1 If 'num' ranges from 10 to 18, it gives a 'killed' message after a long wait. Please help me solve this.
how to avoid memory error in generating and processing python permutations?
0.197375
0
0
198
29,445,943
2015-04-04T11:38:00.000
1
0
0
0
python,graph,plotly
29,460,131
1
true
0
0
Full disclosure: I work for Plotly. Here's my shot at summarizing your problem: in general, you've got 4 dimensions for each country (year, exports, gdp, standard of living). You might be able to use either or both of these solutions: visualize this in two dimensions using x-value, y-value, marker-size, and marker-line-size (a 2D bubble chart), or visualize this in three dimensions using x-value, y-value, z-value, and marker-size. I'll leave a link to a notebook in the comments, but since it's not a very permanent link, I won't include it in the answer here.
1
0
1
I have 3 sets of comparison data(y axes) which needs to be plotted against a target source values. I'm comparing exports, gdp, standard of living values of different countries against a target countries values for different years. But values of each category are haphazard i.e exports in millions of dollars, gdp in percentage and standard of living scale of 1 to 10. Moreover I have years value for comparison as well. What I want to see is over the years how different parameters for each country over different years vary against the target country parameters. All of this plotted in one graph in plotly. I can plot multiple y axes in plotly, but the scale doesn't match. Has anyone any suggestions how to fit all the comparison in one layout. Maybe this is more of a graphing suggestion needed rather than help in plotly? Any ideas how to squeeze all in one graph?
graph of multiple y axes in plotly
1.2
0
0
925
29,446,102
2015-04-04T11:55:00.000
3
0
0
0
python,python-2.7,tkinter
29,446,371
2
true
0
1
There is nothing built-in to tkinter, nor available as a third party library, that makes it possible to display an ad and receive revenue from it.
1
3
0
My question is: is it possible to put ads in a tkinter program, something like Google Ads? I made a program which a lot of people started using and I am not getting any benefit from it. Is there a way?
Advertising on a tkinter program
1.2
0
0
1,029
29,451,794
2015-04-04T21:53:00.000
12
0
1
0
python
29,451,831
2
true
0
0
The key difference between those methods is that split() returns a variable number of results, and partition() returns a fixed number. Tuples are usually not used for APIs which return a variable number of items.
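The difference in a nutshell:
print("a,b,c".split(","))      # ['a', 'b', 'c']   list, variable length
print("a,b".split(","))        # ['a', 'b']
print("a,b,c".partition(","))  # ('a', ',', 'b,c') always a 3-tuple
print("abc".partition(","))    # ('abc', '', '')   even with no match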
1
8
0
Comparing Python's str.split() with str.partition(), I see that they not only have different functions (split() tokenizes the whole string at each occurrence of the delimiter, while partition() just returns everything before and everything after the first delimiter occurrence), but that they also have different return types. That is, str.split() returns a list while str.partition() returns a tuple. This is significant since a list is mutable while a tuple is not. Is there any deliberate reason behind this choice in the API design, or is it "just the way things are." I am curious.
Python: Why Does str.split() Return a list While str.partition() Returns a tuple?
1.2
0
0
1,298
29,452,879
2015-04-05T00:28:00.000
0
0
1
1
python,regex
29,452,916
3
false
0
0
Set a flag to false. Iterate over each line. For each line: 1) when you match your pattern, set the flag; 2) if the flag is currently set, print the line.
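A sketch of that flag approach, mirroring sed's '1,/COMMANDS/d' semantics (the matching line itself is also dropped; the pattern and path come from the question):
import re
import sys

found = False
with open("/var/tmp/newFile") as f:
    for line in f:
        if found:
            sys.stdout.write(line)
        elif re.search(r"COMMANDS", line):
            found = True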
1
2
0
I have an issue that I can't seem to find a solution for within Python. From the command line I can do this: sed '1,/COMMANDS/d' /var/tmp/newFile This deletes everything from line #1 up to the regex "COMMANDS". Simple. But I can't find a way to do the same with Python. re.sub with multiline doesn't seem to work. So my question is: how can I do this in a Pythonic way? I'd really rather not run sed from within Python unless I have to.
Python delete lines of text line #1 till regex
0
0
0
1,429
29,453,049
2015-04-05T01:00:00.000
0
0
1
0
python-3.x,pip
34,915,465
1
false
0
0
Remove the related pip build directory for that particular package and try installing it again. That should work. If you're working in a virtual environment, say venv, the pip build directory will be <path-to-venv>/venv/build/<package-name>.
1
1
0
I am having trouble uninstalling and then installing the same package again. I get an 'AssertionError: Multiple .dist-info directories' error. I think it has something to do with pip uninstall not completely getting rid of all the files of a package, but I am not sure how to remedy the situation. I don't know what information would be helpful here, except maybe that this is a Django project, running in a virtualenv on IIS. EDIT: I just re-installed Python and, with it, a new version of pip.
Pip Uninstall and Install same package gives 'AssertionError: Multiple .dist-info directories'
0
0
0
1,741
29,453,737
2015-04-05T03:14:00.000
1
0
1
1
python,exe,executable,py2exe,command-window
29,453,760
1
true
0
0
If all your program does is print something and you run it by double-clicking the executable, then it simply closes the console when it finishes running. If you want the window to stay open, run your program from the command line. You can also create a batch file that runs your program and then pauses the console, so that you at least get a "press any key" before the console closes.
1
0
0
I recently made an executable of a Python program with py2exe, and when I ran the executable, a command window appeared for a split second and then disappeared. My program itself never actually ran at all. Everything is still inside the dist folder, so I'm not sure what's actually wrong. Is there a solution for this?
Command window popping up when running a Python executable?
1.2
0
0
2,357
29,454,902
2015-04-05T06:50:00.000
1
0
0
0
python,network-programming,client-server,port
29,454,929
1
false
0
0
Your problem could be due to missing port forwarding; to fix this you would need to enable port forwarding on your router. Each router is different, but this is usually done by opening the router's web page and forwarding the relevant port to the IP of your computer.
1
0
0
My friend and I are trying to communicate with each other using a simple Python client-server. When we were on the same LAN, the communication was great. Now each of us is at his own house and we can't connect because of error 10060. We read about the firewall problem and tried turning the firewall off, but it still doesn't work. What should we do? Thanks in advance.
Networking programming - client - server not in the same LAN
0.197375
0
1
77
29,456,031
2015-04-05T09:44:00.000
0
0
0
0
qpython,qpython3
29,654,733
2
false
0
1
Just write a wrapper script which gets the parameters and passes them to the real script using a function like execfile, and put the script into /sdcard/com.hipipal.qpyplus/scripts or /sdcard/com.hipipal.qpyplus/scripts3 (for QPython3). Then you can see the script in Scripts when clicking the start button.
1
2
0
I am running a simple client-server program written in python, on my android phone using QPython and QPython3. I need to pass some commandline parameters. How do I do that?
Passing commandline arguments to QPython
0
0
0
2,584
29,457,275
2015-04-05T12:29:00.000
2
1
0
0
python,mysql,xml,database
29,457,336
1
false
0
0
That depends on the way you want to work with the data. If you have structured data and want to exchange it between different programs, XML might be a good choice. If you do mass processing, plain text might be a good choice. If you want to filter the data, a database might be a good choice.
1
0
0
I want to help my friend analyze posts on social networks (Facebook, Twitter, LinkedIn, etc.) as well as several weblogs and websites. When it comes to storing the data, I have no experience with huge data. Which is best for a few thousand posts, tweets and articles per day: a database, XML files, or plain text? If a database, which one? P.S. The language that I am going to start programming with is Python.
Storing Huge Data; Database, XML or Plain text?
0.379949
0
1
124
29,458,467
2015-04-05T14:43:00.000
1
0
0
0
python,c++,robotics,ros
54,047,590
2
false
0
0
ROS publishing/subscribing is many-to-many (in terms of connections) and one-way transport of data. It's asynchronous, meaning your code will not block and you will need to implement a callback function that acts asynchronously. ROS services, on the other hand, are synchronous, one-to-one, two-way transport of data, meaning the client blocks and waits for a response from the server. Think about the following cases. Think of a robot and a simulator with the robot's model. The simulator needs to update the robot's model in real time (as the robot in the real world changes configuration, the simulator needs to update the model to reflect that change, such that the robot model in the simulator is always up to date with the current configuration of the real robot). Now think of a node that controls a robot cashier. This node needs to detect a customer using the robot's camera to start the interaction. In the first case, you need a publishing/subscribing model because the simulator needs data to flow in real time while doing something else. So the robot publishes its joint values to a topic, and the simulator subscribes to that topic with a callback function that updates the robot model in the simulator in real time, asynchronously. In the second case, however, you don't want a node constantly checking for a customer; detection is not something you need to do continuously. You know in your program logic when you need to detect a customer. When you first start your node, you know that you need to block and wait for a customer to come in. It's more appropriate here to use a service. When you want to detect a customer you send a request to the server (and as a result your program blocks, waiting for a response). The server uses the camera to detect a customer (using some detection algorithm) and responds back to you accordingly. Generally speaking, you use publishing/subscribing when you need data to flow constantly and want to act on that data asynchronously, and services when you need a specific calculation to happen synchronously.
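A minimal rospy sketch of both patterns (the topic and service names are made up; assumes the std_msgs and std_srvs packages are available):
import rospy
from std_msgs.msg import String
from std_srvs.srv import Trigger

rospy.init_node("demo")

# asynchronous, many-to-many: publish/subscribe
pub = rospy.Publisher("joint_values", String, queue_size=10)
rospy.Subscriber("joint_values", String,
                 lambda msg: rospy.loginfo("got %s", msg.data))

# synchronous, one-to-one: a service call blocks until the server responds
rospy.wait_for_service("detect_customer")
detect = rospy.ServiceProxy("detect_customer", Trigger)
response = detect()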
1
0
0
I am using the ROS system and I am a beginner. I came across services and messages (srv and msg) in ROS. From the ROS wiki I managed to understand that msgs define the type of message being passed, and a service is about request and response. Please correct me if I got this wrong. Nevertheless, I am unable to understand when to use them. I thought that if I have modules written in C++ and other processing modules in Python, then perhaps I can use srv and msg for communicating between the two modules. However, ROS also has the publisher and subscriber system, which could be used instead? Secondly, when we use srv, do we also need to define the msgs, or can either be used independently?
ROS Service and Message
0.099668
0
0
5,748
29,461,480
2015-04-05T19:44:00.000
0
1
1
0
python,performance,networking,cpu,mininet
29,502,719
1
false
0
0
I finally found the real problem. It was not the prints (removing them improved performance a bit, but not significantly) but a thread that was using a shared lock. This lock was shared over multiple CPU cores, causing the whole thing to be very slow. It even got slower the more cores I added to the executing VM, which was very strange... Now the new bottleneck seems to be the APScheduler... I always get messages like "event missed" because there is too much load on the scheduler. So that's the next thing to speed up... :)
1
0
0
I am doing my bachelor's thesis, for which I wrote a program that is distributed over many servers and exchanges messages via IPv6 multicast and unicast. The network usage is relatively high, but I think it is not too high when I have 15 servers in my test, with 2 requests every second that go like this: Server 1 requests information from servers 3-15 via multicast. Every one of 3-15 must respond. If one response is missing after 0.5 sec, the multicast is resent, but only the missing servers must respond (so in most cases this is only one server). Server 2 does exactly the same. If there are still missing results after 5 retries, the missing servers are marked as dead and the change is synced with the other server (1/2). So there are 2 multicasts and 26 unicasts every second. I think this should not be too much? Servers 1 and 2 run Python web servers which I use to issue the request every second on each server (via a web client). The whole scenario is running in a Mininet environment inside a VirtualBox Ubuntu VM that has 2 cores (max 2.8 GHz) and 1 GB RAM. While running the test, I see via htop that the CPUs are at 100% while the RAM is at 50%, so the CPU is the bottleneck here. I noticed that after 2-5 minutes (1 minute = 60 * (2+26) = 1680 messages) there are too many missing results, causing too many resends while new requests are already coming in, so that the "management server" thinks the client servers (3-15) are down and deregisters them. After syncing this with the other management server, all client servers are marked as dead on both management servers, which is not true... I am wondering if the problem could be my debug output. I print 3-5 messages for every message that is sent and received, so that is roughly (guessing 5 messages per sent/received message) (26 + 2) * 5 = 140 lines printed on the console per second. I use Python 2.6 for the servers. So the questions here are: Can the console output slow down the whole system so much that simple requests take more than 0.5 seconds to complete, 5 times in a row? The request processing in my test is simple, with no complex calculations; basically it is something like "return request_param in ["bla", "blaaaa", ...]" (a small list of 5 items). If yes, how can I disable the output completely without having to comment out every print statement? Or is there a way to output only lines that contain "Error" or "Warning"? (Not via grep, because by the time grep becomes active all the prints have already finished; I mean directly in Python.) What else could cause my application to be this slow? I know this is a very generic question, but maybe someone already has experience with Mininet and network applications...
Console output consuming much CPU? (about 140 lines per second)
0
0
1
102
29,463,072
2015-04-05T22:46:00.000
2
0
1
0
python
29,463,089
1
true
0
0
Your question is very unclear, but every program requires at least one module in order to run code, regardless of whether your code is object oriented or not, though it is typical to create different files for different purposes.
1
1
0
What exactly is a Python module? From what I've read, it seems like any Python file can be considered a module. With that in mind, is it true that python modules are needed if object orientation is to be used in a python program?
Is every file in Python considered a module?
1.2
0
0
56
29,465,038
2015-04-06T03:43:00.000
0
0
1
0
python-3.x
29,465,060
2
false
0
0
A str in Python 3 stores decoded code points plus extra metadata (a cached hash, the length, and flags describing the internal representation), whereas an encoded bytes object is a plain byte buffer. Depending on the build and the string's contents, each character can take one, two, or four bytes, and the extra metadata inflates the object further.
1
1
0
In Python 3, the size of a string such as 'test'.__sizeof__() returns 73. However, if I encode it as utf-8, 'test'.encode().__sizeof__() returns 37. Why does the size of string significantly larger than the size of its encoding in utf-8?
python3 - why is size of string bigger than encode
0
0
0
170
29,465,822
2015-04-06T05:28:00.000
0
1
0
0
python-2.7,email,smtp,gmail,django-1.6
70,995,528
1
false
1
0
Gmail's BCC limit is 500 recipients in any 24-hour period. If you want to send emails in bulk, you will need to batch them at up to 500 recipients per send.
1
0
0
What is the maximum number of recipients that can be added to the BCC field at a time when sending a bulk e-mail? I'm using the Python Django framework and Gmail SMTP for sending mail.
How many recipients can be added to BCC in python django
0
0
0
188
29,467,603
2015-04-06T08:09:00.000
1
0
1
0
python,kivy,spyder
29,635,815
1
true
0
1
(Spyder dev here) There is no way at the moment for Spyder to highlight kivy files, sorry :-(
1
0
0
How do I setup Spyder to highlight and auto-complete kivy files (.kv)?
How do I setup Spyder to highlight and auto-complete kivy files (.kv)?
1.2
0
0
707
29,469,458
2015-04-06T10:28:00.000
0
0
1
0
python,montecarlo
29,469,527
1
false
0
0
Do you want a uniform distribution of these 3 values? If so, random.choice will give you exactly that.
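Both approaches give a uniform draw over the three values; choice just says it directly:
import random

value = random.choice(["a", "b", "c"])

# the random.random() equivalent, by hand:
r = random.random()
value2 = "a" if r < 1.0 / 3 else ("b" if r < 2.0 / 3 else "c")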
1
0
1
I want to apply a simple Monte Carlo simulation on a variable that has three distinct values. Should I use random.random and assign the float to a variable value, or use random.choice(["a", "b", "c"])?
random.random or random.choice for Monte Carlo simulation?
0
0
0
143
29,470,767
2015-04-06T11:55:00.000
5
0
0
0
python,google-app-engine,google-cloud-datastore,app-engine-ndb
29,473,764
1
true
1
0
_pre_put_hook is called immediately before NDB does the actual put... so if an exception is raised inside of _pre_put_hook, then the entire put will fail
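A rough NDB sketch of that (external_api_is_up is a hypothetical helper standing in for the API check described in the question):
from google.appengine.ext import ndb

class MyModel(ndb.Model):
    name = ndb.StringProperty()

    def _pre_put_hook(self):
        if not external_api_is_up():  # hypothetical helper
            raise RuntimeError("API unavailable, refusing to put")

# MyModel(name="x").put() now raises instead of writing when the API is down.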
1
1
0
I'm using a pre put hook to fetch some data from an api before each put. If that api does not respond, or is offline, I want the request to fail. Do I have to write a wrapper around a put() call, or is there some way so that we can still type My_model.put() and just make it fail?
Can I cause a put to fail from the _pre_put_hook?
1.2
0
1
247
29,470,916
2015-04-06T12:03:00.000
0
0
1
0
android,python,android-uiautomator
30,371,869
1
false
0
0
UiAutomator works on the current UI of the device/emulator. So if you aren't on the screen you are searching, it will not work, which is why you have to make it synchronous. Anyway, as far as I know UiAutomator doesn't like multi-threading. I'm not saying it would produce errors or completely fail, but it's a really bad option. I'm saying this not from experience, but I've read it (I don't recall where, sorry) in relation to the usage of Thread.sleep().
1
0
0
Now I use uiautomator to do testing work on Android. We developed a framework to do different tasks based on Python and uiautomator. But I have some questions about multiple threads calling uiautomator at the same time. For example, in the main function I use uiautomator to detect text like "Browser" in the Android apps menu, and at the same time I use uiautomator to detect text like "Calculator" via the Python threading module. I find that sometimes "Browser" can be found and sometimes "Calculator" can be found, but not always. I'm puzzled about the uiautomator tool; does it support multiple threads?
Does uiautomator support multiple threading
0
0
0
449
29,475,511
2015-04-06T16:43:00.000
1
0
1
0
python-import,qpython
29,654,629
1
true
0
0
You don't have enough arguments to unpack in this line: script, first, second, third = argv. Try script, = argv instead.
1
1
0
I'm a newbie to python trying to run this code on Qpython: from sys import argv script, first, second, third = argv print "The script is called:", script print "Your first variable is:", first print "Your second variable is:", second print "Your third is:", third but the console keeps returning this value error: need more than one value to unpack. Pls help..
how to import on Qpython
1.2
0
0
809
29,478,997
2015-04-06T20:19:00.000
0
0
1
0
python,image-processing
30,521,654
1
false
0
0
I can understand your concern; I had a similar problem when I did an image-processing project and had to port it to a nanoboard (FPGA), so using external libraries was quite a headache. What I did was first program the code using the libraries at hand and then look up their implementations. You can view the source code of the Python functions, and they can be reused with small modifications. Hope it helps. Reach out for any further queries.
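As a flavour of what "no external libraries" can look like, here is a tiny stdlib-only sketch that reads a binary PPM (P6) image's size and pixels (assumes a simply formatted header without comment lines; the filename is a placeholder):
with open("image.ppm", "rb") as f:
    assert f.readline().strip() == b"P6"
    width, height = map(int, f.readline().split())
    maxval = int(f.readline())
    raw = f.read(width * height * 3)

def pixel(x, y):
    # (r, g, b) of the pixel at column x, row y
    i = (y * width + x) * 3
    return raw[i], raw[i + 1], raw[i + 2]

print(width, height, pixel(0, 0))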
1
0
0
Im working of a project in python that is intended to be as modular and flexible as possible. The project must to be written in Python, and it involves some image processing (access to individual pixels and image size). what would be the best way to manipulate an image in Python without the use of external libraries? I am trying to use as little dependancies as possible, so it will be easy to transfer to different platforms. I would appreciate any other approaches that might work better.
Python image processing without external libraries
0
0
0
1,904
29,479,112
2015-04-06T20:26:00.000
0
0
1
1
python,ruby,database,oop,orm
29,479,399
2
false
0
0
This question doesn't really make sense. Presumably LINQ, like any .NET library, can be used in any language that runs in the CLR: C#, VB, IronPython, IronRuby, etc. The most common cross-language runtime that works on Linux is the Java VM, and you can use Java libraries - including ORMs like JDO - in any language that uses that VM: Java, Scala, Clojure, Jython, JRuby, etc.
2
0
0
The predominant ORMs that run in a linux-based environment seem to be written around a specific language. Microsoft LINQ, however, supports access from a number of languages. Can I do this in linux-land (i.e. non-LINQ-land, non-JVM-land), for example between native versions of Python and Ruby?
Can I use a linux-based ORM from multiple languages?
0
0
0
124
29,479,112
2015-04-06T20:26:00.000
1
0
1
1
python,ruby,database,oop,orm
29,481,003
2
true
0
0
It seems that the only way to do this is to use languages which share a common VM, such as .NET CLR (and LINQ) or the Java JVM (Hibernate, Eclipse Link, etc). So for the various languages running in their native implementation, the answer is no.
2
0
0
The predominant ORMs that run in a linux-based environment seem to be written around a specific language. Microsoft LINQ, however, supports access from a number of languages. Can I do this in linux-land (i.e. non-LINQ-land, non-JVM-land), for example between native versions of Python and Ruby?
Can I use a linux-based ORM from multiple languages?
1.2
0
0
124
29,481,698
2015-04-07T00:15:00.000
0
0
0
0
python,decision-tree
35,924,390
1
false
0
0
For this kind of decision tree you would use DecisionTreeClassifier() if your response were categorical. It appears that DecisionTreeRegressor only works with numerical predictor data, which is why the string-valued predictor fails, while DecisionTreeClassifier() expects class-valued targets. I really wanted one that handles both, but it doesn't appear possible.
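A different workaround, not from the answer above: one-hot encode the categorical predictor so that DecisionTreeRegressor sees only numbers (a sketch with made-up data; assumes pandas and scikit-learn):
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

df = pd.DataFrame({"IndustryDomain": ["Retail", "Tech", "Retail"],
                   "Revenue": [10.0, 25.0, 12.0]})
X = pd.get_dummies(df[["IndustryDomain"]])  # one 0/1 column per category
model = DecisionTreeRegressor().fit(X, df["Revenue"])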
1
0
1
I have used the python's DecisionTreeRegressor() to segment data based on a Predictor that is continuous, and it works well. In the present project I have been asked to use Categorical data as Predictor. Predictor - Industry Domain, Response - Revenue. On using DecisionTreeRegressor() it threw error "Cannot change string to float : Industry Domain". Can you suggest if there is any way to resolve this problem?
How to use DecisionTreeRegressor() for Categorical Segmentation?
0
0
0
385
29,482,125
2015-04-07T01:11:00.000
0
0
0
0
python-2.7,passwords,wifi
29,482,182
3
false
0
0
The wifi password will be stored in the keychain; open Keychain Access (in /Applications/Utilities/). It will be in either the Login keychain or the System keychain. Just double-click the keychain entry with the wifi name and tick "Show password"; once you have entered your account password it should show you the password used to connect to the network.
2
1
0
I just bought a new Macbook Pro, but I forgot to write down my own wifi password. I tried contacting my ISPs (or whatever you call them) but no one responded. I don't think I will ever get an answer from them. Using Python 2.7.9, is a program able to hack into my own wifi and retrieve the password?
Recovering Wifi Password Using Python
0
0
1
1,948
29,482,125
2015-04-07T01:11:00.000
0
0
0
0
python-2.7,passwords,wifi
36,777,109
3
false
0
0
Connect to your router via ethernet. You should then be able to set the wifi password to whatever you want
2
1
0
I just bought a new Macbook Pro, but I forgot to write down my own wifi password. I tried contacting my ISPs (or whatever you call them) but no one responded. I don't think I will ever get an answer from them. Using Python 2.7.9, is a program able to hack into my own wifi and retrieve the password?
Recovering Wifi Password Using Python
0
0
1
1,948
29,484,408
2015-04-07T05:29:00.000
0
0
0
0
python,numpy,pygame
29,488,142
1
false
0
1
Instead of setting each individual pixel, use pygame's line drawing function to draw a line from the current coordinate to the next instead of using sub-pixel coordinates (pygame.draw.line or even pygame.draw.lines). This way, the "gaps" between two points are filled; no need for sub-pixel coordinates. You just have to draw the lines in the right order, so ensure the coordinates are sorted. Other than that, you could also simply convert your sub-pixel coordinates by casting the x/y values to integers.
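A minimal sketch of the line-drawing approach described above (the plotted function and window size are assumptions):

```python
# Minimal sketch: plot a function by joining integer pixel coordinates
# with pygame.draw.lines, so no sub-pixel set_at calls are needed.
import math
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 300))
screen.fill((0, 0, 0))

# Build the point list in x order so consecutive points are joined correctly.
points = [(x, int(150 - 100 * math.sin(x / 40.0))) for x in range(400)]
pygame.draw.lines(screen, (0, 255, 0), False, points)

pygame.display.flip()
pygame.time.wait(2000)  # keep the window visible briefly in this sketch
```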
1
0
0
The question I have, as the title says, is about setting up a graph in pygame that graphs sub-pixel coordinates. A friend and I spoke about how I could make a function-graphing program for fun in Python, and when I thought about how to build it I found a few issues. The first was the use of range, since it uses integers and not floats; arange from numpy fixed that problem, but it brings me to the second issue. The idea for the graph, to keep it simple and avoid drawing massive thick lines or odd-shaped ones, is to use display.set_at to color a single pixel. For simple graphs this works perfectly, but when I went into more complicated graphs I ran into two main errors. The first error is that the graph shows pixels without any line between them; the idea was that having all the pixels near each other would create the illusion of a line, but I found that with a range step of one, it leaves gaps. In theory, using arange with a step of .01, the gaps would vanish altogether, but that brings me to the second problem: display.set_at does not work with sub-pixel coordinates. Would anyone be able to suggest a way to make this work? It would be most appreciated.
Pygame, sub-pixel coordinates.
0
0
0
312
29,485,901
2015-04-07T07:14:00.000
0
0
1
0
python-3.4
29,486,011
1
false
0
0
You can do it like this: dict1['key'] = 'value'
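A fuller sketch of the idea, assuming a list is acceptable in place of the tuple in the question (lists are easier to append to); the helper name is made up:

```python
# A minimal sketch: promote a plain value to a list so numbers can be appended.
dict1 = {'w': 4, 'e': 5, 'r': 8}

def append_value(d, key, value):
    existing = d.get(key)
    if isinstance(existing, list):
        existing.append(value)       # already a list: just append
    elif existing is None:
        d[key] = value               # new key: store the plain value
    else:
        d[key] = [existing, value]   # promote the single value to a list

append_value(dict1, 'w', 6)
print(dict1)  # {'w': [4, 6], 'e': 5, 'r': 8}
```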
1
0
0
I am a newbie in Python programming and I am trying the following; please guide me. I have a dictionary like this: dict1 = {'w' : 4, 'e' : 5, 'r' : 8}. I want to append to the values in dict1 as follows: dict1 = {'w' : (4,6), 'e' : 5, 'r' : 8}. I have tried using update, but it just replaces the existing entry. Is there any option to append values for the key we want? Is it possible to achieve? If so, please let me know. Thanks in advance
Python 3 : How to append integer values for a key in a dictionary?
0
0
0
1,863
29,486,671
2015-04-07T08:01:00.000
0
0
0
0
python,openpyxl
29,487,114
2
false
0
0
I suspect that there might be a subtle difference in what you think you need to write as the formula and what is actually required. openpyxl itself does nothing with the formula, not even check it. You can investigate this by comparing two files (one from openpyxl, one from Excel) with ostensibly the same formula. The difference might be simple – using "." for decimals and "," as a separator between values even if English isn't the language – or it could be that an additional feature is required: Microsoft has continued to extend the specification over the years. Once you have some pointers please submit a bug report on the openpyxl issue tracker.
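One concrete pointer, offered as an assumption to verify against your openpyxl/Excel versions: Excel stores functions added after the original file-format spec with an internal _xlfn. prefix, and since openpyxl writes formulas verbatim, writing the prefixed name is a known way around the #NAME? error:

```python
# Hedged sketch: Excel stores post-2007 functions with an internal "_xlfn."
# prefix, and openpyxl writes formulas verbatim, so prefixing the name is one
# way to avoid the #NAME? error (assumed fix; verify for your versions).
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
for i, value in enumerate([1, 2, 3, 4, 5], start=1):
    ws.cell(row=i, column=1, value=value)

ws["B1"] = "=_xlfn.STDEV.P(A1:A5)"  # instead of "=STDEV.P(A1:A5)"
wb.save("stdev_demo.xlsx")
```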
1
3
0
I have a script to format a bunch of data and then push it into excel, where I can easily scrub the broken data and do a bit more analysis. As part of this I'm pushing quite a lot of data to excel and want excel to do some of the legwork, so I'm putting a certain number of formulae into the sheet. Most of these ("=AVERAGE(...)", "=A1+3", etc.) work absolutely fine, but when I add the standard deviation ("=STDEV.P(...)") I get a name error when I open the file in excel 2013. If I click in the cell within excel and hit Enter (i.e. don't change anything within the cell), the cell re-calculates without the name error, so I'm a bit confused. Is there anything extra that needs to be done to get this to work? Has anyone else had any experience of this? Thanks, Will
openpyxl and stdev.p name error
0
1
0
813
29,488,957
2015-04-07T10:05:00.000
0
0
0
0
python,html,selenium
29,564,991
2
true
1
0
The problem in my case was that I was not waiting for the element to load. At least I assume that was the problem, because if I let selenium wait for the element and then click on it, it works.
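A sketch of the explicit-wait pattern the answer describes (the URL and locator are placeholders):

```python
# Explicit-wait sketch: block until the element is present, then interact.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("http://example.com")  # placeholder URL

# Wait up to 10 seconds for the element before touching it.
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, "//html"))
)
print(element.size)
```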
2
3
0
I'm using the python package to move the mouse in some specified pattern or just random motions. The first thing I tried was to get the size of the //html element and use that to make the boundaries for mouse movement. However, when I do this the MoveTargetOutOfBoundsException rears its head and displays some "given" coordinates (which were nowhere near the input). The code I used: origin = driver.find_element_by_xpath('//html') bounds = origin.size print bounds ActionChains(driver).move_to_element(origin).move_by_offset(bounds['width'] - 10, bounds['height'] - 10).perform() So I subtract 10 from each boundary to test it and move to that position (apparently the move_to_element_by_offset method is dodgy). MoveTargetOutOfBoundsException: Message: Given coordinates (1919, 2766) are outside the document. Error: MoveTargetOutOfBoundsError: The target scroll location (17, 1798) is not on the page. Stacktrace: at FirefoxDriver.prototype.mouseMoveTo (file://... The actual given coordinates were (1903-10=1893, 969-10=989). Any ideas?
Selenium moving to absolute positions
1.2
0
1
1,318
29,488,957
2015-04-07T10:05:00.000
0
0
0
0
python,html,selenium
29,489,027
2
false
1
0
Two possible problems: 1) There could be scroll on the page, so before clicking you should first scroll the element into view. 2) The size is given without taking the browser chrome into account, so in the real world you should subtract about 20 or 30 pixels to get the original size (you could test those values).
2
3
0
I'm using the python package to move the mouse in some specified pattern or just random motions. The first thing I tried was to get the size of the //html element and use that to make the boundaries for mouse movement. However, when I do this the MoveTargetOutOfBoundsException rears its head and displays some "given" coordinates (which were nowhere near the input). The code I used: origin = driver.find_element_by_xpath('//html') bounds = origin.size print bounds ActionChains(driver).move_to_element(origin).move_by_offset(bounds['width'] - 10, bounds['height'] - 10).perform() So I subtract 10 from each boundary to test it and move to that position (apparently the move_to_element_by_offset method is dodgy). MoveTargetOutOfBoundsException: Message: Given coordinates (1919, 2766) are outside the document. Error: MoveTargetOutOfBoundsError: The target scroll location (17, 1798) is not on the page. Stacktrace: at FirefoxDriver.prototype.mouseMoveTo (file://... The actual given coordinates were (1903-10=1893, 969-10=989). Any ideas?
Selenium moving to absolute positions
0
0
1
1,318
29,499,285
2015-04-07T19:05:00.000
0
0
1
1
python,pip
29,558,245
1
false
0
0
You can try this: sudo mv /var/lib/dpkg/info /var/lib/dpkg/info.bak; sudo mkdir /var/lib/dpkg/info; sudo apt-get update. Hope it can help you (and others).
1
0
0
I am trying to install PIP for Python3, but no matter what I try, at some point I always end up with: E: Sub-process /usr/bin/dpkg returned an error code (1) E: Failed to process build dependencies. I tried with: python get-pip.py from the official PIP page. sudo apt-get install python3-pip sudo apt-get build-dep python3.4 I have the version of PIP for python2.7, so that's why I ran the last command. Can someone help me out please?
PIP Python Installation weird error
0
0
0
57
29,502,122
2015-04-07T21:58:00.000
0
0
1
0
python,python-2.7
29,523,500
3
false
0
0
There are already some great answers above. A quick fix for your print problems would be to make use of the future module, which backports some Python 3 features to Python 2. I'd recommend, as a minimum, writing your Python 2.7 code with the new print function. To do this, import the new print function from __future__, i.e. from __future__ import print_function. You will now get syntax errors in Python 2 using print as: print x and will now have to do: print(x). Other solutions like 2to3 and six exist, but these might be a bit complicated at the moment, especially as you are learning Python.
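A minimal illustration of the future import in action; this snippet should behave identically under Python 2.7 and 3.x:

```python
# Runs unchanged on both Python 2.7 and 3.x once the future import is present.
from __future__ import print_function

x = 42
print(x)                   # function-call form is valid in both versions
print("a", "b", sep=", ")  # keyword arguments now work under 2.7 too
```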
1
0
0
I wrote some code in Python 2.7.2 and now I may need to switch to 3.4.3, but my code breaks (simple print statements right now, but who knows what else). Is it possible to write the syntax in such a way that it will be compliant with both 2.7.2 and 3.4.3? I am just starting out with Python and don't want to build habits with one flavor and then have to relearn things with another version later.
Can Python code satisfy both 2.7.x and 3.4.x requirements?
0
0
0
188
29,504,313
2015-04-08T01:26:00.000
7
0
1
0
python,import
29,504,318
3
false
0
0
There's very little cost to a repeated import statement, since Python caches modules and only imports them once (at the first import), unless explicitly asked to reload a module with the reload function. The effect (and rough performance impact) of a repeated import statement is essentially just binding the imported names in the local namespace. It isn't completely free, however; import does have to lock and unlock the import table lock, and resolve the provided names. This means it can still slow down your program if called frequently.
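A rough way to measure the overhead yourself (numbers will vary by machine and Python version):

```python
# Measurement sketch: time a function that re-imports a cached module
# against one that uses an already-bound module-level name.
import timeit

def with_reimport():
    import math          # cache lookup + local name binding on each call
    return math.sqrt(2)

import math
def without_reimport():
    return math.sqrt(2)  # module already bound at module level

print(timeit.timeit(with_reimport, number=100000))
print(timeit.timeit(without_reimport, number=100000))
```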
1
11
0
I am often tempted to import modules in narrow contexts where they are needed. For example in the body of a function that utilises the module. In this case, the import statement may be executed many times. Apart from stylistic issues, what is the performance cost of doing this?
python: What is the cost of re-importing modules?
1
0
0
4,429
29,508,958
2015-04-08T07:53:00.000
2
0
0
0
django,python-2.7,django-templates,bokeh
32,680,856
5
false
1
0
You must put {{the_script|safe}} inside the head tag.
1
31
0
I want to display graphs from the bokeh library in my web application via the django framework, but I don't want to use the bokeh-server executable because that's not a good approach. Is that possible? If yes, how do I do it?
how to embed standalone bokeh graphs into django templates
0.07983
0
0
15,838
29,509,526
2015-04-08T08:25:00.000
1
0
1
0
python,wxpython,spyder
29,619,247
1
false
0
1
So I managed to find a solution to my problem. Winpython has the option to "register" the distribution; this will associate file extensions, add icons and, importantly for my case, register WinPython as a standard Python distribution. When I registered my copy of Winpython (in the Advanced tab of the Winpython Control Panel), the wxPython installer was able to see Winpython in the Windows registry and copy all the files to the corresponding folders. Now if I run import wx, it works.
1
1
0
I run python using Winpython. I would like to use the GUI libraries from wxpython in my Spyder IDE. I tried the wxpython installer but for some reason the packages are not copied to the WinPython\python\Lib\site-packages folder. I also tried the build it "Winpython Control Panel" which is supposed to add new packages but dragging and dropping the installer file didn't really work. How can I install wxpython so that I can use it from Winpython Spyder?
Installing/running wxpython on Winpython Spyder
0.197375
0
0
1,863
29,513,201
2015-04-08T11:21:00.000
9
0
0
0
python,python-2.7,plone,plone-4.x
29,515,070
1
true
1
0
Go to portal_setup (from the ZMI), then go to the "Upgrades" tab and select your profile (the one where you defined the metadata.xml). From there you can normally run upgrade steps that have not yet been run. In your case, click on the "Show" button next to "Show old upgrades".
1
3
0
I've got a Plone 4.2.4 application and from time to time I need to create an Upgrade Step. So, I register it in the configure.zcml, create the function to invoke and increase the profile version number in the metadata.xml file. However, it might happen that something goes not really as expected during the upgrade process and one would like to rerun the Upgrade with the corrected Upgrade Step. Is there a way to rerun the Upgrade Step or do I always need to increase the version and create new Upgrade Step to fix the previous one?
Is there a way to rerun an Upgrade Step in Plone?
1.2
0
0
161
29,513,493
2015-04-08T11:36:00.000
0
0
0
0
python,kivy
29,522,735
1
false
0
1
There is no built in widget for displaying photos, the Kivy philosophy is instead to make it easy to build such a thing from component widgets, e.g. in this case layouts and Image widgets probably. That said, we would be happy to include image browser implementations in the Kivy garden user repository.
1
0
0
How can I choose a photo in Kivy? I couldn't find a module to preview and select a photo. from kivy.uix.filechooser import FileChooserListView shows files by name. Thank you
Kivy Photo Chooser
0
0
0
259
29,515,509
2015-04-08T13:06:00.000
0
1
0
0
python-2.7,exception,testing,exception-handling,python-behave
29,516,346
1
false
0
0
Regardless of the framework/programming language, an exception is a state where something went wrong. This has to be handled somehow by the application, which is why a good programmer writes exception-handling code in the places where it is needed most. Exception handling can take many forms; in your case you want to test that the exception is logged. Therefore I see an easy test sequence here: 1) execute the code/sequence of actions which will raise the exception; 2) verify, with the help of your test automation framework, that the log file has an entry related to the exception raised in the previous step.
1
4
0
When an exception is raised in the application that is not accounted for (an uncaught/unhandled exception), it should be logged. I would like to test this behaviour in behave. The logging is there to detect unhandled exceptions so developers can implement handling for these exceptions or fix them if needed. In order to test this, I think I have to let the code under test raise an exception. The problem is that I cannot figure out how to do that without hard-coding the exception-raising in the production code. This is something I like to avoid as I do not think this test-code belongs in production. While unit-testing I can easily mock a function to raise the exception. In behave I cannot do this as the application is started in another process. How can I cause an exception to be raised in behave testing, so it looks as if the production code has caused it, without hard-coding the exception in the production code?
How to test uncaught/unhandled exceptions in behave?
0
0
0
557
29,524,885
2015-04-08T20:37:00.000
3
0
0
0
python,screenshot,python-imaging-library
29,624,597
1
false
0
1
The cursor isn't on the same layer as the desktop or game you're playing, so a screenshot won't capture it (try print screen and paste into mspaint). A workaround is to get the position of the cursor and draw it on the image. You could use win32gui.GetCursorPos() on Windows.
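A Windows-only sketch of that workaround (the triangle marker is a stand-in for a real cursor bitmap):

```python
# Windows-only sketch: grab the screen, then draw a marker where the cursor
# is, since the cursor lives on a layer the screenshot does not capture.
import win32gui                      # from pywin32
from PIL import ImageGrab, ImageDraw

screenshot = ImageGrab.grab()
x, y = win32gui.GetCursorPos()       # current cursor position in pixels

draw = ImageDraw.Draw(screenshot)
# Simple stand-in marker; a real cursor bitmap could be pasted instead.
draw.polygon([(x, y), (x, y + 16), (x + 11, y + 11)],
             fill="white", outline="black")
screenshot.save("screen_with_cursor.png")
```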
1
6
0
I'm making a program that streams my screen to another computer(like TeamViewer), I'm using sockets, PIL ImageGrab, Tkinter. Everything is fine but the screenshot I get from ImageGrab.grab() is without the mouse cursor, which is very important for my program purpose. Do you know how can I take screenshot with the mouse cursor?
Include mouse cursor in screenshot
0.53705
0
0
2,661
29,526,895
2015-04-08T23:09:00.000
0
0
1
0
python,pygame
29,527,035
2
false
0
1
The command turtle.down() will work, I guess.
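Since the question mentions pygame and a fixed FPS, here is an alternative, non-blocking timing sketch using pygame.time.get_ticks(); the frame surfaces below are placeholders for the real sprite images:

```python
# Non-blocking animation timing: the sprite frame advances only every 150 ms,
# while movement and input still run at the full frame rate.
import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))
clock = pygame.time.Clock()

frames = [pygame.Surface((32, 32)) for _ in range(4)]  # placeholder frames
for shade, frame in enumerate(frames):
    frame.fill((60 * shade, 60 * shade, 60 * shade))

frame_index = 0
last_switch = pygame.time.get_ticks()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    now = pygame.time.get_ticks()
    if now - last_switch >= 150:          # 150 ms between animation frames
        frame_index = (frame_index + 1) % len(frames)
        last_switch = now
    screen.fill((0, 0, 0))
    screen.blit(frames[frame_index], (144, 104))
    pygame.display.flip()
    clock.tick(60)                        # game still ticks at 60 FPS
```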
1
1
0
I am making a game for a presentation and I cannot seem to understand how to make a delay in Python. For example, whenever I press the D key, my character not only moves but also changes pictures so it looks like it's running. I have the movement part down, I just need to slow down the changing of the sprite so that it doesn't look like he's running a million miles per hour. I have set the FPS.
How do you make a delay in Python without stopping the whole program
0
0
0
165
29,528,342
2015-04-09T01:44:00.000
0
0
1
0
python
29,528,361
2
true
0
0
It does say "should return $250,000 * 0.40 + 50,000 * 0.80 = $140,000," but all your function should actually return is the final value of $140,000. The function should simply do the calculation and return the result. The equation is written out in order to help you create the function, not as an output requirement. However, the best person to clarify assignments is the teacher who assigned them.
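A minimal sketch of the function the assignment appears to describe; it returns only the computed number:

```python
# Sketch: 40% on the first $250,000, 80% on anything above that.
def calculate_tax(income):
    if income <= 250000:
        return income * 0.40
    return 250000 * 0.40 + (income - 250000) * 0.80

print(calculate_tax(100000))  # 40000.0
print(calculate_tax(300000))  # 140000.0
```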
1
0
0
Question: Define a Python function named calculate_tax() which accepts one parameter, income, and returns the income tax. Income is taxed according to the following rule: the first $250,000 is taxed at 40% and any remaining income is taxed at 80%. For example, calculate_tax(100000) should return $100,000 * 0.40 = $40,000, while calculate_tax(300000) should return $250,000 * 0.40 + 50,000 * 0.80 = $140,000. My question is simple: does the question ask me to print out the whole math operation, $100,000 * 0.40 = $40,000, or just the final answer, $40,000?
Can anyone please explain the following programming question?
1.2
0
0
186
29,528,394
2015-04-09T01:51:00.000
1
0
0
0
python,time-series,influxdb
29,533,201
1
false
0
0
Not using Python, but in my case I use continuous queries in InfluxDB to automatically consolidate data in one place/series. Then I query every X seconds on the newly created series, using a time window to select my data. The results are then drawn using a standard framework (highcharts.js). Maybe in your case you could wait for a predefined data volume before triggering the push to the processing function.
1
0
0
I think InfluxDB is a really cool time series DB. I am planning to use it as an intermediate data aggregator (collecting time-based metrics from many sensors). The data needs to be processed in a "moving window" manner - when X samples have been received, a Python-based processing algorithm should be triggered. What is the best way to trigger the algorithm once enough data has aggregated? (I assume that polling with select queries is not the best option.) Are there any events I can wait on? Thanks! Meir
How to use InfluxDB as an intermediate data storage
0.197375
1
0
528
29,528,931
2015-04-09T02:53:00.000
2
0
1
0
python,function,dictionary
29,528,957
3
true
0
0
return [k for k, v in counttext.items() if v >= n]
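For context, a self-contained version of that one-liner might look like this (the function and argument names are assumptions):

```python
# Sketch: count the words in a text, then return those occurring >= n times.
from collections import Counter

def frequent_words(text, n):
    counttext = Counter(text.split())
    return [k for k, v in counttext.items() if v >= n]

print(frequent_words("the cat and the dog and the bird", 2))
# e.g. ['the', 'and'] (order may vary)
```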
1
0
0
So I am trying to write a basic function that takes a text input and an integer 'n', and returns the words in the input that occur n times or more. Here is what I have: My problem is the 'return keys' line - clearly that will not work. What can I use to return the relevant words? Thanks
Returning a dictionary key from its value
1.2
0
0
54
29,529,660
2015-04-09T04:15:00.000
0
0
1
0
python,python-2.7
29,529,811
1
true
0
0
There are two main types of windows -- modal and non-modal. You cannot interact with two modal windows at the same time. Depending on your GUI framework, there are different ways to have two non-modal windows open. If you are using PyQt4, google for modeless dialogs to see examples of how to use them. Otherwise, give more details about what you are trying to achieve.
1
0
0
How do you run two python GUI windows at the same time in python 2.7?
How to Have Multiple Python GUIs Running at the Same time?
1.2
0
0
235
29,533,144
2015-04-09T08:08:00.000
7
0
0
0
python,multithreading,flask,multiprocessing,multitasking
29,534,134
3
false
1
0
These kinds of long-polling jobs are best achieved using sockets; they don't really fit the Flask/WSGI model, as it is not geared to asynchronous operations. You may want to look at twisted or tornado. That said, your back-end process that reads/writes to telnet could run in a separate thread that may or may not be initiated from an HTTP request. Once you kick off a thread from the flask app it won't block the response. You can then read from the data store it writes to by occasionally polling the Flask app for new data. This could be achieved client-side in a browser using javascript and timeouts, but it's a bit hacky.
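A rough sketch of that thread-plus-polling pattern; the telnet_reader loop and the /data endpoint are stand-ins, not a real API:

```python
# Background thread fills a shared buffer; a Flask endpoint drains it on poll.
import threading
import time

from flask import Flask, jsonify

app = Flask(__name__)
buffer = []
lock = threading.Lock()

def telnet_reader():
    counter = 0
    while True:                      # placeholder for the real telnet loop
        time.sleep(1)
        with lock:
            buffer.append("line %d" % counter)
        counter += 1

@app.route("/data")
def data():
    with lock:
        items = list(buffer)         # hand back everything read so far
        del buffer[:]
    return jsonify(items=items)

if __name__ == "__main__":
    reader = threading.Thread(target=telnet_reader)
    reader.daemon = True
    reader.start()
    app.run()
```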
1
11
0
I have a long running process that continuously reads from a telnet port and may occasionally write to it. Sometimes I want to send an HTTP request to it to fetch the info it has read since the last time I asked. Sometimes I may send an HTTP request to write certain data to another telnet port. Should I do this with 2 threads, and if so, should I use a mutex or an instruction queue? How do you do threading with flask anyway? Should I use multiprocessing? Something else? The reason I ask is that I ran into trouble with a similar problem (but serial ports instead of a telnet port, and directly in the app instead of a local/remote HTTP service) and ended up with the non-data-reading thread somehow almost never running, even when I inserted tons of sleep calls. I ended up re-writing it from mutexes to queues and then to using multiprocessing with queues. Edit: The telnet ports are connections to an application which communicates (mainly reads debug data) with hardware (a printer). The flask HTTP service I want to write would be accessed by tests running against the printer (either on the same machine or a different machine than the HTTP service); none of this involves a web browser!
What's the best way to present a flask interface to an ongoing background task?
1
0
0
1,176
29,535,168
2015-04-09T09:49:00.000
4
0
0
0
python,django,templates,jinja2
29,535,357
2
true
1
0
My suggestion is to use the built-in one. This way you'll save some time at the beginning and get a chance to learn Django internals first.
1
1
0
I have just started learning django (with some non-web python experience). I see there are at least two template engines: default django and jinja2. I see they are quite similar in syntax. Which one is better for a beginner? Which one has better prospects? Many thanks, Tomasz
Django: Just started learning django, should I use django or jinja2 templates?
1.2
0
0
203
29,538,870
2015-04-09T12:44:00.000
1
0
0
0
python,postgresql
29,538,970
1
false
0
0
Database designers spend a lot of time on caching and optimization. Unless you hit a specific problem, it's probably better to let the database do the database stuff, and your code do the rest instead of having your code try to take over some of the database functionality.
1
0
0
I'm writing a web application in python and postgreSQL. Users are to access a lot of information during a session. All such information (almost) is indexed in the database. My question is, should I litter the code with specific queries, or is it better practice to query larger chunks of information, caching it, and letting python process the chunk for finer pieces? For example: A user is to ask for entries in a payment log. Either one writes a query asking for the specific entries requested, or one collects the payment history of the user and then uses python to select the specific entries. Of course caching is preferred when working with heavy queries, but since nearly all my data is indexed, direct database access is fast and the caching approach would not yield much if any extra speed. But are there other factors that may still render the caching approach preferable?
General queries vs detailed queries to database
0.197375
1
0
30
29,539,555
2015-04-09T13:14:00.000
1
0
0
0
python,web-scraping,data-extraction
29,540,245
1
false
1
0
You shouldn't try to fetch information about the delivery price from a cart or any other page, because as you see it depends on the cart amount or other conditions on the e-commerce site. That means the only right way here is to emulate these rules/conditions when you calculate the total price of an order on your side. Do it like this and you'll avoid many problems with the correct calculation of delivery prices.
1
0
0
I have a python script that extracts product data from an ecommerce website. However, one essential piece of information is missing from the page - delivery cost. This is not provided on any of the product pages, and is only available when you add the product to the shopping basket in order to test how much the product costs to deliver. Complexity is also added due to different delivery rules - e.g free delivery on orders over £100, different delivery prices for different items, or a flat rate of shipping for multiple products. Is there a way that I can easily obtain this delivery cost data? Are there any services that anyone knows of through which I can obtain this data more easily, or suggestions on a script that I could use? Thanks in advance.
Scraping / Data extraction of shipping price not on product page (only available on trolley)
0.197375
0
1
363
29,539,678
2015-04-09T13:20:00.000
0
0
1
0
python
29,547,417
3
true
0
0
The Perl wrapper for GGNFS (a C implementation) was rewritten in Python by Brian Gladman. Look for factmsieve.py.
1
5
1
Is there any inbuilt or online implementation of GNFS factoring in Python? I need a version that can easily be used to factor integers in other programs, so I would need to import it, and it should preferably be compatible with, or need only minimal changes to work with, Python 3. I need this to factor (multiple) numbers of over 90 digits in length, and elliptic curve factorization is too slow for the purpose. I have checked online and could only find Perl and C++ implementations. If not, is there any online resource that could guide me step by step to my own implementation of this algorithm?
Is there a pre-existing implementation of the General Number Field Sieve (GNFS) in Python?
1.2
0
0
2,960
29,539,795
2015-04-09T13:25:00.000
0
0
0
0
python,linux,django,suse
29,539,856
2
false
1
0
It sounds like the python interpreter is what you don't have permission for. Do you have permission to run python?
2
0
0
I'm trying to use django on a Suse server, to use it in production with apache and mod_python, but I'm finding some problems. I have installed python 2.7.9 (the default version was 2.6.4) and django 1.7. I had some problems with the installation, but they are now solved. My current problem is that when I try to execute django-admin I get this error: -bash: /usr/local/bin/django-admin: .: bad interpreter: Permission denied I have searched through the web but I have not found a solution. I have tried to make the file executable: sudo chmod +x django-admin and the problem remains the same. Any idea? Thanking you in advance.
django-admin bad interpreter: Permission Denied
0
0
0
496
29,539,795
2015-04-09T13:25:00.000
0
0
0
0
python,linux,django,suse
29,540,690
2
false
1
0
Have you tried adding your user to the group with permission to execute python? You can look at the file /etc/passwd; in that file each user's entry is described.
2
0
0
I'm trying to use django on a Suse server, to use it in production with apache and mod_python, but I'm finding some problems. I have installed python 2.7.9 (the default version was 2.6.4) and django 1.7. I had some problems with the installation, but they are now solved. My current problem is that when I try to execute django-admin I get this error: -bash: /usr/local/bin/django-admin: .: bad interpreter: Permission denied I have searched through the web but I have not found a solution. I have tried to make the file executable: sudo chmod +x django-admin and the problem remains the same. Any idea? Thanking you in advance.
django-admin bad interpreter: Permission Denied
0
0
0
496
29,541,619
2015-04-09T14:40:00.000
1
0
0
0
javascript,python,bash,youtube,command-line-interface
29,541,704
1
false
0
0
Python or Node (JS) will probably be a lot easier for this task than Bash, primarily because you're going to have to do OAuth to get access to the social network. Or, if you're willing to get a bit "hacky", you could issue scripts to PhantomJS, and automate the interaction with the sites in question...
1
0
0
I would like to write a script to access data on a website, such as: 1) automatically searching a youtuber's profile for a new posting, and printing the title of it to stdout. 2) automatically posting a new video, question, or comment to a website at a specified time. For a lot of sites, there is a required login, so that is something that would need to be automated as well. I would like to able to do all this stuff from the command line. What set of tools should I use for this? I was intending to use Bash, mostly because I am in the process of learning it, but if there are other options, like Python or Javascript, please let me know. In a more general sense, it would be nice to know how to read and directly interact with a website's JS; I've tried looking at the browser console, but I can't make much sense of it.
How to interact with social websites (auto youtube posting, finding titles of new videos etc.) from the command line
0.197375
0
1
52
29,541,928
2015-04-09T14:54:00.000
2
0
1
0
python,precision
29,542,118
2
true
0
0
If you run 0.1 == 0.10 in IDLE, it will show that it evaluates to True. The same goes for 0.1 == 0.10000; this will evaluate to True as well.
2
2
0
Hello, I have a quick question before I go and write a complicated loop full of type conversions and stuff. While comparing two values, will this result in True? 0.1 == 0.10 (in floating point) I'm really comparing members of a list, and they might come out like this; I just wanted to make sure equal values will result in True for my if statements. Will this result in True, or would I need to change the decimal point precision for one of them?
Python quickie: decimal point precision equality
1.2
0
0
119
29,541,928
2015-04-09T14:54:00.000
1
0
1
0
python,precision
29,542,377
2
false
0
0
If you are doing decimal arithmetic that needs to be exact, use the Decimal type, which, unlike float, is exact.
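A small illustration contrasting the two (safe to paste into a REPL):

```python
# Floats compare by value but accumulate binary rounding error; Decimal is exact.
from decimal import Decimal

print(0.1 == 0.10)                    # True: same float value
print(0.1 + 0.2 == 0.3)               # False: classic float rounding error
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True: exact decimal
```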
2
2
0
Hello, I have a quick question before I go and write a complicated loop full of type conversions and stuff. While comparing two values, will this result in True? 0.1 == 0.10 (in floating point) I'm really comparing members of a list, and they might come out like this; I just wanted to make sure equal values will result in True for my if statements. Will this result in True, or would I need to change the decimal point precision for one of them?
Python quickie: decimal point precision equality
0.099668
0
0
119
29,546,225
2015-04-09T18:30:00.000
0
0
0
1
python-2.7,easy-install
29,546,275
1
false
0
0
Have you recently upgraded your OS? Sometimes the X-Code Command Line Tools need to be re-installed after an OS upgrade.
1
0
0
I seem to have screwed up my Python install on my Mac (running OSX 10.10.3); I can run python but not easy_install. Running easy_install just gives me sudo: easy_install: command not found However, sudo easy_install-3.4 pip doesn't give me any error, but when I then try to use pip using pip install gevent I get -bash: /usr/local/bin/pip: No such file or directory If I use pip3.4 install gevent I get a long set of errors ending with Cleaning up... Command /Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4 -c "import setuptools, tokenize;__file__='/private/var/folders/sb/bk7v6n4x30s6c_w_p3jf7mrh0000gn/T/pip_build_Oskar/gevent/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/sb/bk7v6n4x30s6c_w_p3jf7mrh0000gn/T/pip-q7w99lz8-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/var/folders/sb/bk7v6n4x30s6c_w_p3jf7mrh0000gn/T/pip_build_Oskar/gevent Storing debug log for failure in /var/folders/sb/bk7v6n4x30s6c_w_p3jf7mrh0000gn/T/tmpoowjltmj How can I restore my Python setup?
easy_install not working on OS X
0
0
0
314
29,548,735
2015-04-09T20:51:00.000
1
1
0
1
python,psutil,iowait
29,548,863
1
true
0
0
%wa is giving you the iowait of the CPU; if you are using times = psutil.cpu_times() or times = psutil.cpu_times_percent(), then it is available as times.iowait on the returned value (assuming you are on a Linux system).
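A minimal sketch, assuming a Linux system where psutil exposes iowait:

```python
# Sample CPU time percentages over one second and log the iowait share.
import psutil

times = psutil.cpu_times_percent(interval=1)
print("iowait: %.1f%%" % times.iowait)
```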
1
1
0
I am writing a python script to get some basic system stats. I am using psutil for most of it and it is working fine, except for one thing that I need. I'd like to log the current average cpu wait time. In the output of top it would be in the CPU section under %wa. I can't seem to find how to get that in psutil; does anyone know how to get it? I am about to go down a road I really don't want to go on.... That entire CPU row is rather nice, since it totals to 100 and is easy to log and plot. Thanks in advance.
Get IO Wait time as % in python
1.2
0
0
1,680
29,551,003
2015-04-09T23:49:00.000
0
0
1
0
python,bash,qt,pyqt,tornado
29,551,539
2
false
0
1
You won't need a bash script. It's probably simplest to write a PyQt application and have it launch the web server. The server may run in a separate thread or process depending on your requirements, but I'd start with a single thread as a first draft and split it out later. Having the PyQt app as your main thread makes sense, as your GUI is going to be responsible for user input (start/stop server, etc.) and program output (server status, etc.), and therefore it makes sense to make this the controlling thread with references to other objects or threads.
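A sketch of the single-application approach, assuming the Tornado 4.x-era API that was current when this was asked (newer Tornado on Python 3 would need an asyncio event loop set up inside the thread):

```python
# The Tornado server runs in a daemon thread; PyQt4 owns the main thread.
import sys
import threading

import tornado.ioloop
import tornado.web
from PyQt4 import QtGui

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello from the embedded server")

def run_server():
    app = tornado.web.Application([(r"/", MainHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.instance().start()

qt_app = QtGui.QApplication(sys.argv)
window = QtGui.QLabel("Server running on http://localhost:8888")
window.show()

server_thread = threading.Thread(target=run_server)
server_thread.daemon = True          # dies with the GUI process
server_thread.start()

sys.exit(qt_app.exec_())
```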
1
0
0
I am trying to program an application that runs a HTTP server as well as a GUI using Tornado and PyQt4 respectively. I am confused about how to use these two event loops in parallel. Can this be done with the multiprocessing module? Should the HTTP server be run in a QtThread? Or is a bash script the best way to go to run both of these processes at the same time?
How can I combine PyQt4 and Tornado's event loops into one application?
0
0
0
757
29,552,242
2015-04-10T02:16:00.000
0
0
0
0
python,django,amazon-s3,boto,collectstatic
39,135,308
4
false
1
0
Old question, but to fix this easily I just added the environment variable AWS_DEFAULT_REGION with the region I was using (e.g. "ap-southeast-2"). This works locally (Windows) and in AWS EB.
2
12
0
I'm using boto with S3 to store my Django site's static files. When using the collectstatic command, it uploads a good chunk of the files perfectly before stopping at a file and throwing "Error 32: Broken Pipe." When I try to run the command again, it skips over the files it has already uploaded and starts at the file where it left off, before throwing the same error without having uploaded anything new.
Using Django's collectstatic with boto S3 throws "Error 32: Broken Pipe" after a while
0
0
0
1,924
29,552,242
2015-04-10T02:16:00.000
0
0
0
0
python,django,amazon-s3,boto,collectstatic
43,571,560
4
false
1
0
I also had the problem only with jquery.js, probably because it is too big, as @Kyle Falconer mentions. It had nothing to do with the region in my case. I "solved" it by copying the file locally to the S3 bucket where it needed to be.
2
12
0
I'm using boto with S3 to store my Django site's static files. When using the collectstatic command, it uploads a good chunk of the files perfectly before stopping at a file and throwing "Error 32: Broken Pipe." When I try to run the command again, it skips over the files it has already uploaded and starts at the file where it left off, before throwing the same error without having uploaded anything new.
Using Django's collectstatic with boto S3 throws "Error 32: Broken Pipe" after a while
0
0
0
1,924
29,552,853
2015-04-10T03:30:00.000
3
0
0
1
python,hadoop,mapreduce,hive,apache-pig
29,991,069
1
false
0
0
Python MapReduce, or anything using the Hadoop Streaming interface, will most likely be slower. That is due to the overhead of passing data through stdin and stdout and the implementation of the streaming API consumer (in your case Python). Python UDFs in Hive and Pig do the same thing. You might not want to compress the data flow into ORC on the Python side; you would be subjected to using Python's ORC libraries, which I am not sure are available. It would be easier if you let Python return your serialized object and let the Hadoop reduce steps compress and store it as ORC (Python as a UDF for computation). Yes. Pig and Python have a somewhat nice programmatic interface whereby you can write Python scripts to dynamically generate Pig logic and submit it in parallel; look up Embedding Pig Latin in Python. It's robust enough to define Python UDFs and let Pig do the overall abstraction and job optimization. Pig does lazy evaluation, so in cases of multiple joins or multiple transformations it can demonstrate pretty good performance in optimizing the complete pipeline. You say HDP 2.1. Have you had a look at Spark? If performance is important to you, then given the dataset sizes, which don't look huge, you can expect many times faster overall pipeline execution than with Hadoop's native MR engine.
1
2
1
I have the below requirements and am confused about which one to choose for high performance. I am not a Java developer; I am comfortable with Hive, Pig and Python. I am using HDP2.1 with the tez engine. The data sources are text files (80 GB) and an Oracle table (15 GB). Both are structured data. I heard Hive suits structured data, and that Python map-reduce streaming can have higher performance than Hive & Pig. Please clarify. I am using Hive, and the reasons are: need to join those two sources based on one column; using an ORC format table to store the join results, since the data size is huge; the text file name will be used to generate one output column, and that is done with the virtual column input__file__name field. After the join I need to do some arithmetic operations on each row, and I do that via a Python UDF. Now the complete execution time, from data copy into HDFS to final result, takes 2.30 hrs with a 4-node cluster using Hive and the Python UDF. My questions are: 1) I heard Java MapReduce is always faster. Is that true of Python map-reduce streaming too? 2) Can I achieve all the above functions in Python, like the join, retrieval of the text file name, and a compressed data flow like ORC, since the volume is high? 3) Would a Pig join be better than Hive? If yes, can we get the input text file name in Pig to generate an output column? Thanks in advance.
Which will give the best performance: Hive, Pig, or Python MapReduce, with a text file and an Oracle table as sources?
0.53705
0
0
2,382
29,552,868
2015-04-10T03:32:00.000
0
0
0
0
python,mysql
29,552,956
1
true
0
0
I think the right answer is to try and handle the connection errors; it sounds like you'd only be pulling in a much larger library just for this feature, while trying and catching is probably how it's done, whatever level of the stack it's at. If necessary, you could multithread these things, since they're probably IO-bound (i.e. suitable for Python GIL threading, as opposed to multiprocessing) and decouple the production and the consumption with a queue, too, which would maybe take some of the load off of the database connection.
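A hedged sketch of the catch-and-reconnect loop using the MySQLdb-style API; the table, insert statement, and row stream are placeholders:

```python
# Retry each write until it succeeds, rebuilding the connection on failure.
import time
import MySQLdb

def get_connection():
    return MySQLdb.connect(host="localhost", user="user",
                           passwd="secret", db="tweets")

# Stand-in for the iterator yielded by the Twitter-streaming code.
stream_of_rows = [("2015-04-10", "hello"), ("2015-04-10", "world")]

conn = get_connection()
for row in stream_of_rows:
    while True:
        try:
            cur = conn.cursor()
            cur.execute("INSERT INTO tweet_log VALUES (%s, %s)", row)
            conn.commit()
            break
        except MySQLdb.OperationalError:
            time.sleep(5)            # back off, then rebuild the connection
            conn = get_connection()
```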
1
0
0
I am using Python to stream large amounts of Twitter data into a MySQL database. I anticipate my job running over a period of several weeks. I have code that interacts with the twitter API and gives me an iterator that yields lists, each list corresponding to a database row. What I need is a means of maintaining a persistent database connection for several weeks. Right now I find myself having to restart my script repeatedly when my connection is lost, sometimes as a result of MySQL being restarted. Does it make the most sense to use the mysqldb library, catch exceptions and reconnect when necessary? Or is there an already made solution as part of sqlalchemy or another package? Any ideas appreciated!
Persistent MySQL connection in Python for social media harvesting
1.2
1
0
48
29,554,475
2015-04-10T06:07:00.000
0
0
0
0
python,sockets
29,554,763
1
false
0
0
When the communication between clients and server isn't heavy, one way to do this is to have clients do a handshake with the server, and have the server enumerate clients and send back ids for communication. Then the client sends its id along with any communication it has with the server, so the server can identify it. At least that is what I did.
1
0
0
I was able to set up a simple socket server and client connection between two devices, with the ability to send and receive values. My issue is with setting up the remote server to accept two clients from the same device, and differentiate the data being received by them. Specifically, each client will be running a similar code to accept encoder/decoder values from their respective motor. My main program, attached to the server, needs to use the data from each client separately, in order to carry out the appropriate calculations. How do I differentiate the incoming signals coming from both clients?
Python server to receive specific data from two clients on same remote device
0
0
1
117
29,560,307
2015-04-10T11:28:00.000
0
0
0
0
python,pyqt,qt-designer,qtabwidget
44,781,276
2
false
0
1
I see that this thread is kinda old, but I hope this will still help. You can use the removeTab() method to "hide" the tab; there's no way to really hide tabs in pyqt4. When you remove it, it's gone from the UI, but in the back end the tab object with all your settings still exists, so I'm sure you can find a way to bring it back. Give it a try!
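A PyQt4 sketch of that remove-and-keep-a-reference idea (widget names are made up):

```python
# "Hide" a tab by removing it while keeping a handle, then re-insert it later.
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
tabs = QtGui.QTabWidget()
page_a = QtGui.QLabel("problem type A")
page_b = QtGui.QLabel("problem type B")
tabs.addTab(page_a, "A")
tabs.addTab(page_b, "B")

# Keep the index, widget, and label so the page survives removal.
hidden = (tabs.indexOf(page_b), page_b, "B")
tabs.removeTab(hidden[0])

# Later, e.g. when another parameter file is loaded, put it back:
tabs.insertTab(hidden[0], hidden[1], hidden[2])

tabs.show()
sys.exit(app.exec_())
```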
1
1
0
I am trying to build a GUI which will: load a file with parameters which describe a certain type of problem, and, based on the parameters of the file, show only a certain tab in a QTabWidget (of many predefined in the Qt Designer .ui). I plan to make a QTabWidget with, say, 10 tabs, but only one should be visible based on the parameters loaded. Enabling a certain tab is not an option, since it takes too much space and the disabled tabs are grey; I do not want to see disabled tabs. Removing a tab could be an option, but the index is not related to a specific tab, so I have to take care of the shift in the indices. And furthermore, if the user loads another file with different parameters, the right tab should be added and the current one removed. My questions are: How do I do this effectively? Is it better to use another type of widget? In Qt Designer, is it possible to define many widgets one over another and then just push the right one to the front? If yes, how? And how do I edit and change any of them? If using removeTab, how do I use pointers to tabs, rather than indices? I use PyQt4.
PyQt QTabWidget add, remove, hide, show certain tab
0
0
0
7,611
29,562,943
2015-04-10T13:38:00.000
0
0
1
0
python,regex
29,563,089
3
false
0
0
The group (@#)? is saying that the word may begin with "@#". What you are looking for is [@#]?, which says the first character is @ or #, but is not required. If you need the match to be part of a group, you could use (@|#)?.
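A runnable sketch of the character-class idea; note, as an extra assumption beyond this answer, that a leading \b can fail before the non-word characters @ and #, so this variant uses a (?<!\w) lookbehind instead:

```python
# Because "@"/"#" are non-word characters, "\b[@#]?" would skip the prefix;
# a negative lookbehind keeps the boundary check while capturing the prefix.
import re

pattern = re.compile(r"(?<!\w)[@#]?(?:abc|ef|ghij)\b")
text = "#abc is a pattern which should match. also abc should match. And finally @ef"
print(pattern.findall(text))  # ['#abc', 'abc', '@ef']
```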
1
1
0
I want to match a set of patterns at a "word boundary", but the patterns may have a prefix [@#] which should get matched if present. I'm using the following regex pattern in python: r"\b[@#]?(abc|ef|ghij)\b" Sample text is: #abc is a pattern which should match. also abc should match. And finally @ef In this text only abc, abc and ef are matched, without the prefixes, and not #abc and @ef as I want.
Python regex not matching at word boundary as required
0
0
0
977
29,565,712
2015-04-10T15:51:00.000
1
1
0
0
python,database,unit-testing
29,576,807
2
false
0
0
As it seems I got the wrong end of the stick: I had a similar-ish problem, and like you, an ORM was not an option. The way I addressed it was with simple collections of data transfer objects. So the new code I wrote had no direct access to the db; it did everything with simple lists of objects. All the business logic and UI could be tested without the db. Then I had another module that did nothing but read and write to the db, to and from my collections of objects. It was a poor man's ORM, basically - a lot of donkey work. Testing meant running the db creation script, then some test helper code to populate the db with the data I needed for each test. Boring but effective, and with a bit of care you can refactor it into the code base without too much risk.
2
1
0
I'm trying to create unit tests for a function that uses database queries in its implementation. My understanding of unit testing is that you shouldn't be using outside resources such as databases for unit testing, and you should just create mock objects essentially hard coding the results of the queries. However, in this case, the queries are implementation specific, and if the implementation would change, so would the queries. My understanding is also that unit testing is very useful because it essentially allows you to change the implementation of your code whenever you want while being sure it still works. In this case, would it be better to create a database for testing purposes, or to make the testing tailored to this specific implementation and change the test code if we ever change the implementation?
Unit testing on implementation-specific database usage
0.099668
1
0
105
29,565,712
2015-04-10T15:51:00.000
2
1
0
0
python,database,unit-testing
29,566,319
2
true
0
0
Well, to start with, I think this is very much something that depends on the application context and the QA/dev's skill set & preferences. So, what I think is right may not be right for others. Having said that... In my case, I have a system where an extremely complex ERP database, which I don't control, is very much in the driver's seat, and my code is a viewer/observer rather than a driver of that database. I don't, and can't really, use an ORM layer much; all my added value is in queries that deeply understand the underlying database data model. Note also that I am mostly a viewer of that db; in fact my code has read-only access to the primary db. It does have write access to its own tagging database, which uses the Django ORM, and testing there is different in nature because of my reliance on the ORM. For me, it had better be tested with the database. Mock objects? Please, mocking would have guzzled time when there are so many legitimate reasons to view/modify database contents with complex queries. Changing queries. In my case, changing and tweaking those queries, which are the core of my application logic, is very often needed. So I need to make fully sure that they perform as intended against real data. Multi-platform concerns. I started coding on postgresql, then tweaked my connectivity libraries to support Oracle as well. I ran the unit tests and fixed anything that popped up as an error. Would a database abstraction have identified things like the LIMIT clause handling in Oracle? Versioning. Again, I am not the master of the database. So, as versions change, I need to hook my code up to them. The unit testing is invaluable, but that's because it hits the raw db. Test robustness. One lesson I learned along the way is to uncouple the test from the test db. Say you want to test a function that flags active customers that have not ordered anything in a year. My initial test approach involved manual lookups in the test database, finding CUST701 to be a match for the condition. Then I would call my function and test whether CUST701 is in the result set of customers needing review. Wrong approach. What you want to do is to write, in your test, a query that finds active customers that have not ordered anything in a year. No hardcoded CUST701s at all, but your test query can be as hardcoded as you want - in fact, it should look as little like your application queries as possible - you don't want your test SQL to replicate what could potentially be a bug in your production code. Once you have dynamically identified a target customer meeting the criteria, then call your code under test and see if the results are as expected. Make sure your coverage tools identify when you've been missing test scenarios, and plug those holes in the test db. BDD. To a large extent, I am starting to approach testing from a BDD perspective, rather than low-level TDD. So, I will be calling the url that handles the inactive customer list, not testing individual functions. If the overall result is OK and I have enough coverage, I am OK, without wondering about the detailed low-level to and fro. So factor this in as well when qualifying my answer. Coders have always had test databases. To me, it seems logical to leverage them for BDD/unit-testing, rather than pretending they don't exist. But I am at heart a SQL coder who knows Python very well, not a Python expert who happens to dabble in SQL.
2
1
0
I'm trying to create unit tests for a function that uses database queries in its implementation. My understanding of unit testing is that you shouldn't be using outside resources such as databases for unit testing, and you should just create mock objects essentially hard coding the results of the queries. However, in this case, the queries are implementation specific, and if the implementation would change, so would the queries. My understanding is also that unit testing is very useful because it essentially allows you to change the implementation of your code whenever you want while being sure it still works. In this case, would it be better to create a database for testing purposes, or to make the testing tailored to this specific implementation and change the test code if we ever change the implementation?
Unit testing on implementation-specific database usage
1.2
1
0
105