Column                             Dtype          Min    Max
Q_Id                               int64          337    49.3M
CreationDate                       stringlengths  23     23
Users Score                        int64          -42    1.15k
Other                              int64          0      1
Python Basics and Environment      int64          0      1
System Administration and DevOps   int64          0      1
Tags                               stringlengths  6      105
A_Id                               int64          518    72.5M
AnswerCount                        int64          1      64
is_accepted                        bool           2 classes
Web Development                    int64          0      1
GUI and Desktop Applications       int64          0      1
Answer                             stringlengths  6      11.6k
Available Count                    int64          1      31
Q_Score                            int64          0      6.79k
Data Science and Machine Learning  int64          0      1
Question                           stringlengths  15     29k
Title                              stringlengths  11     150
Score                              float64        -1     1.2
Database and SQL                   int64          0      1
Networking and APIs                int64          0      1
ViewCount                          int64          8      6.81M
27,315,968
2014-12-05T12:32:00.000
4
1
0
0
python,django,amazon-web-services,rabbitmq,amazon-sqs
27,317,451
2
true
1
0
I haven't had any problems with slow performance on SQS, but then again it may be that, by their nature, my apps don't count on sub-millisecond response times for items in my queue. For me, the work done on the items in the queue contributes more to the lag than the time it takes to use the queue, and the distributed, highly available and 'hands-off' nature of SQS fits the bill. Only you can decide what is more important: a few more milliseconds of performance in a non-redundant system that you need to support yourself, or the 'queue as a service' offering of AWS. Not knowing your application, I can't say whether the perceived extra performance is a necessary trade-off for you.
1
0
0
Our startup is currently using RabbitMQ (with Python/Django) for message queues; we are now planning to move to Amazon SQS for its high availability & delayed-queue feature. But I am reading everywhere on the Internet that SQS performs slowly & is also very cost effective, so is it a wise decision to move to Amazon SQS, or should we stick with RabbitMQ? And if it's better to stick with RabbitMQ, what's the alternative solution for "delayed queues"?
Moving from RabbitMQ to Amazon SQS
1.2
0
0
3,023
27,322,027
2014-12-05T18:04:00.000
0
0
0
0
python,mysql,sql,sqlite
27,323,942
2
false
0
0
You have several options. The simplest is to save the output in chunks instead (say, one file for all of the first molecule's distance scores, a second file for the second molecule's distances, etc., 60,000 files in all). That would also allow you to process your work in batches and then aggregate to get the combined result. If that's not possible due to operations you need to do, you could perhaps sign up for a free Amazon Web Services trial and upload the data to a Redshift server instance there; it has very good compression (>10x is common), is frighteningly fast for data analysis, and if you know SQL, you should be fine.
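A minimal sketch of the chunked-output idea, reusing the MolList and calculate_score names from the question (the stub definitions and output paths are illustrative):

```python
def calculate_score(i, j):     # stub: stands in for the real jaccard/tanimoto score
    return 1.0 if i == j else 0.5

MolList = range(1, 4)          # stub: the real list has ~60k molecules

# One scores file per molecule instead of a single giant table.
for i in MolList:
    with open("scores_mol_%06d.csv" % i, "w") as f:
        for j in MolList:
            f.write("%d,%d,%f\n" % (i, j, calculate_score(i, j)))
```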
1
2
0
I am dealing with some performance issues whilst working with a very large dataset. The data is a pairwise distance matrix of ~60k entries. The resulting vectors have been generated in the following format: mol_a,mol_b,score, year_a, year_b 1,1,1,year,year 1,2,x,year,year 1,3,y,year,year ... 1,60000,z,year,year 2,1,x,year,year 2,2,1,year,year ... where mol_a and mol_b are unique molecules (INTs), score is their jaccard/tanimoto similarity score (FLOAT/REAL), and year_a and year_b are dates (INT(4)) associated with mol_a and mol_b respectively. Since this is a distance matrix the values are reflected across the diagonal, i.e. 0 1 2 3 1 1 x y 2 x 1 z 3 y z 1 The resulting file has ~3.6 billion rows and becomes a ~100GB sqlite3 db. It takes about 10 hours to make, using all the PRAGMA tweaks I have read about and doing executemany in 5-million-entry batches. I would love to throw away half of it while building it, but I can't think of a good way of doing this without ending up building a (prohibitively) giant list in memory... It's constructed via 2 nested for loops: for i in MolList: for j in MolList: a = calculate_score(i,j) write_buffer.append(a) Though the creation is slow, it is not the prohibitive part; the actual analysis I want to do with it is. I will be grouping things by year_a and year_b, so I started creating an index on year_a, year_b and score to have a 'covering index', but the build is 13 hours in and occasionally uses massive amounts of hard-drive space on my C: drive (which is a small SSD, vs. the RAID where the database is). The whole thing is running on a 12-core workstation with 16GB of RAM, Windows 7 on a 240GB SSD and data storage on a 1TB RAID 1 array (built-in motherboard controller). I have also been attempting to build a MySQL database on a small Ubuntu server that I have (Intel Core Duo 2GHz, 4GB RAM, 128GB SSD), but the Python inserts across the network are taking forever, and I'm not convinced I'll see any real improvements. From what I've read, SQLite seems like what I really wanted: essentially this would all be handled in Python memory if I had ~150GB of RAM at my disposal, but I needed a file-based storage solution, which seems like exactly what SQLite was designed for. However, watching SQLite consume a pittance of memory and CPU while disk IO chugs away at around 5MB/s makes me think that the disk is just the bottleneck. What are my options for streamlining this process on a single node (aka no Hadoop clusters at my disposal)? I'm a DB newb, so please keep suggestions within the realm of possibility. Having never worked with datasets in the billions, I don't quite know what I'm in for, but I would greatly appreciate help/advice from this sage community.
database solution for very large table
0
1
0
314
27,323,228
2014-12-05T19:25:00.000
0
0
0
0
python,python-2.7,tkinter,tkinter-canvas
27,324,242
2
false
0
1
You can't. You'll have to do math on all of your coordinates to translate them from one coordinate system to another.
2
0
0
I want to move the coordinates on a tkinter canvas so that the bottom left corner is (0, 0), meaning that the top right corner is (width, height). How would I do this?
Move base coordinates Tkinter
0
0
0
85
27,323,228
2014-12-05T19:25:00.000
0
0
0
0
python,python-2.7,tkinter,tkinter-canvas
27,323,269
2
true
0
1
The simplest way is to write a function that inverts your vertical coordinate, given the canvas height and the desired element height passed in.
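A minimal sketch of that inversion function (the names are illustrative, not from the original answer):

```python
def flip_y(y, canvas_height, item_height=0):
    # Convert a y value measured from the bottom-left origin into
    # tkinter's native top-left coordinate system.
    return canvas_height - y - item_height

# e.g. placing a 20px-tall item 50px above the bottom of a 400px canvas:
print(flip_y(50, 400, 20))  # -> 330
```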
2
0
0
I want to move the coordinates on a tkinter canvas so that the bottom left corner is (0, 0), meaning that the top right corner is (width, height). How would I do this?
Move base coordinates Tkinter
1.2
0
0
85
27,327,104
2014-12-06T01:02:00.000
0
0
1
0
python,macos,numpy
27,370,090
2
true
0
0
The new NumPy version would install (via pip) into the system path, where it wasn't being recognized by Python. To solve this I ran pip install --user numpy==1.7.1 to specify that I want NumPy version 1.7.1 on my Python (user) path. :)
2
1
1
Trying to update NumPy by running pip install -U numpy, which yields "Requirement already up-to-date: numpy in /Library/Python/2.7/site-packages". Then checking the version with import numpy and numpy.version.version yields '1.6.2' (the old version). Python is importing numpy via the path '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy'. Please help me out here.
OS X not using most recent NumPy version
1.2
0
0
708
27,327,104
2014-12-06T01:02:00.000
0
0
1
0
python,macos,numpy
27,328,371
2
false
0
0
You can remove the old version of numpy from /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy. Just delete the numpy package from there and then try to import numpy from the Python shell.
2
1
1
Trying to update NumPy by running pip install -U numpy, which yields "Requirement already up-to-date: numpy in /Library/Python/2.7/site-packages". Then checking the version with import numpy and numpy.version.version yields '1.6.2' (the old version). Python is importing numpy via the path '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy'. Please help me out here.
OS X not using most recent NumPy version
0
0
0
708
27,327,731
2014-12-06T02:42:00.000
0
1
1
0
python,unicode,centos6,vtk,python-unicode
27,519,430
2
true
0
0
I tried to compile VTK with my Python build several times and checked the various paths in CMake to avoid conflicts with the system Python, but still couldn't get rid of the error. Finally, I built Python with --enable-unicode=ucs2. That solved the problem. Thanks for the help though.
2
0
0
I've run into a strange problem. I built VTK with Python wrappings on CentOS 6.5. On importing vtk it gives me a PyUnicodeUCS2_* error. I checked the Python used for the build for its unicode setting with sys.maxunicode: it is UCS4. I searched for this error and found that it occurs when VTK is built using a UCS2 Python, but this is not the case for me. What could be the reason for the error? The Python that I'm using was copied from another machine. If I check maxunicode on the original machine it shows UCS2. The same Python (I copied the whole python2.6 folder) on the machine where I'm building VTK shows maxunicode as UCS4. I think this has something to do with the problem. Please help.
PyUnicodeUCS2_* error while importing VTK
1.2
0
0
99
27,327,731
2014-12-06T02:42:00.000
0
1
1
0
python,unicode,centos6,vtk,python-unicode
27,327,870
2
false
0
0
This error is caused by using an extension built by a UCS2-based Python interpreter with a UCS4-based interpreter (or vice versa). If you built it using the same Python interpreter, then something is misconfigured in your build environment.
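A quick way to check which build an interpreter is, which may help diagnose the mismatch described above (a minimal sketch):

```python
import sys

# UCS4 ("wide") builds report maxunicode as 0x10FFFF;
# UCS2 ("narrow") builds report 0xFFFF.
print("UCS4" if sys.maxunicode > 0xFFFF else "UCS2")
```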
2
0
0
I've run into a strange problem. I built VTK with Python wrappings on CentOS 6.5. On importing vtk it gives me a PyUnicodeUCS2_* error. I checked the Python used for the build for its unicode setting with sys.maxunicode: it is UCS4. I searched for this error and found that it occurs when VTK is built using a UCS2 Python, but this is not the case for me. What could be the reason for the error? The Python that I'm using was copied from another machine. If I check maxunicode on the original machine it shows UCS2. The same Python (I copied the whole python2.6 folder) on the machine where I'm building VTK shows maxunicode as UCS4. I think this has something to do with the problem. Please help.
PyUnicodeUCS2_* error while importing VTK
0
0
0
99
27,331,444
2014-12-06T11:59:00.000
0
0
0
0
python,xpath,web-scraping,scrapy
41,831,890
2
false
1
0
I had the same issue because I was trying to scrape a Wikipedia page. The class name for the table shows up as "wikitable sortable jquery-tablesorter" because of the plugin mentioned in the other answer, which appends to the class name after the page is rendered. In order to pick up the table you can just look for the class "wikitable sortable" instead. That picks up the table for me.
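A minimal Scrapy sketch of that fix; contains() keeps matching even after jQuery appends extra classes (the spider name and URL are placeholders):

```python
import scrapy

class WikiTableSpider(scrapy.Spider):
    name = "wikitables"
    start_urls = ["https://en.wikipedia.org/wiki/Example"]  # placeholder

    def parse(self, response):
        # Match the server-rendered class, not the jQuery-augmented one.
        tables = response.xpath(
            '//*[@id="mw-content-text"]'
            '/table[contains(@class, "wikitable sortable")]'
        )
        self.log("found %d tables" % len(tables))
```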
1
0
0
I'm trying to select all the tables inside a division which has an XPath similar to //*[@id="mw-content-text"]/table[@class="wikitable sortable jquery-tablesorter"]. But the selector doesn't return any value. How can I get through those tags which have spaces in their id/class?
How to select tables in scrapy using selectors whose class id have spaces in it?
0
0
1
2,468
27,332,961
2014-12-06T15:01:00.000
0
0
0
0
python,pygame,mouse
28,238,371
1
false
0
1
A possible solution could be to find the slope of the line between the point of origin (where you are shooting the bullet from) and the location of the mouse click. If you are storing the bullet's x and y values, you can then use the slope to change those values every clock tick, depending on the speed of the bullet or however you've got it working. E.g. if the slope of the line is 2/5, every tick you change the bullet's y by 2 and its x by 5. I think this may work, or at least may be close to what you are looking for.
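A minimal sketch of the idea; normalizing the direction vector, a small refinement of the raw-slope suggestion, keeps the speed constant in every direction (all names are illustrative):

```python
import math

def step_bullet(x, y, target_x, target_y, speed):
    # Direction from the bullet to the click point, scaled to
    # `speed` pixels per tick.
    dx, dy = target_x - x, target_y - y
    dist = math.hypot(dx, dy)
    if dist == 0:
        return x, y
    return x + dx / dist * speed, y + dy / dist * speed

print(step_bullet(0, 0, 30, 40, 5))  # -> (3.0, 4.0)
```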
1
0
0
So I'm trying to figure out how to make a bullet class move from the character to a mouse click, and if it hits a monster it will do damage to the monster. I've figured out how to calculate the mouse position and character position. It has been a while since I've been in a math class. I'm using pygame, Python 3.4.
Python, Shooting from Character Position to Mouse position on Mouseclick
0
0
0
276
27,333,918
2014-12-06T16:43:00.000
1
0
1
0
python
27,333,978
3
true
0
0
In Python the root module that is run and selects what else to run is always __main__ and is referred to as "main". Note that if each of the sub-modules can also be run independently, it may have its own "main" protected with if __name__ == "__main__":. Of course you can also refer to it as your "User Interface", as that is what it is.
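A minimal sketch of that guard as it usually appears in an entry-point module:

```python
def main():
    print("running as the entry-point module")

if __name__ == "__main__":
    main()  # runs only when executed directly, not when imported
```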
2
1
0
Is there a technical name for a module or file from which the code is run? I am asked to write some code to integrate both numerically and with fourier analysis. I have to prove that this code works to give the e-field and the voltage at a p-n junction. To do this I have created 7 modules, Run.py, ODESolver.py, NODESolver.py, FODESolver.py, Functions.py, BinFunction.py, UserInput.py. In order to demonstrate the code the module Run.py must be called in the terminal, it then calls whatever else it needs depending on user inputs. The module called Run.py is clearly special. Is there a technical name for it? EDIT: I am not looking for a file name - rather a way to refer to the module in the report. (i.e. this is the XYZ module of the program). Though its quite possible that there is a naming convention for the file name.
Name for a module from which the program is run
1.2
0
0
75
27,333,918
2014-12-06T16:43:00.000
0
0
1
0
python
27,333,957
3
false
0
0
You don't NEED a special name for it, but there are some programming standards. Some are called main.py and many are just run. If you create an .exe or .app file it doesn't matter. It is good practice, especially in open source.
2
1
0
Is there a technical name for a module or file from which the code is run? I am asked to write some code to integrate both numerically and with fourier analysis. I have to prove that this code works to give the e-field and the voltage at a p-n junction. To do this I have created 7 modules, Run.py, ODESolver.py, NODESolver.py, FODESolver.py, Functions.py, BinFunction.py, UserInput.py. In order to demonstrate the code the module Run.py must be called in the terminal, it then calls whatever else it needs depending on user inputs. The module called Run.py is clearly special. Is there a technical name for it? EDIT: I am not looking for a file name - rather a way to refer to the module in the report. (i.e. this is the XYZ module of the program). Though its quite possible that there is a naming convention for the file name.
Name for a module from which the program is run
0
0
0
75
27,334,562
2014-12-06T17:47:00.000
0
0
1
0
python,html,timestamp,zip
27,334,938
1
false
1
0
In general, there is no way to know about the file's timestamp unless the web page gives it to you. Sometimes you can read the file's parent container and get an index page for all of its files, but most public web servers try to block that sort of thing.
1
0
0
I have a zipfile in an HTML page. Is there any way to read that file's created/modified timestamp in Python? I know we can read the timestamp if it is in a local directory. Thanks in advance.
Python: How to read the timestamp of a file that is in an HTML page
0
0
0
225
27,337,587
2014-12-06T23:03:00.000
0
1
0
1
python,linux,boot,raspbian,autostart
27,344,131
2
false
0
0
Try the boot-up option in crontab: @reboot python /path/to/pythonfile.py
1
0
0
Can anyone tell me how to start a Python script on boot, and then also load the GUI? I am on the Debian-based Raspbian OS. The reason I want to run the Python script on boot is because I need to read keyboard input from an RFID reader. I am currently using raw_input() to read data from the RFID reader. The 11-character hex value is then compared against a set of values in a txt file. This raw_input() did not work for me when autostarting the Python script using crontab or the LXDE autostart. So, I am thinking of running the Python script at boot, so that it reads keyboard input. If there are any other ways of reading keyboard input using crontab autostart or LXDE autostart, please let me know.
Starting a python script at boot and loading GUI after that
0
0
0
1,590
27,339,088
2014-12-07T02:56:00.000
0
0
0
0
python,django
27,339,105
3
false
1
0
You do not have to do more than you have done; however, you should make sure that you do not have the server from your previous project running, for one. To confirm: are you at the point in the tutorial where you have run django-admin.py startproject mysite? Have you also run python manage.py migrate? Also, make sure that when you run python manage.py runserver, the manage.py you are running corresponds to the manage.py in your new tutorial project. That is, check which directory you are running from.
1
0
0
I'm running into problems trying to use "python manage.py runserver". According to the tutorial, I'm supposed to get a "Welcome to Django" screen when I visit my development server at http://127.0.0.1:8000/, but instead I see one of my previous projects that's locally hosted on my computer. I'm wondering why this is; do I have to be more specific or specify something else when I use "python manage.py runserver"?
Problems setting up Python development server at http://127.0.0.1:8000/
0
0
0
1,753
27,342,216
2014-12-07T11:34:00.000
0
0
0
0
python,windows,sockets,networking,python-3.x
27,344,058
2
false
0
0
UDP is not connection-based. Since no connection exists when using UDP, there is nothing to disconnect. Since there is nothing to disconnect, you can't ever know when something disconnects. It never will because it was never connected in the first place.
1
0
0
How can I create a UDP server in Python that makes it possible to know when a client has disconnected? The server needs to be fast because I will use it in an MMORPG. I've never made a UDP server before, so I'm having a little trouble.
UDP Server in Python
0
0
1
291
27,342,256
2014-12-07T11:41:00.000
7
1
0
1
http,python-3.x,ipv6,uwsgi
27,342,634
1
true
1
0
In your INI config file specify something like this: [uwsgi] socket = [::]:your_port_number Or from the command line: ./uwsgi -s [::]:your_port_number The server shall now listen on all interfaces (including IPv4, if the underlying OS supports dual-stack TCP sockets).
1
3
0
Why doesn't uWSGI listen on the IPv6 interface, even if the system is 100% IPv6-ready? As far as I can see there are no parameters nor documentation covering this issue.
uWSGI --http :80 doesn't listen IPv6 interface
1.2
0
0
1,462
27,346,760
2014-12-07T19:18:00.000
1
0
1
0
python,xcode,macos,swift,swift-playground
27,347,157
1
true
0
0
Yes, Playgrounds only support Swift. A playground can only compile Swift code.
1
0
0
I tried to use Xcode's Playground feature to play around with a few new JavaScript and Python tricks I started to learn. Unfortunately, whenever I switch the Syntax Coloring (Editor > Syntax Coloring) to a language other than Swift, I type in the first command and Xcode crashes. Anyone tackled this? P.S. after a bit of research I found out I may be quite dumb. Is Playground solely for Swift?
Xcode Playground crashing with non-swift code
1.2
0
0
349
27,347,356
2014-12-07T20:20:00.000
2
0
0
1
python,python-3.x,packaging,kivy
27,349,249
1
true
0
1
All are possible, but I'm not sure what people are recommending right now - the Kivy website has instructions for pyinstaller (specifically on windows as I remember, but it works well on other platforms too), with the disadvantage that pyinstaller only supports python2 right now. You can use other tools too, I've seen some activity with e.g. nuitka, but I don't know the current state. Your best bet may be to ask on the kivy mailing list or irc, where some of the people using these tools are most likely to be around to comment. I haven't seen anyone do .deb or .rpm. I'm fairly sure it shouldn't be too hard, though you'd need to do some stuff yourself to make it work since you'd quite likely be forging new ground. Android and iOS are covered only by kivy's own build tools. These are fine on android, I can't comment on iOS.
1
3
0
Is there a tool that creates installers from the source code of a Kivy game for all the different supported platforms with a single button press? Linux: .deb, .rpm or just a portable .zip file that contains a .sh script Windows: .exe (installer or portable executable) Mac: .app (installer or portable executable) and possibly Android and iOS If not, is it possible?
Creating installers or executables on Linux for every supported platform for a Kivy game
1.2
0
0
464
27,351,360
2014-12-08T04:18:00.000
0
1
0
1
python,linux,performance
29,195,455
1
true
0
0
Months ago I found out that this problem is well known as the c10k problem. It has to do, amongst other things, with how the kernel allocates and processes TCP connections internally. The only efficient way to address the issue is to bypass the kernel TCP stack and implement various other low-level things on your own. All the good approaches I know of work with low-level async implementations. There are some good ways to deal with the problem depending on the scale. For further information I would recommend searching for the c10k problem.
1
0
0
I've coded a small raw-packet SYN port scanner to scan a list of IPs and find out if they're online (btw. for Debian in Python 2.7). The basic intention was to simply check if some websites are reachable and to speed up that process by preceding it with a raw SYN request (port 80), but I stumbled upon something. Just for fun I started trying to find out how fast I could get with this (fastest as far as I know) check technique, and it turns out that despite me only sending raw SYN packets on one port and listening for responses on that same port (with tcpdump), the connection reliability drops noticeably starting at about 1500-2000 packets/sec, and shortly thereafter almost the entire networking on the box starts blocking. I thought about it, and if I compare this value with e.g. torrent seeding/leeching packets/sec, the scan speed is quite slow. I have a few ideas why this happens, but I'm not a professional and I have no clue how to check if I'm right with my assumptions. Firstly, it could be that Linux networking has some fancy internal port-forwarding stuff running to keep the sending port open (maybe some sort of feature of iptables?), because the script seems to be able to receive SYN-ACK even with a closed source port. If so, is it possible to prevent or bypass that in some fashion? Another guess is that the Python library is simply too dumb to do real proper raw-packet management, but that's unlikely because it's using internal Linux functions to do that, as far as I know. Does anyone have a clue why that network blocking is happening? Where's the difference to torrent connections or anything else like that? Do I have to send the packets in another way or anything?
speed limit of syn scanning ports of multiple targets?
1.2
0
1
126
27,351,387
2014-12-08T04:22:00.000
0
0
0
0
python,window,pygame
27,357,511
1
false
0
1
As I understood you, you don't want the game to start before the window is in fullscreen mode, right? When I try your approach (starting through raw_input), my display is fullscreen from the start. Have you set everything correctly? I suggest that you pause the game until there is an actual key input (whatever controls you have set). This way the player has time to arrange everything to his liking before starting the game. Even if you figure out how to analyse the focus issue: when the game starts, the window HAS focus, therefore this approach wouldn't work anyway.
1
2
0
I made a Python game using Pygame. I try to make it so that when it starts, it loads the shell, and entering a raw input displays the pygame window and starts the game, so that I can make the game playable without having to close the shell. It works; however, the window starts minimized. The game is a simple "dodge the object" and has no pause whatsoever. The game still runs in the background, possibly having the player hit multiple projectiles before the user realizes it. Is there a way to focus on the window?
How to focus on a window with pygame?
0
0
0
2,019
27,358,870
2014-12-08T13:15:00.000
3
0
0
0
python,kivy
27,380,669
3
true
0
1
A Canvas has neither position nor size. A canvas acts just as a container for graphics instructions (like an Fbo, which draws into a Texture and therefore does have a size). In Kivy, Canvas.size doesn't exist, but I guess you called your widget a canvas. By default, a Widget's size is (100, 100). If you put it into a layout, the size will change once the layout knows its own size. This means you need to listen for changes to Widget.size, or use a size you already know, like Window.size.
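A minimal sketch of listening for those size changes (the callback name is illustrative):

```python
from kivy.uix.widget import Widget

def on_size(widget, new_size):
    # Fires once the layout has assigned the widget its real size.
    print("drawing area is now %s pixels" % (new_size,))

widget = Widget()
widget.bind(size=on_size)
```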
1
2
0
In my app I need to know how big the canvas is in pixels. Instead, calling canvas.size returns [100, 100] no matter how many pixels wide the canvas is. Can you please tell me a way to get how many pixels wide and high the canvas is?
Size of canvas in kivy
1.2
0
0
5,156
27,359,258
2014-12-08T13:36:00.000
0
0
0
0
python,proxy,python-webbrowser
27,373,111
1
false
0
0
Sorry for the noise. I just realized that urllib2.urlopen(url) was breaking things, not webbrowser.open_new_tab(url). We need to use urllib2's ProxyHandler to get around this. @Lawrence, thanks for the help.
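A minimal sketch of that ProxyHandler fix (the proxy address is a placeholder, not from the original answer):

```python
import urllib2

proxy = urllib2.ProxyHandler({"http": "http://proxy.example.com:8080"})
opener = urllib2.build_opener(proxy)
response = opener.open("http://example.com")  # goes through the proxy
print(response.getcode())
```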
1
0
0
I have a proxy set up on my machine (Win 7). I have written a Python program which tries to open a new browser tab with a given URL, with the help of the webbrowser module in Python. But webbrowser.open_new_tab(URL) fails when I check the "Use proxy server for your LAN" checkbox in the Internet Explorer settings (under LAN Settings), yet it works perfectly fine when I uncheck this box. I don't understand why this is happening. Is there any way by which the webbrowser module works with a proxy? Am I doing anything wrong here?
Python's webbrowser.open_new_tab(url) with proxy
0
0
1
1,204
27,359,964
2014-12-08T14:17:00.000
0
0
0
0
python,django,django-1.6,django-1.7
71,538,417
4
false
1
0
pip install Django==1.6 in CMD
1
17
0
I started a new project a few months back using Django 1.7. The company has decided to settle on using Django 1.6 for all projects. Is there a nice way to downgrade from Django 1.7 to 1.6? Are migrations the only thing I have to worry about? Are the changes between the two versions large enough that I need to rewrite the app? I was hoping to just change the version in the requirements.txt and then install south and create new database migrations.
How to downgrade from Django 1.7 to Django 1.6
0
0
0
28,236
27,360,212
2014-12-08T14:30:00.000
1
0
1
0
python,tornado
27,405,926
1
true
0
0
@asynchronous is a promise to call self.finish() instead of letting the request be finished automatically. This allows you to use asynchronous operations via callbacks. @gen.coroutine (and @gen.engine, which is mostly obsolete) give the yield keyword special meaning, allowing you to use asynchronous operations via Futures and Tasks. Use @gen.coroutine when you use the yield keyword, and @asynchronous when you use callbacks. In Tornado 3.0 it was sometimes necessary to use both together (and put @asynchronous first), but since Tornado 3.1 there is no reason to do so and you should only use one or the other.
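A minimal sketch contrasting the two styles described above (the fetch URL is a placeholder; Tornado 3.1+ API assumed):

```python
import tornado.gen
import tornado.web
from tornado.httpclient import AsyncHTTPClient

class CallbackHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        AsyncHTTPClient().fetch("http://example.com", callback=self.on_fetch)

    def on_fetch(self, response):
        self.write(response.body)
        self.finish()  # we promised to finish the request ourselves

class CoroutineHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        response = yield AsyncHTTPClient().fetch("http://example.com")
        self.write(response.body)  # finished automatically on return
```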
1
0
0
When should we add this decorator? What are the benefits of adding it? What are the differences compared with tornado.gen? I would really appreciate it if anyone could give me some details.
Really confused about @tornado.web.asynchronous
1.2
0
0
60
27,361,967
2014-12-08T16:06:00.000
-1
0
0
0
python,sql-server,spss
27,362,970
3
true
0
0
This isn't as clean as working directly with whatever database is holding the data, but you could do something with an exported data set: There may or may not be a way for you to write and run an export script from inside your Admin panel or whatever. If not, you could write a simple Python script using Selenium WebDriver which logs into your admin panel and exports all data to a *.sav data file. Then you can use the Python SPSS extensions to write your analysis scripts. Note that these scripts have to run on a machine that has a copy of SPSS installed. Once you have your data and analysis results accessible to Python, you should be able to easily write that to your other database.
3
1
0
I'm so sorry for the vague question here, but I'm hoping an SPSS expert will be able to help me out here. We have some surveys that are done via SPSS, from which we extract data for an internal report. Right now the process is very cumbersome and requires going to the SPSS Data Collection Interviewer Server Administration page and manually exporting data from two different projects (which takes hours at a time!). We then take that data, massage it, and upload it to another database that drives the internal report. My question is, does anyone out there know how to automate this process? Is there a SQL Server database behind the SPSS data? Where does the .mdd file come in to play? Can my team (who is well-versed in extracting data from various sources) tap into the SQL Server database behind SPSS to get our data? Or do we need some sort of Python script and plugin? If I'm missing information that would be helpful in answering the question, please let me know. I'm happy to provide it; I just don't know what to provide. Thanks so much.
Automating IBM SPSS Data Collection survey export?
1.2
0
0
1,546
27,361,967
2014-12-08T16:06:00.000
1
0
0
0
python,sql-server,spss
29,892,280
3
false
0
0
There are a number of different ways you can ease this task and even automate it completely. However, if you are not an IBM SPSS Data Collection expert and don't have access to somebody who is, or don't have the time to become one, I'd suggest getting in touch with some of the consultants who offer services on the platform. Internally IBM doesn't have many skilled SPSS resources available, so they rely heavily on external partners to do services on a lot of their products. This goes for IBM SPSS Data Collection in particular, but is also largely true for SPSS Statistics. As noted by previous contributors, there is an approach using Python for data cleaning, merging and other transformations and then loading that output into your report database. For maintenance reasons I'd probably not suggest this approach. Though you are most likely able to automate the export of data from SPSS Data Collection to a sav file with a simple SPSS Syntax (and an SPSS add-on data component), it is extremely error prone when upgrading either SPSS Statistics or SPSS Data Collection. From a best-practice standpoint, you ought to use the SPSS Data Collection Data Management module. It is very flexible and hardly requires any maintenance on upgrades, because you are working within the same data model framework (e.g. survey metadata, survey versions, labels etc. are handled implicitly) right until you load your transformed data into your reporting database. Ideally the approach would be to build the mentioned SPSS Data Collection Data Management script and trigger it at the end of each completed interview. In this way your reporting will be close to real-time (you can make it actual real-time by triggering the DM script during the interview using the interview script events - just an FYI). All scripting on the SPSS Data Collection platform, including Data Management scripting, is very VB-like, so for most people knowing VB it is very easy to get started, and it is documented very well in the SPSS Data Collection DDL. There you'll also be able to find examples of extracting survey data from SPSS Data Collection surveys (as well as reading and writing data to/from other databases, files etc.). There are also many examples of data manipulation and transformation. Lastly, to answer your specific questions: Yes, there is always an MS SQL Server behind SPSS Data Collection - no exceptions. However, generally speaking the data model is way too complex to read data out of it directly. If you have a look at it, you'll quickly realize this. The MDD file (short for Meta Data Document) contains all survey metadata including data source specifications, version history etc. Without it you'll not be able to make anything of the survey data in the database, which is the main reason I'd suggest staying within the SPSS Data Collection platform for as large a part of your data handling as possible. However, it is indeed just a readable XML file. Note that the SPSS Data Collection Data Management Module requires a separate license, and if the scripting needed is large or complex, you'd probably want Base Professional too, if that's not what you already use for developing the questionnaires and handling the surveys. Hope that helps.
3
1
0
I'm so sorry for the vague question here, but I'm hoping an SPSS expert will be able to help me out here. We have some surveys that are done via SPSS, from which we extract data for an internal report. Right now the process is very cumbersome and requires going to the SPSS Data Collection Interviewer Server Administration page and manually exporting data from two different projects (which takes hours at a time!). We then take that data, massage it, and upload it to another database that drives the internal report. My question is, does anyone out there know how to automate this process? Is there a SQL Server database behind the SPSS data? Where does the .mdd file come in to play? Can my team (who is well-versed in extracting data from various sources) tap into the SQL Server database behind SPSS to get our data? Or do we need some sort of Python script and plugin? If I'm missing information that would be helpful in answering the question, please let me know. I'm happy to provide it; I just don't know what to provide. Thanks so much.
Automating IBM SPSS Data Collection survey export?
0.066568
0
0
1,546
27,361,967
2014-12-08T16:06:00.000
2
0
0
0
python,sql-server,spss
30,706,866
3
false
0
0
As mentioned by other contributors, there are a few ways to achieve this. The simplest I can suggest is using a DMS (data management script) and the Windows scheduler. Ideally you should follow the steps below. Prerequisites: 1. You should have access to the server running IBM Data Collection 2. Basic knowledge of the Windows task scheduler 3. Knowledge of DMS scripting Approach: 1. Create a new DMS script from the template 2. If you want to perform only data extraction/transformation, you only need an input and an output data source 3. In the input data source, create/build the connection string pointing to your survey on the IBM Data Collection server; use SQL as the data source 4. In the select query, use "Select * from VDATA" if you want to export all variables 5. Set the output data connection string by selecting the output data format as SPSS (if you want to export it in SPSS) 6. Run the script manually and see if the SPSS export is what is expected 7. Create a batch file using a text editor (save with the .bat extension). Add the lines below: cd "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Scripts\Data Management\DMS" Call DMSRun YOURDMSFILENAME.dms Then add a line to copy (using XCOPY) the extracted data/files to the location where you want to further process them. Save the file and open the Windows scheduler to schedule the execution of this batch file for data extraction. If you want to do any further processing, create an mrs or dms file and add it to the batch file. Hope this helps!
3
1
0
I'm so sorry for the vague question here, but I'm hoping an SPSS expert will be able to help me out here. We have some surveys that are done via SPSS, from which we extract data for an internal report. Right now the process is very cumbersome and requires going to the SPSS Data Collection Interviewer Server Administration page and manually exporting data from two different projects (which takes hours at a time!). We then take that data, massage it, and upload it to another database that drives the internal report. My question is, does anyone out there know how to automate this process? Is there a SQL Server database behind the SPSS data? Where does the .mdd file come in to play? Can my team (who is well-versed in extracting data from various sources) tap into the SQL Server database behind SPSS to get our data? Or do we need some sort of Python script and plugin? If I'm missing information that would be helpful in answering the question, please let me know. I'm happy to provide it; I just don't know what to provide. Thanks so much.
Automating IBM SPSS Data Collection survey export?
0.132549
0
0
1,546
27,364,149
2014-12-08T18:09:00.000
15
0
0
0
python,django,amazon-web-services,redis,django-redis
27,382,430
2
true
1
0
Ok, figured it out. What I needed to do was prefix my location with redis://. This is specific to the django-redis library and how it parses the location URL. That explains why, when I manually set up a StrictRedis connection using the python redis library, I was able to connect.
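A minimal settings sketch showing the fixed LOCATION; the host and database number are placeholders, and the backend path matches django-redis 3.x (newer releases use django_redis.cache.RedisCache instead):

```python
# settings.py
CACHES = {
    "default": {
        "BACKEND": "redis_cache.cache.RedisCache",
        # Note the redis:// scheme prefix that fixed the ConnectionError.
        "LOCATION": "redis://my-redis-host.example.com:6379/1",
    }
}
```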
1
8
0
I've got a django project using django-redis 3.8.0 to connect to an aws instance of redis. However, I receive ConnectionError: Error 111 connecting to None:6379. Connection refused. when trying to connect. If I ssh into my ec2 and use redis-py from the shell, I am able to read and write from the cache just fine, so I don't believe it's a security policy issue.
Can't connect to redis using django-redis
1.2
0
0
6,100
27,367,178
2014-12-08T21:22:00.000
2
0
1
0
python,git,installation
27,367,242
1
true
0
0
Create a requirements.txt file and place it in your directory. Each line in the file should contain the name of a package that your entire team should have installed. Your team members, once they have the new version of the file, can run pip install -r requirements.txt. Then, update the requirements.txt file every time you have a new package required, and rerun the command. Some editors (like PyCharm) will even automatically detect a requirements file and prompt you to install the packages.
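A minimal sketch of such a file, pinning the testfixtures package from the question (the version shown is illustrative):

```
# requirements.txt (committed to the repo) — one pinned package per line
testfixtures==4.1.2
```

Teammates then run pip install -r requirements.txt after each pull to sync their environments.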
1
1
0
My team uses git to track the code and we want to do the following: (1) One team-member installs a third-party python package and (2) makes it available on our git repo, so that the rest of the team can simply install the package by pulling the latest version of our code. Is that possible at all? If so, what are feasible solutions? If not, what approach works best in your experience? Background: We are using python 2.7.*. The third-party package is testfixtures to unittest the logging of our software. We use Windows and Mac.
Python: How to install a third-party package and make it available to rest of team
1.2
0
0
103
27,377,908
2014-12-09T11:30:00.000
0
0
0
1
python,cronexpression,apscheduler
27,396,550
3
true
0
0
Given that APScheduler supports a slightly different set of fields, it's not immediately obvious how those expressions would map to CronTrigger's arguments. I should also point out that the preferred method of scheduling jobs does not involve directly instantiating triggers, but instead giving the arguments to add_job() instead. If you want to do that yourself, you could simply split the expression and map the elements to whichever trigger arguments you want.
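A minimal sketch of that do-it-yourself split, assuming six space-separated fields in second-through-day_of_week order (an assumption about your expressions; Quartz-style steps like 0/5 may need rewriting as */5, since this maps fields verbatim onto add_job() arguments):

```python
from apscheduler.schedulers.blocking import BlockingScheduler

def job():
    print("tick")

expr = "*/5 * * * * *"  # second minute hour day month day_of_week
second, minute, hour, day, month, day_of_week = expr.split()

sched = BlockingScheduler()
sched.add_job(job, "cron", second=second, minute=minute, hour=hour,
              day=day, month=month, day_of_week=day_of_week)
sched.start()
```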
1
1
0
I'm trying to run some scheduled jobs using cron expressions in Python. I'm new to Python and I've already worked with the Quartz scheduler in Java to achieve almost the same thing. Right now I am trying to work with apscheduler in Python. I know that it is possible to do this using crontrig = CronTrigger(minute='*', second='*'); but I was working with cron expressions (like "0/5 * * * * *") and I would like to know if there is anything which could directly parse the expression and generate a CronTrigger.
create apscheduler job trigger from cron expression
1.2
0
0
3,162
27,383,896
2014-12-09T16:26:00.000
1
0
1
0
python,sql,r,fuzzy-search,fuzzy-logic
27,384,466
2
true
0
0
That is exactly what I am facing daily at my new job (but the line counts are a few million). My approach is to: 1) find the set of unique strings by using p = unique(a) 2) remove punctuation, split the strings in p by whitespace, make a table of word frequencies, create a set of rules and use gsub to "recover" abbreviations, mistyped words, etc. E.g. in your case "AUTH" should be recovered back to "AUTHORITY", "UNIV" -> "UNIVERSITY" (or vice versa) 3) recover typos if I spot them by eye 4) advanced: reorder the words in the strings (too often in improper English) to see if two or more strings are identical apart from word order (e.g. "10pack 10oz" and "10oz 10pack").
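The answer's unique/gsub calls are R; a minimal Python sketch of the same pipeline (the rule table is illustrative, and sorting the words implements step 4):

```python
import re

# Illustrative abbreviation rules, not an exhaustive table.
RULES = [
    (r"\bAUTH\b", "AUTHORITY"),
    (r"\bUNIV\b", "UNIVERSITY"),
    (r"\bPENNA\b|\bPENN\b|\bPA\b", "PENNSYLVANIA"),
]

def normalize(name):
    name = re.sub(r"[^\w\s]", " ", name.upper())   # strip punctuation
    for pattern, replacement in RULES:
        name = re.sub(pattern, replacement, name)  # expand abbreviations
    return " ".join(sorted(name.split()))          # canonical word order

print(normalize("TRS UNIV OF PENN"))
print(normalize("COMMONWEALTH OF PENNA"))
```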
2
0
1
I've got a database with property owners; I would like to count the number of properties owned by each person, but am running into standard mismatch problems: REDEVELOPMENT AUTHORITY vs. REDEVELOPMENT AUTHORITY O vs. PHILADELPHIA REDEVELOPMEN vs. PHILA. REDEVELOPMENT AUTH COMMONWEALTH OF PENNA vs. COMMONWEALTH OF PENNSYLVA vs. COMMONWEALTH OF PA TRS UNIV OF PENN vs. TRUSTEES OF THE UNIVERSIT From what I've seen, this is a pretty common problem, but my problem differs from those with solutions I've seen for two reasons: 1) I've got a large number of strings (~570,000), so computing the 570000 x 570000 matrix of edit distances (or other pairwise match metrics) seems like a daunting use of resources 2) I'm not focused on one-off comparisons--e.g., as is most common for what I've seen from big data fuzzy matching questions, matching user input to a database on file. I have one fixed data set that I want to condense once and for all. Are there any well-established routines for such an exercise? I'm most familiar with Python and R, so an approach in either of those would be ideal, but since I only need to do this once, I'm open to branching out to other, less familiar languages (perhaps something in SQL?) for this particular task.
fuzzy matching lots of strings
1.2
0
0
963
27,383,896
2014-12-09T16:26:00.000
1
0
1
0
python,sql,r,fuzzy-search,fuzzy-logic
27,385,088
2
false
0
0
You can also use agrep() in R for fuzzy name matching, by giving a percentage of allowed mismatches. If you pass it a fixed dataset, then you can grep for matches out of your database.
2
0
1
I've got a database with property owners; I would like to count the number of properties owned by each person, but am running into standard mismatch problems: REDEVELOPMENT AUTHORITY vs. REDEVELOPMENT AUTHORITY O vs. PHILADELPHIA REDEVELOPMEN vs. PHILA. REDEVELOPMENT AUTH COMMONWEALTH OF PENNA vs. COMMONWEALTH OF PENNSYLVA vs. COMMONWEALTH OF PA TRS UNIV OF PENN vs. TRUSTEES OF THE UNIVERSIT From what I've seen, this is a pretty common problem, but my problem differs from those with solutions I've seen for two reasons: 1) I've got a large number of strings (~570,000), so computing the 570000 x 570000 matrix of edit distances (or other pairwise match metrics) seems like a daunting use of resources 2) I'm not focused on one-off comparisons--e.g., as is most common for what I've seen from big data fuzzy matching questions, matching user input to a database on file. I have one fixed data set that I want to condense once and for all. Are there any well-established routines for such an exercise? I'm most familiar with Python and R, so an approach in either of those would be ideal, but since I only need to do this once, I'm open to branching out to other, less familiar languages (perhaps something in SQL?) for this particular task.
fuzzy matching lots of strings
0.099668
0
0
963
27,384,091
2014-12-09T16:37:00.000
0
1
0
0
python,c++,c,sockets
27,401,759
1
false
0
1
I ended up altering the modules so that the C++ part and the Python part can work more or less independently of each other. But if passing data had been necessary, I guess I would have gone the socket route.
1
0
0
I need to send some data from a Python program to a C++ program. The current design is such that the C++ program executes the python program in a separate thread. I wish to pass some result from the python program back to the C++ program. What I have found so far includes: Sending over sockets Sending via a pipe Using a temporary file Embedding the python interpreter in the C++ program Using boost.python My data (to be passed back to the C++ program) is essentially a python dictionary, and a few files. (I am sending email details, and the attachments). What strategy should I use? Is there anything I can do to make my work easier? Or can I improve my program design? EDIT: Added boost.python to the list of options found.
Strategies to send data from a Python program to a C++ program
0
0
0
622
27,384,395
2014-12-09T16:53:00.000
3
1
0
0
python,opencv,raspberry-pi
27,387,097
1
true
0
0
Check the API docs for 3.0. Some Python functions return more values, or return them in a different order. For example: cv2.cv.CV_HAAR_SCALE_IMAGE was replaced with cv2.CASCADE_SCALE_IMAGE, and (cnts, _) = cv2.findContours(...) now also returns the modified image: (modImage, cnts, _) = cv2.findContours(...)
1
0
1
I've installed the OpenCV Python module on my Raspberry Pi and everything was working fine. Today I compiled a C++ version of OpenCV, and now when I want to run my Python script I get this error: Traceback (most recent call last): File "wiz.py", line 2, in import cv2.cv as cv ImportError: No module named cv
OpenCV python on raspberry
1.2
0
0
1,523
27,385,097
2014-12-09T17:29:00.000
1
0
0
0
python-2.7,xlsxwriter
42,188,912
9
false
0
0
I am not sure what caused this, but it all went well once I changed the path name from Lib to lib, and I was finally able to make it work.
4
38
0
I recently downloaded xlsxwriter version 0.6.4 and installed it on my computer. It correctly added itself to my C:\Python27\Lib\site-packages\xlsxwriter folder; however, when I try to import it I get the error ImportError: No module named xlsxwriter. The traceback is File "F:\Working\ArcGIS\ArcGIS .py\Scripts\Append_Geodatabase.py". However, if I try to import numpy (I can't remember what numpy is, however it is located in the same site-packages folder C:\Python27\Lib\site-packages\numpy) it has no problem. Any idea of what could be causing this issue? Thanks for the help.
ImportError: No module named xlsxwriter
0.022219
1
0
200,367
27,385,097
2014-12-09T17:29:00.000
0
0
0
0
python-2.7,xlsxwriter
67,318,348
9
false
0
0
I found the same error when using xlsxwriter in my test.py application. First, check if you have the xlsxwriter module installed or not: sudo pip install xlsxwriter Then check the Python version you are using; the following worked for me: python2 test.py
4
38
0
I recently downloaded xlsxwriter version 0.6.4 and installed it on my computer. It correctly added itself to my C:\Python27\Lib\site-packages\xlsxwriter folder; however, when I try to import it I get the error ImportError: No module named xlsxwriter. The traceback is File "F:\Working\ArcGIS\ArcGIS .py\Scripts\Append_Geodatabase.py". However, if I try to import numpy (I can't remember what numpy is, however it is located in the same site-packages folder C:\Python27\Lib\site-packages\numpy) it has no problem. Any idea of what could be causing this issue? Thanks for the help.
ImportError: No module named xlsxwriter
0
1
0
200,367
27,385,097
2014-12-09T17:29:00.000
1
0
0
0
python-2.7,xlsxwriter
72,355,605
9
false
0
0
In VSCode: instead of activating your environment with a script, use "Python: Select Interpreter" from VSCode (press Ctrl+Shift+P) and then select your environment from the list (marked as recommended).
4
38
0
I recently downloaded xlsxwriter version 0.6.4 and installed it on my computer. It correctly added itself to my C:\Python27\Lib\site-packages\xlsxwriter folder; however, when I try to import it I get the error ImportError: No module named xlsxwriter. The traceback is File "F:\Working\ArcGIS\ArcGIS .py\Scripts\Append_Geodatabase.py". However, if I try to import numpy (I can't remember what numpy is, however it is located in the same site-packages folder C:\Python27\Lib\site-packages\numpy) it has no problem. Any idea of what could be causing this issue? Thanks for the help.
ImportError: No module named xlsxwriter
0.022219
1
0
200,367
27,385,097
2014-12-09T17:29:00.000
5
0
0
0
python-2.7,xlsxwriter
50,458,074
9
false
0
0
I managed to resolve this issue as follows... Be careful: make sure you understand the IDE you're using! (Because I didn't.) I was trying to import xlsxwriter using PyCharm and it was returning this error. Assuming you have already attempted the pip installation (sudo pip install xlsxwriter) via your cmd prompt, try using another IDE, e.g. Geany, and import xlsxwriter. I tried this and Geany was importing the library fine. I opened PyCharm and navigated to 'File > Settings > Project: > Project Interpreter'. xlsxwriter was listed, though intriguingly I couldn't import it! I double-clicked xlsxwriter and hit 'Install Package'... And that's it! It worked! Hope this helps...
4
38
0
I recently downloaded xlsxwriter version 0.6.4 and installed it on my computer. It correctly added itself to my C:\Python27\Lib\site-packages\xlsxwriter folder; however, when I try to import it I get the error ImportError: No module named xlsxwriter. The traceback is File "F:\Working\ArcGIS\ArcGIS .py\Scripts\Append_Geodatabase.py". However, if I try to import numpy (I can't remember what numpy is, however it is located in the same site-packages folder C:\Python27\Lib\site-packages\numpy) it has no problem. Any idea of what could be causing this issue? Thanks for the help.
ImportError: No module named xlsxwriter
0.110656
1
0
200,367
27,385,373
2014-12-09T17:44:00.000
0
0
1
0
python,win32com,typelib
27,413,739
1
false
0
1
OK, I found out what the problem is. When you create a Python file using the makepy tool, it updates the dicts.dat file in the gen_py directory. So you need to copy that file over to the other machines as well.
1
0
0
I have a project that uses COM and Python scripting. Earlier we were using ComTypes; now we use Win32Com. To keep backward compatibility I need to change the names of some of the interfaces. So here is what I do: 1) Use the makepy utility to create a Python file from my .tlb file; this creates a .py file in the ..\Lib\site-packages\win32com\gen_py folder 2) I change the name of the interface that I am interested in changing in the created Python file. 3) When I load my application a corresponding .pyc file gets created and everything works fine. Now I don't want to repeat this exercise on every machine where my software is deployed, so through the installer I copy the .py and .pyc files to ..\Lib\site-packages\win32com\gen_py. But when my application is launched it does not recognize the changed interface; it behaves as if there is no .py or .pyc file. All other interfaces work, but the renamed one does not. It seems to dynamically create compiled Python behind the scenes, ignoring the .pyc file. If I delete the .dat file and .pyc file at those locations, the .pyc file is created again when the application is launched; however, it's not utilized, because my changed interface does not work. If I follow steps 1, 2 and 3, everything works again!! I am puzzled. Please help.
python .pyc file with win32com
0
0
0
190
27,390,553
2014-12-09T23:07:00.000
1
1
0
1
python,nose
27,390,659
2
false
0
0
You should be able to upgrade nosetests via pip, while still staying with python 2.6. At least, nose 1.3.4 (latest as of this writing) installs cleanly inside the py2.6 virtualenv I just threw together. I don't have any py2.6-compatible code to hand to show that it's working correctly, though.
1
1
0
The Question Where can I access the documentation for legacy versions of the nose testing framework? Why I have to support some python code that must run against python 2.6 on a Centos 6 system. It is clear from experimentation that nosetests --failed does not work on this system. I'd like to know if I'm just missing a module or not. More generally, I need to know what capabilities of nose that I have grown used to I will have to do without, without having to check for them individually.
online documentation for old versions of nose
0.099668
0
0
41
27,393,613
2014-12-10T04:58:00.000
2
0
0
0
python,machine-learning,nlp,nltk
27,412,604
3
false
0
0
I generally recommend using scikit-learn, as Slater suggested; it's more scalable than NLTK. For this task a Naive Bayes classifier or a Support Vector Machine is your best bet. You are dealing with binary classification, so you don't have multiple classes. As for the features you should extract, try unigrams, bigrams, trigrams, and TF-IDF features. Also, LDA might turn out useful, but start with the easier ones such as unigrams. This also depends on the type and length of the texts you are dealing with. Document classification has been around for more than a decade and there are many good papers that you could find useful. Let me know if you have any further questions.
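A minimal scikit-learn sketch of the Naive Bayes plus TF-IDF suggestion above (the toy documents and labels are illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

train_docs = ["concise informative summary", "rambling off-topic text"]
train_labels = [1, 0]  # 1 = informative summary, 0 = not

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # unigrams + bigrams
    ("nb", MultinomialNB()),
])
clf.fit(train_docs, train_labels)
print(clf.predict(["a short informative summary"]))
```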
1
1
1
I know this is a very vague question, but I'm trying to figure out the best way to do document classification. I have two sets, training and testing. The training set is a set of documents each labeled 1 or 0: a document is labeled 1 if it is an informative summary and 0 if it is not. I'm trying to create a supervised classifier. I can't decide which NLP toolkit to use; I'm thinking nltk. Do you have any suggestions? I have to write the classifier in Python. Also, any specific types of classifiers? I've been doing research but can't seem to get a good answer.
binary document classification
0.132549
0
0
211
27,394,554
2014-12-10T06:28:00.000
1
0
0
0
python,web-scraping,beautifulsoup,masking
27,397,328
1
false
0
0
The problem is your public IP address. What you can do is use a list of proxies and rotate through them.
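A minimal sketch of rotating through a proxy pool; the requests library and the proxy addresses are assumptions, not from the original answer:

```python
import itertools
import requests

# Placeholder proxies; in practice use a pool you control or rent.
proxies = itertools.cycle([
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
])

for url in ["http://example.com/page1", "http://example.com/page2"]:
    proxy = next(proxies)  # a different exit IP for each request
    resp = requests.get(url, proxies={"http": proxy, "https": proxy})
    print(url, resp.status_code)
```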
1
0
0
I am using Python Beautiful Soup for website scraping. My program hits different URLs of a website more than a thousand times, and I do not wish to get banned. As a first step, I would like to introduce IP masking in my project. Is there any possible way to hit different URLs of a website from a pool of rotating IPs with the help of Python modules like ipaddress, socket etc.?
IP masking with Python
0.197375
0
1
2,628
27,396,657
2014-12-10T08:49:00.000
0
0
1
0
python,python-2.7,py2exe
28,449,481
1
true
0
0
The problem was solved using pyinstaller instead of py2exe
1
0
0
I am using py2exe in order to generate a .exe file from a project of mine. This project contains a main.py in the root folder. In other folders, like the project folder, there are other .py files that should be imported only when relevant during the main.py execution. Right now, py2exe is packing all the files together when creating the .exe. Thus, folders like this project's - which are actually intended to have .py files in the final distribution - cease to exist. Is there a way for it to keep the file hierarchy without also adding those files into the final .exe? (i.e. not pack those folders into the .exe file)
How to maintain file structure for dynamically loading Python files using Py2Exe
1.2
0
0
55
27,397,769
2014-12-10T09:47:00.000
0
0
1
0
python,virtualenv
27,398,081
3
false
0
0
Simply look at project/bin/activate; everything you need to set up the appropriate search path is there. Usually the most important variable is PYTHONPATH, which should point to the site-packages.
2
2
0
My project's running fine in virtualenv. Unfortunately, I can't yet run it via my IDE (Eric) because of the import troubles. It stands to reason, as I never told the IDE anything about virtualenv. I know the drill ($ source project/bin/activate etc.), but lack general understanding. What constitutes "running inside virtualenv"? What IDE options might be relevant?
Python, virtualenv: telling IDE to run the debug session in virtualenv
0
0
0
2,133
27,397,769
2014-12-10T09:47:00.000
2
0
1
0
python,virtualenv
27,398,962
3
false
0
0
I think the only required setting to run or debug code is the path to the Python interpreter. Relevant IDE options could be SDK or interpreter settings. Note that you should run not the default python (e.g. /usr/bin/python) but the python binary in your virtual environment (e.g. /path/to/virtualenv/bin/python). Also, there are some environment variables set by activate, but I think they aren't needed when you point to the virtualenv python binary directly. So, again, what activate does is only environment-variable setup: at least, it modifies the system $PATH so that the python and pip commands point to the executables under the path/to/virtualenv/bin directory.
2
2
0
My project's running fine in virtualenv. Unfortunately, I can't yet run it via my IDE (Eric) because of the import troubles. It stands to reason, as I never told the IDE anything about virtualenv. I know the drill ($ source project/bin/activate etc.), but lack general understanding. What constitutes "running inside virtualenv"? What IDE options might be relevant?
Python, virtualenv: telling IDE to run the debug session in virtualenv
0.132549
0
0
2,133
27,398,538
2014-12-10T10:22:00.000
0
0
0
1
python,git,docker
27,399,888
2
false
0
0
In the development case I would just use docker's -v option to mount the current working copy into a well known location in the container and provide a small wrapper shell script that automates firing up the app in the container.
2
2
0
Suppose I have a python web app. I can create docker file for installing all dependencies. But then (or before it if I have requirements for pip) I have like two different goals. For deployment I can just download all source code from git through ssh or tarballs and it would work. But for a developer machine it wouldn't work. I would need then work on actual source code. I know that I can 'mirror' any folder/files from host machine to docker container. So ok, I can then remove all source code, that was downloaded when image was built and 'mirror' current source code that exists in developer machine. But if developer machine don't have any source code downloaded with git clone, it wouldn't work. So what to do in that case? I mean except the obvious - clone all repos on developer machine and then 'mirror' it? So what is the right way to use docker not only for deployment but for the development also?
How to use docker for deployment and development?
0
0
0
286
27,398,538
2014-12-10T10:22:00.000
1
0
0
1
python,git,docker
27,402,917
2
false
0
0
Providing a developer with a copy of the repository to work with is not docker's responsibility. Many people do it the other way around - you put a Dockerfile, or a script to pull (or build) and run your container, into the sources of your project.
2
2
0
Suppose I have a python web app. I can create docker file for installing all dependencies. But then (or before it if I have requirements for pip) I have like two different goals. For deployment I can just download all source code from git through ssh or tarballs and it would work. But for a developer machine it wouldn't work. I would need then work on actual source code. I know that I can 'mirror' any folder/files from host machine to docker container. So ok, I can then remove all source code, that was downloaded when image was built and 'mirror' current source code that exists in developer machine. But if developer machine don't have any source code downloaded with git clone, it wouldn't work. So what to do in that case? I mean except the obvious - clone all repos on developer machine and then 'mirror' it? So what is the right way to use docker not only for deployment but for the development also?
How to use docker for deployment and development?
0.099668
0
0
286
27,399,181
2014-12-10T10:53:00.000
4
1
0
0
python-2.7,coverage.py
38,825,213
2
false
0
0
It seems there is no option to do this from the command line, but it can be done with a configuration file. In the configuration file, add the lines below: [run] data_file = < path where .coverage should be stored > Then run: coverage run < script > (if a configuration file is not specified, coverage looks for .coveragerc in the same folder from which coverage is being run; if this is also not available then defaults are used) or coverage run --rcfile < configuration file name > < script name >
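Alternatively, a sketch using coverage.py's Python API rather than the command line (assuming coverage.py 4.0+, where the class is named Coverage; the path is a placeholder):

    import coverage

    # data_file controls where the .coverage data file is written.
    cov = coverage.Coverage(data_file="/tmp/mycov/.coverage")
    cov.start()
    # ... run the code under test here ...
    cov.stop()
    cov.save()  # writes the coverage data to the path given above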
1
6
0
I'm developing a module1 which has some test cases. I've another module2 which can run these test cases and measure the coverage. Currently the .coverage file is generated in the current working directory from where module2 is being called. Is there a way to specify the folder path where coverage should dump this .coverage file?
Provide path to Coverage to dump .coverage
0.379949
0
0
1,975
27,401,918
2014-12-10T13:12:00.000
4
1
0
0
python,bluetooth,pebble-watch
27,412,126
5
false
0
0
Apple iDevices do use private resolvable addresses with Bluetooth Low Energy (BLE). They cycle to a different address every ~15 minutes. Only paired devices that have a so called Identity Resolving Key can "decipher" these seemingly random addresses and associate them back to the paired device. So to do something like this with your iPhone, you need to pair it with your raspberry pi. Then what you can do is make a simple iOS app that advertises some data (what does not matter because when the app is backgrounded, only iOS itself gets to put data into the advertising packet). On the raspberry pi you can then use hcitool lescan to scan for the BLE advertisements. If the address of the advertisement can be resolved using the IRK, you know with high certainty that it's the iPhone. I'm not sure if hcitool does any IRK math out of the box, but the resolving algorithm is well specified by the Bluetooth spec. Pebble currently does indeed use a fixed address. However, it is only advertising when it is disconnected from the phone it is supposed to be connected to. So, for your use case, using its BLE advertisements is not very useful. Currently, there is no API in the Pebble SDK to allow an app on the Pebble to advertise data. FWIW, the commands you mentioned are useful only for Bluetooth 2.1 ("Classic") and probably only useful if the other device is discoverable (basically never, unless it's in the Settings / Bluetooth menu).
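For illustration, a rough sketch of the resolving math (the ah function from the Bluetooth spec), assuming pycryptodome is installed and that the 6-byte address is given most-significant byte first - over-the-air byte order differs, so treat this as a starting point:

    from Crypto.Cipher import AES  # pycryptodome

    def ah(irk, prand):
        # ah(k, r) = AES-128(k, 13 zero bytes || r), keeping the low 3 bytes
        block = b"\x00" * 13 + prand
        return AES.new(irk, AES.MODE_ECB).encrypt(block)[-3:]

    def resolves(address, irk):
        # A resolvable private address is prand (3 bytes) || hash (3 bytes)
        prand, hash_part = address[:3], address[3:]
        return ah(irk, prand) == hash_part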
1
3
0
My ultimate goal is to allow my Raspberry Pi to detect when my iPhone or Pebble watch is nearby. I am presently focusing on the Pebble as I believe the iPhone randomizes the MAC address. I have the static MAC address of the Pebble watch. My question is how to detect the presence of the MAC address through Bluetooth? I have tried hcitool rssi [mac address] or l2ping [mac address], however both need a confirmation of connection on the watch before any response. I want it to be automatic... I also tried hcitool scan, but it takes a while; presumably it is going through all possibilities. I simply want to search for a particular MAC address. EDIT: I just tried "hcitool name [Mac Address]", which returns the name of the device, and if it is not there it returns a "null", so this is the idea... is there a python equivalent of this? I am new to python, so hopefully someone can point to how I can simply ping the mac address and see how strong the RSSI value is?
Detecting presence of particular bluetooth device with MAC address
0.158649
0
0
15,931
27,403,050
2014-12-10T14:05:00.000
0
0
0
0
python,mysql,django,database,data-migration
60,700,719
4
false
1
0
I faced the same issue. All I did to get out of this issue was drop all tables in my DB and then run: python manage.py makemigrations and then: python manage.py migrate
4
4
0
I have a Django project and I did the following: Added a table with some columns Inserted some records into the db Added a new column that I didn't realize I needed Made an update to populate that column When I did a migrate everything worked just fine. The new db column was created on the table and the values were populated. When I try to run my tests, however, I now bomb out at step 2 above. When I do insert, I believe it is expecting that field to be there, even though it hasn't been created at that point yet. What should I do? EDIT: More info I first made a class, class A and did a migration to create the table. Then I ran this against my db. Then I wrote a manual migration to populate some data that I knew would be there. I ran this against the db. I realized sometime later that I need an extra field on the model. I added that field and did a migration and ran it against the database. Everything worked fine and I confirmed the new column is in the database. Now, I went to run my tests. It tried to create the test db and bombed out, saying "1054 - Unknown column [my new column that I added to an existing table]" at the time when it is trying to run the populate data script that I wrote. It is likely looking at the table, noticing that the third field exists in the model, but not yet in the database, but I don't know how to do it better.
Django 1054 - Unknown Column in field list
0
0
0
8,200
27,403,050
2014-12-10T14:05:00.000
0
0
0
0
python,mysql,django,database,data-migration
40,621,929
4
false
1
0
This happened to me because I faked one migration (m1), created another (m2), and then tried to migrate m2 before I had faked my initial migration (m1). So in my case I had to migrate --fake <app name> m1 and then migrate <app name> m2.
4
4
0
I have a Django project and I did the following: Added a table with some columns Inserted some records into the db Added a new column that I didn't realize I needed Made an update to populate that column When I did a migrate everything worked just fine. The new db column was created on the table and the values were populated. When I try to run my tests, however, I now bomb out at step 2 above. When I do insert, I believe it is expecting that field to be there, even though it hasn't been created at that point yet. What should I do? EDIT: More info I first made a class, class A and did a migration to create the table. Then I ran this against my db. Then I wrote a manual migration to populate some data that I knew would be there. I ran this against the db. I realized sometime later that I need an extra field on the model. I added that field and did a migration and ran it against the database. Everything worked fine and I confirmed the new column is in the database. Now, I went to run my tests. It tried to create the test db and bombed out, saying "1054 - Unknown column [my new column that I added to an existing table]" at the time when it is trying to run the populate data script that I wrote. It is likely looking at the table, noticing that the third field exists in the model, but not yet in the database, but I don't know how to do it better.
Django 1054 - Unknown Column in field list
0
0
0
8,200
27,403,050
2014-12-10T14:05:00.000
0
0
0
0
python,mysql,django,database,data-migration
29,351,981
4
false
1
0
Unless the new column has a default value defined, the insert statement will expect to add data to that column. Can you move the data load to be after the second migration? (I would have commented, but do not yet have sufficient reputation.)
4
4
0
I have a Django project and I did the following: Added a table with some columns Inserted some records into the db Added a new column that I didn't realize I needed Made an update to populate that column When I did a migrate everything worked just fine. The new db column was created on the table and the values were populated. When I try to run my tests, however, I now bomb out at step 2 above. When I do insert, I believe it is expecting that field to be there, even though it hasn't been created at that point yet. What should I do? EDIT: More info I first made a class, class A and did a migration to create the table. Then I ran this against my db. Then I wrote a manual migration to populate some data that I knew would be there. I ran this against the db. I realized sometime later that I need an extra field on the model. I added that field and did a migration and ran it against the database. Everything worked fine and I confirmed the new column is in the database. Now, I went to run my tests. It tried to create the test db and bombed out, saying "1054 - Unknown column [my new column that I added to an existing table]" at the time when it is trying to run the populate data script that I wrote. It is likely looking at the table, noticing that the third field exists in the model, but not yet in the database, but I don't know how to do it better.
Django 1054 - Unknown Column in field list
0
0
0
8,200
27,403,050
2014-12-10T14:05:00.000
0
0
0
0
python,mysql,django,database,data-migration
31,042,146
4
true
1
0
I believe this was because the migration scripts were getting called out of order, due to a problem I had setting them up. Everything is ok now.
4
4
0
I have a Django project and I did the following: Added a table with some columns Inserted some records into the db Added a new column that I didn't realize I needed Made an update to populate that column When I did a migrate everything worked just fine. The new db column was created on the table and the values were populated. When I try to run my tests, however, I now bomb out at step 2 above. When I do insert, I believe it is expecting that field to be there, even though it hasn't been created at that point yet. What should I do? EDIT: More info I first made a class, class A and did a migration to create the table. Then I ran this against my db. Then I wrote a manual migration to populate some data that I knew would be there. I ran this against the db. I realized sometime later that I need an extra field on the model. I added that field and did a migration and ran it against the database. Everything worked fine and I confirmed the new column is in the database. Now, I went to run my tests. It tried to create the test db and bombed out, saying "1054 - Unknown column [my new column that I added to an existing table]" at the time when it is trying to run the populate data script that I wrote. It is likely looking at the table, noticing that the third field exists in the model, but not yet in the database, but I don't know how to do it better.
Django 1054 - Unknown Column in field list
1.2
0
0
8,200
27,406,345
2014-12-10T16:43:00.000
5
0
1
0
python,scikit-learn,pycharm
38,023,538
3
false
0
0
This worked for me: in my PyCharm Community Edition 5.0.4, go to Preferences -> Project Interpreter and check whether the sklearn package is installed for the current project interpreter; if not, install it.
2
10
1
I installed numpy, scipy and scikit-learn using pip on Mac OS. However in PyCharm, all imports work except when I try importing sklearn. I tried doing it in the Python shell and it worked fine. Any ideas as to what is causing this? Also, not sure if it is relevant, but I installed scikit-learn last. The error I receive is unresolved reference.
import sklearn not working in PyCharm
0.321513
0
0
10,664
27,406,345
2014-12-10T16:43:00.000
6
0
1
0
python,scikit-learn,pycharm
27,422,973
3
false
0
0
I managed to figure it out: I had to go to the project interpreter and change the python distribution, as it had defaulted to the OS-installed Python rather than my own installed distribution.
2
10
1
I installed numpy, scipy and scikit-learn using pip on Mac OS. However in PyCharm, all imports work except when I try importing sklearn. I tried doing it in the Python shell and it worked fine. Any ideas as to what is causing this? Also, not sure if it is relevant, but I installed scikit-learn last. The error I receive is unresolved reference.
import sklearn not working in PyCharm
1
0
0
10,664
27,413,547
2014-12-11T00:46:00.000
1
0
0
0
python,scroll,tkinter
27,414,982
1
true
0
1
The two numbers represent the fractional part of the data that is visible. A value of 0 (zero) means you are at the top (or left, in the case of xview), and a value of 1 (one) means bottom (or right). So, for example, if the very middle of the document were at the top of the screen, the first number would be .5. If a document that is three times as long as the screen was perfectly centered, the numbers would be something like (.333,.666) meaning the top one third is scrolled off of the screen, and the bottom one third is scrolled off the screen.
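A tiny sketch to watch those fractions change (Python 2's Tkinter module; the exact numbers printed depend on widget geometry, so the values in the comments are illustrative):

    import Tkinter as tk  # "tkinter" on Python 3

    root = tk.Tk()
    text = tk.Text(root)
    text.pack()
    for i in range(200):
        text.insert("end", "line %d\n" % i)
    root.update_idletasks()

    print(text.yview())      # e.g. (0.0, 0.12): at the top, ~12% visible
    text.yview_moveto(0.5)   # scroll so the document's midpoint is at the top
    print(text.yview())      # first fraction is now 0.5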
1
0
0
On scrollable tkinter objects in Python, when you call .yview(), it returns their current 'position'. I can't make heads or tails of what it returns, though. For example, when scrolled to the beginning of one of my elements, it returns (0.0, 0.4662309368191721), and when at the end, (0.5337690631808278, 1.0). What do these numbers mean? Why are there two of them? It seems to me like it'd make more sense if there was only a single number, from 0(beginning) to 1(end).
What do the values in obj.yview() mean?
1.2
0
0
267
27,414,466
2014-12-11T02:35:00.000
1
0
1
0
python,spyder
27,453,175
1
true
0
0
(Spyder dev here) This is not possible. If Pandas is installed on the same Python installation where Spyder is, then Spyder will import Pandas to: a) report to its users the minimal version needed to view DataFrames in the Variable Explorer and b) import csv files as DataFrames. The only solution I can suggest you is this: Create a new virtualenv or conda environment Install there Spyder and its dependencies, but not Pandas. Spyder dependencies can be checked under the menu Help > Optional dependencies Start your virtualenv/conda env Spyder Go to Tools > Preferences > Console > Advanced Settings > Python executable select the option Use the following Python interpreter and write (or select) there the path to the interpreter where you have Pandas installed (e.g. /usr/bin/python) Start a new Python/IPython console and import pandas there.
1
0
1
When I start Spyder, it automatically imports pandas and numpy. Is it possible to have Spyder ignore these modules? I see these are imported in multiple Spyderlib files. For example, pandas gets imported in spyderlib/widgets/importwizard.py, spyderlib/baseconfig.py, etc. (I'm trying to debug something in pandas and I'd like to import it for the first time in a debugging session in Spyder)
Stop Spyder from importing modules like `numpy`, `pandas`, etc
1.2
0
0
651
27,414,693
2014-12-11T03:04:00.000
2
0
1
0
python,arrays,recursion,huffman-code
27,414,722
1
false
0
0
Your problem is that recursiveHuff doesn't return a value when you do your recursion step. You want to accumulate the path and return it. As you have it now, the changes you make to path are local to the recursive calls. (so they propagate down the chain as you descend, but not back up as you unwind)
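A hedged sketch of the fix - return the recursion's result and keep whichever branch found the character (this assumes the question's helpers isLeaf, getLeftChild, getRightChild and getNodeValue):

    def recursiveHuff(tree, path, char):
        """Returns the binary path to char in tree, or None if absent."""
        if isLeaf(tree):
            n = getNodeValue(tree)
            return path if n[1] == char else None
        # Search both subtrees and propagate whichever one found the char.
        found = recursiveHuff(getLeftChild(tree), path + '0', char)
        if found is None:
            found = recursiveHuff(getRightChild(tree), path + '1', char)
        return found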
1
0
0
I'm encoding a Huffman tree in Python. I have one regular function which takes in the string to be encoded and the Huffman tree. It creates an array of the string's characters and an empty array whose entries will be the binary paths corresponding to each char. This function loops over each character in the string array, calling function 2, which recursively searches through the tree, building up the binary code and returning it once the letter has been found. Everything is working fine - the recursive function moves through the tree properly, finding and printing the correct path. Only problem is that when I assign that return value to a variable inside function1 and append it to the binary array, it becomes None. Can you not assign a recursive return statement to a variable like that?? Any help would be greatly appreciated as I feel like I'm on the cusp of finishing this. Here is my code: def huffmanEncoder(s, t): """encodes string s with Tree t""" s = list(s) b = [] for i in range(len(s)): val = recursiveHuff(t, '', s[i]) print 'val:', val b.append(val) print b def recursiveHuff(tree, path, char): """given a tree, an empty string 'path', and a character, finds said char in tree and returns the binary path""" print 'looking for:\t', char, 'path:\t', path if not isLeaf(tree): recursiveHuff(getLeftChild(tree), path+'0', char) recursiveHuff(getRightChild(tree), path+'1', char) else: n = getNodeValue(tree) if n[1] == char: print 'found', char, 'at', path return path
Assigning recursive function to a variable in python
0.379949
0
0
989
27,416,642
2014-12-11T06:26:00.000
1
0
0
0
google-app-engine,python-2.7,google-bigquery
27,449,004
1
false
0
0
That job failed with reason "invalid" and message starting with "Too many errors encountered." In order to detect job failure, when you get a successful response from jobs.get, first ensure that the job is in a DONE state, then look for the presence of errors in status.errorResult.reason and status.errorResult.message. Additionally, status.errors will contain a list of the individual failures encountered. In this case, it looks like the job was trying to load data that didn't match the schema of the table. You can find the file/line/field offsets in the "location" field of each error in the errors list. Here are a couple of causes for "not finding the errors in the job" that we've seen: Forgetting to check for DONE before looking for job errors. Waiting for job completion with a timeout, but forgetting to treat timeout as failure. Letting temporary errors from jobs.get (5xx http response codes) terminate your wait loop early, and then not knowing the state of the job since the jobs.get itself failed. I hope this helps narrow down the problem for you.
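A rough polling sketch against the REST API via google-api-python-client (the project and job ids are placeholders, and service is assumed to be an already-authorized BigQuery v2 client object):

    import time

    def wait_for_job(service, project_id, job_id):
        while True:
            job = service.jobs().get(projectId=project_id, jobId=job_id).execute()
            if job["status"]["state"] == "DONE":
                # Only inspect errors once the job has reached DONE.
                if "errorResult" in job["status"]:
                    raise RuntimeError(job["status"].get("errors"))
                return job
            time.sleep(5)  # not DONE yet; poll again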
1
0
0
My job id is job_7mb6iw3BHoMRC09US9Vqq-Qd06s. While uploading data with this job to BigQuery, the data was not getting uploaded, and I am not getting any error for this.
Bigquery data not getting uploaded
0.197375
1
0
95
27,416,913
2014-12-11T06:46:00.000
0
0
0
0
python,django,crm,zoho
27,498,436
2
false
1
0
All you need is to export the Django auth users; if an extended user model is there, then you can also export the extended user model too... Please update your question with what database you are using.
1
0
0
How to track Django user details using Zoho CRM? I am new to Zoho CRM; I have gathered some information and details about how Zoho CRM works. Now I want to know one thing: I have implemented a Django project and also have an account in Zoho CRM. I would like to track all my user details from the app database in Zoho CRM. How do I export app database users to Zoho CRM, and how do I track the user behaviour?
How to track django user details using zoho CRM
0
0
0
812
27,428,711
2014-12-11T17:20:00.000
0
0
0
0
python,django,angularjs,facebook
27,430,041
2
false
1
0
The answer is yes. Just set up your fb canvas url to this: https://yourdomain.com/your-allauth-url/facebook/login/?process=login With this, your canvas app visitor will go through the allauth facebook login process. If the user has connected to your app, it will redirect him/her to the login redirect url you have specified in your settings. Otherwise it will just ask him to allow the facebook connection.
1
0
0
I've been trying to connect user on my fb canvas app. I've implemented django-allauth fb login on my website. In fb canvas app, I'm trying to get user as already logged in since user has already been connected on fb. Is there any way to make it using django-allauth?
Django-allauth login on fb canvas
0
0
0
123
27,430,688
2014-12-11T19:18:00.000
4
0
0
0
python,django,database-migration,django-migrations
27,430,841
5
false
1
0
If it is Django 1.7, it stores migration history in the database, in the table django_migrations. South also stores migrations in the database, and you can enable a feature to show migration history in the Django admin.
2
40
0
How does django know whether a migration has been applied yet? It usually gets it right, but when it doesn't I don't ever know where to start troubleshooting.
How does django know which migrations have been run?
0.158649
0
0
13,694
27,430,688
2014-12-11T19:18:00.000
44
0
0
0
python,django,database-migration,django-migrations
27,430,820
5
true
1
0
Django writes a record into the table django_migrations consisting of some information like the app the migration belongs to, the name of the migration, and the date it was applied.
2
40
0
How does django know whether a migration has been applied yet? It usually gets it right, but when it doesn't I don't ever know where to start troubleshooting.
How does django know which migrations have been run?
1.2
0
0
13,694
27,431,249
2014-12-11T19:52:00.000
0
1
1
0
python,int,boolean
27,431,567
3
false
0
0
The strict equivalent of x === y in Python is type(x) is type(y) and x == y. You don't really want to do this as Python is duck typed. If an object has the appropriate method or attribute then you shouldn't be too worried about its actual type. If you are checking for a specific unique object such as (True, False, None, or a class) then you should use is and is not. For example: x is True.
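A quick illustration of the difference:

    x = 0
    print(x == False)                              # True: logical equality
    print(type(x) is type(False) and x == False)   # False: int is not bool
    print(x is False)                              # False: different objects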
2
11
0
In PHP you use the === notation to test for TRUE or FALSE distinct from 1 or 0. For example if FALSE == 0 returns TRUE, if FALSE === 0 returns FALSE. So when doing string searches in base 0 if the position of the substring in question is right at the beginning you get 0 which PHP can distinguish from FALSE. Is there a means of doing this in Python?
Python: False vs 0
0
0
0
10,474
27,431,249
2014-12-11T19:52:00.000
27
1
1
0
python,int,boolean
27,431,348
3
true
0
0
In Python, the is operator tests for identity (False is False, 0 is not False). The == operator tests for logical equality (and thus 0 == False). Technically neither of these is exactly equivalent to PHP's ===, which compares logical equality and type - in Python, that'd be a == b and type(a) is type(b). Some other differences between is and ==: Mutable type literals: {} == {}, but {} is not {} (and the same holds true for lists and other mutable types). However, if a = {}, then a is a (because in this case it's a reference to the same instance). Strings: "a"*255 is not "a"*255, but "a"*20 is "a"*20 in most implementations, due to how Python handles string interning. This behavior isn't guaranteed, though, and you probably shouldn't be using is in this case. "a"*255 == "a"*255 is True, and == is almost always the right comparison to use. Numbers: 12345 is 12345 but 12345 is not 12345 + 1 - 1 in most implementations, similarly. You pretty much always want to use equality for these cases.
2
11
0
In PHP you use the === notation to test for TRUE or FALSE distinct from 1 or 0. For example if FALSE == 0 returns TRUE, if FALSE === 0 returns FALSE. So when doing string searches in base 0 if the position of the substring in question is right at the beginning you get 0 which PHP can distinguish from FALSE. Is there a means of doing this in Python?
Python: False vs 0
1.2
0
0
10,474
27,432,211
2014-12-11T20:56:00.000
0
0
0
0
python,unicode,bottle
61,842,769
2
false
1
0
In this case, to convert it, I did it like this: search_field.encode("ISO-8859-1").decode("utf-8")
1
8
0
I'm building a small RESTful API with bottle in python and am currently experiencing an issue with character encodings when working with the request object. Hitting up http://server.com/api?q=äöü and looking at request.query['q'] on the server gets me "äöü", which is obviously not what I'm looking for. Same goes for a POST request containing the form-urlencoded key q with the value äöü. request.forms.get('q') contains "äöü". What's going on here? I don't really have the option of decoding these elements with a different encoding or do I? Is there a general option for bottle to store these in unicode? Thanks.
Python bottle requests and unicode
0
0
1
3,378
27,433,087
2014-12-11T21:52:00.000
1
0
0
1
python,multithreading,uwsgi,gevent
27,436,888
1
true
1
0
Mixing non-blocking programming (geventhttpclient) with blocking programming (a uWSGI thread/process) is completely wrong. This is a general rule: even if your app is 99% non-blocking, it is still blocking. This is amplified by the fact that gevent makes use of stack switching to simulate blocking programming paradigms. This is like cooperative multitasking, and it is managed by the so-called 'gevent hub'. Unfortunately, although your greenlets will be able to make HTTP requests, they will never be terminated because the gevent hub will never run again once the request is over. If you want to keep the geventhttpclient approach you have to set uWSGI in gevent mode, but you need to be sure that all the modules and techniques used by your app are gevent friendly.
1
1
0
I'm currently running a python web API that is NOT multithreaded with much success on the uWSGI + NGINX stack. Due to new operational needs, I have implemented a new build that includes multithreaded requests to external data sources. However, when I deploy this new multithreaded build under uWSGI with --enable-threads, after a few minutes, the machine runs out of available threads. I was able to isolate the issue to my usage of geventhttpclient for my external HTTP requests by monitoring the thread count using ps -eLf | grep <process id>| wc -l. I have currently 2 worker threads (two external requests) in my application, so as I noticed, every time I hit/make a request from my API, the application thread use count increases by 2. If I swap my use of geventhttpclient with the standard python Requests module in just one of these worker threads, the thread count only increases by 1. NOTE: I am using HTTPClient.close() to close the connection within each thread. This leads me to suspect that geventhttpclient creates new threads that do not terminate when used in multithreaded uWSGI applications. Is there an easy way around this chokepoint? The performance of geventhttpclient is exceptional in non-multithreaded uWSGI applications, so I would love to continue using this. Thanks and let me know if I can provide any more information.
Running Out of Threads: UWSGI + Multithreaded Python Application with GeventHTTPClient
1.2
0
0
777
27,433,412
2014-12-11T22:14:00.000
0
0
0
0
python,django
27,433,470
2
true
1
0
python3 manage.py runserver 0.0.0.0:8000 - binding to 0.0.0.0 (rather than the default 127.0.0.1) makes the development server reachable from other machines on port 8000.
1
0
0
How would I run a Django application on, let's say, a DigitalOcean droplet, just using the development server Django provides? I've tried just running python3 manage.py runserver, but I can't pull it up with the browser from another computer. I know this is bad practice, but I really only need it up to demonstrate for a class project.
Manage.py runserver for demonstration
1.2
0
0
71
27,433,443
2014-12-11T22:16:00.000
0
1
1
0
python,visual-studio-2013
42,518,497
1
false
0
0
As for opening files: right click the .py file, choose Open With, and then select Visual Studio. I know it's not really a solution, but you may want to try PyCharm; I prefer it to IDLE, and it has several customisation options to make it look like Visual Studio. Hope it helps.
1
1
0
So I'm working on a text game in Python, and am working on this at both school and home. I recently started using Visual Studio and love the program; however I found that it cannot open or save properly as a .py file, which I need to do to be able to work on the file at school. I have installed Python Tools for Visual Studio and it works great, but I can only work with files in a .pyproj format. Does anyone with Visual Studio experience know any way to save and open .py's in Visual Studio? Many thanks
Visual Studio cannot open or save Python .py files
0
0
0
780
27,435,367
2014-12-12T01:15:00.000
0
0
0
0
python,django,google-oauth
27,487,490
1
false
1
0
All right, it turns out to be a problem of scope. The sample code sets a default scope of mail.google.com, while social-auth only has a default scope to read user information. The difference in length of the tokens must be related to the scope.
1
0
0
I am integrating Google OAuth2 into my website using django social auth. The problem is that with django social auth, I get an access_token like ya29.2QCqpS-uKGXMNOP8yZnN6Z-F5LfVnyd7jwa8TaLP43nTEp2NUPB_p7Hi while with the code sample from Google Code the access token is like: ya29.2QCVXKc7XSNR3QxqRVAi0Z8Uz6mvolDGpezbZ9_r_oq7CXt01WvE9oUb90HXaynOLE4J8PjA5pzYDB The first one does not work while the second one works fine. Does anyone have any idea on this?
django social auth get wrong access_token from google oauth2
0
0
0
123
27,438,448
2014-12-12T06:57:00.000
0
0
0
0
qpython
42,170,785
3
false
0
1
QPy for Android has built-in FTP. Just enter the local address it provides into an FTP program (like bareFTP for Ubuntu).
3
1
0
While developing QPython scripts, it would be much easier to develop them on a PC and only transfer them to the Android device for testing. What is the easiest way to do such a transfer? (Development might require doing this very frequently.) Thanks in advance for all your feedback. tfv
How to easily transfer qpython scripts from windows to android for development?
0
0
0
1,287
27,438,448
2014-12-12T06:57:00.000
1
0
0
0
qpython
29,536,167
3
false
0
1
In linux with ADB installed, in terminal you can use adb push pcprojectfolder /sdcard/com.hipipal.qpyplus/projects/androidprojectfolder You do need to manually end the task on the phone before pushing though.
3
1
0
While developing QPython scripts, it would be much easier to develop them on a PC and only transfer them to the Android device for testing. What is the easiest way to do such a transfer? (Development might require doing this very frequently.) Thanks in advance for all your feedback. tfv
How to easily transfer qpython scripts from windows to android for development?
0.066568
0
0
1,287
27,438,448
2014-12-12T06:57:00.000
0
0
0
0
qpython
27,445,725
3
false
0
1
There is an FTP service in the settings section which can help you transfer files between the Android device and a PC.
3
1
0
While developing QPython scripts, it would be much easier to develop them on a PC and only transfer them to the Android device for testing. What is the easiest way to do such a transfer? (Development might require doing this very frequently.) Thanks in advance for all your feedback. tfv
How to easily transfer qpython scripts from windows to android for development?
0
0
0
1,287
27,439,027
2014-12-12T07:45:00.000
0
0
1
1
python,pip
27,470,901
1
true
0
0
After many days of trying a workaround, I finally got down to debugging the setup.py script, setuptools and distutils. I figured out the problem was a missing "svn.exe" on my workstation, which caused the "svn_finder" function in setuptools core to hang. Can someone point me in the right direction as to how I can make the right team aware of the "bug"?
1
1
0
I've been struggling with an issue with Python and pip installs (python version 3.4.2, same with x86 or x64 MSIs, Windows 7 x64). I'm using the CPython installer available from the Python.org website. When I install, I get the UAC prompt, which I approve, and it installs fine to D:\opt\python34 (along with pip, as added in 3.4.2 installations by default). Then, as standard procedure, I add the install path and Scripts subfolder to the user path variable. Now, the issues are as follows: Whenever I run python setup.py install inside any package directory, the prompt hangs at writing ... to top_level.txt or writing to dependency_links.txt or etc. (Same issue happens if I create a virtual environment using python -m venv, activate it, and do python setup.py install). Setup.py never succeeds. Pip install also hangs infinitely after giving a warning "manifest_maker: Standard file '-c' not found." If I remove setuptools, and just use distribute, then "python setup.py install" works. Kindly assist with ideas/solutions.
Python 3.4.2 AND pip Permission Issues
1.2
0
0
98
27,439,192
2014-12-12T07:56:00.000
13
0
1
0
python,set,difference
27,439,300
2
false
0
0
Q. What's the difference between these two set methods? A. The update version subtracts from an existing set, mutating it, and potentially leaving it smaller than it originally was. The non-update version produces a new set, leaving the originals unchanged. Q. Because difference_update updates set s, what precautions should be taken to avoid receiving a result of None from this method? A. Mutating methods in Python generally return None as a way to indicate that they have mutated an object. The only "precaution" is to not assign the None result to a variable. Q. In terms of speed, shouldn't set.difference_update be faster since you're only removing elements from set s instead of creating a new set like in set.difference()? A. Yes, the algorithm of the update version simply discards values. In contrast, the algorithm for the non-updating version depends on the size of the sets. If the size of s is four or more times larger than t, the new-set version first copies the main set and then discards values from it (so s - t is implemented as n = s.copy(); n.difference_update(t)). That algorithm is used when s is much larger than t. Otherwise, the algorithm for the non-updating version is to create an empty new set n, loop over the elements of s, and add them to n if they are not present in t.
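A quick demonstration of the mutating vs. non-mutating behaviour:

    s = {1, 2, 3, 4}
    t = {3, 4}

    d = s.difference(t)          # new set; s is unchanged
    print(d, s)                  # {1, 2} {1, 2, 3, 4}

    r = s.difference_update(t)   # mutates s in place, returns None
    print(r, s)                  # None {1, 2}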
1
7
0
s.difference(t) returns a new set with no elements in t. s.difference_update(t) returns an updated set with no elements in t. What's the difference between these two set methods? Because the difference_update updates set s, what precautions should be taken to avoid receiving a result of None from this method? In terms of speed, shouldn't set.difference_update be faster since you're only removing elements from set s instead of creating a new set like in set.difference()?
Python: What's the difference between set.difference and set.difference_update?
1
0
0
7,636
27,439,431
2014-12-12T08:15:00.000
7
0
1
0
python,spyder
27,443,676
2
true
0
0
In Spyder you can jump to the definition of a function (or class) by holding Ctrl and clicking on the function (or class) name/reference. If that definition is in another file, that file will be opened: Ctrl + left click. EDIT (as commented by @pwagner): right click -> Go To Definition and Ctrl + G also work.
1
4
0
In MATLAB, I can place the cursor on a function name and press Ctrl+D, and the file containing the function will automatically open. Is there any way I can do something similar with Python, within the Spyder IDE?
Open a function in Python Spyder, like I do with MATLAB
1.2
0
0
6,036
27,440,489
2014-12-12T09:26:00.000
13
0
1
0
python,django,pycharm
27,440,698
7
false
1
0
Go to Settings->Project Interpreter. Double-click the Django package. Activate the check box Specify version and select the version you want. Press the button Install Package. Django will use pip in the background to install the package.
2
11
0
I've installed a new PyCharm that uses Django v1.7.1 (the default), but I would like to change it to v1.6.8. How can we achieve this with PyCharm?
How to change django version in PyCharm?
1
0
0
13,451
27,440,489
2014-12-12T09:26:00.000
2
0
1
0
python,django,pycharm
34,903,153
7
false
1
0
Go to file>>settings>>Project Interpreter and click the plus sign at the right edge of the popup window and look for django and install it. You need internet access though. It will install the new version.
2
11
0
I've installed a new PyCharm that uses Django v1.7.1 (the default), but I would like to change it to v1.6.8. How can we achieve this with PyCharm?
How to change django version in PyCharm?
0.057081
0
0
13,451
27,448,905
2014-12-12T17:26:00.000
1
0
0
1
python,ip,packet,scapy
28,396,576
2
false
0
0
You basically want to spoof your IP address. Well, I suggest you read up on networking and IP packet headers. This is possible through Python, but you won't be able to see the result, as you have spoofed your IP. To be able to do this you will need to predict the sequence numbers.
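For the ping case specifically, a minimal Scapy sketch using the addresses from the question (note the reply will go to the spoofed source, so you will not see it; sending raw packets also typically requires root):

    from scapy.all import IP, ICMP, send

    # Craft an ICMP echo request whose source IP is forged.
    pkt = IP(src="192.168.0.101", dst="192.168.0.1") / ICMP()
    send(pkt)  # fire and forget; any reply goes to 192.168.0.101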
1
7
0
Let's say I have an application written in Python to send a ping or e-mail. How can I change the source IP address of the sent packet to a fake one, using, e.g., Scapy? Consider that the IP address assigned to my eth0 is 192.168.0.100. My e-mail application will send messages using this IP. However, I want to manipulate this packet, as soon as it is ready to be sent, so its source IP is not 192.168.0.100 but 192.168.0.101 instead. I'd like to do this without having to implement a MITM.
Send packet and change its source IP
0.099668
0
1
31,903
27,449,735
2014-12-12T18:20:00.000
3
0
1
0
python,c,cross-language
27,449,816
1
true
0
1
Because it should also compile the foreign language as well. No. ctypes, and the like, only need to be able to link to object code. This depends on the target foreign language having appropriate conventions for name mangling inside the object code, which C does. if the host language is object oriented, and foreign language is not, then I need some kind of mapping to and from objects. How is this handled? The C code needs to expose appropriate interfaces for the host language; or equivalently, use some C-language libraries to do so. CPython is written in C, so this is broadly speaking easy in that case. What if the host language runs on a Virtual Machine? The Instruction Set would be different in that case, right? The VM has to have the appropriate facilities to load compiled object code.
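To make the linking point concrete, here is the question's time-from-libc example done with ctypes (find_library should locate the C library on most Unix-like systems):

    import ctypes
    import ctypes.util

    # Load the C standard library from object code -- no C compilation involved.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    libc.time.restype = ctypes.c_long  # declare the C return type
    print(libc.time(None))             # seconds since the epoch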
1
1
0
I came across a feature in some programming languages to call methods in other programming languages. Its called the Foreign Function Interface. For example, I would be able to call a C Language function inside a Python program. Or I could I write a wrapper for some C library in Python language for other Python users to use those. One simple example is the ctypes in Python. So using this, I can access the time function in libc. I understand to this level. However, I couldn't get a clear picture of how this ctypes itself is implemented and other 'behind the scene' things! The questions that arise for me here are: What kind of features do a compiler for this language require to use the Foreign Function Interface. Because it should also compile the foreign language as well. So if the host language is object oriented, and foreign language is not, then I need some kind of mapping to and from objects. How is this handled? What if the host language runs on a Virtual Machine? The Instruction Set would be different in that case, right?
Wrappers for C API in programming languages
1.2
0
0
185
27,452,987
2014-12-12T22:18:00.000
2
0
1
0
python,python-3.x,configuration,build
30,315,369
1
false
0
0
If the server you are trying to build Python on does not have tcl/tk installed, then Python will skip that portion during its build process.
1
10
0
Basically I want to build a minimal version of Python 3 (this will be running on a headless server, so no GUI, no mouse, no keyboard). The dependency on tk for most Python packages pulls in X and a bunch of other UI things I don't want. There's nothing in ./configure --help that tells me how to switch off building with tk, and nothing in the README file included with the source tarball either. It's been surprisingly hard to find info about this, so what kind of spell is needed?
How to build Python3 WITHOUT support for tk?
0.379949
0
0
1,262
27,453,156
2014-12-12T22:34:00.000
9
0
0
0
python,jenkins,jenkins-plugins
27,465,483
1
true
1
0
If your Jenkins is running on Linux, a simple "Execute shell" build step with cat filename.txt will print the file to the console. If running Windows, an "Execute Windows batch command" build step with type filename.txt will do the same. The filename path is relative to WORKSPACE.
1
5
0
I have a non-deterministically working Jenkins build step that prints text. It's multi line and has lots of urls in it. It's a python build step. print("""\ xxxx yyy ") It works many times, but not all the times - it messes up the next step when it fails. I'm not sure how to debug, but maybe I should just add a step that displays the contents of a text file on the console output log. I can't find such a plugin? thank you!
Jenkins : how to print the contents of a text file to the build log?
1.2
0
0
15,107
27,454,876
2014-12-13T02:22:00.000
1
0
1
1
python,tornado
27,455,651
1
true
0
0
Use Eclipse, PyDev, PyCharm, or whatever to set a breakpoint at the misbehaving line of code and step through your code from there. Tornado applications are relatively difficult to debug because the stack trace is less clear than in multithreaded code. Step through your code carefully. If you use coroutines, you should become familiar with the implementation of gen.Runner so you can understand what your code does during a "yield".
1
1
0
I have already set debug=True; what is the next step? I use Eclipse + PyDev as the development environment. Some details about debugging Tornado would be very much appreciated.
How to debug python tornado
1.2
0
0
1,832
27,456,443
2014-12-13T07:06:00.000
0
0
1
0
windows,python-idle,python-3.4
27,589,157
3
false
0
0
Things didn't work, so I had to reinstall ActiveState Python 2.7 and settle for it. I also installed the Komodo IDE without worrying about IDLE.
1
0
0
I have installed Python v3.4 and that added Python27 and Python34 to my C:\. Thus both have their own idle.pyw's, but the problem is that they point to the same Python27 version. How do I use Python 3.4 through IDLE then? Thanks.
Changing IDLE version from python 2.7 to 3.4
0
0
0
2,741
27,459,327
2014-12-13T13:29:00.000
2
0
0
0
python,django
27,461,927
3
false
1
0
I'd humbly recommend the standard library module multiprocessing for this. As long as the background process can run on the same server as the one processing the requests, you'll be fine. Although I consider this to be the simplest solution, it wouldn't scale well at all, since you'd be running extra processes on your server. If you expect these things to only happen once in a while, and not to last that long, it's a good quick solution. One thing to keep in mind though: in the newly started process ALWAYS close your database connection before doing anything - this is because the forked process shares the same connection to the SQL server and might enter into data races with your main django process.
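A hedged sketch of that pattern in a Django view (collect_data and start_collection are hypothetical names; the stub stands in for the real scraping and DB inserts):

    from multiprocessing import Process
    from django.db import connection
    from django.http import HttpResponse

    def collect_data():
        pass  # hypothetical: scrape the sources and insert rows into the DB

    def _worker():
        connection.close()  # drop the DB connection inherited from the parent
        collect_data()

    def start_collection(request):
        Process(target=_worker).start()  # returns immediately
        return HttpResponse("collection started")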
1
1
0
What I want to achieve is to run some python script which will collect data and insert it into a DB in the background. So basically, a person opens a Django view, clicks on a button and then closes the browser, and Django launches this script on a server; the script then collects data in the background while everything else goes on on its own. What is the best library, framework, module or package to achieve such functionality?
Run Python script in background on remote server with Django view
0.132549
0
0
1,029
27,460,843
2014-12-13T16:07:00.000
0
1
1
1
python,raspberry-pi,pycharm,remote-debugging,interpreter
29,901,664
1
true
0
0
It works in PyCharm if you deploy a remote SFTP server. Tools > Deployment > Add > Enter name and SFTP > Enter host, port, root path (I said "/" without quotes) username and password. Then, when creating a new project, change your interpreter to 'Deployment Configuration', and select your SFTP server. Press OK, then create. You should be all set to go.
1
4
0
I would like to connect to my Raspberry Pi using a remote interpreter. I've managed to do it just fine in Windows 7 using PyCharm, but having recently upgraded to Windows 8.1 it no longer works. I've tried to connect to the Raspberry Pi (where it worked in Win 7) and another one with a fresh install of Raspbian (released 09-09-2014). I also tried through Ubuntu, but to no avail. Has anyone out there managed to get this right in Windows 8 or any Linux flavour? Should I try a key pair (OpenSSH or PuTTY)? After adding the RSA key to the repository, the process that hangs is 'Getting remote interpreter version' ~ 'Connecting to 10.0.0.98'
Configuring Remote Python Interpreter in Pycharm
1.2
0
0
2,553
27,465,157
2014-12-14T00:37:00.000
0
0
0
0
python,python-2.7,graphics,geometry
27,465,376
2
false
0
1
Check out svgfig or svgwrite for SVG, reportlab for PDF, or pyglet for drawing to a window.
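For instance, a short svgwrite sketch of a partially transparent rectangle (the filename, position and size are arbitrary; note svgwrite is a third-party package, so it would not satisfy the question's no-download constraint):

    import svgwrite

    dwg = svgwrite.Drawing("map_overlay.svg", size=("400px", "300px"))
    # A red rectangle at (50, 40), 120 wide by 60 tall, 50% opaque.
    dwg.add(dwg.rect(insert=(50, 40), size=(120, 60),
                     fill="red", fill_opacity=0.5))
    dwg.save()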
1
0
0
How would I draw a simple partially transparent rectangle in Python? I would like to not download anything from the internet and purely use Python 2.7.3. I would also like to control where the rectangle starts and ends, and control its width and height. The end goal of this is to have a map (of Michigan and its Great Lakes) and have color-coded rectangles pop up along the coast to visually show what the expected weather will be like based on buoy data from the NDBV. So in short, a map that I can place color-coded rectangles on, oriented along the coast of western Michigan.
Drawing basic shapes in python 2
0
0
0
7,565
27,468,688
2014-12-14T11:07:00.000
0
0
0
0
jquery,python,xpath,lxml
27,468,960
1
true
1
0
What happens here is that upon selecting a value from the dropdown, an AJAX request is generated to fetch the data. You can analyze the request URL in your browser. If you use Firefox, use Firebug and take a look at the Net tab to see what requests are generated and what the URL is. In Google Chrome, look in the Network tab. If you want to parse the data you have to make a request to that URL.
1
0
0
I want to parse some data from a website. However, there's a certain peculiarity: there is a dropdown list (laid out using div and child a tags, made functional with a jQuery script). Upon selecting one of the values, a subsequent text field changes its value. I want to retrieve the first dropdown value and the respective text field, then the next dropdown value and the updated text field, and so forth. How would I go about this?
Python/lxml: Retrieving variable data
1.2
0
1
57
27,469,113
2014-12-14T12:07:00.000
0
0
0
0
python,sockets
42,805,997
1
false
0
0
When you call accept(), it returns a new socket for the connection, the original socket is still listening for new connections. – Barmar
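A minimal sketch of that pattern (the port is arbitrary; the single listening socket keeps listening while each accept() hands back a fresh per-client socket):

    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 5000))
    server.listen(5)                   # one listening socket for the port

    while True:
        conn, addr = server.accept()   # new socket per client connection
        data = conn.recv(1024)
        conn.sendall(data)             # echo back
        conn.close()                   # the listener socket stays open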
1
0
0
I am trying to create a simple form of client-server application, using python. So I got started with sockets, and facing some errors I searched a bit and saw that no two sockets can be listening to the same port at the same time. Is that true? And if so, the only way to handle multiple requests towards the server, as regards the sockets, is to have a single socket do the listening and take turns at the incoming requests?
Handling mutliple client connections as a server
0
0
1
28
27,470,670
2014-12-14T15:13:00.000
25
0
0
0
python,nlp,gensim,word2vec,doc2vec
30,337,118
4
false
0
0
Note that the "DBOW" (dm=0) training mode doesn't require or even create word-vectors as part of the training. It merely learns document vectors that are good at predicting each word in turn (much like the word2vec skip-gram training mode). (Before gensim 0.12.0, there was the parameter train_words mentioned in another comment, which some documentation suggested will co-train words. However, I don't believe this ever actually worked. Starting in gensim 0.12.0, there is the parameter dbow_words, which works to skip-gram train words simultaneous with DBOW doc-vectors. Note that this makes training take longer – by a factor related to window. So if you don't need word-vectors, you may still leave this off.) In the "DM" training method (dm=1), word-vectors are inherently trained during the process along with doc-vectors, and are likely to also affect the quality of the doc-vectors. It's theoretically possible to pre-initialize the word-vectors from prior data. But I don't know any strong theoretical or experimental reason to be confident this would improve the doc-vectors. One fragmentary experiment I ran along these lines suggested the doc-vector training got off to a faster start – better predictive qualities after the first few passes – but this advantage faded with more passes. Whether you hold the word vectors constant or let them continue to adjust with the new training is also likely an important consideration... but which choice is better may depend on your goals, data set, and the quality/relevance of the pre-existing word-vectors. (You could repeat my experiment with the intersect_word2vec_format() method available in gensim 0.12.0, and try different levels of making pre-loaded vectors resistant-to-new-training via the syn0_lockf values. But remember this is experimental territory: the basic doc2vec results don't rely on, or even necessarily improve with, reused word vectors.)
1
44
1
I recently came across the doc2vec addition to Gensim. How can I use pre-trained word vectors (e.g. found in word2vec original website) with doc2vec? Or is doc2vec getting the word vectors from the same sentences it uses for paragraph-vector training? Thanks.
How to use Gensim doc2vec with pre-trained word vectors?
1
0
0
40,470
27,473,393
2014-12-14T19:54:00.000
0
0
1
0
python,cumulative-sum
27,473,448
1
false
0
0
All of these factors, plus more (say the motherboard or disks), play a role. In your case, however, you use Python. Python usually uses only one CPU core, so having fewer or more cores plays no role. The size of the memory would only play a role if you needed a lot of RAM, and you most likely did not need much. For simple Python scripts I think only CPU speed (family...) and memory (RAM) speed play any role. The question is what resources your program uses. In your case disk speed, memory size, network card, OS and so on most likely do not play any role.
1
0
0
I implemented Dijkstra's algorithm in Python. I ran the same program on 4 different systems, but the result was surprising. An Intel Xeon processor, 64 GB RAM desktop took exactly the same time (1.21 sec) as a Pentium dual core, 1 GB RAM desktop. How is this possible? Please tell me whether program execution depends on the following factors: System Processor OS RAM Programming language. System cache memory. Whose effect is maximum?
Does the system processor/ configuration plays role in program execution?
0
0
0
62
27,474,557
2014-12-14T21:52:00.000
1
1
0
0
python,http,web,raspberry-pi
27,474,586
2
true
1
0
You have to write your configuration for the looping script somewhere. So a file or database are possible choices, but I would say that a formatted file (ini, yaml, ...) is the way to go if you have a small number of parameters.
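A minimal sketch of the loop re-reading an ini file on every pass (the file name, section and option are made up for illustration; the module is configparser, lowercase, on Python 3):

    import time
    import ConfigParser  # "configparser" on Python 3

    while True:
        cfg = ConfigParser.ConfigParser()
        cfg.read("/home/pi/controller.ini")   # rewritten by the web-facing script
        target = cfg.getfloat("control", "target_temp")
        # ... read the sensor and drive the GPIO toward `target` here ...
        time.sleep(5)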
1
0
0
I have a python script on my Raspberry Pi continuously (every 5 seconds) running a loop to control the temperature of a pot with some electronics through GPIO. I monitor temperature on a web page by having the python script write the temperature to a text file, which I request from JavaScript over HTTP on a web page. I would like to pass a parameter to the python script to make changes to the controlling, like changing the target temperature. What would be the better way to do this? I'm working on a solution where the python script looks for parameters in a text file, and a second python script writes changes to this file. This second python script would be run by an HTTP request from the web page. Is this a way to go? Or am I missing a more direct way to do this? This must have been done many times before and described on the web, but I find nothing. Maybe I don't have the right terms to describe the problem. Any hints are appreciated. Best regards Kresten
Interact with python script running infinitive loop from web
1.2
0
1
229
27,474,576
2014-12-14T21:53:00.000
0
0
0
0
python,sublimetext,sublimetext3,sublime-text-plugin
27,475,157
1
true
1
0
There is no sidebar API in Sublime currently, so unfortunately what you are trying to do isn't possible at present.
1
0
0
I want to build a plugin in order to affect the sidebar. Mainly visual stuff at first. But I can't find any documentation about it. Is it possible, as we can obtain view() and window() in the plugin, to have something like sidebar(), and be able to treat all the nodes on the Folders sections (for individual files) and interact with them? Thanks!
Sidebar Object on Sublime Plugin
1.2
0
0
70
27,479,855
2014-12-15T08:18:00.000
0
0
1
1
python,pyephem
29,810,773
1
false
0
0
Wouldn't that be the, for lack of a better term, anti-transit? It seems to me that if it's circumpolar, what you're looking for would be roughly 12 hours before/after transit.
1
2
0
Is it possible to calculate the time at which a planet will be closest to the horizon when pyephem throws AlwaysUpError or NeverUpError?
Pyephem: time of a planet will be closest to the horizon
0
0
0
75
27,484,406
2014-12-15T12:43:00.000
0
1
0
0
python,xcode6,mechanize,python-requests,osx-yosemite
27,577,039
1
true
0
0
Apparently Xcode was referring to my default Python. After the comment from Droppy, I checked my python version by using which python. I copy-pasted the result in the Xcode Program scheme. Now it works...! Thanks for all the help.
1
0
0
Recently I installed Mac OS Yosemite. Until then I had no problem installing libraries and had already installed Beautiful Soup, pydelicious, etc. After the installation of Yosemite, I tried to install the Mechanize and Requests libraries on the Mac. There was no problem in their installation; I could import them and use them from the Terminal. However, Xcode 6.1 doesn't load/see them. I consistently get the ImportError: No module named mechanize ImportError: No module named requests error messages. I have already tried changing the file permissions to give full access to the user, but to no avail. I also checked the PYTHONPATH and .profile files; so far no luck. I wonder if anyone has encountered this problem or knows of some fix for it?
ImportError: No module named mechanize - XCode 6.1 is not seeing newly installed python libraries
1.2
0
0
481
27,485,296
2014-12-15T13:33:00.000
0
0
0
1
python-2.7,ubuntu-14.04,openfire,ejabberd
27,539,662
1
true
1
0
Solved the issue. The problem was with my settings; I then restarted the server using sudo service ejabberd restart. It worked.
1
0
0
I have re-installed the ejabberd server on my localhost. When I run sudo service ejabberd restart it's not getting restarted; instead it's creating an error. The following error is shown in erl_crash.dump. All my configurations in the conf file are correct. Kernel pid terminated (application_controller) ({application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{k I tried everything, and also killed processes running on the same ports. Is there anything else to do to solve this issue ???
Ejabberd server not getting started?
1.2
0
0
75
27,491,173
2014-12-15T18:59:00.000
0
1
0
1
python,ssh,raspberry-pi,raspbian
27,494,641
2
false
0
0
You should modify your python script to write its output to a file instead of to the screen (which you can't see). I.e., I think that a log file is your best (possibly only) bet. You can write to a file in /tmp on the Raspberry Pi if you just want a temporary log file that you can check once in a while. Also, as Tim said, you could try out the python logging library, but I think just writing to a file is quicker and easier, although you might run into some issues with permissions...
2
0
0
I am running Python on a Raspberry Pi and everything works great. I have a small script running on system start-up which prints several warning messages (which I actually cannot read since it is running in the background)... My question is: is there a way via SSH to "open" this running script instance and see what is going on, or is a log file the only way to work with that? Thanks!
Python on system start up
0
0
0
82
27,491,173
2014-12-15T18:59:00.000
1
1
0
1
python,ssh,raspberry-pi,raspbian
27,492,567
2
true
0
0
Try using the Python logging library. You can configure it to save the output to a file and then you can use tail -f mylogfile.log to watch as content is put in. EDIT: An alternative is to use screen. It allows you to run a command in a virtual console, detach from that console, and then disconnect from the machine. You can then reconnect to the machine and re-attach to that console and see all the output the process made. I'm not sure about using it on a script that starts when the machine is turned on, though (I simply haven't tried it).
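A minimal sketch of that logging setup (the path is arbitrary; you can then watch it over SSH with tail -f /tmp/mylogfile.log):

    import logging

    logging.basicConfig(
        filename="/tmp/mylogfile.log",   # file to tail over SSH
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)s %(message)s",
    )

    logging.warning("startup warning goes to the file, not the screen")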
2
0
0
I am running Python on a Raspberry Pi and everything works great. I have a small script running on system start-up which prints several warning messages (which I actually cannot read since it is running in the background)... My question is: is there a way via SSH to "open" this running script instance and see what is going on, or is a log file the only way to work with that? Thanks!
Python on system start up
1.2
0
0
82
27,493,323
2014-12-15T21:19:00.000
0
0
1
0
python,version-control,version,updates,conflict
27,493,421
4
false
0
0
It should be unlikely to cause any problems. A Python version has the format major.minor.bugfix. Changes in bugfix shouldn't change how any programs work, unless it makes them work correctly where they weren't before. Changes in minor shouldn't require you to change your programs much, but you might have to upgrade libraries. Changes in major are definitely backwards incompatible, but thankfully not that common. You're only likely to encounter problems if your programs have lots and lots of dependencies.
2
0
0
Question: I'm a new Python user. I'm currently using v2.7.6 in tied relations with other systems and files written in different languages. Could updating to v2.7.9 cause any issues? Any hidden conflicts from using .py files that were written under 2.7.6 and/or using files written in other languages? I could use a bit of explanation on why it might cause issues / why it couldn't. Thanks for your time.
Python - Can updating 2.7.6 to 2.7.9 cause any problems/conflicts?
0
0
0
656