Dataset schema (column: dtype, observed range or string lengths):
Q_Id: int64, 337 to 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
24,286,218
2014-06-18T12:57:00.000
-1
0
0
0
javascript,python,ajax,rest,tornado
48,689,932
1
false
1
0
Use a URL pattern such as /add/name/(\d+), then define the handler method as def post(self, id): ... . The id argument receives the value captured by the \d+ group in the URL. Hope it's helpful.
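A minimal sketch of that pattern, assuming a Tornado 3.x-era API; the /add/(\w+) route and the AddHandler name are illustrative (the question adds names rather than digits) and are not taken from the answer.

```python
import tornado.ioloop
import tornado.web

class AddHandler(tornado.web.RequestHandler):
    def post(self, name):
        # 'name' is whatever the capture group in the URL pattern matched
        self.write("added %s" % name)

application = tornado.web.Application([
    (r"/add/(\w+)", AddHandler),   # the captured group is passed to post()
])

if __name__ == "__main__":
    application.listen(8000)
    tornado.ioloop.IOLoop.instance().start()
```

An AJAX caller can then simply POST to /add/<name> with an empty body; no form-encoded value parameter is needed.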
1
0
0
I am learning Tornado and my app does just following: localhost:8000/add/name : adds name to the database localhost:8000/delete/name: deletes name from database As of now I type in browser address bar /add/name and manually adding names. How do I make use of HTML forms for this request? Is this the right way: I create a field box with a id, using JS I get the value from that id, construct the RESTful POST url and on clicking submit, it goes to the constructed url. Now I want to turn above thing to AJAX call so that there is no page refresh. All the examples I found uses form where it sends the 'value' as request parameter not as RESTful. Any help regarding this is appreciated. Thank you! PS: I know I can use get_argument in Tornado and get the value. But I want this in REST, sending the value in URL.
How do I convert RESTful POST call to Ajax in Tornado?
-0.197375
0
0
168
24,287,228
2014-06-18T13:43:00.000
1
0
0
0
python-2.7,wxpython
24,311,759
1
true
0
1
This is not supported by the grid widget. You could size a column or row such that it is skinnier than usual and change all the cells in that row or column to have a different background color. You might also be able to utilize a custom label renderer or cell renderer. See the wxPython demo for examples.
1
0
0
I've been trying to add separator lines between rows in my grid. I tried using wx.Menu() with the AppendSeparator() method, however the wx grid can't add objects of type Menu. Is there any other way?
How do I add a separator in a grid in wxPython?
1.2
0
0
160
24,289,418
2014-06-18T15:23:00.000
0
0
1
0
python,qt,error-handling,pyqt,suppress-warnings
33,542,567
3
false
0
1
It would be much better if some of your actual code were available. The reason you are seeing this is that you are using a different type of threading than QThread. That is in general not advisable, but it is not illegal. There are three things you will have to take care of: 1) all calls should end up being executed from within a QThread object; 2) all data passed through the signal/slot mechanism should carry no references/pointers to objects owned by a non-QThread; 3) connections between objects that receive calls from non-QThreads and the signals they emit should use either the Qt::QueuedConnection or Qt::BlockingQueuedConnection connection type.
2
3
0
I am having this error flood my terminal, making it impossible to debug. Is there a way to silence this warning? The error is only generated when I include a scrollToBottom() on my TableWidget.
How to suppress warning QPixmap: It is not safe to use pixmaps outside the GUI thread
0
0
0
7,241
24,289,418
2014-06-18T15:23:00.000
0
0
1
0
python,qt,error-handling,pyqt,suppress-warnings
24,291,464
3
false
0
1
You should design your code to avoid displaying this message. If you create the pixmap in another thread and "use" it in the GUI thread, this might work now, tomorrow or forever ... or it might not. Don't do that. You cannot suppress the output of this warning without changing the Qt sources or installing a message handler.
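If the message-handler route is really wanted, a rough sketch might look like the following, assuming PyQt4 (PyQt5 renamed the hook to qInstallMessageHandler and changed the callback signature); it filters just this one warning and passes everything else through.

```python
import sys
from PyQt4.QtCore import qInstallMsgHandler

def handler(msg_type, message):
    # swallow only the pixmap warning, forward everything else to stderr
    if "It is not safe to use pixmaps outside the GUI thread" in message:
        return
    sys.stderr.write(message + "\n")

qInstallMsgHandler(handler)
```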
2
3
0
I am having this error flood my terminal, making it impossible to debug. Is there a way to silence this warning? The error is only generated when I include a scrollToBottom() on my TableWidget.
How to suppress warning QPixmap: It is not safe to use pixmaps outside the GUI thread
0
0
0
7,241
24,291,443
2014-06-18T17:12:00.000
1
1
0
1
ubuntu,python-3.x,twisted
24,295,427
1
true
0
0
Twisted has not been entirely ported to Python 3. Only parts of it have been ported. When you install Twisted using Python 3, only the parts that have been ported are installed. The unported modules are not installed because they are not expected to work. As you observed, this code does not actually work on Python 3 because it uses implicit relative imports - a feature which has been removed from Python 3.
1
2
0
I'm using Ubuntu in several PCs (versions 12.04 and 14.04), and I noticed that serialprotocol.py is not being installed when I run "sudo python3 setup3.py install" in the default source tar package for twisted 14.0.0. I had to manually copy the file in my computers. I also tried installing the default ubuntu package python3-twisted-experimental with the same results. So I always end up copying "serialprotocol.py" and "_posixserialport.py" manually. And they work fine after that. As a side note: _posixserialport.py fails to import BaseSerialPort because it says: from serialport import BaseSerialPort but it should be: from twisted.internet.serialport import BaseSerialPort
Why isn't serialport.py installed by default?
1.2
0
0
91
24,291,779
2014-06-18T17:35:00.000
4
0
1
0
python,python-3.x
24,291,950
1
true
0
0
The directory is called __pycache__ (with double underscores). It'll only be created if Python has permission to create a directory in the same location the .py file lives. The folder is not hidden in any way, if it is not there, then Python did not create it. Note that .pyc bytecode cache files are only created for modules your code imports; it is not created for the main script file. If you run python.exe foobar.py, no __pycache__/foobar.cpython-34.pyc file is created.
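A quick way to see this, assuming a sibling module file (mymodule.py is hypothetical): the cache directory appears next to the imported module, not next to the script you ran.

```python
# save as check_cache.py next to a file called mymodule.py, then run:
#   python check_cache.py
import os

import mymodule  # importing is what triggers __pycache__/mymodule.cpython-34.pyc

cache_dir = os.path.join(os.path.dirname(os.path.abspath(mymodule.__file__)),
                         "__pycache__")
print(os.listdir(cache_dir))
```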
1
2
0
I'm running Python 3.4.1 on Windows 7 and thought that after running my .py script in the command line, a directory named _pycache_ would be created in the same directory that my script ran in. It is not there, even after I made sure that 'Show hidden files, folders, and drives' was checked. I looked around here and on Google but can't seem to get an answer that makes this visible. Can someone help? I'm new to Python and would like to look over the byte code files.
Where is my _pycache_ folder and .pyc byte code files?
1.2
0
0
2,487
24,294,371
2014-06-18T20:21:00.000
0
1
0
0
python,r,dataset,fortran,data-processing
24,299,151
3
false
1
0
Is the file human-readable text or in the native format of the computer (sometimes called binary)? If the files are text, you could reduce the processing load and file size by switching to the native format. Converting from the internal representation of floating-point numbers to human-readable numbers is CPU intensive. If the files are in native format then it should be easy to skip around in the file, since each record will be 16 bytes. In Fortran, open the file with an open statement that includes form="unformatted", access="direct", recl=16. Then you can read an arbitrary record X without reading the intervening records via rec=X in the read statement. If the file is text, you can also read it with direct IO, but each pair of numbers might not always use the same number of characters (bytes); you can examine your files and answer that question. If the records are always the same length, then you can use the same technique, just with form="formatted". If the records vary in length, then you could read a large chunk and locate your numbers within the chunk.
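The same fixed-record idea translated to Python (since the poster prefers Python or Ruby): a rough sketch assuming a raw binary file with two little-endian float64 values per 16-byte record; the file name, the step, and the endianness are assumptions.

```python
import struct

RECORD_SIZE = 16          # two 8-byte doubles per record
STEP = 1000               # keep every 1000th record

with open("data.bin", "rb") as f:
    n = 0
    while True:
        f.seek(n * STEP * RECORD_SIZE)       # jump directly to the wanted record
        chunk = f.read(RECORD_SIZE)
        if len(chunk) < RECORD_SIZE:
            break
        x, y = struct.unpack("<dd", chunk)   # little-endian doubles assumed
        print(x, y)
        n += 1
```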
1
0
0
So I hope this question already hasn't been answered, but I can't seem to figure out the right search term. First some background: I have text data files that are tabular and can easily climb into the 10s of GBs. The computer processing them is already heavily loaded from the hours long data collection(at up to 30-50MB/s) as it is doing device processing and control.Therefore, disk space and access are at a premium. We haven't moved from spinning disks to SSDs due to space constraints. However, we are looking to do something with the just collected data that doesn't need every data point. We were hoping to decimate the data and collect every 1000th point. However, loading these files (Gigabytes each) puts a huge load on the disk which is unacceptable as it could interrupt the live collection system. I was wondering if it was possible to use a low level method to access every nth byte (or some other method) in the file (like a database does) because the file is very well defined (Two 64 bit doubles in each row). I understand too low level access might not work because the hard drive might be fragmented, but what would the best approach/method be? I'd prefer a solution in python or ruby because that's what the processing will be done in, but in theory R, C, or Fortran could also work. Finally, upgrading the computer or hardware isn't an option, setting up the system took hundreds of man-hours so only software changes can be performed. However, it would be a longer term project but if a text file isn't the best way to handle these files, I'm open to other solutions too. EDIT: We generate (depending on usage) anywhere from 50000 lines(records)/sec to 5 million lines/sec databases aren't feasible at this rate regardless.
Low level file processing in ruby/python
0
0
0
162
24,295,681
2014-06-18T21:57:00.000
3
0
1
0
python,user-interface
24,295,727
1
true
0
1
The best way to do this would be to ship your application with those modules as a part of it; the user's computer doesn't need to have the GUI framework installed if you provide it. What you're asking would essentially require you to write an entire GUI framework, which would give a result that would be similar or worse - with a LOT more work.
1
1
0
I've been looking for ways to make a GUI with a .py file, and have so far only found frameworks and modules like Tkinter. However, my ultimate goal is for this code to run on a lot of computers that don't necessarily have these modules installed. The machines are only guaranteed to have Python on them. Does anyone know a way to make a GUI under these restrictions?
Can I make a GUI with Python without any extraneous software?
1.2
0
0
57
24,296,221
2014-06-18T22:49:00.000
5
1
0
0
python,protocol-buffers
24,301,278
3
true
0
0
Protocol buffer messages have a SerializeToString() method. Use it to compare your messages.
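A small sketch of that comparison, with a hedge: the Python bindings also implement == as a field-by-field comparison, and serialized bytes only compare reliably when both messages come from the same generated code and library version.

```python
def messages_equal(message_a, message_b):
    """Compare two protobuf messages of the same generated type."""
    # Comparing serialized bytes is the approach suggested above;
    # message_a == message_b would also do a field-by-field comparison.
    return message_a.SerializeToString() == message_b.SerializeToString()

# usage with the question's objects:
# same = messages_equal(messageA, messageB)
```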
1
6
0
I can't seem to find a comparison method in the API. I have these two messages, and they have a lot of different values that sometimes drill down to more values (for example, I have a Message that has a string, an int, and a custom_snapshot, where custom_snapshot is comprised of an int, a string, and so on). I want to see if these two messages are the same. I don't want to compare each value one by one since that will take a while, so I was wondering if there was a quick way to do this in Python? I tried doing messageA.debugString() == messageB.debugString(), but apparently there is no debugString method that I could access when I tried.
How do I compare the contents of two Google Protocol Buffer messages for equality?
1.2
0
0
9,737
24,297,468
2014-06-19T01:34:00.000
2
0
1
1
python,batch-file
32,706,802
4
false
0
0
I don't have enough reputation to comment on nicholas's solution, but that code breaks if any of the folder names contain the character you want to replace. For instance, if you do newname = path.replace('_', '') but your path looks like /path/to/data_dir/control_43.csv, you will get an OSError: [Errno 2] No such file or directory, because the directory part of the path gets rewritten too.
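A sketch that avoids this pitfall by replacing only in the file name, never in the directory part; the root folder and the "_intsect_d" suffix follow the question, everything else is illustrative.

```python
import os

ROOT = "C:/data"                      # hypothetical top-level folder to walk

for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        if name.endswith(".kml") and "_intsect_d" in name:
            new_name = name.replace("_intsect_d", "")
            # only the basename changes; the directory path is left untouched
            os.rename(os.path.join(dirpath, name),
                      os.path.join(dirpath, new_name))
```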
1
7
0
I have 3 main folders in Windows Explorer that contain files with names like ALB_01_00000_intsect_d.kml or Baxters_Creek_AL_intsect_d.kml. Even though the first part of the name changes, the consistent thing that I would like to remove from all these files is "_intsect_d". I would like to do this for all files within each of the folders. The files have the extension .kml. The result I am expecting, as per the examples above, is ALB_01_00000.kml, and the other one would be Baxters_Creek_AL.kml. I don't know much about programming in Python, but I would like help to write a script that can achieve the result mentioned above. Thanks
Removing characters from filename in batch
0.099668
0
0
19,762
24,304,640
2014-06-19T10:42:00.000
4
0
0
0
python,web-scraping,lxml
24,305,212
2
true
1
0
Even JavaScript uses HTTP requests to get the data, so one method would be to investigate which requests provide the data when the user clicks "Load more results" and emulate those requests (see the sketch below). This is not traditional scraping, which is based on plain or rendered HTML content and detecting further links, but it can be a working solution. Next actions: 1) visit the page in Google Chrome or Firefox; 2) press F12 to start up Developer Tools or Firebug; 3) switch to the "Network" tab; 4) click "Load more results"; 5) check which HTTP requests served the data for loading more results and what data they return; 6) try to emulate those requests from Python. Note that the data does not necessarily come in HTML or XML form; it could be JSON. Python provides enough tools to process that format too.
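A hedged sketch of step 6: the endpoint URL and the paging parameters below are placeholders you would copy from the browser's Network tab, and the response might be JSON or an HTML fragment.

```python
import requests

resp = requests.get(
    "http://example.com/results",            # endpoint observed in the Network tab
    params={"offset": 50, "limit": 50},      # hypothetical paging parameters
)
data = resp.json()                            # or resp.text if the server returns HTML
print(data)
```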
2
3
0
I am scraping a webpage. The webpage consists of 50 entries. After 50 entries it shows a "Load more results" button. I need to select it automatically. How can I do that? For scraping I am using Python and lxml.
How to select "Load more results" button when scraping using Python & lxml
1.2
0
1
2,379
24,304,640
2014-06-19T10:42:00.000
1
0
0
0
python,web-scraping,lxml
24,304,877
2
false
1
0
You can't do that. The functionality is provided by javascript, which lxml will not execute.
2
3
0
I am scraping a webpage. The webpage consists of 50 entries. After 50 entries it shows a "Load more results" button. I need to select it automatically. How can I do that? For scraping I am using Python and lxml.
How to select "Load more results" button when scraping using Python & lxml
0.099668
0
1
2,379
24,306,285
2014-06-19T12:10:00.000
7
0
0
0
python,performance,numpy,cuda,gpu
24,317,131
2
false
0
0
The comments and Moj's answer give a lot of good advice. I have some experience on signal/image processing with python, and have banged my head against the performance wall repeatedly, and I just want to share a few thoughts about making things faster in general. Maybe these help figuring out possible solutions with slow algorithms. Where is the time spent? Let us assume that you have a great algorithm which is just too slow. The first step is to profile it to see where the time is spent. Sometimes the time is spent doing trivial things in a stupid way. It may be in your own code, or it may even be in the library code. For example, if you want to run a 2D Gaussian filter with a largish kernel, direct convolution is very slow, and even FFT may be slow. Approximating the filter with computationally cheap successive sliding averages may speed things up by a factor of 10 or 100 in some cases and give results which are close enough. If a lot of time is spent in some module/library code, you should check if the algorithm is just a slow algorithm, or if there is something slow with the library. Python is a great programming language, but for pure number crunching operations it is not good, which means most great libraries have some binary libraries doing the heavy lifting. On the other hand, if you can find suitable libraries, the penalty for using python in signal/image processing is often negligible. Thus, rewriting the whole program in C does not usually help much. Writing a good algorithm even in C is not always trivial, and sometimes the performance may vary a lot depending on things like CPU cache. If the data is in the CPU cache, it can be fetched very fast, if it is not, then the algorithm is much slower. This may introduce non-linear steps into the processing time depending on the data size. (Most people know this from the virtual memory swapping, where it is more visible.) Due to this it may be faster to solve 100 problems with 100 000 points than 1 problem with 10 000 000 points. One thing to check is the precision used in the calculation. In some cases float32 is as good as float64 but much faster. In many cases there is no difference. Multi-threading Python - did I mention? - is a great programming language, but one of its shortcomings is that in its basic form it runs a single thread. So, no matter how many cores you have in your system, the wall clock time is always the same. The result is that one of the cores is at 100 %, and the others spend their time idling. Making things parallel and having multiple threads may improve your performance by a factor of, e.g., 3 in a 4-core machine. It is usually a very good idea if you can split your problem into small independent parts. It helps with many performance bottlenecks. And do not expect technology to come to rescue. If the code is not written to be parallel, it is very difficult for a machine to make it parallel. GPUs Your machine may have a great GPU with maybe 1536 number-hungry cores ready to crunch everything you toss at them. The bad news is that making GPU code is a bit different from writing CPU code. There are some slightly generic APIs around (CUDA, OpenCL), but if you are not accustomed to writing parallel code for GPUs, prepare for a steepish learning curve. On the other hand, it is likely someone has already written the library you need, and then you only need to hook to that. With GPUs the sheer number-crunching power is impressive, almost frightening. 
We may talk about 3 TFLOPS (3 x 10^12 single-precision floating-point ops per second). The problem there is how to get the data to the GPU cores, because the memory bandwidth will become the limiting factor. This means that even though using GPUs is a good idea in many cases, there are a lot of cases where there is no gain. Typically, if you are performing a lot of local operations on the image, the operations are easy to make parallel, and they fit well a GPU. If you are doing global operations, the situation is a bit more complicated. A FFT requires information from all over the image, and thus the standard algorithm does not work well with GPUs. (There are GPU-based algorithms for FFTs, and they sometimes make things much faster.) Also, beware that making your algorithms run on a GPU bind you to that GPU. The portability of your code across OSes or machines suffers. Buy some performance Also, one important thing to consider is if you need to run your algorithm once, once in a while, or in real time. Sometimes the solution is as easy as buying time from a larger computer. For a dollar or two an hour you can buy time from quite fast machines with a lot of resources. It is simpler and often cheaper than you would think. Also GPU capacity can be bought easily for a similar price. One possibly slightly under-advertised property of some cloud services is that in some cases the IO speed of the virtual machines is extremely good compared to physical machines. The difference comes from the fact that there are no spinning platters with the average penalty of half-revolution per data seek. This may be important with data-intensive applications, especially if you work with a large number of files and access them in a non-linear way.
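A minimal sketch of the "split into small independent parts" advice, assuming each image can be processed on its own; process_image and the images list are placeholders for the poster's own feature-extraction/regression code.

```python
from multiprocessing import Pool

def process_image(img):
    # placeholder: run feature extraction / regression on one image
    return img

if __name__ == "__main__":
    images = []                      # fill with the ~7000-10000 grayscale images
    pool = Pool()                    # one worker process per CPU core by default
    results = pool.map(process_image, images)
    pool.close()
    pool.join()
```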
2
2
1
I've completed writing a multiclass classification algorithm that uses boosted classifiers. One of the main calculations consists of weighted least squares regression. The main libraries I've used include: statsmodels (for regression) numpy (pretty much everywhere) scikit-image (for extracting HoG features of images) I've developed the algorithm in Python, using Anaconda's Spyder. I now need to use the algorithm to start training classification models. So I'll be passing approximately 7000-10000 images to this algorithm, each about 50x100, all in gray scale. Now I've been told that a powerful machine is available in order to speed up the training process. And they asked me "am I using GPU?" And a few other questions. To be honest I have no experience in CUDA/GPU, etc. I've only ever heard of them. I didn't develop my code with any such thing in mind. In fact I had the (ignorant) impression that a good machine will automatically run my code faster than a mediocre one, without my having to do anything about it. (Apart from obviously writing regular code efficiently in terms of loops, O(n), etc). Is it still possible for my code to get speeded up simply by virtue of being on a high performance computer? Or do I need to modify it to make use of a parallel-processing machine?
How to speed up Python code for running on a powerful machine?
1
0
0
6,194
24,306,285
2014-06-19T12:10:00.000
4
0
0
0
python,performance,numpy,cuda,gpu
24,306,811
2
false
0
0
I am afraid you cannot speed up your program just by running it on a powerful computer. I had this issue a while back. I first used Python (very slow), then moved to C (slow), and then had to use other tricks and techniques. For example, it is sometimes possible to apply some dimensionality reduction to speed things up while still getting reasonably accurate results, or, as you mentioned, to use multiprocessing techniques. Since you are dealing with an image-processing problem, you do a lot of matrix operations, and a GPU would for sure be a great help. There are some nice and active CUDA wrappers in Python that you can easily use without knowing too much CUDA. I tried Theano, PyCUDA and scikit-cuda (there should be more than that by now).
2
2
1
I've completed writing a multiclass classification algorithm that uses boosted classifiers. One of the main calculations consists of weighted least squares regression. The main libraries I've used include: statsmodels (for regression) numpy (pretty much everywhere) scikit-image (for extracting HoG features of images) I've developed the algorithm in Python, using Anaconda's Spyder. I now need to use the algorithm to start training classification models. So I'll be passing approximately 7000-10000 images to this algorithm, each about 50x100, all in gray scale. Now I've been told that a powerful machine is available in order to speed up the training process. And they asked me "am I using GPU?" And a few other questions. To be honest I have no experience in CUDA/GPU, etc. I've only ever heard of them. I didn't develop my code with any such thing in mind. In fact I had the (ignorant) impression that a good machine will automatically run my code faster than a mediocre one, without my having to do anything about it. (Apart from obviously writing regular code efficiently in terms of loops, O(n), etc). Is it still possible for my code to get speeded up simply by virtue of being on a high performance computer? Or do I need to modify it to make use of a parallel-processing machine?
How to speed up Python code for running on a powerful machine?
0.379949
0
0
6,194
24,310,407
2014-06-19T15:27:00.000
4
0
0
0
java,python,xml,regex,docx
24,310,461
2
true
0
0
Let me try to make this clear. If you are viewing it, then you have downloaded it. You are "downloading" this webpage in order for your browser to render it. You're "downloading" a link to a document which tells you that there is a document. You cannot view the document unless you download it. Yes, you have to download it. Downloading a file is just getting it from the remote server. Of course, you don't have to write it to your hard drive. You can download it and store it in memory, and then deal with it from memory. Once you open a connection, you get an InputStream object to read bytes. You can pass that into the Apache POI libraries to read the file.
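The same "download into memory, never touch disk" idea in Python rather than Java, since a .docx is just a zip archive; the URL is a placeholder and Python 2's urllib2 is assumed.

```python
import io
import urllib2
import zipfile

# fetch the .docx into an in-memory buffer instead of saving it to disk
buf = io.BytesIO(urllib2.urlopen("http://example.com/report.docx").read())

with zipfile.ZipFile(buf) as archive:
    document_xml = archive.read("word/document.xml")   # main body of the .docx
    print(len(document_xml))
```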
1
0
0
I'd like to write a program that parses an online .docx file to build an XML document. I know (or at least I think I know) that browsers need a plug-in to view .docx in the browser, but I'm not that familiar with plug-ins or how they work. After looking at a .docx file in Notepad++, it seems clear to me that I won't be able to parse the binary data. Is there a way to simulate the opening of the .docx file for my purposes (EDIT: that is, without downloading and saving the file to my hard drive) within the abilities of any languages or libraries? My question is more about opening the file without downloading than about the actual parsing of it, as I've looked into the Apache POI API for parsing the document in Java.
Is it possible to read in and parse a .docx file that is linked to on a website without downloading the file (in Java, Python, or another language)?
1.2
0
1
839
24,311,929
2014-06-19T16:39:00.000
0
0
0
0
python,ssl
24,312,062
1
true
0
0
Typically, the SSL part for a Python web app is managed by a frontend web server like nginx or apache. This does not require any modification of your code (assuming you are not expecting users to authenticate with an SSL certificate on the client side, which is quite an exotic, but possible, scenario). If you want to run a pure Python solution, I would recommend using CherryPy, which provides a rather reliable and performant web server (it will very likely be slower than being served behind nginx or apache).
1
2
0
I have been using Python to create a web app and it has been doing well so far. Now I would like to encrypt the transmission of data between client and server using HTTPS. The communication is generally just posted forms and web pages; no money transactions are involved. Is there anything I need to change in the Python code besides setting the server up with a certificate and configuring it to use HTTPS? I see a lot of information regarding SSL for Python and I am not sure if I need those modules and Python setup to make HTTPS work. Thanks
Is there any thing needed for https python web page
1.2
0
1
45
24,312,068
2014-06-19T16:47:00.000
1
0
0
0
python-2.7,charts,google-sheets,google-spreadsheet-api
24,347,728
1
true
0
0
AFAIK, no. There is no way to do this with python. Google-apps-script can do this, but the spreadsheet-api (Gdata) can't. You can make a call from Python to Google-apps-script and pass parameters.
1
1
1
Is there a way generate a chart on google spreadsheet automatically using Python? I checked gspread. There seems no api for making charts. Thanks~
Is there a way generate a chart on google spreadsheet automatically using Python?
1.2
0
0
1,378
24,312,753
2014-06-19T17:25:00.000
0
0
0
0
java,python,audio
24,438,142
1
false
1
0
Yes, it's possible to get the actual audio samples from the audio; this is a very common operation and I'm sure you can do it in many languages. A good audio library to use in C# (.NET) is the NAudio library; it has many features and is relatively easy to use.
1
0
0
I'm generally looking for any language in which I can do this in, be it Java/Python/.NET. I'm looking to programmatically convert audio to values. I know it's possible to render the waveform of audio using Java. Can I transfer the audio to values? For example, the part in the song with the highest amplitude would have the greatest value in this array.
Java - audio to values/variables?
0
0
0
107
24,314,270
2014-06-19T18:53:00.000
2
0
0
1
python,sftp,file-transfer,paramiko,resume
27,151,379
2
true
0
0
Paramiko doesn't offer an out-of-the-box 'resume' function; however, Syncrify, DeltaCopy's big successor, has a retry built in, and if the backup goes down the server waits up to six hours for a reconnect. Pretty trusty, easy to use, and it diffs the data by default.
2
2
0
I'm working on a Python project that requires some file transferring. One side of the connection is highly available (RHEL 6) and always online. But the other side goes on and off (Windows 7) and the connection period is not guaranteed. The files are transferred in both directions and their sizes are between 10MB and 2GB. Is it possible to resume the file transfer with paramiko instead of transferring the entire file from the beginning? I would like to use rsync, but one side is Windows and I would like to avoid cwRsync and DeltaCopy.
How to resume file transferring with paramiko
1.2
0
0
1,637
24,314,270
2014-06-19T18:53:00.000
2
0
0
1
python,sftp,file-transfer,paramiko,resume
50,497,310
2
false
0
0
paramiko.sftp_client.SFTPClient contains an open function, which functions exactly like python's built-in open function. You can use this to open both a local and remote file, and manually transfer data from one to the other, all the while recording how much data has been transferred. When the connection is interrupted, you should be able to pick up right where you left off (assuming that neither file has been changed by a 3rd party) by using the seek method. Keep in mind that a naive implementation of this is likely to be slower than paramiko's get and put functions.
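A rough sketch of that manual approach, assuming an already-connected paramiko.SSHClient named ssh; the names, the chunk size, and the append-mode resume logic are illustrative, and integrity checks are omitted.

```python
import os

def resume_download(ssh, remote_path, local_path, chunk_size=32768):
    sftp = ssh.open_sftp()
    remote_size = sftp.stat(remote_path).st_size
    offset = os.path.getsize(local_path) if os.path.exists(local_path) else 0
    remote_file = sftp.open(remote_path, "rb")
    local_file = open(local_path, "ab")          # append: keep what we already have
    try:
        remote_file.seek(offset)                 # pick up where the last run stopped
        while offset < remote_size:
            data = remote_file.read(min(chunk_size, remote_size - offset))
            if not data:
                break
            local_file.write(data)
            offset += len(data)
    finally:
        local_file.close()
        remote_file.close()
        sftp.close()
```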
2
2
0
I'm working on a Python project that requires some file transferring. One side of the connection is highly available (RHEL 6) and always online. But the other side goes on and off (Windows 7) and the connection period is not guaranteed. The files are transferred in both directions and their sizes are between 10MB and 2GB. Is it possible to resume the file transfer with paramiko instead of transferring the entire file from the beginning? I would like to use rsync, but one side is Windows and I would like to avoid cwRsync and DeltaCopy.
How to resume file transferring with paramiko
0.197375
0
0
1,637
24,315,020
2014-06-19T19:43:00.000
1
0
0
1
python,webserver,tornado
24,319,467
2
false
0
0
The error has nothing to do with unix sockets. IOLoops do not survive a fork gracefully, so if you are going to fork you must do it before initializing any global IOLoop (but after binding any sockets). In general, you must do as little as possible before the fork, since many Tornado components implicitly start the IOLoop. If you are using multiple TCPServers, be sure to only fork from the first one you start; all the others should be in single-process mode.
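A hedged sketch of that ordering for the unix-socket case (Tornado 3.x-era API assumed): bind the socket first, fork, and only then create the server and touch the IOLoop; the socket path and the empty handler list are placeholders.

```python
import tornado.httpserver
import tornado.ioloop
import tornado.netutil
import tornado.process
import tornado.web

application = tornado.web.Application([])            # add your handlers here

unix_socket = tornado.netutil.bind_unix_socket("/tmp/myapp.sock")  # bind before forking
tornado.process.fork_processes(0)                     # fork before any IOLoop exists
server = tornado.httpserver.HTTPServer(application)
server.add_socket(unix_socket)                        # each child serves the shared socket
tornado.ioloop.IOLoop.instance().start()
```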
1
0
0
Using Tornado Web Server, I'm attempting to use their pre-fork after binding to a unix socket, but I get the following error: RuntimeError: Cannot run in multiple processes: IOLoop instance has already been initialized. You cannot call IOLoop.instance() before calling start_processes() Is there a reason tornado throws this issue when binding unix sockets and using: myserver.start(0) vs using an TCP Port?
Tornado: Pre-forking with unix sockets
0.099668
0
0
583
24,317,368
2014-06-19T22:41:00.000
2
1
0
1
python,ssh,fabric,sshfs
24,329,791
1
true
0
0
I finally figured out that there is an issue with SSH and that I need to pass the pty=False flag: run("sshfs -o reconnect -C -o workaround=all localhost:/home/test/ /mnt", pty=False)
1
1
0
I am trying to mount an SSHFS filesystem using the following run command: run("sshfs -o reconnect -C -o workaround=all localhost:/home/test/ /mnt") and it is failing with the following error: fuse: bad mount point `/mnt': Transport endpoint is not connected. However, if I daemonize it, it works. Is there any workaround?
sshfs mount failing using fabric run command
1.2
0
0
295
24,320,040
2014-06-20T04:36:00.000
2
0
1
0
python,redis
24,321,923
1
false
0
0
One option would be storing the data as a long list of chunks: store the data in a List - this allows storing the content as a sequence of chunks as well as destroying the whole list in one step - and write it using the pipeline context manager to ensure you are the only one writing at that moment (see the sketch below). Be aware that Redis always processes a single request at a time and all others are blocked for that moment; with large files, which take time to write, you can not only slow other clients down, but you are also likely to exceed the max execution time (see the config for this value). An alternative approach, also using a list, is to store the data in a randomly named list with a known pointer: invent a random list name, write the content into it chunk by chunk, and when you are done, update the value of a known key in Redis so it points to this randomly named list. Do not forget to remove the old one; this can be done from your code, but you might use expiration if that fits your use case.
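A minimal sketch of the first option using redis-py; the connection settings, key name, and chunk iterable are placeholders, and the default (transactional) pipeline is assumed.

```python
import redis

r = redis.StrictRedis(host="localhost", port=6379)

def store_chunks(key, chunks):
    pipe = r.pipeline()              # commands are queued and applied together
    pipe.delete(key)                 # drop any previous version in the same step
    for chunk in chunks:
        pipe.rpush(key, chunk)       # append the chunk to the list
    pipe.execute()
```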
1
4
0
I'm storing strings on the order of 150M. It's well-below the maximum size of strings in Redis, but I'm seeing a lot of different, conflicted opinions on the approach I should take, and no clear path. On the one hand, I've seen that I should use a hash with small data chunks, and on the other hand, I've been told that leads to gapping, and that storing the whole string is most efficient. On the one hand, I've seen that I could pass in the one massive string, or do a bunch of string-append operations to build it up. The latter seems like it might be more efficient than the former. I'm reading the data from elsewhere, so I'd rather not fill a local, physical file just so that I can pass a whole string. Obviously, it'd be better all around if I can chunk the input data, and feed it into Redis via appends. However, if this isn't efficient with Redis, it might take forever to feed all of the data, one chunk at a time. I'd try it, but I lack the experience, and it might be inefficient for a number of different reasons. That being said, there's a lot of talk of "small" strings and "large" strings, but it's not clear what Redis considers an optimally "small" string. 512K, 1M, 8M? Does anyone have any definitive remarks? I'd love it if I could just provide a file-like object or generator to redis-py, but that's more language-specific than I meant this question to be, and most likely impossible for the protocol, anyway: it'd just require internal chunking of the data, anyway, when it's probably better to just impose this on the developer.
Best way to store large string in Redis... Getting mixed signals
0.379949
1
0
2,349
24,320,514
2014-06-20T05:27:00.000
1
0
0
0
python,django,migration,django-south
24,321,591
2
true
1
0
As far as I know there is no automatic way to do that, so you'll have to do the following by hand: 1) move your package to the new place; 2) reflect this change in INSTALLED_APPS in your settings.py; 3) in all the migration files of your package, edit the module path, table names and complete_apps list; 4) in your database table south_migrationhistory, edit the app_name column; 5) rename the app table(s) in the database. That's all. To check that everything is working properly you can type python manage.py schemamigration your_new_app_name --auto, and if you did everything properly it will say that nothing has changed. Now you can continue working with your app as usual.
2
0
0
How to move package from one place to another in Django (1.4) with south? Package has applied migrations.
Move package with migrations in django
1.2
0
0
78
24,320,514
2014-06-20T05:27:00.000
0
0
0
0
python,django,migration,django-south
24,322,566
2
false
1
0
Another solution is to just move the package to another namespace/place but not change the package name.
2
0
0
How to move package from one place to another in Django (1.4) with south? Package has applied migrations.
Move package with migrations in django
0
0
0
78
24,320,713
2014-06-20T05:47:00.000
0
0
1
1
python,file
24,321,323
1
false
0
0
You are trying to defeat the purpose of log rotation if you keep populating the same log file even after it is rotated. One of the reasons for doing log rotation is to keep log files from growing too large, so that we don't have difficulties opening and searching the log information, and your case defeats this purpose. Still, if you want to do it, you can check the folder where log files are kept after rotation and find the latest rotated log file. Say the latest rotated log file is named application.log.x (where x is a number, i.e. 1, 2, 3, ...). Before performing a write operation to the log file, check the log directory again to see what the latest rotated file is; if there is a file later than application.log.x, the log file you were writing to has been rotated, and you can write to the log file named application.log.x+1. On the other hand, if log rotation renames the file by appending a timestamp to the log file name, then you need to check the latest rotated file before you open the log file for writing (say it is app.log.timestamp). Before writing again, check the log directory for the latest rotated log file; if you find a rotated log file with a greater timestamp than app.log.timestamp, you should use the file named app.log.(timestamp + the log rotation interval). Note: rotation happens mainly on two bases, size or time, and usually, in my observation, a file is renamed after rotation by appending a number or a timestamp to its name. For example, if the log name is application.log, then after rotation its name becomes application.log.x (where x is a number 1, 2, 3, ...) or application.log.timestamp, where timestamp is the date and time when the log rotation happened.
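A different, commonly used check (not the directory scan described above, so treat it as an alternative sketch): compare the inode of the open handle with the inode currently behind the path, and reopen when they differ. POSIX-only; the poll interval is arbitrary.

```python
import os
import time

def follow(path):
    """Yield new lines from 'path', reopening it after a rotation."""
    f = open(path, "r")
    f.seek(0, os.SEEK_END)
    while True:
        line = f.readline()
        if line:
            yield line
            continue
        try:
            # different inodes mean the path now points at a new (rotated-in) file
            rotated = os.stat(path).st_ino != os.fstat(f.fileno()).st_ino
        except OSError:
            rotated = False          # path briefly missing mid-rotation
        if rotated:
            f.close()
            f = open(path, "r")      # start reading the freshly created log file
        else:
            time.sleep(0.5)
```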
1
0
0
The story is there is a log file that will be rotated repeatedly by some interval. I need to write a small tool in Python that always print new logs in that file even after it rotated. How can I tell the old log file is renamed, and open the new one in Python?
How to tell a file is renamed or not after opened in Python?
0
0
0
193
24,322,264
2014-06-20T07:39:00.000
2
0
0
0
python,django,oauth
24,454,976
2
false
1
0
If it's only one user I'd say it's fairly safe to assume the problem has something to do with that user's credentials. It's hard to say without an error log but if it were me I'd first check to make sure the information the user is entering is the same as what oauth is expecting. Good luck and hope this helps!
2
2
0
At work we run a python application where users log in via their google account. One user gets an "Error logging in" message on any instance, this doesn't replicate on any other instance. The app was made by a third party and they can't tell us why this happens. Is there a debugging tool or something that comes with Google auth that could be used to trace where the failure is happening? Thanks in advance. If any more technical details are needed please let me know. I'm not very familiar with how all this works.
Only one user cannot log into an app via Google ID Authentication
0.197375
0
0
108
24,322,264
2014-06-20T07:39:00.000
0
0
0
0
python,django,oauth
24,472,076
2
true
1
0
Worked this out. Whoever set up the user's ID originally had a capital letter in the ID - but not in the Email addr so this wasn't showing up anywhere.
2
2
0
At work we run a python application where users log in via their google account. One user gets an "Error logging in" message on any instance, this doesn't replicate on any other instance. The app was made by a third party and they can't tell us why this happens. Is there a debugging tool or something that comes with Google auth that could be used to trace where the failure is happening? Thanks in advance. If any more technical details are needed please let me know. I'm not very familiar with how all this works.
Only one user cannot log into an app via Google ID Authentication
1.2
0
0
108
24,330,630
2014-06-20T15:22:00.000
6
0
1
0
python
24,330,660
4
true
0
0
You can do name.replace(' ','') or ''.join(name.split())
2
1
0
I have the string name = 'one two'. i want to make 'onetwo' from it. is there any cool python shortcut for it like .join() but without space?
python - how to concatenate two words from one string without spaces
1.2
0
0
3,925
24,330,630
2014-06-20T15:22:00.000
2
0
1
0
python
24,330,655
4
false
0
0
How about "".join(name.split(" ")) ?
2
1
0
I have the string name = 'one two'. i want to make 'onetwo' from it. is there any cool python shortcut for it like .join() but without space?
python - how to concatenate two words from one string without spaces
0.099668
0
0
3,925
24,333,323
2014-06-20T18:15:00.000
0
0
1
1
python,macos,pyinstaller
53,956,162
1
false
0
1
When you create the application, don't add the --windowed and --noconsole options.
1
6
0
I'm packaging a GUI app for MacOS with Pyinstaller, using --windowed flag. Is it possible to package it so that it would show a console in addition to the GUI? When I tried to set console=True, the GUI part fails. In other words, when I start the App from the terminal by typing "open My.App/Contents/MacOS/myapp", then I do get both GUI and console. I'd like to get similar behaviour by just double-clicking on the App without starting the terminal. Is there a way to do it?
How to package a Mac OS app with Pyinstaller that shows both a console and a GUI?
0
0
0
425
24,333,423
2014-06-20T18:22:00.000
2
1
0
0
python,amqp,pika
41,400,921
1
false
1
0
I would like to write the answer down because this question showed up before the documentation on Google. Keep the callback and consumer setup as in the question: def amqmessage(ch, method, properties, body): ... followed by channel.basic_consume(amqmessage, queue=queue_name, no_ack=True) and channel.start_consuming(). Inside the callback, the routing key can be found with method.routing_key (a runnable version is sketched below).
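Put together as a runnable sketch, using the older pika 0.x API that the question itself uses (pika 1.0 changed the basic_consume argument order); the broker address and queue name are assumptions.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="test", type="topic")
channel.queue_declare(queue="topic_queue", auto_delete=True)
channel.queue_bind(queue="topic_queue", exchange="test", routing_key="#")

def amqmessage(ch, method, properties, body):
    # method.routing_key holds the key the message was published with
    print(method.routing_key, body)

channel.basic_consume(amqmessage, queue="topic_queue", no_ack=True)
channel.start_consuming()
```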
1
5
0
New to RabbitMQ and I am trying to determine a way in which to retrieve the routing key information of an AMQP message. Has anyone really tried this before? I am not finding a lot of documentation that explicitly states how to query AMQP using pika (python). This is what I am trying to do: basically I have a Consumer class, for example: channel.exchange_declare(exchange='test', type='topic') channel.queue_declare(queue='topic_queue',auto_delete=True) channel.queue_bind(queue='topic_queue', exchange='test', routing_key = '#') I set up a queue and I bind to an exchange and all the routing_keys (or binding keys I suppose) being passed through that exchange. I also have a function: def amqmessage(ch, method, properties, body): channel.basic_consume(amqmessage, queue=queue_name, no_ack=True) channel.start_consuming() I think that the routing_key should be "method.routing_key" from the amqmessage function but I am not certain how to get it to work correctly.
Retrieving AMQP routing key information using pika
0.379949
0
0
3,298
24,336,306
2014-06-20T22:01:00.000
0
0
0
0
python,curl,urllib2,urllib
24,336,466
1
false
0
0
A simple way would be to run Wireshark to capture the desired requests and then replay them with a packet replay tool like tcpreplay. If you do want to modify parts of the request in curl for debugging, Wireshark will show you all the headers urllib2 is setting, so you can set them the same way in curl.
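If you would rather build the curl command directly in Python, a hedged sketch along these lines works from the Request object's own accessors (Python 2 urllib2 assumed); note it only sees headers added to the Request itself, not defaults the opener adds at send time.

```python
import pipes

def request_to_curl(req):
    """Turn a urllib2.Request into an approximate curl command string."""
    parts = ["curl", "-X", req.get_method()]
    for header, value in req.header_items():
        parts += ["-H", pipes.quote("%s: %s" % (header, value))]
    if req.has_data():
        parts += ["--data", pipes.quote(req.get_data())]
    parts.append(pipes.quote(req.get_full_url()))
    return " ".join(parts)
```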
1
0
0
How can I programmatically convert urllib2.Request object into the equivalent curl command? My Python script constructs several urllib2.Request objects, with different headers and postdata. For debugging purposes, I'd like to replay each request with curl. This seems tricky, as we must consider Bash escaping and urllib2's default headers. Is there a simple way to do this?
Translate a general urllib2.Request to curl command
0
0
1
309
24,336,343
2014-06-20T22:04:00.000
0
0
1
0
python,c++,windows,compilation,nuitka
24,336,469
1
true
0
0
It was really simple. I haven't figured out how to build properly yet, but the issue was, as cubuspl42 said, that nuitka was configured to compile with visual studio as default. nuitka recursive-all --mingw program.py
1
0
0
I've got a little script that I want to compile using Nuitka. So I installed Nuitka, then I installed minGW C++ compiler, Nuitka then asked me to install python 2.7, so I installed that as well. Running nuitka recursive-all program.py results in a large unreadable stack trace. It starts with "vsvars32.bat" is not recognized as an internal or external command. How can I fix this issue?
vsvars32.bat missing, Error while trying to building a Python program with Nuitka
1.2
0
0
490
24,336,655
2014-06-20T22:38:00.000
1
0
0
0
python-3.x,python-c-api,python-c-extension
26,024,351
1
false
0
1
The only way to do this is to create a new object with PyBufferProcs* PyTypeObject.tp_as_buffer. I checked cpython source code thoroughly, as of 3.4.1, there is no out-of-box (so to speak) solution.
1
7
0
It seems to me the buffer protocol is more for exposing a Python buffer to C. I couldn't find a way to create a bytes object using an existing buffer without copying in C. Basically what I want is to implement something similar to PyBytes_FromStringAndSize(), but without copying, and with a callback to free the buffer when the object is released. I don't know how big the buffer is before I receive the buffer returned from a C API, so creating the bytes object in Python first and filling it in later is not an option. I also looked into memoryview; PyMemoryView_FromMemory() doesn't copy, but there is no way to pass a callback to free my buffer. And I'm not sure whether Python libs (e.g. Psycopg) can use a memoryview object or not. Do I have to create my own object to achieve these 2 requirements? Any other shortcut? If I have to, how can I make sure this object works the same as bytes so I can pass it to Python libs safely? Thanks.
python c-api: create bytes using existing buffer without copying
0.197375
0
0
444
24,338,882
2014-06-21T06:14:00.000
4
0
1
0
python,python-2.7
24,338,973
2
false
0
0
No, you can install Python 2.7.7 on top of Python 2.7.6. Just be careful to specify exactly the same installation directory you used for 2.7.6.
1
3
0
How can I upgrade Python 2.7.6 to Python 2.7.7 on Windows? Should I install new version in a separate directory, change all appropriate environment variables and install all required third-party modules again?
How can I upgrade Python 2.7.6 to Python 2.7.7 on Windows
0.379949
0
0
1,160
24,344,448
2014-06-21T18:11:00.000
0
0
0
1
python,cross-platform
24,344,493
1
false
0
0
No. You can write assumptions into your program, which is what all developers do to handle these formats. It doesn't matter what extension a file has, it can be used as a format regardless. Take for example an XML file. If you take that XML data and put it into a .txt file, or simply rename the .xml file to .txt, reading from that file and parsing the data within will still render XML formats.
1
0
0
Is there a way to do this, just by relying on the file's extension? For example: os.system(filepath) opens the given filepath using the default application, but what is the executable's filepath?
How to determine the default executable for a specific file format?
0
0
0
33
24,349,335
2014-06-22T08:07:00.000
0
0
0
0
python,eclipse,web-applications,flask,pydev
24,350,506
3
false
1
0
I've had a very similar thing happen to me. I was using CherryPy rather than Flask, but my solution might still work for you. Oftentimes browsers save webpages locally so that they don't have to re-download them every time the website is visited. This is called caching, and although it's very useful for the average web user, it can be a real pain to app developers. If you're frequently generating new versions of the application, it's possible that your browser is displaying an old version of the app that it has cached instead of the most up to date version. I recommend clearing that cache every time you restart your application, or disabling the cache altogether.
1
3
0
I'm working on a simple Flask web application. I use Eclipse/Pydev. When I'm working on the app, I have to restart this app very often because of code changes. And that's the problem. When I run the app, I can see the frame on my localhost, which is good. But when I want to close this app, just click on the red square which should stop applications in Eclipse, sometimes (often), the old version of application keeps running so I can't test the new version. In this case the only thing which helps is to force close every process in Windows Task Manager. Will you give me any advice how to manage this problem? Thank you in advance. EDIT: This maybe helps: Many times, I have to run the app twice. Otherwise I can't connect.
Python/Flask: Application is running after closing
0
0
0
3,981
24,356,820
2014-06-22T23:50:00.000
0
0
1
1
python,concurrency,race-condition
24,356,913
2
false
0
0
There are likely pythonic ways, but I myself would use a process supervisor like daemontools, systemd, runit, etc. to start and supervise the Status process and ensure there is one and only one.
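One of the "pythonic ways" alluded to above (plainly a different technique from the supervisor approach): take an exclusive, non-blocking lock on a lock file at Status startup, so only one instance can ever hold it. POSIX-only sketch; the lock path is arbitrary.

```python
import fcntl
import sys

LOCK_PATH = "/tmp/status.lock"       # hypothetical, shared path

lock_file = open(LOCK_PATH, "w")
try:
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    sys.exit(0)                      # another Status instance already holds the lock
# ... run the Status server; the lock is released automatically when the process exits
```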
1
1
0
I have 2 processes: Start and Status. There can be multiple Start processes executed on the same time and there should only be 1 instance of Status process. On startup of the Start process, it will attempt to start Status. At the moment, I try to stop multiple Statuses from starting by getting the Status process to check if Status's server port has been binded to determine if there is another Status that exists and if so it will shutdown gracefully. However this has a race condition where the time it checks for the binded port, there might be another Status that had done that check and is in the process of binding that port, hence 2 Statuses will be created. Is there a process level solution to this? I have considered having another process monitoring the number of Statuses in the System but is there another approach? Edit: This is done in Python 2.6 Edit2: Both Start and Status are excuted from the shell.
How to stop multiple processes from creating multiple instances of another process?
0
0
0
572
24,360,908
2014-06-23T07:52:00.000
1
1
1
0
python,intellij-idea,keyboard-shortcuts,python-module
24,362,017
1
true
0
0
1) Install the Python plugin: Settings | Plugins | Browse repositories | "Python". 2) Add a Python SDK to the project: select the project settings, then Platform Settings | SDKs | Add New SDK | Python SDK, select a Python interpreter, and wait for configuration to complete. 3) Control+N should then work as expected in your project.
1
0
0
Using Control+N while coding JAVA in IntelliJ helps me to navigate to classes. Is there any similar functionality in IntelliJ for navigating to Python modules. Thanks
Navigate to Python module by name in Intellij keyboard shortcut
1.2
0
0
50
24,365,844
2014-06-23T12:22:00.000
0
1
0
1
python,debian
24,382,572
1
true
0
0
I fixed it with the following reinstall: apt-get install python2.7-minimal --reinstall. Reinstalling python and python-dev didn't solve it, but python2.7-minimal did the job.
1
0
0
I'm configuring a Debian 7.5 server, and up to yesterday the mail server and the policyd-spf Python plugin were running fine. I added some more Python-related libraries in order to configure Plone (python-setuptools, python-dev, python-imaging), and now the Python setup seems corrupted for some reason. If I now run policyd-spf manually, I get an ImportError on the spf module. Opening a Python interpreter and checking the sys.path, I get the following: ['', '/usr/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg', '/usr/lib/python2.7/site-packages/virtualenv-1.11.6-py2.7.egg', '/usr/lib/python27.zip', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/lib/python2.7/site-packages'] I noticed that /usr/lib/python2.7/site-packages is there, but /usr/lib/python2.7/dist-packages is missing, and that's the reason for the import error. I already tried re-installing the python and python-all packages, hoping that a reinstall would have fixed it, but I still have the same problem. Does anyone know where exactly Debian configured dist-packages to be included in the search path, and how can I recover it? thanks!
How to fix corrupted Python search path in Debian 7.5?
1.2
0
0
175
24,367,141
2014-06-23T13:25:00.000
2
0
0
0
python,machine-learning,classification,scikit-learn,text-classification
24,524,206
3
false
0
0
I think gustavodidomenico makes a good point. You can think of Naive Bayes as learning a probability distribution, in this case of words belonging to topics. So the balance of the training data matters. If you use decision trees, say a random forest model, you learn rules for making the assignment (yes there are probability distributions involved and I apologise for the hand waving explanation but sometimes intuition helps). In many cases trees are more robust than Naive Bayes, arguably for this reason.
2
15
1
I am using scikit-learn Multinomial Naive Bayes classifier for binary text classification (classifier tells me whether the document belongs to the category X or not). I use a balanced dataset to train my model and a balanced test set to test it and the results are very promising. This classifer needs to run in real time and constantly analyze documents thrown at it randomly. However, when I run my classifier in production, the number of false positives is very high and therefore I end up with a very low precision. The reason is simple: there are many more negative samples that the classifer encounters in the real-time scenario (around 90 % of the time) and this does not correspond to the ideal balanced dataset I used for testing and training. Is there a way I can simulate this real-time case during training or are there any tricks that I can use (including pre-processing on the documents to see if they are suitable for the classifer)? I was planning to train my classifier using an imbalanced dataset with the same proportions as I have in real-time case but I am afraid that might bias Naive Bayes towards the negative class and lose the recall I have on the positive class. Any advice is appreciated.
Naive Bayes: Imbalanced Test Dataset
0.132549
0
0
9,405
24,367,141
2014-06-23T13:25:00.000
11
0
0
0
python,machine-learning,classification,scikit-learn,text-classification
24,528,969
3
true
0
0
You have encountered one of the problems with classification with a highly imbalanced class distribution. I have to disagree with those that state the problem is with the Naive Bayes method, and I'll provide an explanation which should hopefully illustrate what the problem is. Imagine your false positive rate is 0.01, and your true positive rate is 0.9. This means your false negative rate is 0.1 and your true negative rate is 0.99. Imagine an idealised test scenario where you have 100 test cases from each class. You'll get (in expectation) 1 false positive and 90 true positives. Great! Precision is 90 / (90+1) on your positive class! Now imagine there are 1000 times more negative examples than positive. Same 100 positive examples at test, but now there are 1000000 negative examples. You now get the same 90 true positives, but (0.01 * 1000000) = 10000 false positives. Disaster! Your precision is now almost zero (90 / (90+10000)). The point here is that the performance of the classifier hasn't changed; false positive and true positive rates remained constant, but the balance changed and your precision figures dived as a result. What to do about it is harder. If your scores are separable but the threshold is wrong, you should look at the ROC curve for thresholds based on the posterior probability and look to see if there's somewhere where you get the kind of performance you want. If your scores are not separable, try a bunch of different classifiers and see if you can get one where they are (logistic regression is pretty much a drop-in replacement for Naive Bayes; you might want to experiment with some non-linear classifiers, however, like a neural net or non-linear SVM, as you can often end up with non-linear boundaries delineating the space of a very small class). To simulate this effect from a balanced test set, you can simply multiply instance counts by an appropriate multiplier in the contingency table (for instance, if your negative class is 10x the size of the positive, make every negative instance in testing add 10 counts to the contingency table instead of 1). I hope that's of some help at least understanding the problem you're facing.
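The arithmetic from that example, written out as a quick check (the true/false positive rates are the hypothetical ones assumed in the answer, not measured values):

```python
tpr, fpr = 0.9, 0.01                             # assumed rates from the example

pos, neg = 100, 100                              # balanced test set
tp, fp = tpr * pos, fpr * neg
print(tp / (tp + fp))                            # ~0.989 precision

pos, neg = 100, 1000000                          # 1000x more negatives
tp, fp = tpr * pos, fpr * neg
print(tp / (tp + fp))                            # ~0.009 precision
```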
2
15
1
I am using scikit-learn Multinomial Naive Bayes classifier for binary text classification (classifier tells me whether the document belongs to the category X or not). I use a balanced dataset to train my model and a balanced test set to test it and the results are very promising. This classifer needs to run in real time and constantly analyze documents thrown at it randomly. However, when I run my classifier in production, the number of false positives is very high and therefore I end up with a very low precision. The reason is simple: there are many more negative samples that the classifer encounters in the real-time scenario (around 90 % of the time) and this does not correspond to the ideal balanced dataset I used for testing and training. Is there a way I can simulate this real-time case during training or are there any tricks that I can use (including pre-processing on the documents to see if they are suitable for the classifer)? I was planning to train my classifier using an imbalanced dataset with the same proportions as I have in real-time case but I am afraid that might bias Naive Bayes towards the negative class and lose the recall I have on the positive class. Any advice is appreciated.
Naive Bayes: Imbalanced Test Dataset
1.2
0
0
9,405
24,367,155
2014-06-23T13:26:00.000
1
0
0
0
python,mysql,python-3.x,timestamp
24,368,972
2
false
0
0
time.time() returns a float; if a resolution of one second is enough, you can just truncate it and store it as an INTEGER.
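A minimal sketch of that idea, assuming a MySQL table named events with an INTEGER column named created_at (both names are made up here) and an existing DB-API connection obtained elsewhere from whichever MySQL driver you use:

import time

ts = int(time.time())  # truncate the float to whole seconds

# conn is assumed to be an existing DB-API connection (e.g. MySQLdb/PyMySQL)
cur = conn.cursor()
cur.execute("INSERT INTO events (created_at) VALUES (%s)", (ts,))
conn.commit()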
1
0
0
I'm looking to insert the current system timestamp into a field on a database. I don't want to use the server side now() function and need to use the python client's system timestamp. What MySQL datatype can store this value, and how should I insert it? Is time.time() sufficient?
Inserting a unix timestamp into MySQL from Python
0.099668
1
0
3,202
24,367,286
2014-06-23T13:32:00.000
0
0
0
0
python,image-processing,3d
24,371,384
1
false
0
1
In the case where you're interested in replacing textures on the fly, you should render your objects as UV maps. UV maps specify the pixel offset within the texture, so that once a texture is chosen, filling in the texture is a simple table lookup. You might also consider rendering at double the resolution and reducing the image size after the texture is applied. This will anti-alias any discontinuities.
1
2
0
I'm working on a project that requires user-generated images to be applied to various 3D models (mugs, t-shirts etc). I've explored numerous applications (Pyglet, Blender, Panda to name a few), and am looking for ideas/guidance as to the best approach. It appears to me that the world of 3D modelling has quite a steep learning curve (looking at you, GL); I'm just looking to invest my time wisely. Thoughts?
Looking for advice on applying textures to 3D models at run time
0
0
0
87
24,367,485
2014-06-23T13:40:00.000
0
0
0
0
java,python,amazon-web-services,amazon-ec2
24,373,021
1
true
1
0
First-time: Create a Postgres db - depending on size (small or large) you might want RDS or Redshift. Connect to Amazon Server - EC2. Download code to server - upload your programs to an S3 bucket. Once a month: Download large data file to server - move the data to S3; if using Redshift, data can be loaded directly from S3 into Redshift. Run code (written in Python) to load the database with data. Run code (written in Java) to create a Lucene search index file from data in the database - you might want to look into EMR for this. Continuously: Run Java code in a servlet container; this will use the Lucene search index file but DOES NOT require access to the database. If you have a Java WAR file, you can host this using Elastic Beanstalk. In order to connect to your database, you must make sure the security group allows for this connection, and for an EC2 instance you must make sure port 22 is open to your IP to connect to it. It sounds like the security group for RDS isn't opening up port 3306.
1
0
0
I'm used to having a remote server I can use via ssh, but I am looking at using Amazon Web Services for a new project to give me better performance and resilience at reduced costs, and I'm struggling to understand how to use it. This is what I want to do: First-time: Create a Postgres db, Connect to Amazon Server, Download code to server. Once a month: Download large data file to server, Run code (written in python) to load database with data, Run code (written in Java) to create a Lucene Search Index file from data in the database. Continuously: Run Java code in a servlet container; this will use the Lucene search index file but DOES NOT require access to the database. Note: Technically I could do the database population locally; the trouble is the resultant lucene index file is about 5GB and I don't have a good enough Internet connection to upload a file of that size to Amazon. All that I have managed to do so far is create a Postgres database, but I don't understand how to connect to it or get an ssh/telnet connection to my server (I requested a Direct Connect but this seems to be a different service). Update so far FYI: So I created a Postgres database using RDS. I created a Ubuntu linux installation using EC2. I connected to the linux installation using ssh. I installed the required software (using apt-get). I downloaded the data file to my linux installation. I think according to the installation I should be able to connect to my Postgres db from my EC2 instance and even from my local machine, however in both cases it just times out. Update 2: Probably security related, but I cannot for the life of me understand what I'm meant to do with security groups and why they don't make the EC2 instance able to talk to my database by default. I've checked that both RDS and EC2 have the same vpc id, and both are in the same availability zone. Postgres is using port 5432 (not 3306) but I haven't been able to access it yet. So taking my working EC2 instance as the starting point, should I create a new security group before creating a database, and if so what values do I need to put into it so I can access the db with psql from within my ec2 ssh session - that's all that is holding me up for now and all I need to do. Update 3: At last I have access to my database. My database had three security groups (I think the other two were created when I created a new EC2 instance); I removed two of them and in the remaining one, on the inbound tab, I set the rule to All Traffic, Ports 0-65535, Protocol All, IPAddress 0.0.0.0/0 (the outbound tab already had the same rule) and it worked! I realize this is not the most secure setup but at least it's progress. I assume that to only allow access from my EC2 instance I can change the IPAddress of the inbound rule, but I don't know how to calculate the CIDR for the ipaddress? My new problem is that having successfully downloaded my datafile to my EC2 instance, I am unable to unzip it because I do not have enough diskspace. I assume I have to use S3. I've created a bucket, but how do I make it visible as diskspace from my EC2 instance so I can: move my datafile to it, unzip the datafile into it, and run my code against the unzipped datafile to load the database. (Note the datafile is in an Xml format and has to be processed with custom code to get it into the database; it cannot just be loaded directly into the database using some generic tool.) Update 4: S3 is the wrong solution for me; instead I can use EBS, which is basically disc storage accessible not as a service but by clicking Volumes in the EC2 Console. 
Ensure you create the volume in the same Availability zone as the instance; there may be more than one in each location, for example my EC2 instance was created in eu-west-1a but the first time I created a volume it was in eu-west-1b and therefore could not be used. Then attach the volume to the instance. But I cannot see the volume from the linux commandline; it seems there is something else required. Update 5: Okay, you have to format the disk and mount it in linux for it to work. I now have my code for uploading the data to the database working, but it is running incredibly slowly, much slower than the cheap local server I have at home. I'm guessing that because the data is being loaded one record at a time, the bottleneck is not the micro database but my micro instance; it looks like I need to redo this with a more expensive instance. Update 6: Updated to a large compute instance, still very slow. I'm now thinking the issue is the network latency between server and database; perhaps I need to install a postgres server directly onto my instance to cut that part out.
How do i get started with Amazon Web Services for this scenario?
1.2
1
0
186
24,369,215
2014-06-23T15:03:00.000
1
0
1
0
debian,ipython
24,390,881
1
true
0
0
Reposting as an answer because it apparently helped: I don't think there's an environment variable for profile, but you could point it to an entirely different IPython directory with the IPYTHONDIR environment variable (this is instead of ~/.ipython, not ~/.ipython/profile_foo). Alternatively, you could alias ipython to ipython --profile foo in your script.
1
0
0
I have a very customized profile file profile_foo that is used in project foo, however I'd like to keep this configuration separate from my profile_default. Is it possible to conditionally enable the default ipython profile via some environment variable, without having to pass ipython notebook --profile foo every time I launch the notebook? Ideally I'm looking for something that can be inserted inside an existing script that is sourced before working on the project. I use debian linux.
Is it possible to set up a default ipython profile via an environment variable
1.2
0
0
48
24,371,646
2014-06-23T17:17:00.000
1
0
0
0
django,python-2.7,django-forms,django-templates,django-views
24,379,552
2
true
1
0
First of all, we will have to make sure whether it is a non_field_error or a field error. Where have you raised ValidationError in the ModelForm you have defined? If it's raised in def clean() of the Form, then it will be present in non_field_errors and can be accessed via form.non_field_errors in the template. If it is raised in def clean_<field_name>(), then it will be a field error and can be accessed via form.errors or form.<field_name>.errors in the template. Please decide for yourself where you want to raise it. Note: ModelForm can work with FormView, but ideally there are CreateView and UpdateView for that.
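As an illustration of the two places mentioned above, here is a hypothetical ModelForm sketch (the model and field names are invented) showing a field-level error and a form-level error:

from django import forms
from django.core.exceptions import ValidationError

class RegistrationForm(forms.ModelForm):
    class Meta:
        model = Profile          # hypothetical model
        fields = ["email", "password"]

    def clean_email(self):
        # Raised here -> shows up in form.errors / form.email.errors
        email = self.cleaned_data["email"]
        if Profile.objects.filter(email=email).exists():
            raise ValidationError("This email is already registered.")
        return email

    def clean(self):
        # Raised here -> shows up in form.non_field_errors
        cleaned = super(RegistrationForm, self).clean()
        if cleaned.get("password") == cleaned.get("email"):
            raise ValidationError("Password may not equal the email address.")
        return cleaned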
1
0
0
I'm using FormView with a ModelForm to process a registration form. In case of duplication of email I'm raising ValidationError. But this error message is not available on the registration template as non_field_errors. When I tried to check form.errors in the form_invalid method of RegistrationView, it shows the expected errors, but somehow they are not getting passed to the template.
How to get non_field_errors on template when using FormView and ModelForm
1.2
0
0
1,090
24,376,961
2014-06-24T01:18:00.000
0
1
1
0
python,import,pythonpath
24,377,167
1
true
0
0
I haven't had occasion to ever use a .pth file. I prefer a two-pronged approach: Use a shebang which runs env python, so it uses the first python on your path, i.e.: #!/usr/bin/env python Use virtualenv to keep separate different environments and group the necessary libraries for any given program/program set together. This has the added benefit that the requirements file (from pip freeze output) can be stored in source control, and the environment can be recreated easily anywhere, such as for use with Jenkins tests, et al. In the virtualenv case the python interpreter can be explicitly invoked from the virtualenv's bin directory. For local modules in this case, a local PyPI server can be used to centralize custom modules, and they can also be included in the requirements file (via the --extra-index option of pip). Edit with response to comment from OP: I have not used SublimeREPL before, however, based on the scenario you have described, I think the overall simplest approach might be to simply symlink the directories into your site-packages (or dist-packages, as the case may be) directory. It's not an ideal scenario for a production server, but for your purposes, on a client box, I think it would be fine. If you don't want to have to use the folder name, i.e. import ch1/foo, you'll need to symlink inside of those directories so you can simply import foo. If you're OK with using the dir name, i.e. import ch1/foo, then you should only need to symlink the top-level code directory.
1
0
0
I am new to python and trying to add a project folder to the PYTHONPATH. I created a .pth file in my site-packages folder and added my root path to it. However, when I try to import the .py files in this folder, only those located directly under the root folder (for example /sample) can be imported, but those in subfolders under the /sample folder (for example /sample/01) cannot be imported. So my question is which file to change, and how, to make the whole folder including all its subfolders importable. The worst case I can think of is to write down all the folder names in the .pth file in site-packages. But I believe that Python provides a more efficient way to achieve that.
How to use PYTHONPATH to import the whole folder in Python
1.2
0
0
287
24,379,275
2014-06-24T05:57:00.000
0
0
0
0
python,django,mercurial,fabric
24,384,040
1
false
1
0
Two options: two branches for the different environments (with env-specific changes in each, and thus an additional merge before deploy), or the MQ extension - keep "clean" code in changesets and an MQ patch for every environment on top of a single branch (which requires some care when applying and unapplying patches).
1
1
0
I've a puzzle of a development and production Django setup that I can't figure out a good way to deploy in a simple way. Here's the setup: /srv/www/projectprod contains my production code, served at www.domain.com /srv/www/projectbeta contains my development code, served at www.dev.domain.com Prod and Dev are also split into two different virtualenvs, to isolate their various Python packages, just in case. What I want to do here is to make a bunch of changes in dev, then push to my Mercurial server, and then re-pull those changes in production when stable. But there are a few things making this complicated: wsgi.py contains the activate_this.py call for the virtualenv, but the path is scoped to either prod or dev, so that needs to be edited before deployment. manage.py has a shebang at the top to define the correct python path for the virtualenv. (This is currently #!/srv/ve/.virtualenvs/project-1.2/bin/python so I'm wondering if I can just remove this to simplify things) settings.py contains paths to the templates, staticfiles, media root, etc. which are all stored under /srv/www/project[prod|dev]/* I've looked into Fabric, but I don't see anything in it that would re-write these files for me prior to doing the mercurial push/pull. Does anyone have any tips for simplifying this, or a way to automate this deployment?
Django Dev/Prod Deployment using Mercurial
0
0
0
172
24,380,269
2014-06-24T06:59:00.000
3
0
0
0
python,mysql,django,django-cms
24,380,525
2
false
1
0
This is an error message you get if MySQLdb isn't installed on your computer. The easiest way to install it would be by entering pip install MySQL-python into your command line.
1
1
0
I can't connect with mysql and I can't do "python manage.py syncdb" on it. How do I connect with mysql in django and django-cms without any error?
Getting “Error loading MySQLdb module: No module named MySQLdb” in django-cms
0.291313
1
0
9,227
24,380,332
2014-06-24T07:02:00.000
5
0
0
0
javascript,python,flask,jinja2
24,381,042
1
true
1
0
Note: by HTML I mean HTML incl. JavaScript etc. The typical round trip looks like this: the Python web app receives an HTTP request to render a page; Python code in the controller asks the Python model to prepare data for rendering the HTML page by Jinja2; the Jinja2 template renders the HTML page; the Python web app sends the resulting page back to the client. The client then clicks on some element on the page. This can result in a new HTTP request for a completely new HTML page, or it can be an AJAX request (an asynchronously performed HTTP request initiated from JavaScript on the HTML page in the browser), which asks the web app for new data or provides the web app with new information. The web app (Python) receives the request, can make changes to the model content and can return a response back to JavaScript. JavaScript receives the new data and uses it to update the HTML page in the browser. As seen, the Jinja template is only a tool which renders the HTML page. Its only direct interaction with the web app is providing the rendered HTML content; there is no way to include any user interaction in that content at the moment of rendering, as the client has not seen the page yet. The only way something on a Jinja-templated page can inform the Python code about user interaction is indirectly, by the round trip described above.
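A minimal sketch of that round trip on the Flask side (the route name and payload fields are invented for the example); the JavaScript on the page would POST the chosen item to this endpoint and update the page from the JSON response:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/item-chosen", methods=["POST"])
def item_chosen():
    # JavaScript sends the chosen item, e.g. via an AJAX POST
    chosen = request.form.get("item")
    # ... update the model / look up details for that item here ...
    return jsonify({"item": chosen, "status": "ok"})

if __name__ == "__main__":
    app.run(debug=True)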
1
1
0
How do I pass info from Jinja-templated page back to Flask? Say I print some list of items. User chooses the item, I can catch that via Javascript. What is the best practice to pass the chosen item as an argument to function that will generate this item's own page?
Passing data from Jinja back to Flask
1.2
0
0
1,433
24,380,528
2014-06-24T07:12:00.000
4
1
1
0
python,c,haskell
24,380,595
1
false
0
0
The fundamental difference between a Haskell function and a C function is in the fact that Haskell functions cannot have side effects. They cannot modify state when called and as such will return the same value when called repeatedly with the same parameters. This is not to say that you cannot have pure functions in C. I would encourage you to read articles about functional programming and maybe a tutorial in Haskell to get a clearer idea about the subject.
1
0
0
What are the main differences between functions in Haskell, Python and C? I know that a Haskell function can take a function as a parameter - is that only possible in Haskell?
What are the main differences between functions in Haskell, Python and C?
0.664037
0
0
1,052
24,380,853
2014-06-24T07:33:00.000
0
0
1
0
python,debugging,python-2.7,pycharm
24,381,256
1
true
0
0
I got it! After building suds using setup.py, a suds directory containing the sources appears in: **BUILDING_DIR\build\lib** Copy it to **C:\PythonXX\Lib\site-packages** and remove suds-Z.Z-pyX.X.egg from there; the debugger then starts importing sources from that directory and shows the code lines while debugging. Bingo!
1
3
0
I'm trying to step into library functions in PyCharm, to see what is happening there, but I can't: the debugger shows me details and variables while moving inside step by step, but I don't see the lines of code in my window. I can only tell the debugger is moving over them because it shows different internal variables. I guess that happens because the library is installed as a binary package, without sources. How should I install the library to be able to step through it using the debugger? I tried both of these installation types: I installed pip, and using it successfully installed suds. I also downloaded the suds sources (and built and installed from them, using setup.py). Neither of them shows me the internal code lines. How can I step through library code using the debugger?
Debug into libraries in python
1.2
0
0
2,421
24,381,227
2014-06-24T07:53:00.000
0
1
1
0
python,python-2.7
24,381,532
2
false
0
0
It is possible but not trivial because the processes are unrelated. You have to set up: an exclusion mechanism (a file lock should be portable across architectures) to allow a process to know if it is the first one - beware of the race condition when a server is about to exit just as a new process arrives; the first process opens a listener - a socket on an unused port should be portable; you must define a protocol to allow your processes to communicate over the socket; when a process discovers it is not the first, it contacts the first over the socket and passes its arguments. So it can work, but it is a lot of work... There could be slightly simpler solutions if you can limit yourself to a specific architecture (only Windows or only Linux) and make use of specific libraries, but it will never be a simple job.
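A rough sketch of the socket-based variant described above (the port number is arbitrary and the "protocol" is just a newline-joined argument list; a real version needs more care around races and errors):

import socket
import sys

PORT = 47200  # arbitrary fixed port used as the exclusion mechanism

def main():
    try:
        # First instance: succeed in binding and become the listener.
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("127.0.0.1", PORT))
        server.listen(5)
    except socket.error:
        # Another instance already owns the port: forward our args to it.
        client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        client.connect(("127.0.0.1", PORT))
        client.sendall("\n".join(sys.argv[1:]).encode("utf-8"))
        client.close()
        return

    while True:
        conn, _ = server.accept()
        args = conn.recv(4096).decode("utf-8").split("\n")
        conn.close()
        print("received args from another instance:", args)

if __name__ == "__main__":
    main()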
1
0
0
For example, my py script already has one instance running, and when I fire another instance with args, instead of allowing the new instance to run, make it pass its args to the main instance, or make the main instance aware of the args so it can do whatever it needs to do with them. Is something like this possible?
Python - Disable multiple py script instances and pass their args to main instance
0
0
0
288
24,386,080
2014-06-24T11:58:00.000
-2
0
0
1
python,network-programming,timestamp
24,386,429
2
false
0
0
The timestamp is in seconds. You can import datetime in python and use its fromtimestamp method to get it in an easier-to-read format, like so:
import datetime
ts = datetime.datetime.fromtimestamp(1305354670.602149)
print ts
2011-05-14 02:31:10.602149
Hope this helped.
1
1
0
I have this silly question. I analyze data packets with scapy, and there is a variable inside the packet called timestamp (TSFT), which is the time that the packet was constructed. So I grab that variable (packet[RadioTap].TSFT) but I do not know if the value is in nanoseconds or in microseconds. Could anyone inform me? I haven't seen it documented anywhere. Thanks in advance.
Mac Address Timestamp python
-0.197375
0
0
334
24,389,121
2014-06-24T14:22:00.000
0
0
1
0
python,xively
24,389,410
1
false
0
0
A generator is an object created with speed in mind. List items are generated all at once; generator items are generated when needed. So you have to "ask" for each item separately:
for datapoint in datastream.datapoints.history(start=start, end=end):
    print(datapoint)  # Or whatever you have to do to print that
1
1
0
How do you select the first value and last value (chronologically) from a datastream in Xively via Python? I'm able to select a datapoint if I know its timestamp, datastream.datapoints.get(at), but I would like to be able to select the first and last points without this prior knowledge. EDIT: The below subquestion has been answered. Thank you @alkuzad and @cmd. For one approach, I tried to fetch and return a list of datapoints with datastream.datapoints.history(start=start, end=end), but this returns <generator object history at 0x107c37050>, and I have no idea what to do with that. (I'm no expert in Python.) Which leads to a subquestion: How do you access the generator object and print a list of datapoints? Any help would be great! Thanks!
Select the first value and last value (chronologically) from a datastream in Xively
0
0
0
75
24,393,766
2014-06-24T18:28:00.000
1
0
1
0
python,internationalization,python-babel
24,394,096
1
true
0
0
I think you have to parse the date formats – that's similar to how the CLDR data itself represents the "order" of date components.
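A small sketch of that idea with Babel (the locale code is just an example); it scans the short date pattern and reports the order in which day, month and year appear:

from babel import Locale

locale = Locale.parse("en_US")                  # example locale
pattern = locale.date_formats["short"].pattern  # e.g. "M/d/yy"

order = []
for ch in pattern:
    if ch == "d" and "day" not in order:
        order.append("day")
    elif ch == "M" and "month" not in order:
        order.append("month")
    elif ch == "y" and "year" not in order:
        order.append("year")

print(order)  # e.g. ['month', 'day', 'year'] for en_US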
1
0
0
Is there a way to get the day, month and year order for a locale in Babel? I am display three input fields for a date on a web page, and I would like to get the order correct based on the user's preferred language. I know there is Locale.date_formats, but parsing date format strings to determine the order seems unreliable.
Python/Babel: Get locale-specific ordering of day, month and year?
1.2
0
0
217
24,395,368
2014-06-24T20:08:00.000
-1
0
1
0
python,sql,database,django,web
24,395,620
2
false
1
0
Hard-coding the list into the clean function and displaying it to the user is the simplest approach. Test whether any of the words are in the banned_words list, and show an error to the user: Sorry, the following words are not allowed: foo, bar, foobar
1
1
0
Let's say I have a group of words that I don't want to allow my users to include in their titles that they are going to be submitting. What are some alternatives on how to store those values besides hardcoding the list into the clean function? I thought about creating a new model that would contain all of these words that aren't allowed but I am not sure whether or not querying the database each time clean was called for that function would be slower/faster or more/less secure than just creating a separate list for the names. I do think it would be more readable if the list would get too long though.
Alternatives to creating and iterating through a list for "bad" values each time clean is called in django?
-0.099668
0
0
54
24,396,591
2014-06-24T21:25:00.000
3
0
0
0
python,sql,django
24,396,885
2
true
1
0
To do this, I would recommend breaking down each individual relationship. Your relationships seem to be: Authoring Following For authoring, the details are: Each Question is authored by one User Each User can author many questions As such, this is a One-to-Many relationship between the two. The best way to do this is a foreign key from the Question to the User, since there can only be one author. For following, the details are: Each Question can have many following Users Each User can be following many Questions As such, this is a Many-to-Many relationship. The Many-to-Many field in Django is a perfect candidate for this. Django will let you use a field through another model, but in this case this is not needed, as you have no other information associated with the fact that a user is following a question (e.g. a personal score/ranking). With both of these relationships, Django will create lists of related items for you, so you do not have to worry about that. You may need to define a related_name for this field, as there are multiple users associated with a question, and by default django would make the related set for both named user_set.
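A hypothetical models.py sketch of those two relationships (field names and related names are only suggestions):

from django.conf import settings
from django.db import models

class Question(models.Model):
    title = models.CharField(max_length=255)
    # Authoring: one user writes many questions
    author = models.ForeignKey(settings.AUTH_USER_MODEL,
                               related_name="posted_questions")
    # Following: many users follow many questions
    followers = models.ManyToManyField(settings.AUTH_USER_MODEL,
                                       related_name="followed_questions",
                                       blank=True)

With this, user.posted_questions.all() and user.followed_questions.all() give the two lists on the User side without any extra fields on the User model.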
1
2
0
I'm new to Django and I'm trying to create a simple app! I basically want to create something like StackOverflow, I have many User and many Question. I don't know how I should define the relationship between these two Models. My Requirements: I want each Question to have a single author User, and a list of User that followed the Question. I want each User to have a list of posted Question and a list of followed Question. I'm pretty much lost, and I don't know how to define my relationship. Is this a many-to-many relationship? If so, how do I have like 2 lists of Question in my User model (Posted/Followed)? Please help!
Define models in Django
1.2
0
0
85
24,397,394
2014-06-24T22:29:00.000
5
0
1
0
python,multithreading,python-3.4,concurrent.futures
24,397,894
1
true
0
0
No. ThreadPoolExecutor is just a class to help with scheduling work on multiple threads. All of the normal thread constraints still apply. To clear up some confusion: threads will run on different processors/cores as the operating system chooses, but they just won't run concurrently. The exception is that some C-based functions release the GIL temporarily while performing actions that do not need the lock.
1
4
0
I know that Python 2.7 does not allow one to run multiple threads on different cores, and you need to use the multiprocessing module in order to achieve some degree of concurrency. I was looking at the concurrent.futures module in Python 3.4. Does using a ThreadPoolExecutor allow you to run different threads on different processes, or is it still bound by GIL constraints? If not, is there a way of running threads on different processors using Python 3.4? For my use case, using multiple processes is absolutely not feasible.
Run python threads on multiple cores
1.2
0
0
3,059
24,400,012
2014-06-25T04:05:00.000
0
0
0
0
python,numpy,matplotlib,scikit-learn
24,424,870
1
false
0
0
If X is a sparse matrix, you probably need X = X.todense() in order to get access to the data in the correct format. You probably want to check X.shape before doing this though, as if X is very large (but very sparse) it may consume a lot of memory when "densified".
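A short sketch of that check and conversion, assuming X is the CountVectorizer output from the question and scipy/matplotlib are installed:

import scipy.sparse as sp
import pylab as pl

if sp.issparse(X):
    print(X.shape)      # make sure this is small enough to densify
    X = X.toarray()     # dense numpy array that pl.scatter can index

pl.scatter(X[:, 0], X[:, 1])
pl.show()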
1
0
1
I'm using scikit to perform text classification and I'm trying to understand where the points lie with respect to my hyperplane to decide how to proceed. But I can't seem to plot the data that comes from the CountVectorizer() function. I used the following function: pl.scatter(X[:, 0], X[:, 1]) and it gives me the error: ValueError: setting an array element with a sequence. Any idea how to fix this?
How to plot text documents in a scatter map?
0
0
0
99
24,401,550
2014-06-25T06:20:00.000
1
0
0
1
python,bash
24,401,731
1
false
0
0
If you compiled Python 3.4 from source, you are probably missing the development libraries for readline. The package is typically called libreadline-dev.
1
1
0
Previously I ran python 2.7 in a Debian Linux terminal (bash). I conveniently used control-f and control-b to move forward/back by word. But this does not work in the updated 3.4 version, which generates unreadable symbols instead. Is there a way to configure the control-key recognition?
Python interpreter does not recognize control keys
0.197375
0
0
54
24,408,233
2014-06-25T12:06:00.000
1
0
0
0
python,django,web-applications
24,408,309
1
false
1
0
For example, if you have an admin and a user interface, you can separate them into an admin app and a user app.
1
6
0
When are multiple apps actually used? I've been trying to find a concrete example of when multiple apps might be used, but haven't found anything. I've been reading through the docs, and following the tutorial, and it says that an app has a single functionality - what does this mean? This is open to interpretation depending on the level of detail: it could refer to the individual components of a blog perhaps (ie. the menu bar, the individual blog entries, the comments section); it could refer to the pages the visitors see, and the pages writers use to create posts; it could even refer to two separate websites running within the same server. Can someone give an example of a project which uses more the one application?
Django: When to use multiple apps
0.197375
0
0
2,868
24,410,124
2014-06-25T13:30:00.000
0
0
0
0
python,database,sqlite
24,410,155
2
false
0
0
No, the sqlite3 package is part of the Python standard library, and as soon as you have Python installed you may use sqlite functionality. As MartijnPieters noted, the actual shared library is not technically part of Python (my answer was a bit oversimplified) but comes as a shared library, which has to be installed too. Practically speaking, if you manage to install Python, you have sqlite available for your Python code. As the OP asked whether sqlite needs to be installed separately, I will not speculate on how to install a Python that is not able to work with it.
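A tiny demonstration that only the standard library and a writable file system are needed (the file name is arbitrary):

import sqlite3

conn = sqlite3.connect("example.db")   # just a file on disk, no server involved
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS people (name TEXT)")
cur.execute("INSERT INTO people (name) VALUES (?)", ("Alice",))
conn.commit()
print(cur.execute("SELECT name FROM people").fetchall())
conn.close()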
1
2
0
There are Python libraries that allow you to communicate with a database. Of course, to use these libraries there should be an installed and running database server on the computer (python cannot communicate with something that does not exist). My question is whether the above is applicable to the sqlite3 library. Can one say that this library does not need any database server to be installed (and running) on the computer? Can one say that sqlite3 needs only a file system?
Does python sqlite3 library need sqlite to be installed?
0
1
0
878
24,413,025
2014-06-25T15:38:00.000
1
0
1
0
python,wsdl,netsuite
24,416,478
3
false
0
0
Netsuite has provided toolkits for Java, .Net and PHP to access their web services. For other languages there are either third-party toolkits or you have to send raw SOAP requests. For my Python-based projects I'm using the raw SOAP requests method. I suggest that you first get familiar with Netsuite web services using any of the available toolkits and then, for Python, use this knowledge to generate raw SOAP requests. SOAPUI can also be of great help. Have you explored restlets? They are generally a good alternative to web services.
1
4
0
Using Python, I would like to pull data from NetSuite, along with adding/updating data in NetSuite. For example, I would like to create sales orders and add line items via Python. I'm aware that they have a WSDL that I could potentially use. (And I was hoping that they would also have an API, but apparently not...) Does anyone have examples working with this WSDL in Python? Are there better ways to integrate with NetSuite?
Accessing NetSuite data with Python
0.066568
0
1
6,680
24,416,062
2014-06-25T18:34:00.000
0
0
1
0
python,build,module
24,416,422
1
false
0
0
If by "using the write() function repeatedly", you mean that you open the target .py file and use the file object's write method, then, yes, that's a standard way to compose files.
1
0
0
Say I have a module X. In module X, I have a build method, which takes in a few arguments. Using those arguments, it creates a custom module and class from scratch into a certain directory. By scratch, I mean it creates the .py module from scratch, and it creates the module's class + its methods from scratch. Is using the write() function repeatedly the best way to achieve this, or is there a much simpler approach?
Python--How to create a module from another module
0
0
0
30
24,416,140
2014-06-25T18:39:00.000
0
0
0
0
python,sqlite,pandas
24,419,432
1
false
0
0
I have found the issue -- I am using SQLite Manager (Firefox Plugin) as a SQLite client. For whatever reason, SQLite Manager displays the tweet IDs incorrectly even though they are properly stored (i.e. when I query, I get the desired values). Very strange I must say. I downloaded a different SQLite client to view the data and it displays properly.
1
0
0
I am using pandas to organize and manipulate data I am getting from the twitter API. The 'id' key returns a very long integer (int64) that pandas has no problem handling (i.e. 481496718320496643). However, when I send to SQL: df.to_sql('Tweets', conn, flavor='sqlite', if_exists='append', index=False) I now have tweet id: 481496718320496640 or something close to that number. I converted the tweet id to str but Pandas SQLite Driver / SQLite still messes with the number. The data type in the SQLite database is [tweet_id] INTEGER. What is going on and how do I prevent this from happening?
Long integer values in pandas dataframe change when sent to SQLite database using to_sql
0
1
0
160
24,417,793
2014-06-25T20:18:00.000
1
0
0
0
java,python,multithreading,sockets,udp
24,419,007
1
true
0
0
"I assume that the majority of gameplay data (e.g. fine player movements) will need to be sent via UDP connections. I'm unfamiliar with UDP connections so I really don't know where to begin designing the server." UDP can be lower latency, but sometimes, it is far more important that packets aren't dropped in a game. If it makes any difference to you, World of Warcraft uses TCP. If you chose to use UDP, you would have to implement something to handle dropped packets. Otherwise, what happens if a player uses an important ability (Such as a spell interrupt or a heal) and the packet gets dropped? You COULD use both UDP and TCP to communicate different things, but that adds a lot of complexity. WoW uses only a single port for all gameplay traffic, plus a UDP port for the in-game voice chat that nobody actually uses. "How should the server be threaded? One thread per client connection that retains session info, and then a separate thread(s) to control autonomous world changes (NPCs moving, etc.)?" One thread per client connection can end up with a lot of threads, but would be a necessity if you use synchronous sockets. I'm not really sure of the best answer for this. "How should relatively large packets be transmitted? (e.g. ~25 nearby players and all of their gameplay data, usernames, etc.) TCP or UDP?" This is what makes MMORPG servers so CPU and bandwidth intense. Every action has to be relayed to potentially dozens of players, possibly hundreds if it scales that much. This is more of a scaling issue than a TCP vs UDP issue. To be honest, I wouldn't worry much about it unless your game catches on and it actually becomes an issue. "Lastly - is it safe for the gameplay server to interface with the login server via HTTP requests, how do I verify (from the login server's perspective) the gameplay server's identity - simple password, encryption?" You could easily use SSL. "Lastly - if this is relevant - I have not begun development on the client - not sure what my goals for the game itself are yet, I just want the servers to be scalable (up to ~150 players, beyond that I expect and understand that major rewrite will probably be necessary) and able to support a fair amount of players and open-world style content. (no server-taxing physics or anything like that necessary)" I wouldn't use Python for your server. It is horrendously slow and won't scale well. It's fine for web servers and applications where latency isn't too much of an issue, but for a real-time game server handling 100+ players, I'd imagine it would fall apart. Java will work, but even THAT will run into scaling issues before a natively coded server does. I'd use Java to rapidly prototype the game and get it working, then consider a rewrite in C/C++ to speed it up later. Also, something to consider regarding Python...if you haven't read about the Global Interpreter Lock, I'd make sure to do that. Because of the GIL, Python can be very ineffective at multithreading unless you're making calls to native libraries. You can get around it with multiprocessing, but then you have to deal with the overhead of communication between processes.
1
0
0
I'm working on an online multiplayer game. I already developed the login servers and database for any persistent storage; both are written in Python and will be hosted with Google's App Engine. (For now.) I'm relatively comfortable with two languages - Java and Python. I'd like to write the actual gameplay server in one of those languages, and I'd like for the latency of the client to gameplay-server connection to be as low as possible, so I assume that the majority of gameplay data (e.g. fine player movements) will need to be sent via UDP connections. I'm unfamiliar with UDP connections so I really don't know where to begin designing the server. How should the server be threaded? One thread per client connection that retains session info, and then a separate thread(s) to control autonomous world changes (NPCs moving, etc.)? How should relatively large packets be transmitted? (e.g. ~25 nearby players and all of their gameplay data, usernames, etc.) TCP or UDP? Lastly - is it safe for the gameplay server to interface with the login server via HTTP requests, how do I verify (from the login server's perspective) the gameplay server's identity - simple password, encryption? Didn't want to ask this kind of question because I know they're usually flagged as unproductive - which language would be better for me (as someone inexperienced with socketing) to write a sufficiently efficient server - assume equal experience with both? Lastly - if this is relevant - I have not begun development on the client - not sure what my goals for the game itself are yet, I just want the servers to be scalable (up to ~150 players, beyond that I expect and understand that major rewrite will probably be necessary) and able to support a fair amount of players and open-world style content. (no server-taxing physics or anything like that necessary)
Structuring a server for an online multiplayer game
1.2
0
1
1,168
24,418,748
2014-06-25T21:18:00.000
1
0
0
0
python,swig,google-nativeclient
24,420,392
1
true
0
0
Yes, the python port (BTW there is 2.7 as well as 3.3) in naclports supports C extensions. There are several in naclports already (see ports/python_modules). I don't know if any of them use swig but I don't think that would be a problem.
1
0
0
I have an application written in Python and C++. I use SWIG to wrap the C++ parts. I'm interested in porting this application to work with Chrome native client (NaCl and/or PNaCl). I see that Python 2.7 is listed on the naclports page, so presumably it won't be a problem to run the Python code. But does it support C extensions? Will it be able to load my SWIG wrapper when running under native client?
Use Python extensions with Chrome native client
1.2
0
1
199
24,419,793
2014-06-25T22:42:00.000
0
0
1
0
python,django,vagrant,ansible
24,422,753
1
false
1
0
You should probably think of it slightly differently. You create a Vagrant file which specifies Ansible as a provisioner. In that Vagrant file you also specify what playbook to use for your vagrant provision portion. If your playbooks are written in an idempotent way, running them multiple times will skip steps that already match the desired state. You should also think about what your desired end-state of a VM should look like and write playbooks to accomplish that. Unless I'm misunderstanding something, all your playbook actions should be happening inside of VM, not directly on your local machine.
1
1
0
I'm a long-time Django developer and have just started using Ansible, after using Vagrant for the last 18 months. Historically I've created a single VM for development of all my projects, and symlinked the reusable Django apps (Python packages) I create, to the site-packages directory. I've got a working dev box for my latest Django project, but I can't really make changes to my own reusable apps without having to copy those changes back to a Git repo. Here's my ideal scenario: I checkout all the packages I need to develop as Git submodules within the site I'm working on I have some way (symlinking or a better method) to tell Ansible to setup the box and install my packages from these Git submodules I run vagrant up or vagrant provision It reads requirements.txt and installs the remaining packages (things like South, Pillow, etc), but it skips my set of tools because it knows they're already installed I hope that makes sense. Basically, imagine I'm developing Django. How do I tell Vagrant (via Ansible I assume) to find my local copy of Django, rather than the one from PyPi? Currently the only way I can think of doing this is creating individual symlinks for each of those packages I'm developing, but I'm sure there's a more sensible model. Thanks!
Reusable Django apps + Ansible provisioning
0
0
0
280
24,423,645
2014-06-26T06:19:00.000
1
0
0
0
python,mysql,django,webproject
44,363,554
3
false
1
0
There is a feature called inspectdb in Django. For legacy databases like MySQL, it creates models automatically by inspecting your db tables. The output is stored in your app as models.py, so you don't need to type all the columns manually. But read the documentation carefully before creating the models, because it may affect the DB data. I hope this will be useful for you.
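For reference, a small sketch of running inspectdb from Python and capturing the generated models (the output file name is arbitrary); the usual alternative is simply running manage.py inspectdb from the command line and redirecting its output:

from django.core.management import call_command

# Write the auto-generated model definitions to a file for review
with open("generated_models.py", "w") as out:
    call_command("inspectdb", stdout=out)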
1
2
0
This might sound like a bit of an odd question - but is it possible to load data from a (in this case MySQL) table to be used in Django without the need for a model to be present? I realise this isn't really the Django way, but given my current scenario, I don't really know how better to solve the problem. I'm working on a site, which for one aspect makes use of a table of data which has been bought from a third party. The columns of interest are liklely to remain stable, however the structure of the table could change with subsequent updates to the data set. The table is also massive (in terms of columns) - so I'm not keen on typing out each field in the model one-by-one. I'd also like to leave the table intact - so coming up with a model which represents the set of columns I am interested in is not really an ideal solution. Ideally, I want to have this table in a database somewhere (possibly separate to the main site database) and access its contents directly using SQL.
Loading data from a (MySQL) database into Django without models
0.066568
1
0
1,496
24,424,745
2014-06-26T07:29:00.000
0
0
0
0
python,firefox,selenium,webdriver
24,430,044
1
false
0
0
I used tempdir = tempfile.mkdtemp(suffix='foo',prefix='bar',dir=myTemp) and it worked. Thanks
1
0
0
I execute my Selenium tests on FF16, Selenium 2.33, Python on Linux. I have created separate firefox profiles corresponding to my devices. I observed a directory 'webdriver-py-profilecopy' is created in tmp directory. And I see that this directory is deleted after completion of tests. But sometimes these directories are not cleared. The size of this directory is around 28mb. I want to change the tmp directory location. Is there a way to change temp file location? In Java webdriver provides a way to define our own temp directory. Is there a way to do it in Python webdriver TemporaryFilesystem.setTemporaryDirectory
Change temporary file location in python webdriver
0
0
1
1,294
24,427,882
2014-06-26T10:16:00.000
1
0
0
0
python,download,tor
30,506,125
3
false
0
0
Httrack does not work with Tor, but you can download TorCap2, start Tor, set proxy to 127.0.0.1:9150 (or other port, check it with netstat), enter program location and parameters. I use wget and it's working great.
1
4
0
I'm trying to download an "onion" site. I was trying to use Httrack and Internet Download Manager, unfortunately with no success. How can I download a Tor "onion" website to a depth of 1/2?
Is it possible to download a Tor onion website?
0.066568
0
1
4,525
24,430,817
2014-06-26T12:40:00.000
2
0
0
0
python,database,django,models
24,431,142
4
false
1
0
In the admin interface, you can go to the list page for that model, then you can select all models and use the Delete selected ... action at the top of the table. Remember that, in whatever way you delete the data, foreign keys default to ON DELETE CASCADE, so any model with a foreign key to a model you want to delete will be deleted as well. The admin interface will give you a complete overview of models that will be deleted.
1
3
0
I have a table named UGC and would like to clear all the data inside that table. I don't want to reset the entire app, which would delete all the data in all the other models as well. Is it possible to clear only one single model? I also have South configured with my app, if that would help.
Django 1.6: Clear data in one table
0.099668
0
0
3,762
24,431,664
2014-06-26T13:18:00.000
0
0
1
0
python,function,parameters,arguments
24,432,340
4
false
0
0
Don't think too much into it. It's an informal use of the term that's not entirely correct, but still more-or-less accepted. Just assume Codecademy means parameter in this context. Have fun learning!
1
0
0
I have a short and simple question. I have been learning python through the Codecademy website and I came across a section which gives you an exercise. Here is a part of the exercise's text: Below your existing code, define a function called rental_car_cost with an argument called days. My question is, why does the exercise call days an argument? Isn't it supposed to be a parameter? Because an argument is a value which you give to a function while calling it. Please help me. Thanks
vague explanation of an exercise
0
0
0
96
24,433,535
2014-06-26T14:44:00.000
1
1
0
0
python-2.7,serial-port,usb
24,433,581
1
true
0
0
Regular ASCII works for me with our FTDI cables. You may also need to terminate the string with a \n.
1
0
0
I am using a combination of the FTDI usb driver and the python-serial library to communicate with a USB led light. When I am writing a value to the serial port (to turn the light on), can I pass regular ASCII text or does it need to be the hex equivalent?
python and serial ports - regular or fancy text?
1.2
0
0
35
24,436,952
2014-06-26T17:42:00.000
0
0
0
0
python,django,amazon-web-services,amazon-s3,django-staticfiles
24,622,541
1
true
1
0
I think my problem was related to bucket policies. I am not sure, as I tried many different things, but I would bet that's the one that made it work.
1
0
0
So I have my Django site and I am trying to serve my static files from S3, but I am getting an ERR_INSECURE_RESPONSE when my site is on a production server. If I click on the link and accept it, it then loads the page. I am using django-storages and on my local machine everything works fine (my S3 credentials are ok) but when I deploy to production I get the error. Do I need to have https enabled on my site to be able to serve static files through S3? What should I do? Thanks
Django + Amazon S3 not loading static files on production
1.2
0
0
1,241
24,440,210
2014-06-26T21:04:00.000
0
0
1
1
python,multiprocessing,pipe
24,442,244
1
false
0
0
A few suggestions for transferring unpicklable raw data back from multiprocessing workers: 1) have each worker write to a database or file (or print to the console) 2) translate the raw data into a string, to return to the parent. If the parent is just logging things then this is the easiest. 3) translate to JSON, to return to the parent. This solution is best if the parent is aggregating data, not just logging it.
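A small sketch of option 3 - the worker serializes its result to JSON before sending it through the pipe, so the parent never has to unpickle the custom class (the result structure here is made up):

import json
from multiprocessing import Process, Pipe

def worker(conn):
    result = {"energy": 1.23, "steps": 1000}   # stand-in for the simulation result
    conn.send(json.dumps(result))              # a plain string crosses the pipe safely
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    data = json.loads(parent_conn.recv())
    p.join()
    print(data)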
1
0
0
I have a long-running process running a simulation using python's multiprocessing. At the end, the process sends the results through a pipe back to the main process. The problem is that, I had redefined the class for the results object, so I knew it would give me an unpickling error. In an attempt to head this off, I got the file descriptor of the pipe and tried to open it with os.fdopen. Unfortunately, I got a "bad file descriptor" error, and now I get the same if I try to receive from the pipe. Because this is a very long simulation, I don't want to kill the process and start over. Is there any way to get the object out of the pipe OR just access the namespace of the child process so that I can save it to disk? Thanks so much.
Python multiprocessing broken pipe, access namespace
0
0
0
665
24,442,307
2014-06-27T00:28:00.000
0
0
1
1
ipython,spyder,qtconsole
24,472,251
1
true
0
0
You have two problems here: The Anaconda launcher hasn't been ported to Python 3 yet, which is why you can't find it. To fix the ValueError: unknown locale: UTF-8 problem, you need to: open a terminal; write this command in it: nano ~/.bashrc (nano is a terminal-based editor); paste this text into nano: export LANG=en_US.UTF-8 and export LC_ALL=en_US.UTF-8; hit these keys to save: Ctrl+O, Enter, then Ctrl+X to exit. Close that terminal, open a new one and try to start spyder. Everything should be fixed now.
1
1
0
I just did a clean install of anaconda 2.0 (python 3.4) on my mac osx after uninstalling the previous version of anaconda. I used the graphical installer but the launcher is missing in the ~/anaconda directory. I tried running spyder and ipython from the terminal but I got long error messages that ended with: ValueError: unknown locale: UTF-8 I am a newbie to python programming and this is quite unnerving for me. I have gone through related answers but I still need help. Guys, please kindly point me in the right direction. Thanks.
Can't find launcher in ~/anaconda 2.0 in mac osx
1.2
0
0
1,398
24,442,775
2014-06-27T01:42:00.000
0
0
0
0
python,wxpython,wxwidgets
24,454,132
1
false
0
1
I don't know why exactly this happens, but it looks like a bug in wxWidgets. In practice, this means that you shouldn't rely on this behaviour, because it might (and in fact I'm pretty sure it does) behave differently under other platforms and could also change in future wxWidgets versions.
1
1
0
I am learning wxpython and have a question. When I create a treectrl within framework and call framework.show(), the first item in the treectrl is automatically selected (i.e., EVT_TREE_SEL_CHANGED event is fired). However, when I create a treectrl in a panel, add the panel to a notebook and add the notebook to framework, the EVT_TREE_SEL_CHANGED event is not fired when the framework.show() is called. Instead, when I select an item in the treecontrol later after the initial rendering, two EVT_TREE_SEL_CHANGED are fired (one for the first item which is supposed to be fired during the initial rendering and the other one for the selected item). panel.SetFocus() in the bottom of framework.__init__() fix this problem -- i.e., fires EVT_TREE_SEL_CHANGED to select the first item during the initial rendering. But, I wonder why this is happening. Does anybody know why EVT_TREE_SEL_CHANGED is blocked in the initial rendering when the tree control is contained in the panel of notebook?
wxpython: EVT_TREE_SEL_CHANGED event in treectrl in notebook when created
0
0
0
291
24,443,621
2014-06-27T03:48:00.000
2
0
0
1
python,cx-freeze
24,457,483
1
true
0
0
cx_Freeze doesn't really compile your code. It really just packages up your Python code along with the Python interpreter, so that when you launch your application, it sets up a Python interpreter and starts running your Python code. It has the necessary machinery to run from either Python source code or bytecode, but it mostly stores modules as bytecode, because that's quicker to load. Options like Cython and Nuitka go a step further - they translate your code to C and compile it to machine code, but they still use the Python VM machinery. It's just compiled code calling Python functionality rather than the VM running Python bytecode.
1
4
0
Does cx_freeze contain its own compiler that goes from Python -> binary? Or does it translate it (e.g. to C), and compile the translated code? Edit: It appears to be compiled to byte-code. So does this mean a cx_freeze exe is just the byte-code -> binary part of the Python interpreter?
How does cx_freeze compile a Python script?
1.2
0
0
226
24,446,884
2014-06-27T08:03:00.000
1
1
1
0
python,eclipse,syntax-highlighting
24,447,038
1
true
0
0
Assuming you use the PyDev plug-in you can access the color settings in the Window/Preferences/PyDev/Editor menu.
1
0
0
I am using Eclipse Indigo for python coding. When I comment something, I want the color of the comment to be blue. How can I achieve this? Thanks
Changing python syntax coloring in eclipse
1.2
0
0
442
24,446,966
2014-06-27T08:09:00.000
1
1
1
1
python,project
24,447,080
1
false
0
0
Scripts can be used as stand-alone programs for tasks both simple and complex. When you put them in a bin directory, and have the bin directory in your PATH, you can execute them just like an exe, assuming you have configured the interpreter correctly (in Windows), or have put #!/usr/bin/python as the top line for Linux. For example, you might write a Python script that computes the mean of a list of numbers passed into stdin, stick it in your bin directory, and execute it just like you would a C program for the same purpose.
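That stdin-mean example might look roughly like this (save it in your bin directory, make it executable, and pipe numbers into it, one per line):

#!/usr/bin/env python
import sys

# Read one number per line from stdin and print their mean
numbers = [float(line) for line in sys.stdin if line.strip()]
if numbers:
    print(sum(numbers) / len(numbers))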
1
0
0
I have been following LPTHW ex. 46 in which it says to put a script in bin directory that you can run. I don't get the idea of using script when you have modules. What extra significance do scripts provide? Are scripts executable *.exe files(in case of windows) rather than modules which are compiled by python? If modules provide all the code needed for the project then do scripts provide the code needed to execute them? How are scripts and modules linked to each other, if they do so?
What do scripts(stored in bin directory of the project) do in addition to modules in a python project?
0.197375
0
0
59
24,447,455
2014-06-27T08:38:00.000
1
0
1
0
python,python-3.x
24,447,547
3
false
0
0
It depends on how you process them. If you have enough memory, you can read all the files first and convert them to python data structures, then do the calculations. If your files don't fit into memory, probably the easiest way is to use some distributed computing mechanism (hadoop or other lighter alternatives). Another smaller improvement could be to use the fadvise linux function call to tell the operating system how you will be using the file (sequential reading or random access), so it can optimize file access. If the calculations fit into common libraries like numpy or numexpr, which have a lot of optimizations, you can use them (this can help if your computations use non-optimized algorithms to process the data).
2
1
0
I have around 60 files, each containing around 900000 lines, where each line is 17 tab-separated float numbers. For each line I need to do some calculation using the corresponding lines from all 60 files, but because of their huge sizes (each file is 400 MB) and limited computation resources, it takes a very long time. I would like to know if there is any solution to do this faster.
Working with multiple Large Files in Python
0.066568
0
0
326
24,447,455
2014-06-27T08:38:00.000
0
0
1
0
python,python-3.x
24,447,847
3
false
0
0
A few options: 1. Just use the memory If you have 17x900000 = 15.3 M floats/file. Storing this as doubles (as numpy usually does) will take roughly 120 MB of memory per file. You can reduce this by storing the floats as float32, so that each file will take roughly 60 MB. If you have 60 files and 60 MB/file, you have 3.6 GB of data. This amount is not unreasonable if you use 64-bit python. If you have less than, say, 6 GB of RAM in your machine, it will result in a lot of virtual memory swapping. Whether or not that is a problem depends on the way you access data. 2. Do it row-by-row If you can do it row-by-row, just read each file one row at a time. It is quite easy to have 60 open files, that'll not cause any problems. This is probably the most efficient method, if you process the files sequentially. The memory usage is next to nothing, and the operating system will take the trouble of reading the files. The operating system and the underlying file system try very hard to be efficient in sequential disk reads and writes. 3. Preprocess your files and use mmap You may also preprocess your files so that they are not in CSV but in a binary format. That way each row will take exactly 17x8 = 136 or 17x4 = 68 bytes in the file. Then you can use numpy.mmap to map the files into arrays of [N, 17] shape. You can handle the arrays as usual arrays, and numpy plus the operating system will take care of optimal memory management. The preprocessing is required because the record length (number of characters on a row) in a text file is not fixed. This is probably the best solution, if your data access is not sequential. Then mmap is the fastest method, as it only reads the required blocks from the disk when they are needed. It also caches the data, so that it uses the optimal amount of memory. Behind the scenes this is a close relative to solution #1 with the exception that nothing is loaded into memory until required. The same limitations about 32-bit python apply; it is not able to do this as it runs out of memory addresses. The file conversion into binary is relatively fast and easy, almost a one-liner.
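A sketch of the row-by-row idea from option 2, assuming the 60 file paths are already collected in a list named paths; it walks all files in lockstep, one line from each at a time:

# paths is assumed to be a list of the 60 file names
handles = [open(p) for p in paths]
try:
    # zip is lazy in Python 3; on Python 2 use itertools.izip to avoid
    # reading every file fully into memory
    for rows in zip(*handles):
        values = [[float(x) for x in row.split("\t")] for row in rows]
        # ... do the per-line calculation over the 60 parsed rows here ...
finally:
    for h in handles:
        h.close()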
2
1
0
I have around 60 files, each containing around 900000 lines, where each line is 17 tab-separated float numbers. For each line I need to do some calculation using the corresponding lines from all 60 files, but because of their huge sizes (each file is 400 MB) and limited computation resources, it takes a very long time. I would like to know if there is any solution to do this faster.
Working with multiple Large Files in Python
0
0
0
326
24,450,211
2014-06-27T11:01:00.000
1
0
1
0
python,dictionary,spss
24,466,862
1
true
0
0
A Python dictionary is an in-memory hash table where lookup of individual elements requires fixed time, and there is no deterministic order. SPSS data files are disk-based and sequential and are designed for fast, in-order access for arbitrarily large amounts of data. So these are intended for quite different purposes, but there is nothing stopping you from using a Python dictionary within Statistics using the apis in the Python Essentials to complement what Statistics does with the casewise data.
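For reference, a minimal illustration of what a Python dictionary gives you (nothing SPSS-specific is assumed here):

```python
# Lookup by key is a hash-table operation: roughly constant time,
# independent of how many items the dictionary holds.
ages = {"alice": 34, "bob": 29, "carol": 41}
print(ages["bob"])       # 29 -- direct lookup, no scan through the data
ages["dave"] = 52        # insertion is also constant time on average
print("eve" in ages)     # False -- membership test without iterating
```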
1
1
1
I was trying to Google above, but knowing absolutely nothing about SPSS I wasn't sure what search phrase I should be using. From my initial search (tried using words: "Dictionary" and "Scripting Dictionary") it seems there is something called Data Dictionary in SPSS, but description suggest it is not the same as Python Dictionaries. Would someone be kind enough just to confirm that SPSS has similar functionality and if yes, can you please suggest key words to be used in Google? Many thanks dce
SPSS equivalent of Python Dictionary
1.2
0
0
176
24,453,842
2014-06-27T14:06:00.000
0
0
0
0
python,xml
24,454,049
2
false
0
0
I recommend parsing the XML document with a SAX parser; this gives you great flexibility to make your changes and to write the document back as it was. Take a look at the xml.sax modules (see Python's documentation).
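A minimal sketch of the idea (Python 3; the <setting> element and its attributes are hypothetical, and note that a plain ContentHandler does not see comments — preserving those needs the lexical-handler property as well):

```python
import xml.sax
from xml.sax.saxutils import XMLGenerator

class RewriteHandler(XMLGenerator):
    """Echo the document back out, changing one attribute value on the way."""
    def startElement(self, name, attrs):
        if name == "setting" and attrs.get("name") == "timeout":
            attrs = dict(attrs.items())   # copy so we can modify it
            attrs["value"] = "30"
        XMLGenerator.startElement(self, name, attrs)

with open("config_out.xml", "w", encoding="utf-8") as out:
    xml.sax.parse("config.xml", RewriteHandler(out, encoding="utf-8"))
```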
1
2
0
I want to be able to edit existing XML config files via Python while preserving the formatting of the file and the comments in them, so that it is still human readable. I will be updating existing XML elements and changing values, as well as adding new XML elements to the file. Available XML parsers such as ElementTree and lxml are great ways to edit XML files, but you lose the original formatting (when adding new elements to the file) and the comments that were in the file. Using regular expressions seems to be an option, but I know that this is not recommended with XML. So I'm looking for something along the lines of a Pythonic XML file editor. What is the best way to go about this? Thanks.
What is the best option for editing XML files in Python that preserve the original formatting of the file?
0
0
1
1,103
24,454,538
2014-06-27T14:39:00.000
5
1
0
0
python,outlook,win32com
24,454,678
1
true
0
0
If you configured a separate POP3/SMTP account, set the MailItem.SendUsingAccount property to an account from the Namespace.Accounts collection. If you are sending on behalf of an Exchange user, set the MailItem.SentOnBehalfOfName property
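A hedged sketch of both options with win32com (the account display name and addresses are assumptions; the direct assignment to SendUsingAccount works in common Outlook/pywin32 setups, but some combinations are known to need a workaround):

```python
import win32com.client

outlook = win32com.client.Dispatch("Outlook.Application")
mail = outlook.CreateItem(0)  # 0 = olMailItem

# Option A: pick one of the configured accounts (e.g. the POP3/SMTP one).
for account in outlook.Session.Accounts:
    if account.DisplayName == "reports@example.com":   # hypothetical account
        mail.SendUsingAccount = account
        break

# Option B: send on behalf of an Exchange user you have rights for.
# mail.SentOnBehalfOfName = "reports@example.com"

mail.To = "someone@example.com"
mail.Subject = "Automated report"
mail.Body = "Hello from Python."
mail.Send()
```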
1
3
0
I am trying to automate emails using python. Unfortunately, the network administrators at my work have blocked SMTP relay, so I cannot use that approach to send the emails (they are addressed externally). I am therefore using win32com to automatically send these emails via outlook. This is working fine except for one thing. I want to choose the "FROM" field within my python code, but I simply cannot figure out how to do this. Any insight would be greatly appreciated.
Choosing "From" field using python win32com outlook
1.2
0
1
2,282
24,457,479
2014-06-27T17:18:00.000
1
0
0
0
wxpython
24,457,910
1
true
0
1
wxPython does not support this behavior. You might be able to fake it by creating lots of custom widgets or by drawing everything, but it will be a lot of work. You would be better off switching to a different toolkit that has this sort of thing builtin. wxPython is for developers that want to make applications that look native on the target OS.
1
0
0
My goal is to create a window that has a variable level of transparency and no standard border. On top of that area I would like to display opaque items, especially text, that might need to be made transparent. I have tried using SetTransparency methods, SetBackgroundColor and wx.TRANSPARENT_WINDOW styles, but haven't had any luck in keeping the children's transparency level independent from the parent window's. I have started looking into the graphics and draw methods, but I'm not sure whether this result is even possible to implement in wxPython. Should I be using a different tool, or can this be achieved in wxPython?
Is it possible to have a transparent window while having opaque children?
1.2
0
0
328
24,462,670
2014-06-28T01:10:00.000
1
0
1
0
c#,python,architecture,domain-model,mda
33,035,174
1
false
0
0
What you called "abstract model" in MDA is called Platform Independent Model (PIM), and its implementation in C# and/or Python is called Platform Specific Model (PSM). It is supposed that there exist tranformations/code-generators from PIM to PSM's, so depending on how these code-generations work you will get appropriate C# and Python source code. Usually, such tools provide some means to control the code generated. And such control usually done via PIM annotations which are specific to PSM you are generating. Hope this helps.
1
1
0
I am, as a hobby and best-practice exercise, implementing the same domain model (a simple GPS/GIS library, inspired by the ISO 191xx standards and the OGC Abstract Model) both in Python and C#. At first, I thought: "well, ISO/OGC gave me a finished UML, so I will have each class in C# and in Python share the same signature". I quickly found myself stuck in the "strict/static vs duck typing" problem, since I cannot count on method signatures in Python. For example: Overloading constructors is quite common and natural in C#, but in Python you have to resort to *args **kwargs and conditionals; Properties are encouraged in C#, but most source code I have seen around in Python tends to set the fields directly, although the use of @property or property() is quite straightforward. (and so on). There is obviously a very well documented "difference in mindsets" between one language and the other, and I would like to respect those differences while at the same time ending up with "the same" application, that is, an equivalent domain model, architecture and functionality. So my question basically is: If I am to implement the same abstract model ("UML-like") in Python and C#, how should I proceed, and specifically, which constructs should be invariant and which should be different?
Implement same domain model in Python and C# - What should be the same and what should vary?
0.197375
0
0
151
24,462,834
2014-06-28T01:39:00.000
4
0
1
0
python,string
24,463,178
3
false
0
0
"That raised a question - what is a string and what difference does it have from a non-string?" It sounds like python is your first language. That being said, for conceptual sake, a string is text, and a 'non-string' is a number. You will see why this is not quite true as you program more, but for understanding the difference between a string and a 'non-string' this will suffice. You can do math with 'non-strings'. "2" is a string, but 2 is a 'non-string'. Adding strings is NOT the same as arithmetic addition. "2" + "2" results in another string "22" (this operation is called concatenation ), but 2 + 2 results in a 'non-string' A.K.A. the NUMBER (not string) 4, because the addition is arithmetic addition.
2
0
0
I'm beginning to learn the basics of Python. I had just learned that str() turns non-strings into strings - for example: str(2) would change 2 to "2". That raised a question - what is a string, and what difference does it have from a non-string? I've googled this, but I could not find this question directly answered, and the general explanations don't quite make it clear for me.
What is the difference between a string and non-string?
0.26052
0
0
9,177
24,462,834
2014-06-28T01:39:00.000
3
0
1
0
python,string
24,462,861
3
false
0
0
A string is any sequence of characters — not just numbers, but letters and punctuation and all of Unicode. Something that isn't a string is... not that. :) (There are lots of things that aren't strings! String isn't special.) For example, 2 is an int. You can do math on an int, because it's a number. But you can't do math on a str like "2"; it's only the way we write the number in Western mathematics, not the number itself. You couldn't ask "dog" to wag its tail, either, because it's not a real dog; it's just the written word "dog". As a more practical example: 2 + 2 gives you 4, the result of combining two numbers. "2" + "2" gives you "22", the result of combining two written "words".
2
0
0
I'm beginning to learn the basics of Python. I had just learned that str() turns non-strings into strings - for example: str(2) would change 2 to "2". That raised a question - what is a string, and what difference does it have from a non-string? I've googled this, but I could not find this question directly answered, and the general explanations don't quite make it clear for me.
What is the difference between a string and non-string?
0.197375
0
0
9,177
24,462,898
2014-06-28T01:55:00.000
0
0
1
0
python,standards,terminology
24,463,074
3
false
0
0
I'm not a rockstar in Python, but considering the nature of lists, I think it would be more appropriate to call it a list of lists. However, it's also valid to call it an N-dimensional list. Just out of curiosity, if you search in Google: python 2d array: 853k results python 2d list: 2,590k results python list of list: 68,300k results As you can see, "list of list" is the most used.
3
0
0
Because in Python list is the built-in data type, not array, I see many questions in Python referring to this type of data differently: as a 2d array, a 2d list, a list of lists, a table, and a variety of other expressions. What's the most appropriate standard?
What's the most commonly used terminology for a 2d array/list of lists in Python?
0
0
0
101
24,462,898
2014-06-28T01:55:00.000
0
0
1
0
python,standards,terminology
24,462,951
3
false
0
0
For reference, I have been writing code professionally for a few years now and just started a new job 2 months ago where we use Python. I feel like most Python people will know what you are talking about if you say "this function accepts a 2D array of data". But as Mr. BrenBarn has stated, I think the 'proper' terminology would be a list of lists. Python lists are more flexible than typical arrays. In many languages an array refers to a series of similar values, like an array of ints or an array of strings. Python lists are not limited to a single data type within the list, and therefore I think using the list or list-of-lists terminology is the way to go if you want to be proper. For me personally, I have it ingrained in my memory to just call it an array.
3
0
0
Because in Python list is the built-in data type, not array, I see many questions in Python referring to this type of data differently: as a 2d array, a 2d list, a list of lists, a table, and a variety of other expressions. What's the most appropriate standard?
What's the most commonly used terminology for a 2d array/list of lists in Python?
0
0
0
101
24,462,898
2014-06-28T01:55:00.000
4
0
1
0
python,standards,terminology
24,462,925
3
true
0
0
I think only "list of lists" makes sense. Terms like "2d array" and "table" misleadingly imply that tabular structure is tracked or encoded in the data, which it isn't. That is, if you have [[1, 2], [3, 4], [5, 6]], nothing stops you from appending an item to just one of the lists to get [[1, 2], [3, 4, 88], [5, 6]], which is no longer a tabular structure as the "rows" have different lengths. The outer list does not "know" that what it contains is other lists, so it can't be used in any special way as a "table"; it's just a list, and if you want to use the lists inside it, you have to get them as you would any other list item. For this reason, I think it's best to avoid terms that suggest that a list of lists is some structure in and of itself, with its own properties apart from those of lists. It's not. A list of lists is just a list of lists, and it has no functionality above and beyond that of the lists that make it up. This is in contrast to true tabular data structures like numpy arrays, which enforce the dimensionality and prevent you from doing things like creating rows of unequal size.
3
0
0
Because in Python list is the built-in data type, not array, I see many questions in Python referring to this type of data differently: as a 2d array, a 2d list, a list of lists, a table, and a variety of other expressions. What's the most appropriate standard?
What's the most commonly used terminology for a 2d array/list of lists in Python?
1.2
0
0
101
24,463,587
2014-06-28T04:30:00.000
2
0
0
0
python,django,heroku,virtualenv,gunicorn
24,469,221
3
false
1
0
One of the changes in later versions of gunicorn is that it no longer logs to stdout/stderr by default. Add the argument --log-file=XXX, then examine that log file to see what port it's running on.
1
3
0
I'm trying to deploy my Django app on Heroku. After following the steps in the official document, the dyno I launched always crashes. I went through the whole process, and I think the problem might lie in the gunicorn part. Following the instructions, I set the Procfile to 'web: unicorn hellodjango.wsgi', and when I run $foreman start, it only shows "21:21:07 web.1 | started with pid 77969". It didn't say where the web process is launched. Then I tried to test whether gunicorn is working on its own, so I ran "$gunicorn hellodjango.wsgi:application", and it indeed doesn't work. I think the path is correct, because in the current folder there's a hellodjango folder and inside it there's the file wsgi.py. What might be the problem?
Gunicorn doesn't work
0.132549
0
0
1,863
24,464,913
2014-06-28T08:05:00.000
1
1
1
0
python,obfuscation
24,464,932
5
false
0
0
You can try converting them into executable files using something like pyinstaller or py2exe although that will increase the distribution size.
1
1
0
How can I obfuscate / hide my Python code from the customer, so that he cannot change the source however he likes? I know there is no effective way to hide Python code so that there is no way to read it. I just want a simple protection, so that someone who doesn't really know what he is doing cannot just open the source files with a text editor and make changes, or understand everything easily with no effort. Because my code is written in a really understandable way, I'd like to hide the main principles I used in the first place. If someone really wants to understand what I have done, he will. I know that. So is there a common method you use to apply a simple protection to Python code?
Hiding Python Code from non-programmers
0.039979
0
0
791
24,473,156
2014-06-29T04:27:00.000
0
0
0
0
python,pygame
24,473,291
3
false
0
1
You can use the graphics library and its getMouse method.
1
0
0
Hi, I am trying to make a punny cookie-clicker-type game called py clicker, and I have made an invisible circle over the sprite, which is a pie. How do I detect whether the mouse is within the circle, so that when the user clicks, the program checks if the click is inside the circle and adds one to the counter?
Python 2.7.7/Pygame - How to detect if the mouse is within a circle?
0
0
0
127
24,473,765
2014-06-29T06:40:00.000
-1
1
0
1
python,c,segmentation-fault
24,473,958
3
false
0
0
Segfault... Check whether the number and types of the arguments you pass to that C function (in the .so) are correct. If they do not line up with what the function expects, the usual result is a segfault.
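If the .so is being called through ctypes (an assumption — the question doesn't say which binding is used), declaring the expected argument and return types lets ctypes raise a Python error on a bad call instead of corrupting the stack. The function name and signature below are hypothetical:

```python
import ctypes

lib = ctypes.CDLL("./some_c_thing.so")

# Declare the prototype of a hypothetical do_stuff(int, double, const char*).
lib.do_stuff.argtypes = [ctypes.c_int, ctypes.c_double, ctypes.c_char_p]
lib.do_stuff.restype = ctypes.c_int

# A call with the wrong number or types of arguments now raises
# ctypes.ArgumentError rather than segfaulting somewhere inside the C code.
result = lib.do_stuff(3, 2.5, b"label")
```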
1
0
0
I have a caller.py which repeatedly calls routines from some_c_thing.so, which was created from some_c_thing.c. When I run it, it segfaults - is there a way for me to detect which line of c code is segfaulting?
Finding a line of a C module called by a python script that segfaults
-0.066568
0
0
77