Dataset schema (column: dtype, value or string-length range):
Available Count: int64, 1 to 31
AnswerCount: int64, 1 to 35
GUI and Desktop Applications: int64, 0 to 1
Users Score: int64, -17 to 588
Q_Score: int64, 0 to 6.79k
Python Basics and Environment: int64, 0 to 1
Score: float64, -1 to 1.2
Networking and APIs: int64, 0 to 1
Question: string, lengths 15 to 7.24k
Database and SQL: int64, 0 to 1
Tags: string, lengths 6 to 76
CreationDate: string, lengths 23 to 23
System Administration and DevOps: int64, 0 to 1
Q_Id: int64, 469 to 38.2M
Answer: string, lengths 15 to 7k
Data Science and Machine Learning: int64, 0 to 1
ViewCount: int64, 13 to 1.88M
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
Other: int64, 1 to 1
Title: string, lengths 15 to 142
A_Id: int64, 518 to 72.2M
1
2
0
0
3
0
0
0
I'm writing git commands through a Python script (on Windows). When I double-click on myScript.py, the commands are launched in the Windows Command Prompt. I would like to execute them in Git Bash. Any idea how to do that without opening Git Bash and typing python myScript.py?
0
python,git
2014-06-09T15:05:00.000
1
24,123,128
At the top of your Python file add #!/usr/bin/python. Then you can rename it (mv myScript.py myScript) and run chmod 755 myScript, which makes it so you can run the file with ./myScript. Look into adding the file's directory to your PATH, or symlinking it onto the PATH, if you want to be able to run it from anywhere.
0
28,190
false
0
1
Launch Python script in Git Bash
24,123,328
1
1
0
0
3
0
0
1
Is it possible to do performance testing through Selenium with Python? If it is possible, how should I do it?
0
python,selenium-webdriver,selenium-firefoxdriver
2014-06-10T08:06:00.000
0
24,135,896
Selenium is not the right tool to use for performance testing. JMeter is a great tool for this; with it you would be able to see the response time for each request.
0
304
false
0
1
Is it possible to calculate the performance testing through selenium with python?
25,492,157
1
2
1
1
0
0
1.2
0
I am currently working on a project to port a Matlab program to Python for integration into ImageJ as a plugin. The program contains Mex files whose source code was written in C++. Is there a way to call the C++ functions without having to rewrite them in Python? Thanks!
0
python,c,matlab,mex
2014-06-11T01:54:00.000
0
24,153,503
If you can build your program as a shared library, then you can use the ctypes foreign-function interface to call your functions. This is often less work (and less complex) than wrapping the functions with Cython or writing your own C-API extension, but it is also more limited in what you can do. Therefore, I recommend starting with ctypes, and moving up to Cython if you find that ctypes doesn't suit your needs. However, for simple libraries, ctypes will do just fine (I use it a lot).
0
1,339
true
0
1
Call C/C++ code from Python
24,154,260
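A minimal sketch of the ctypes approach suggested in the answer above, assuming the C++/Mex code has been built as a shared library with C linkage; the library name, function name and signature below are hypothetical:

    import ctypes

    # Hypothetical shared library and C signature, e.g. double process(double *data, int n);
    lib = ctypes.CDLL("./libimageproc.so")
    lib.process.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
    lib.process.restype = ctypes.c_double

    values = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)  # a C array of four doubles
    print(lib.process(values, len(values)))

Note that C++ functions must be exported with extern "C" (or wrapped in a small C shim) so their names are not mangled.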
1
1
0
2
3
0
0.379949
0
Our Heroku-hosted Django app does some simple image processing on images our users upload to Amazon S3—mostly resizing to the sizes we will display on the site. For this we use Pillow (the fork of the Python Imaging Library), running in a Celery task. We have seen the time for this operation change from a fraction of a second to half a minute or more. My best guess for why is that we are now often getting memory-quota (R14) conditions (just because the application is bigger), which I would naïvely expect to make resizing particularly slow. So I am considering refactoring the tasks to use an external ImageMagick process to do the processing rather than in-memory PIL. The thinking is that this will at least guarantee that memory used during resizing is released when the convert process terminates. So my question is, is this going to help? Is ImageMagick’s convert going to have a smaller memory footprint than Pillow?
0
python,django,heroku,imagemagick,pillow
2014-06-11T09:00:00.000
0
24,158,704
I have had a similar experience (alas, in Java) which might help you make a decision. Calling the ImageMagick library binding from Java (using JNI) seemed like a good idea, but it turned out to leak memory by the ton. We ended up moving to an external command-line invocation of ImageMagick, which worked a lot better, for the reason you mentioned: it guarantees the release of memory when the process exits. (A short sketch of the command-line approach follows this record.)
0
1,421
false
1
1
Which has the better memory footprint, ImageMagick or Pillow (PIL)?
24,158,927
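A minimal sketch of the external-process approach discussed above, assuming ImageMagick's convert is installed on the host; file names and target size are placeholders:

    import subprocess

    def resize_with_imagemagick(src, dst, size="800x600"):
        # convert runs in its own process, so all memory used for the resize
        # is returned to the OS as soon as that process exits.
        subprocess.check_call(["convert", src, "-resize", size, dst])

    resize_with_imagemagick("upload.jpg", "upload_800.jpg")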
1
1
0
1
0
1
0.197375
0
I'm looking for a way to quantify the changes made to a file. That is, if I have a file with something written on, and I edit it and save it, is there a way to know (using Python or C/C++) how much the files has changed? For example, if my file is "aaaaaaaaaaa" and I change it to "aaabbbbbbb", that quantification method should yield a greater result (assuming thats quantifiable) than if I had changed it to "aaaaaaaaaba". Hope I made myself clear, Thanks in advance Edit: It is supposed to be done without actually reading the file.
0
python,c++,c,unix,operating-system
2014-06-11T13:54:00.000
0
24,164,787
On a line-by-line basis you could use diff. Regarding your edit (not reading the file): you could compare hashes, e.g. the difference between xor_hash(new) and xor_hash(old). (A small diff-based sketch follows this record.)
0
54
false
0
1
Quantify file changes
24,164,847
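A small sketch of the line/character diff idea from the answer above, using Python's difflib; note that this reads both versions of the content, so it does not satisfy the question's edit about avoiding reads:

    import difflib

    def change_ratio(old_text, new_text):
        # 1.0 means identical, values closer to 0.0 mean a bigger change
        return difflib.SequenceMatcher(None, old_text, new_text).ratio()

    print(change_ratio("aaaaaaaaaaa", "aaabbbbbbb"))   # large change, low ratio
    print(change_ratio("aaaaaaaaaaa", "aaaaaaaaaba"))  # small change, high ratio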
1
1
0
2
0
0
1.2
0
I'm trying to find a Python script that uses logger to write file data to syslog. There's an application that outputs reports as log files, but I need these log files to be sent to syslog. I have this code, but I cannot determine how to target the log files and send it to syslog in Python. This code has me on the right track, but I do not know where to specify the file I need to transfer to syslog. Can someone help steer me in the right direction? Or, perhaps provide the documentation I need?
0
python,logging
2014-06-11T17:44:00.000
0
24,169,303
Syslog is a service, not a file. You seem to be confused by trying to specify a log file for syslog. Quoting Wikipedia: "Syslog is a standard for computer message logging. It permits separation of the software that generates messages from the system that stores them and the software that reports and analyzes them." Because syslog is a service, it decides on its own what to do with log records. That is why you can only give the address of the syslog service (such as localhost on the default port) and have no way to control more from your (separate) application. On the syslog side there is configuration controlling where each kind of log entry ends up, but that is outside the control of your log handler. If you omit the address, the handler will by default talk to localhost on the default syslog port, and in that case it is very likely you will find your log records in the /var/log/syslog file. Forwarding log records from another log to syslog: if you have another log in some file and want to send it to syslog, you must (1) parse the log file and find the log records, and (2) send those log records to syslog. However, this may create issues with the timestamps of the LogRecords, as the created time is usually derived automatically at the moment a log record is created; this can be resolved, though. Conclusions: do not expect your logger in Python to decide the file where syslog writes log records; this must be configured at the syslog level, not in your app. If you have to forward logs from another source, it is best to arrange for them to go directly to syslog. If you cannot do that, you have to parse the log file and resend the log records through the syslog handler, resolving the details about timestamps in the log records. (A short SysLogHandler sketch follows this record.)
0
1,589
true
0
1
How to use Python script for logger to write file data to syslog?
24,169,956
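A minimal sketch of forwarding an existing log file to syslog with the standard library's SysLogHandler, as described in the answer above; the file path is a placeholder, and the timestamp caveat from the answer still applies:

    import logging
    import logging.handlers

    logger = logging.getLogger("report_forwarder")
    logger.setLevel(logging.INFO)

    # '/dev/log' reaches the local syslog daemon on most Linux systems;
    # use address=('syslog.example.com', 514) for a remote syslog server.
    logger.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))

    with open("/var/log/myapp/report.log") as report:
        for line in report:
            logger.info(line.rstrip())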
1
2
0
1
0
0
0.099668
0
At a high level, what I need to do is have a python script that does a few things based on the commands it receives from various applications. At this stage, it's not clear what the application may be. It could be another python program, a MATLAB application, or a LAMP configuration. The commands will be sent rarely, something like a few times every hour. The problem is - What is the best way for my python script to receive these commands, and indicate to these applications that it has received them? Right now, what I'm trying to do is have a simple .txt file. The application(s) will write commands to the file. The python script will read it, do its thing, and remove the command from the file. I didn't like this approach for 2 reasons- 1) What happens if the file is being written/read by python and a new command is sent by an application? 2) This is a complicated approach which does not lead to anything robust and significant.
0
python,database,web-applications,ipc
2014-06-11T17:59:00.000
1
24,169,539
Python has since early stages a very comfortable PyZMQ binding for ZeroMQ. MATLAB can have the same, a direct ZeroMQ at work for your many-to-many communications. Let me move in a bit broader view, from a few, but KEY PRINCIPAL POINTS, that are not so common in other software-engineering "products" & "stacks" we meet today around us: [1] ZeroMQ is first of all a very powerful concept rather than a code or a DIY kit [2] ZeroMQ's biggest plus for any professional grade project sits in rather using the genuine Scaleable Formal Communication Patterns end-to-end, not in the ability to code pieces or to "trick/mod" the published internals [3] ZeroMQ team has done a terrific job and saves users from re-inventing wheels ("inside") and allows to rather stay on the most productive side by a re-use of the heroic knowledge ( elaborated, polished & tested by the ZeroMQ gurus, supporters & team-members ) from behind the ZMQ-abstraction-horizon. Having said these few principles, my recommendation would be to spend some time on the concepts in a published book from Peter Hintjens on ZeroMQ ( also available in PDF). This a worthwhile place to start from, to get the bigger picture. Then, there it would be a question of literally a few SLOC-s to make these world most powerful ( and believe me, that this sounds bold only on first sight, as there are not many real alternatives to compare ZeroMQ with ... well, ZeroMQ co-architect Martin Sustrik's [nanomsg] is that case, to mention at least one, if you need to go even higher in speed / lower in latency, but the above key principal points hold & remain the same even there ... ) Having used a ZeroMQ orchestrated Python & MQL4 & AI/ML system in FOREX high speed trading infrastructure environment is just a small example, where microseconds matter and nanosecond make a difference in the queue ... Presented in a hope that your interest in ZeroMQ library will only grow & that you will benefit as much as many other uses of this brilliant piece of art have gained giant leap & benefited from whatever the PUB/SUB, PAIR/PAIR, REQ/REP formal patterns does best match the very communication need of your MATLAB / Python / * heterogeneous multi-party / multi-host Project.
0
2,813
false
0
1
How do I communicate and share data between python and other applications?
24,171,852
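A minimal PyZMQ REQ/REP sketch of the ZeroMQ approach recommended above; the port number and command strings are arbitrary placeholders:

    import zmq

    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://*:5555")            # the receiving Python script

    while True:
        command = socket.recv_string()     # blocks until a command arrives
        # ... act on the command here ...
        socket.send_string("ack: " + command)

    # A sender (another Python program, or MATLAB/PHP via their ZeroMQ bindings)
    # would create a zmq.REQ socket, connect to tcp://localhost:5555,
    # send_string("do_something") and then recv_string() the acknowledgement.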
1
1
0
1
0
0
0.197375
0
I have a Python CGI that I use along with SQLAlchemy, to get data from a database, process it and return the result in Json format to my webpage. The problem is that this process takes about 2mins to complete, and the browsers return a time out after 20 or 30 seconds of script execution. Is there a way in Python (maybe a library?) or an idea of design that can help me let the script execute completely ? Thanks!
0
python,sqlalchemy
2014-06-13T09:01:00.000
0
24,201,497
You will have to set the timeout in the HTTP server's configuration (Apache, for example). The default should be more than 120 seconds, if I remember correctly.
0
678
false
1
1
Avoid Python CGI browser timeout
24,201,579
1
1
0
5
0
0
0.761594
0
I am trying to run a Python script on one.com after a user completes an action on my website. If I run it from a shell script (every couple of minutes) in the background and then end the SSH session, the script is terminated. I have tried running it from PHP using shell_exec and system, but these are blocked by one.com. I was wondering if anyone has had any success with this?
0
php,python
2014-06-13T19:37:00.000
1
24,212,724
I know this post is old, but for future reference (as I assume you have moved on): after I read your question I contacted One.com (12 Jan 2016) and they said that they do not support Python and are not planning to do so in the near future.
0
3,118
false
0
1
Running Python from PHP on one.com
34,741,305
1
3
0
1
0
0
0.066568
0
Is there a way to poll the cp command to get its current progress? I understand there's a modified/Advanced copy utility that adds a small little ASCII progress bar, but I want to build my own progress bar using led lights and whatnot, and need to be able to see the current percentage of the file activity in order to determine how many LEDs to light up on the progress bar.
0
python,linux,bash,raspberry-pi,cp
2014-06-15T19:25:00.000
1
24,233,264
You can use rsync, which can be used pretty much like cp but offers an option for a progress indicator. The progress is sent to standard out, and you ought to be able to intercept it for your own purposes (a sketch follows this record).
0
606
false
0
1
Poll the linux cp command to GET progress
24,233,353
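A sketch of driving LEDs from rsync's progress output, as suggested above; rsync rewrites its progress line with carriage returns, so the stream is split on '\r'. Paths and the LED-driving function are placeholders:

    import re
    import subprocess

    proc = subprocess.Popen(["rsync", "--progress", "source.bin", "/mnt/dest/"],
                            stdout=subprocess.PIPE)

    buf = b""
    while True:
        chunk = proc.stdout.read(256)
        if not chunk:
            break
        buf += chunk
        parts = buf.split(b"\r")
        buf = parts.pop()                    # keep the unfinished fragment
        for update in parts:
            match = re.search(br"(\d+)%", update)
            if match:
                percent = int(match.group(1))
                print(percent)               # e.g. light_leds(percent) here
    proc.wait()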
1
2
0
5
3
1
1.2
0
I have written a small scientific experiment in Python, and I now need to consider optimizing this code. After profiling, what tools should I consider in order to improve performance. From my understanding, the following wouldn't work: Psyco: out of date (doesn't support Python 2.7) Pyrex: last update was in 2010 Pypy: has issues with NumPy What options remain now apart from writing C modules and then somehow interfacing them with Python (for example, by using Cython)?
0
python,numpy
2014-06-16T02:53:00.000
0
24,236,079
You can use Cython to compile the bottlenecks to C. This is very effective for numerical code where you have tight loops; Python loops add quite a lot of overhead that is non-existent if you can translate things to pure C. In general, you can get very good performance for any statically typed code (that is, your types do not change and you can annotate them in the source). You can also write the core parts of your algorithm in C (or take an already written library) and wrap it. You can still do this by writing a lot of boilerplate code with Cython or SWIG, but now there are tools like XDress that can do it for you. If you are a FORTRAN person, f2py is your tool. Modern CPUs have many cores, so you should be able to take advantage of them using Python's multiprocessing (a small sketch follows this record); the people behind joblib have provided a very nice and simplified interface for it. Some problems are also suitable for GPU computing, where you can use PyCUDA. Theano is a library that is a bridge between Numpy, Cython, Sympy, and PyCUDA; it can evaluate and compile expressions and generate GPU kernels. Lastly, there is the future, with Numba and Blaze. Numba is a JIT compiler based on LLVM. Its development is not complete, as some syntax is missing and bugs are quite common; I don't believe it is ready for production code unless you are sure your codebase is fully supported and you have very good test coverage. Blaze is a next-generation Numpy, with support for out-of-core storage and more flexible arrays, designed to use Numba as a backend to speed up execution; it is at a quite early stage of development. Regarding your options: Psyco: the author considered the project done and decided to collaborate with PyPy; most of its features are in there now. Pyrex: an abandoned project from which Cython was forked; Cython has all its features and much more. PyPy: not a real option for general scientific code because the interfacing with C is too slow and not complete. Numpy is only partially supported, and there is little hope Scipy will ever be (mainly because of the FORTRAN dependencies). This may change in the future, but probably not any time soon, and not being able to fully use C extensions limits the possibilities for using external code very much. I must add that I have used it successfully with Networkx (a pure-Python networks library), so there are use cases where it could be of use.
0
165
true
0
1
As of June 2014, what tools should one consider for improving Python code performance?
24,236,471
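As a small illustration of the multiprocessing suggestion in the answer above, a sketch that spreads independent runs of a made-up, CPU-bound computation across all cores:

    from multiprocessing import Pool

    def simulate(seed):
        # placeholder for one independent, CPU-bound piece of the experiment
        total = 0.0
        for i in range(1, 200000):
            total += ((seed + i) % 7 + 1) ** 0.5
        return total

    if __name__ == "__main__":
        pool = Pool()                      # one worker per CPU core by default
        results = pool.map(simulate, range(16))
        pool.close()
        pool.join()
        print(sum(results))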
1
1
0
0
1
0
0
0
Using Python's Django (w/ Rest framework) to create an application similar to Twitter or Instagram. What's the best way to deal with caching content (JSON data, images, etc.) considering the constantly changing nature of a social network? How to still show updated state a user creates a new post, likes/comments on a post, or deletes a post while still caching the content for speedy performance? If the cache is to be flushed/recreated each time a user takes an action, then it's not worth having a cache because the frequency of updates will be too rapid to make the cache useful. What are some techniques of dealing with this problem. Please feel free to share your approach and some wisdom you learned while implementing your solution. Any suggestions would be greatly appreciated. :)
0
python,django,caching,django-rest-framework
2014-06-17T18:36:00.000
0
24,271,006
One technique is to key the URLs on the content of the media they are referring to. For example, if you're hosting images, use the SHA hash of the image file in the URL: /images/<sha>. You can then set far-future cache expiry headers on those URLs. If the image changes, you also update the URL referring to it, so a request is made for an image that is no longer cached. You can use this technique for regular database models as well as images and other media, so long as you recompute the hash of the object whenever any of its fields change. (A short sketch follows this record.)
0
83
false
1
1
Handling Cache with Constant Change of Social Network
24,293,403
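A minimal sketch of the content-keyed URL idea from the answer above; the file path and URL prefix are placeholders:

    import hashlib

    def content_url(path):
        # The URL changes whenever the file's bytes change, so far-future
        # cache headers on the old URL can never serve stale content.
        with open(path, "rb") as f:
            digest = hashlib.sha1(f.read()).hexdigest()
        return "/images/" + digest

    print(content_url("avatar.png"))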
1
1
0
1
2
0
1.2
0
I'm using Ubuntu in several PCs (versions 12.04 and 14.04), and I noticed that serialprotocol.py is not being installed when I run "sudo python3 setup3.py install" in the default source tar package for twisted 14.0.0. I had to manually copy the file in my computers. I also tried installing the default ubuntu package python3-twisted-experimental with the same results. So I always end up copying "serialprotocol.py" and "_posixserialport.py" manually. And they work fine after that. As a side note: _posixserialport.py fails to import BaseSerialPort because it says: from serialport import BaseSerialPort but it should be: from twisted.internet.serialport import BaseSerialPort
0
ubuntu,python-3.x,twisted
2014-06-18T17:12:00.000
1
24,291,443
Twisted has not been entirely ported to Python 3. Only parts of it have been ported. When you install Twisted using Python 3, only the parts that have been ported are installed. The unported modules are not installed because they are not expected to work. As you observed, this code does not actually work on Python 3 because it uses implicit relative imports - a feature which has been removed from Python 3.
0
91
true
0
1
Why isn't serialport.py installed by default?
24,295,427
1
3
0
0
0
0
0
0
So I hope this question already hasn't been answered, but I can't seem to figure out the right search term. First some background: I have text data files that are tabular and can easily climb into the 10s of GBs. The computer processing them is already heavily loaded from the hours long data collection(at up to 30-50MB/s) as it is doing device processing and control.Therefore, disk space and access are at a premium. We haven't moved from spinning disks to SSDs due to space constraints. However, we are looking to do something with the just collected data that doesn't need every data point. We were hoping to decimate the data and collect every 1000th point. However, loading these files (Gigabytes each) puts a huge load on the disk which is unacceptable as it could interrupt the live collection system. I was wondering if it was possible to use a low level method to access every nth byte (or some other method) in the file (like a database does) because the file is very well defined (Two 64 bit doubles in each row). I understand too low level access might not work because the hard drive might be fragmented, but what would the best approach/method be? I'd prefer a solution in python or ruby because that's what the processing will be done in, but in theory R, C, or Fortran could also work. Finally, upgrading the computer or hardware isn't an option, setting up the system took hundreds of man-hours so only software changes can be performed. However, it would be a longer term project but if a text file isn't the best way to handle these files, I'm open to other solutions too. EDIT: We generate (depending on usage) anywhere from 50000 lines(records)/sec to 5 million lines/sec databases aren't feasible at this rate regardless.
0
python,r,dataset,fortran,data-processing
2014-06-18T20:21:00.000
0
24,294,371
Is the file human-readable text or in the native format of the computer (sometimes called binary)? If the files are text, you could reduce the processing load and file size by switching to native format; converting from the internal representation of floating point numbers to human-readable numbers is CPU intensive. If the files are in native format then it should be easy to skip around in the file, since each record will be 16 bytes. In Fortran, open the file with an open statement that includes form="unformatted", access="direct", recl=16. Then you can read an arbitrary record X without reading the intervening records via rec=X in the read statement. If the file is text, you can also read it with direct IO, but each pair of numbers might not always use the same number of characters (bytes); you can examine your files and answer that question. If the records are always the same length, then you can use the same technique, just with form="formatted". If the records vary in length, then you could read a large chunk and locate your numbers within the chunk. (A Python sketch of the fixed-record approach follows this record.)
0
162
false
1
1
Low level file processing in ruby/python
24,299,151
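A Python sketch of the fixed-length-record idea from the answer above, assuming the data has been written in native (binary) form as two 8-byte doubles per record; only every 1000th record is touched, so almost none of the file is read:

    import os
    import struct

    RECORD = struct.Struct("<dd")          # two little-endian float64 values
    STEP = 1000                            # keep every 1000th record

    def decimate(path):
        n_records = os.path.getsize(path) // RECORD.size
        with open(path, "rb") as f:
            for i in range(0, n_records, STEP):
                f.seek(i * RECORD.size)    # jump straight to the record
                yield RECORD.unpack(f.read(RECORD.size))

    # for t, value in decimate("capture.bin"): ...

For fixed-width text records the same seek arithmetic works with the record's character length; the "<" (little-endian) byte order above is an assumption to adjust to the writer's format.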
1
3
0
5
6
0
1.2
0
I can't seem to find a comparison method in the API. I have these two messages, and they have a lot of different values that sometimes drill down to more values (for example, I have a Message that has a string, an int, and a custom_snapshot, where custom_snapshot is comprised of an int, a string, and so on). I want to see if these two messages are the same. I don't want to compare each value one by one since that will take a while, so I was wondering if there was a quick way to do this in Python? I tried doing messageA.debugString() == messageB.debugString(), but apparently there is no debugString method that I could access when I tried.
0
python,protocol-buffers
2014-06-18T22:49:00.000
0
24,296,221
Protocol buffer messages have a SerializeToString() method. Use it to compare your messages. (A tiny example follows this record.)
0
9,737
true
0
1
How do I compare the contents of two Google Protocol Buffer messages for equality?
24,301,278
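A tiny sketch of the comparison suggested above; the generated module, message type and fields are hypothetical placeholders:

    import my_pb2   # hypothetical module generated by protoc

    a = my_pb2.Snapshot(name="run1", count=3)
    b = my_pb2.Snapshot(name="run1", count=3)

    # Byte-for-byte comparison of the serialized forms
    print(a.SerializeToString() == b.SerializeToString())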
1
1
0
2
1
0
1.2
0
I am trying to mount SSHFS using the following Fabric run command: run("sshfs -o reconnect -C -o workaround=all localhost:/home/test/ /mnt"). It fails with the following error: fuse: bad mount point `/mnt': Transport endpoint is not connected. However, if I daemonize it, it works. Is there any workaround?
0
python,ssh,fabric,sshfs
2014-06-19T22:41:00.000
1
24,317,368
I finally figured out that there is an issue with SSH and that I need to pass the pty=False flag: run("sshfs -o reconnect -C -o workaround=all localhost:/home/test/ /mnt", pty=False)
0
295
true
0
1
sshfs mount failing using fabric run command
24,329,791
1
1
0
2
5
0
0.379949
0
New to RabbitMQ and I am trying to determine a way in which to retrieve the routing key information of an AMQP message. Has anyone really tried this before? I am not finding a lot of documentation that explicitly states how to query AMQP using pika (python). This is what I am trying to do: basically I have a Consumer class, for example: channel.exchange_declare(exchange='test', type='topic') channel.queue_declare(queue='topic_queue',auto_delete=True) channel.queue_bind(queue='topic_queue', exchange='test', routing_key = '#') I set up a queue and I bind to an exchange and all the routing_keys (or binding keys I suppose) being passed through that exchange. I also have a function: def amqmessage(ch, method, properties, body): channel.basic_consume(amqmessage, queue=queue_name, no_ack=True) channel.start_consuming() I think that the routing_key should be "method.routing_key" from the amqmessage function but I am not certain how to get it to work correctly.
0
python,amqp,pika
2014-06-20T18:22:00.000
0
24,333,423
I would like to write the answer down because this question appears before the documentation on Google. With a callback defined as def amqmessage(ch, method, properties, body) and registered via channel.basic_consume(amqmessage, queue=queue_name, no_ack=True), followed by channel.start_consuming(), the routing key can be found inside the callback with method.routing_key. (A fuller consumer sketch follows this record.)
0
3,298
false
1
1
Retrieving AMQP routing key information using pika
41,400,921
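A fuller consumer sketch showing where method.routing_key appears, based on the answer above; the exchange and queue names are taken from the question, and note that the basic_consume keyword names differ between pika releases (newer versions use on_message_callback/auto_ack, older ones take the callback positionally with no_ack):

    import pika

    def amqmessage(ch, method, properties, body):
        # method.routing_key is the key the message was published with
        print(method.routing_key, body)

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.exchange_declare(exchange="test", exchange_type="topic")
    channel.queue_declare(queue="topic_queue", auto_delete=True)
    channel.queue_bind(queue="topic_queue", exchange="test", routing_key="#")

    channel.basic_consume(queue="topic_queue", on_message_callback=amqmessage,
                          auto_ack=True)
    channel.start_consuming()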
1
1
0
1
0
1
1.2
0
Using Control+N while coding JAVA in IntelliJ helps me to navigate to classes. Is there any similar functionality in IntelliJ for navigating to Python modules. Thanks
0
python,intellij-idea,keyboard-shortcuts,python-module
2014-06-23T07:52:00.000
0
24,360,908
1. Install the Python plugin: Settings | Plugins | Browse repositories | "Python". 2. Add a Python SDK to the project: open the project settings, then Platform Settings | SDKs | Add New SDK | Python SDK, and select a Python interpreter. 3. Wait for the configuration to complete. Control+N should then work as expected in your project.
0
50
true
0
1
Navigate to Python module by name in Intellij keyboard shortcut
24,362,017
1
1
0
0
0
0
1.2
0
I'm configuring a Debian 7.5 server, and up to yesterday the mail server and the policyd-spf Python plugin were running fine. I added some more Python-related libraries in order to configure Plone (python-setuptools, python-dev, python-imaging), and now the Python setup seems corrupted for some reason. If I now run policyd-spf manually, I get an ImportError on the spf module. Opening a Python interpreter and checking the sys.path, I get the following: ['', '/usr/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg', '/usr/lib/python2.7/site-packages/virtualenv-1.11.6-py2.7.egg', '/usr/lib/python27.zip', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/lib/python2.7/site-packages'] I noticed that /usr/lib/python2.7/site-packages is there, but /usr/lib/python2.7/dist-packages is missing, and that's the reason for the import error. I already tried re-installing the python and python-all packages, hoping that a reinstall would have fixed it, but I still have the same problem. Does anyone know where exactly Debian configured dist-packages to be included in the search path, and how can I recover it? thanks!
0
python,debian
2014-06-23T12:22:00.000
1
24,365,844
I fixed it with the following re-install: apt-get install python2.7-minimal --reinstall. Reinstalling python and python-dev didn't solve it, but python2.7-minimal did the job.
0
175
true
0
1
How to fix corrupted Python search path in Debian 7.5?
24,382,572
1
1
0
0
0
1
1.2
0
I am new to Python and am trying to add a project folder to the PYTHONPATH. I created a .pth file containing the root path of the project and put it in my site-packages folder. However, when I try to import the .py files in this folder, only those located directly under the root folder (for example /sample) can be imported; files in subfolders under /sample (for example /sample/01) cannot be imported. So my question is: which file do I need to change, and how, to make my whole folder including all its subfolders importable? The worst case I can think of is to write down all the folder names in the .pth file in site-packages, but I believe Python provides a more efficient way to achieve that.
0
python,import,pythonpath
2014-06-24T01:18:00.000
0
24,376,961
I haven't had occasion to ever use a .pth file. I prefer a two-pronged approach: Use a shebang which runs env python, so it uses the first python on your path, i.e.: #!/usr/bin/env python Use virtualenv to keep separate different environments and group the necessary libraries for any given program/program set together. This has the added benefit that the requirements file (from pip freeze output) can be stored in source control, and the environment can be recreated easily anywhere, such as for use with Jenkins tests, et al. In the virtualenv case the python interpreter can be explicitly invoked from the virtualenv's bin directory. For local modules in this case, a local PyPI server can be used to centralize custom modules, and they can also be included in the requirements file (via the --extra-index option of pip). Edit with response to comment from OP: I have not used SublimeREPL before, however, based on the scenario you have described, I think the overall simplest approach might be to simply symlink the directories into your site-packages (or dist-packages, as the case may be) directory. It's not an ideal scenario for a production server, but for your purposes, on a client box, I think it would be fine. If you don't want to have to use the folder name, i.e. import ch1/foo, you'll need to symlink inside of those directories so you can simply import foo. If you're OK with using the dir name, i.e. import ch1/foo, then you should only need to symlink the top-level code directory.
0
287
true
0
1
How to use PYTHONPATH to import the whole folder in Python
24,377,167
1
1
0
4
0
1
0.664037
0
What are the main differences between functions in Haskell, Python and C? I know that a Haskell function can take a function as a parameter; is that only possible in Haskell?
0
python,c,haskell
2014-06-24T07:12:00.000
0
24,380,528
The fundamental difference between a Haskell function and a C function is in the fact that Haskell functions cannot have side effects. They cannot modify state when called and as such will return the same value when called repeatedly with the same parameters. This is not to say that you cannot have pure functions in C. I would encourage you to read articles about functional programming and maybe a tutorial in Haskell to get a clearer idea about the subject.
0
1,052
false
0
1
whats are the main differences between functions in Haskell , python and c?
24,380,595
1
2
0
0
0
1
0
0
For example, my .py script already has one instance running, and when I fire another instance with args, instead of allowing the new instance to run, I want it to pass its args to the main instance, or make the main instance aware of the args so it can do whatever it needs to do with them. Is something like this possible?
0
python,python-2.7
2014-06-24T07:53:00.000
0
24,381,227
It is possible but not trivial, because the processes are unrelated. You have to set up: (1) an exclusion mechanism (a file lock should be portable across architectures) to allow a process to know whether it is the first; beware of the race condition when the main instance is about to exit just as a new process arrives; (2) a listener opened by the first process (a socket on an unused port should be portable); (3) a protocol to allow your processes to communicate over the socket; (4) logic so that when a process discovers it is not the first, it contacts the first over the socket and passes its arguments. So it can work, but it is a lot of work. There could be slightly simpler solutions if you can limit yourself to a specific architecture (only Windows or only Linux) and make use of platform-specific libraries, but it will never be a simple job. (A minimal socket-based sketch follows this record.)
0
288
false
0
1
Python - Disable multiple py script instances and pass their args to main instance
24,381,532
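A minimal sketch of the listener/hand-off scheme outlined in the answer above, using a local TCP socket as both the exclusion mechanism and the channel; the port number is arbitrary, and the race conditions the answer warns about are not handled:

    import socket
    import sys

    PORT = 48217

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        server.bind(("127.0.0.1", PORT))       # only the first instance succeeds
        server.listen(1)
    except socket.error:
        # A main instance already exists: hand over our args and exit.
        client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        client.connect(("127.0.0.1", PORT))
        client.sendall("\0".join(sys.argv[1:]).encode())
        client.close()
        sys.exit(0)

    while True:                                # we are the main instance
        conn, _ = server.accept()
        args = conn.recv(4096).decode().split("\0")
        conn.close()
        print("received args from another instance:", args)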
1
1
0
1
0
0
1.2
0
I am using a combination of the FTDI USB driver and the python-serial (pyserial) library to communicate with a USB LED light. When I write a value to the serial port (to turn the light on), can I pass regular ASCII text or does it need to be the hex equivalent?
0
python-2.7,serial-port,usb
2014-06-26T14:44:00.000
0
24,433,535
Regular ASCII works for me with our FTDI cables. You may also need to terminate the string with a \n.
0
35
true
0
1
python and serial ports - regular or fancy text?
24,433,581
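A tiny pyserial sketch matching the answer above; the port name, baud rate and command string are placeholders for your LED controller:

    import serial

    ser = serial.Serial("/dev/tty.usbserial-XXXX", 9600, timeout=1)
    ser.write(b"ON\n")     # plain ASCII bytes, newline-terminated
    ser.close()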
1
1
0
1
0
1
1.2
0
I am using Eclipse Indigo for Python coding. When I comment something, I want the color of the comment to be blue. How can I achieve this? Thanks
0
python,eclipse,syntax-highlighting
2014-06-27T08:03:00.000
0
24,446,884
Assuming you use the PyDev plug-in you can access the color settings in the Window/Preferences/PyDev/Editor menu.
0
442
true
0
1
Changing python syntax coloring in eclipse
24,447,038
1
1
0
1
0
1
0.197375
0
I have been following LPTHW ex. 46 in which it says to put a script in bin directory that you can run. I don't get the idea of using script when you have modules. What extra significance do scripts provide? Are scripts executable *.exe files(in case of windows) rather than modules which are compiled by python? If modules provide all the code needed for the project then do scripts provide the code needed to execute them? How are scripts and modules linked to each other, if they do so?
0
python,project
2014-06-27T08:09:00.000
1
24,446,966
Scripts can be used as stand-alone programs for tasks both simple and complex. When you put them in a bin directory, and have the bin directory in your PATH, you can execute them just like an exe, assuming you have configured the interpreter correctly (in Windows), or have put #!/usr/bin/python as the top line for Linux. For example, you might write a Python script that computes the mean of a list of numbers passed into stdin, stick it in your bin directory, and execute it just like you would a C program for the same purpose.
0
59
false
0
1
What do scripts(stored in bin directory of the project) do in addition to modules in a python project?
24,447,080
1
1
0
5
3
0
1.2
1
I am trying to automate emails using python. Unfortunately, the network administrators at my work have blocked SMTP relay, so I cannot use that approach to send the emails (they are addressed externally). I am therefore using win32com to automatically send these emails via outlook. This is working fine except for one thing. I want to choose the "FROM" field within my python code, but I simply cannot figure out how to do this. Any insight would be greatly appreciated.
0
python,outlook,win32com
2014-06-27T14:39:00.000
0
24,454,538
If you configured a separate POP3/SMTP account, set the MailItem.SendUsingAccount property to an account from the Namespace.Accounts collection. If you are sending on behalf of an Exchange user, set the MailItem.SentOnBehalfOfName property
0
2,282
true
0
1
Choosing "From" field using python win32com outlook
24,454,678
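A sketch of both properties mentioned in the answer; the addresses are placeholders, and setting SendUsingAccount by plain assignment can require extra COM plumbing in some win32com versions, so treat this as a starting point rather than a definitive recipe:

    import win32com.client

    outlook = win32com.client.Dispatch("Outlook.Application")
    mail = outlook.CreateItem(0)                      # 0 = olMailItem
    mail.To = "recipient@example.com"
    mail.Subject = "Automated report"
    mail.Body = "..."

    # Exchange: send on behalf of a mailbox you have rights to.
    mail.SentOnBehalfOfName = "shared.mailbox@example.com"

    # Or pick a separate POP3/SMTP account configured in the Outlook profile:
    # for account in outlook.Session.Accounts:
    #     if account.SmtpAddress == "other@example.com":
    #         mail.SendUsingAccount = account
    mail.Send()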
1
5
0
1
1
1
0.039979
0
How can I obfuscate / hide my Python code from the customer, so that he cannot change the source how he likes to? I know there is no effective way to hide Python code, so that there is no way to read it. I just want a simple protection, that someone who doesn't really know what he is doing cannot just open the source files with a text editor and make changes or understand everything easily with no effort. Because my code is written really understandable, I'd like to hide the main principles I used at the first place. If someone really wants to understand what I have done, he will. I know that. So is there a common method you use to make a simple protection for python code?
0
python,obfuscation
2014-06-28T08:05:00.000
0
24,464,913
You can try converting them into executable files using something like pyinstaller or py2exe although that will increase the distribution size.
0
791
false
0
1
Hiding Python Code from non-programmers
24,464,932
1
3
0
-1
0
0
-0.066568
0
I have a caller.py which repeatedly calls routines from some_c_thing.so, which was created from some_c_thing.c. When I run it, it segfaults - is there a way for me to detect which line of c code is segfaulting?
0
python,c,segmentation-fault
2014-06-29T06:40:00.000
1
24,473,765
segfault... Check if the number of variables or the types of variables you passed to that c function (in .so) are correct. If not aligned, usually it's a segfault.
0
77
false
0
1
Finding a line of a C module called by a python script that segfaults
24,473,958
1
1
0
0
1
0
0
0
I need help with something. I had a Python program which I made, and I need its source, but the HDD it was stored on is dead, and when I looked for backups there were none. The only thing I have is the binary, which I think was compiled with cx_Freeze. I'm really desperate about it, and I have tried every available way I could find, with little to no success. Is there a way to ''unfreeze'' the executable or at least get the .pyc out of it?
0
python,cx-freeze,panda3d
2014-06-30T02:29:00.000
1
24,482,222
No, it is not possible to recover the original source code. If the application used CPython, though, it is always possible to recover the CPython bytecode, which you can use a disassembler on to make a reconstruction of the Python code, but a lot of information will be lost; the resulting code will look rather unreadable and obfuscated, depending on the degree to which the bytecode was optimised. If you want to go down that path, though, I advise looking into CPython's "dis" module. There are also numerous other utilities available that can reconstruct Python code from CPython bytecode.
0
1,104
false
0
1
cx_Freeze Unfreeze. Is it possible? [python]
24,698,264
1
1
0
0
0
0
1.2
0
I have a module which uses AES-128 encryption/decryption. I need to automatically generate a secret key in the module one time for every user (if it has not been initialized yet), then save it and disallow changing it. How can I do this?
0
python,python-2.7,module,cryptography
2014-07-01T09:35:00.000
0
24,506,949
If you really want to, you could generate a key and then hard-code the key value using hexadecimals. You could try and hide the value in the code, but it would amount to obfuscation, adding little to no security.
0
52
true
0
1
Genererate secret key for cryptography in module
24,516,137
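A small sketch of the suggestion above: generate a random 128-bit key once, then paste the hex literal into the module (keeping in mind, as the answer says, that anyone who can read the source can read the key):

    import binascii
    import os

    # Run once, then copy the printed hex into the module source.
    print(binascii.hexlify(os.urandom(16)))

    # In the module (placeholder hex shown):
    SECRET_KEY = binascii.unhexlify("00112233445566778899aabbccddeeff")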
1
1
0
0
0
1
0
0
I can't easily find out the exact size of the string I will produce. I only know the upper bound which should be within 1-2 characters of the final size. How do I shrink the string after filling it?
0
python,python-3.x,python-c-api
2014-07-02T16:08:00.000
0
24,536,004
Based on your comment: If you are trying to remove characters from your string use the .strip() method. If you want the byte count of the string compared to the character count you need to change the encoding. If you are just trying to remove the \0 character use the .replace() method.
0
77
false
0
1
Using the python C API, is it possible to shrink a PyUnicode object?
24,562,179
1
2
0
0
2
0
0
0
I want to test web applications that were developed using Django framework and Tastypie. My plan was to test the REST API calls of the web apps against the queries they perform on the MySql DB. In order to do so I've investigated a little bit about DB access framework, and have encountered SQLalchemy framework, and the reflection attitude. My thought were to try and access the Web Apps REST API in the same attitude and test the results from both sources. Can you please suggest a different approach for examining this problem? Is there framework that will help for this task?
0
python,mysql,django,rest,sqlalchemy
2014-07-03T19:40:00.000
0
24,562,068
You want to see what queries are generated by the Django ORM or Tastypie? I think one easy way is to write a wrapper around the DB class, where you run the DB class method, analyse the results and print or save them to a file. Another way to do this is to use the MySQL slow query log with a threshold of 0 seconds, which logs all the queries made to MySQL. You can use a different user or schema to make parsing the results easier. Not good to do on production services :)
0
276
false
1
1
Projection of a Tastypie REST API into python objects
24,677,758
1
3
0
0
7
1
0
0
So I'm migrating all my tools from python2 to python3.4 on an Ubuntu 14.04 machine. So far I've done the following: aliased python to python3 in my zshrc for just my user installed pip3 on the system itself (but I'll just be using virtualenvs for everything anyway so I won't really use it) changed my virtualenvwrapper "make" alias to mkvirtualenv --python=/usr/bin/python3 ('workon' is invoked below as 'v') Now curiously, and you can clearly see it below, running python3 from a virtualenv activated environment still inherits my $PYTHONPATH which is still setup for all my python2 paths. This wreaks havoc when installing/running programs in my virtualenv because the python3 paths show up AFTER the old python2 paths, so python2 modules are imported first in my programs. Nulling my $PYTHONPATH to '' before starting the virtualenv fixes this and my programs start as expected. But my questions are: Is this inheritance of $PYTHONPATH in virtualenvs normal? Doesn't that defeat the entire purpose? Why set $PYTHONPATH as an env-var in the shell when python already handles it's own paths internally? Am I using $PYTHONPATH correctly? Should I just be setting it in my 'zshrc' to only list my personal additions ($HOME/dev) and not the redundant '/usr/local/lib/' locations? I can very easily export an alternate python3 path for use with my virtualenvs just before invoking them, and reset them when done, but is this the best way to fix this? ○ echo $PYTHONPATH /usr/local/lib/python2.7/site-packages:/usr/local/lib/python2.7/dist-packages:/usr/lib/python2.7/dist-packages:/home/brian/dev brian@zeus:~/.virtualenvs ○ python2 Python 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys, pprint >>> pprint.pprint(sys.path) ['', '/usr/local/lib/python2.7/dist-packages/pudb-2013.3.4-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/Pygments-1.6-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/urwid-1.1.1-py2.7-linux-x86_64.egg', '/usr/local/lib/python2.7/dist-packages/pythoscope-0.4.3-py2.7.egg', '/usr/local/lib/python2.7/site-packages', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/home/brian/dev', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/lib/python2.7/dist-packages/gst-0.10', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client', '/usr/lib/python2.7/dist-packages/ubuntuone-client', '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol', '/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode'] >>> brian@zeus:~/.virtualenvs ○ v py3venv (py3venv) brian@zeus:~/.virtualenvs ○ python3 Python 3.4.0 (default, Apr 11 2014, 13:05:11) [GCC 4.8.2] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sys, pprint >>> pprint.pprint(sys.path) ['', '/usr/local/lib/python2.7/site-packages', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/home/brian/dev', '/home/brian/.virtualenvs/py3venv/lib/python3.4', '/home/brian/.virtualenvs/py3venv/lib/python3.4/plat-x86_64-linux-gnu', '/home/brian/.virtualenvs/py3venv/lib/python3.4/lib-dynload', '/usr/lib/python3.4', '/usr/lib/python3.4/plat-x86_64-linux-gnu', '/home/brian/.virtualenvs/py3venv/lib/python3.4/site-packages'] >>> (py3venv)
0
python,virtualenv,pythonpath,virtualenvwrapper,zshrc
2014-07-05T06:44:00.000
1
24,583,777
The $PYTHONPATH appears in your virtualenv because that virtualenv is just a part of your shell environment, and you (somewhere) told your shell to export the value of PYTHONPATH to child shells. One of the joys of working in virtual environments is that there is much less need to put additional directories on your PYTHONPATH, but it appears as though you have unwittingly been treating it as a global (for all shells) setting, when it's more suited to being a per-project setting.
0
8,833
false
0
1
Why does virtualenv inherit $PYTHONPATH from my shell?
39,600,194
1
1
0
0
2
1
1.2
0
I want to use an RPi to control some water pumps. My question is, what kind of guarantees can I make about the "real-timeness"? I have a pump filling a container, and when a sensor signals the RPi that it is full, the pump should turn off. How much extra room in the container needs to be left for the worst-case response time?
0
python,raspberry-pi,real-time,interrupt,interrupt-handling
2014-07-05T22:37:00.000
0
24,591,132
From a theoretical perspective, Python running as a userspace process on a Linux kernel makes no realtime guarantees whatsoever. In practice, interrupt response times will usually be in the low millisecond range. In all probability, the pump will take considerably longer to shut off than the rPi will take to respond.
0
582
true
0
1
Python/Raspberry Pi guaranty about interrupt response time
24,591,388
1
3
0
2
3
0
1.2
1
I'm analyzing tweets and need to find which state (in the USA) the user was in from their GPS coordinates. I will not have an internet connection available so I can't use an online service such as the Google Maps API to reverse geocode. Does anyone have any suggestions? I am writing the script in python so if anyone knows of a python library that I can use that would be great. Or if anyone can point me to a research paper or efficient algorithm I can implement to accomplish this that would also be very helpful. I have found some data that represents the state boundaries in GPS coordinates but I can't think of an efficient way to determine which state the user's coordinates are in.
0
python,algorithm,gps,reverse-geocoding
2014-07-07T06:59:00.000
0
24,604,661
Use a point-in-polygon algorithm to determine if the coordinate is inside a state (represented by a polygon with GPS coordinates as points). Practically speaking, it doesn't seem like you would be able to improve much upon simply checking each state one at a time, though some optimizations can be made if it's too slow. However, parts of Alaska are on both sides of the 180th meridian, which causes problems. One solution is to offset the coordinates a bit by adding 30 degrees, modulo 180, to the longitude of each GPS coordinate (user coordinates and state coordinates). This has the effect of moving the 180th meridian about 30 degrees west and should be enough to ensure that the entire US is on one side of it. (A point-in-polygon sketch follows this record.)
0
2,493
true
0
1
Determine the US state from GPS coordinates without using online service
24,612,105
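A sketch of the ray-casting point-in-polygon test mentioned in the answer above; the states mapping is a placeholder for the state-boundary data the asker already has, and the answer's 180th-meridian offset would be applied to both sets of coordinates before calling this:

    def point_in_polygon(lon, lat, polygon):
        # polygon is a list of (lon, lat) vertices; classic ray-casting test
        inside = False
        j = len(polygon) - 1
        for i in range(len(polygon)):
            xi, yi = polygon[i]
            xj, yj = polygon[j]
            if ((yi > lat) != (yj > lat)) and \
               (lon < (xj - xi) * (lat - yi) / (yj - yi) + xi):
                inside = not inside
            j = i
        return inside

    def find_state(lon, lat, states):      # states: {"Texas": [(lon, lat), ...], ...}
        for name, polygon in states.items():
            if point_in_polygon(lon, lat, polygon):
                return name
        return None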
1
1
0
0
0
0
0
1
I can build Facebook login with Python Social Auth, but in order to access the full content of the site I want users to be authorized first. Would it be possible to get guidelines on how such a solution should be built?
0
python,django,python-social-auth
2014-07-07T18:13:00.000
0
24,617,063
You can still use the @login_required decorator
0
145
false
0
1
User authorization in Python Social Auth
24,626,165
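A minimal Django sketch of the decorator mentioned above; the view and template names are placeholders:

    from django.contrib.auth.decorators import login_required
    from django.shortcuts import render

    @login_required            # anonymous users are redirected to settings.LOGIN_URL
    def dashboard(request):
        return render(request, "dashboard.html")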
1
1
1
2
2
1
1.2
0
I am planning on using IronPython to develop a GUI interface for some python code. Do I need to know any other programming languages other than python. Also if not are there any other GUI packages/addon's to python that only use python to implement and get the final product working?
0
python,ironpython
2014-07-08T15:20:00.000
0
24,635,660
You don't need to know any other languages - modulo a few implementation differences, Python is Python is Python. You will, however, need to know the Microsoft windowing library, with which I believe you will have to interface to build a GUI.
0
134
true
0
1
Does IronPython just use Python or to use IronPython do I need to know other programming languages other than python?
24,636,635
2
2
0
3
2
0
0.291313
0
I'm working with python on raspberry pi. I'm using complementary filter to get better values from gyroscope, but it eats too much raspberry's power - it's about 70%. I thought I could increase performance by reducing floating point precision. Now, results have about 12 decimal places, it's way more than I need. Is there any way to set maximum precision? Just rounding the number doesn't meet my needs, since it's just another calculation. Thanks! Edit: I have tried to use Decimal module and with precision set to 6 it was nearly 6 times slower than float! Is there any other way to work with fixed-point numbers than Decimal (it looks to be created for higher precision than for performance)
0
python,performance,raspberry-pi,floating-point-precision
2014-07-09T08:46:00.000
0
24,649,084
You can force single precision floating point calculations using numpy. However, I would be very surprised if using single precision floating point worked out any faster than double precision: the raspberry pi has hardware floating point support so I would expect that all calculations are done at full 80 bit precision and then rounded for 32 bit or 64 bit results when saving to memory. The only possible gain would be slightly less memory bandwidth used when saving the values.
0
2,081
false
0
1
Lower the floating-point precision in python to increase performance
24,649,933
2
2
0
3
2
0
0.291313
0
I'm working with python on raspberry pi. I'm using complementary filter to get better values from gyroscope, but it eats too much raspberry's power - it's about 70%. I thought I could increase performance by reducing floating point precision. Now, results have about 12 decimal places, it's way more than I need. Is there any way to set maximum precision? Just rounding the number doesn't meet my needs, since it's just another calculation. Thanks! Edit: I have tried to use Decimal module and with precision set to 6 it was nearly 6 times slower than float! Is there any other way to work with fixed-point numbers than Decimal (it looks to be created for higher precision than for performance)
0
python,performance,raspberry-pi,floating-point-precision
2014-07-09T08:46:00.000
0
24,649,084
It may be that you have the wrong end of the stick. The data flow from a gyroscope is rather slow, so you should have ample time to filter it with any reasonable filter. Even a Kalman filter should be usable (though probably unnecessary). How often do you sample the gyroscope and accelerometer data? Reasonable maximum values are a few hundred Hertz, not more. The complementary filter for accelerometer and gyroscope measurements is very lightweight, and by itself it should consume very little processing power; it can be implemented on a slow 8-bit processor, so a Raspberry Pi is way too fast for it. Depending on what you do with the complementary filter, the filter itself needs only a few floating-point operations. If you calculate arctangents or equivalent functions, that'll require hundreds of FLOPs; if you do that at a rate of 1 kHz, you'll consume maybe a few hundred kFLOPS (floating-point operations per second). The FP throughput of an RPi is approximately 100 MFLOPS, so there is a lot of margin. Reducing the FP precision will thus not help significantly; the problem is elsewhere. Maybe if you show a bit more of your code, it could be determined where the problem is! (A sketch of the filter follows this record.)
0
2,081
false
0
1
Lower the floating-point precision in python to increase performance
24,650,318
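For reference, the complementary filter discussed above needs only a handful of floating-point operations per sample; a sketch (the axis choice and the 0.98 weighting are assumptions to tune):

    import math

    ALPHA = 0.98        # how much to trust the integrated gyro angle
    angle = 0.0         # filtered angle estimate, degrees

    def update(gyro_rate_dps, accel_x, accel_z, dt):
        # gyro: good short-term; accelerometer: noisy but drift-free long-term
        global angle
        accel_angle = math.degrees(math.atan2(accel_x, accel_z))
        angle = ALPHA * (angle + gyro_rate_dps * dt) + (1.0 - ALPHA) * accel_angle
        return angle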
1
3
1
5
3
0
1.2
0
I have a Python code that needs to be able to execute a C++ code. I'm new to the idea of creating libraries but from what I have learned so far I need to know whether I need to use static or dynamic linking. I have read up on the pros and cons of both but there is a lot of jargon thrown around that I do not understand yet and since I need to do this ASAP I was wondering if some light can be shed on this from somebody who can explain it simply to me. So here's the situation. My C++ code generates some text files that have data. My Python code then uses those text files to plot the data. As a starter, I need to be able to run the C++ code directly from Python. Is DLL more suitable than SL? Or am I barking up the completely wrong tree? Extra: is it possible to edit variables in my C++ code, compile it and execute it, all directly from Python?
0
python,c++,dll,static-libraries
2014-07-09T10:07:00.000
0
24,650,785
It depends on your desired deployment. If you use dynamic linking will need to carefully manage the libraries (.so, .dll) on your path and ensure that the correct version is loaded. This can be helped if you include the version number in the filename, but then that has its own problems (security... displaying version numbers of your code is a bad idea). Another benefit is that you can swap your library functionality without a re-compile as long as the interface does not change. Statically linking is conceptually simpler and practically simpler. You only have to deploy one artefact (an .exe for example). I recommend you start with that until you need to move to the more complicated shared library setup. Edit: I don't understand your "extra credit" question. What do you mean by "edit values"? If you mean can you modify variables that were declared in your C++ code, then yes you can as long as you use part of the public interface to do it. BTW this advice is for the general decision. If you are linking from Python to C/C++ I think you need to use a shared library. Not sure as I haven't done it myself. EDIT: To expand on "public interface". When you create a C++ library of whatever kind, you specify what functions are available to outside classes (look up how to to that). This is what I mean by public interface. Parts of your library are inaccessible but others (that you specify) are able to be called from client code (i.e. your python script). This allows you to modify the values that are stored in memory. If you DO mean that you want to edit the actual C++ code from within your python I would suggest that you should re-design your application. You should be able to customise the run-time behaviour of your C++ library by providing the appropriate configuration. If you give a solid example of what you mean by that we'll be able to give you better advice.
0
2,081
true
0
1
Advise needed for Static vs Dynamic linking
24,650,884
1
1
0
0
0
0
1.2
1
How can I get Twitter information (number of followers, following, etc.) about a set of Twitter handles using the Twitter API? I have already used the Python-Twitter library, but it only gives me information about my own Twitter account, and I need the same for other Twitter users (I have a list). Can you please guide me in the right direction, or refer me to some good blogs/articles?
0
python-2.7,twitter
2014-07-09T12:05:00.000
0
24,653,225
If you want the latest tweets from specific users, Twitter offers the Streaming API. The Streaming API is the real-time sample of the Twitter Firehose. This API is for those developers with data intensive needs. If you're looking to build a data mining product or are interested in analytics research, the Streaming API is most suited for such things. If you're trying to access old information, the REST API with its severe request limits is the only way to go.
0
58
true
0
1
Twitter API access using Python (newbie:Help Needed)
24,654,111
1
1
0
2
1
0
0.379949
0
When I specify a Python executable script file that does not end in the .py suffix, Sonar runs successfully but the report has no content. I have tried specifying -Dsonar.python.file.suffixes="" but that makes no difference: sonar-runner -Dsonar.sources=/users/av/bin -Dsonar.inclusions=gsave -Dsonar.issuesReport.html.location=/ws/av-rcd/SA_Reports/PY-SA_report-2014-7-2-15-13-44.html -Dsonar.language=py -Dsonar.python.file.suffixes="" How can I make Sonar analyze a Python executable script that does not have a .py suffix?
0
python,sonarqube,filenames,executable
2014-07-09T23:58:00.000
1
24,665,515
It is not possible to do so. Empty string as value of property "sonar.python.file.suffixes" is ignored.
0
561
false
0
1
Sonar python files without .py suffix
24,698,064
1
1
0
0
0
0
0
0
I do not want to use the fab command, the fabfile, or command-line arguments. I want to automate remote SSH using the Fabric API by writing a Python script. Can I automate this by writing a Python script?
0
python,ssh,fabric
2014-07-10T09:49:00.000
1
24,673,386
You can call your functions by importing your fabfile.py. In the end, a fabfile is just another Python script you can import. I saw a case where a Django project had an API call into a function from its fabfile. Just import and call, as simple as Python :) (A minimal sketch follows this record.)
0
74
false
0
1
without using fab commandline argument can I use fabric api automated
24,673,514
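A minimal sketch against the Fabric 1.x API that was current at the time; the task body and host string are placeholders, and newer Fabric 2.x uses a different API:

    from fabric.api import run, execute

    def disk_usage():
        run("df -h")

    if __name__ == "__main__":
        execute(disk_usage, hosts=["deploy@example.com"])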
1
3
0
1
0
0
0.066568
0
I have a GUI program in Python which calculates graphs of certain functions. These functions are mathematical, say cos(theta), etc. At present I save the graphs of these functions, compile them to PDF in LaTeX, and write down the equation manually in LaTeX. Now I wish to simplify this process by creating a template in LaTeX that arranges the function name, graph, equation and table, and compiles them into a single PDF with just a click. Can this be done, and how do I do it? Thank you.
0
python,pdf,export,latex,pdflatex
2014-07-10T19:00:00.000
0
24,684,316
Generate a LaTeX file.tex with a Python script, using raw strings so sequences like \b in \begin are not treated as escape characters: f = open("file.tex", 'w'); f.write(r'\documentclass[12pt]{article}' + '\n'); f.write(r'\usepackage{multicol}' + '\n'); f.write('\n' + r'\begin{document}' + '\n\n'); ... f.write(r'\end{document}'); f.close(). Then run pdflatex on the LaTeX file from the Python script as a subprocess: subprocess.call(['pdflatex', 'file.tex']). As an alternative to the first step, you can generate a LaTeX template and just substitute the variable parts using Python regular expressions and string substitutions.
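Putting the two steps together, a minimal sketch; the function name, equation and image filename in FUNCTIONS are placeholder data, and pdflatex is assumed to be installed and on the PATH:

    import subprocess

    FUNCTIONS = [('cosine', r'f(\theta) = \cos\theta', 'cos_graph.png')]  # placeholders

    with open('report.tex', 'w') as f:
        f.write('\\documentclass[12pt]{article}\n')
        f.write('\\usepackage{graphicx}\n')
        f.write('\\begin{document}\n')
        for name, equation, image in FUNCTIONS:
            f.write('\\section*{%s}\n' % name)
            f.write('\\[ %s \\]\n' % equation)
            f.write('\\includegraphics[width=\\linewidth]{%s}\n' % image)
        f.write('\\end{document}\n')

    subprocess.call(['pdflatex', 'report.tex'])   # produces report.pdf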
0
2,206
false
1
1
Python Export Program to PDF using Latex format
24,684,691
1
3
0
0
0
0
0
0
I have the temperature coming from my Arduino through the serial port on my Mac. I need to write the data to a file, but I don't want my script to write the data from /dev/tty.usbserial-A5025XZE (the serial port) if the data is the same as the last value or if it is empty. The temperature is in the format "12.32" and is sent every 5s.
0
python,bash,arduino
2014-07-11T15:10:00.000
1
24,700,966
Just save the output from the Arduino to a temporary variable and compare it to another variable that holds the last value written to the file. If it is different, update the last-written value to the new temperature and write it to the file.
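A minimal sketch of that loop using pyserial (assumed installed); the port name comes from the question, but the baud rate, log filename and encoding are assumptions:

    import serial  # pyserial

    PORT = '/dev/tty.usbserial-A5025XZE'          # from the question
    ser = serial.Serial(PORT, 9600, timeout=10)   # baud rate is an assumption

    last_written = None
    with open('temperature.log', 'a') as log:
        while True:
            reading = ser.readline().decode('ascii', 'ignore').strip()
            # Skip empty reads and repeats of the last value written
            if reading and reading != last_written:
                log.write(reading + '\n')
                log.flush()
                last_written = reading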
0
51
false
0
1
Check to see if data is the same before writing over it
24,715,910
1
1
0
1
1
0
0.197375
0
Hi all, I am trying to write Python code to broadcast an SSID that I create in the script. Is there a library written for something like that which I could install? Is it really possible to write such code to make my Wi-Fi card broadcast an SSID I created?
0
python-2.7,ssid,wifi
2014-07-14T02:08:00.000
0
24,728,678
From the aircrack-ng suite, use airbase-ng to broadcast, or hostapd (if you want to do more than just broadcast). In terms of pure Python libraries, not really; you could use subprocess and execute airbase-ng through your script. If you want pure Python, it's best to get Scapy and do it through there.
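As a hedged sketch of the Scapy route: craft and send 802.11 beacon frames advertising the SSID. The SSID, interface name (which must already be in monitor mode) and MAC address are all placeholders, and the script needs root privileges:

    from scapy.all import Dot11, Dot11Beacon, Dot11Elt, RadioTap, sendp

    ssid = 'MyTestNetwork'          # SSID to advertise (placeholder)
    iface = 'mon0'                  # assumes a monitor-mode interface
    bssid = '00:11:22:33:44:55'     # made-up MAC address

    frame = (RadioTap() /
             Dot11(type=0, subtype=8, addr1='ff:ff:ff:ff:ff:ff',
                   addr2=bssid, addr3=bssid) /
             Dot11Beacon(cap='ESS') /
             Dot11Elt(ID='SSID', info=ssid, len=len(ssid)))

    sendp(frame, iface=iface, inter=0.1, loop=1)   # keep broadcasting the beacon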
0
633
false
0
1
Wireless SSID broadcast using Python
24,738,477
1
3
0
-2
0
1
-0.132549
0
C++ programmer here. In Python, how do you make sure that a particular class (e.g. UsefulClass) can only be created through its related factory class (e.g. FactoryClass)? But, at the same time the public methods of UsefulClass are callable directly? In C++ this can be easily achieved by making the relevant methods of UsefulClass public, and by making its default constructor (and any other constructors) private. The related FactoryClass (which can be a "friend" of the UsefulClass) can return instances of UsefulClass and thereby strictly controlling creation, while allowing the user to directly call the public methods of UsefulClass. Thanks.
0
python
2014-07-14T13:43:00.000
0
24,737,909
Don't. Python is not C++, and using patterns that worked there is silly in Python. In particular, Python is not a "bondage and discipline" language, so phrases like "thereby strictly controlling creation" don't apply. "If you didn't want to instantiate a UsefulClass then why did you?" — me. If you can't trust yourself or your colleagues to read and follow the code's internal documentation, you're screwed regardless of the implementation language.
0
136
false
0
1
Only creating object through factory class in Python - factory class related
24,739,463
1
2
0
0
0
1
0
0
I have a Python script that reads a line of data from a source file, performs a set of calculations on that data and writes the results of those calculations to the output file. The script is currently coded to read one line at a time from the source file until the end of the source file is reached. Can I improve the execution time of the script by reading multiple lines from the source file, performing the calculations and writing the results to the output file? Do I take a performance hit by having large numbers of read / write instances? I ask the question rather than perform a test due to the difficulty of changing the code.
0
python,performance,file,io
2014-07-14T18:31:00.000
0
24,743,340
Such a question can only be answered by real measurement. You should create a simple test scenario which reads and writes files of similar type and size without the actual calculation. You can do profiling and check how much time you spend on I/O operations and how much on processing the content. It might turn out that even with I/O running at the speed of light you will not improve the performance remarkably. Without measuring, one can only guess, and my estimation is: if you use default buffering, you will not see big differences. In case reading more lines at once would speed up the processing, you could play with setting up a larger buffer for file operations. This could speed up the process even with line-by-line processing, keeping your processing code simple. Personally, I would prefer to preserve the current simple line-by-line processing unless the performance gains would be really significant.
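A rough sketch of that measurement, separating time spent reading from time spent processing; input.txt/output.txt are placeholder filenames and line.upper() stands in for the real calculation:

    import time

    def run_once(src, dst, bufsize):
        """Time the reads separately from the per-line work."""
        io_time = proc_time = 0.0
        with open(src, 'r', bufsize) as fin, open(dst, 'w', bufsize) as fout:
            while True:
                t0 = time.time()
                line = fin.readline()
                io_time += time.time() - t0
                if not line:
                    break
                t0 = time.time()
                result = line.upper()        # placeholder for the real calculation
                proc_time += time.time() - t0
                fout.write(result)
        return io_time, proc_time

    print(run_once('input.txt', 'output.txt', 8192))         # small buffer
    print(run_once('input.txt', 'output.txt', 1024 * 1024))  # 1 MB buffer

If io_time stays a small fraction of proc_time in both runs, changing how many lines you read at once will not help much.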
0
41
false
0
1
Performance Tradeoff Reading From One File, Perfoming An Action and Writing To Another File
24,743,892
1
3
0
-1
0
1
-0.066568
0
What is the most efficient algorithm for finding ~A XOR B? (Note that ~ is the complement function, done by reversing each 1 bit into 0 and each 0 into 1 bit, and XOR is the exclusive or function) For example, ~4 XOR 6 = ~010 = 101 = 5 and ~6 XOR 9 = ~1111 = 0
0
python,bit-manipulation,xor
2014-07-15T21:58:00.000
0
24,768,900
You can simply use ==. For single bits, A XNOR B is the same as the == operator, because the truth table is: A=F, B=F gives T; A=F, B=T gives F; A=T, B=F gives F; A=T, B=T gives T.
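For multi-bit integers like the ones in the question, a sketch of the bitwise approach (not part of the answer above): complement the XOR and mask to the intended bit width so Python's unbounded negative integers don't leak in:

    def xnor(a, b, nbits):
        """Bitwise complement of XOR, truncated to a fixed bit width."""
        mask = (1 << nbits) - 1
        return (~a ^ b) & mask   # equivalently (~(a ^ b)) & mask

    print(xnor(4, 6, 3))   # 5, matching ~010 = 101 from the question
    print(xnor(6, 9, 4))   # 0, matching ~1111 = 0000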
0
1,018
false
0
1
Complement of XOR
24,768,999
2
2
0
1
1
0
0.099668
0
I'm fairly competent with Python but I've never 'uploaded code' to a server before and have it run automatically. I'm working on a project that would require some code to be running 24/7. At certain points of the day, if a criteria is met, a process is started. For example: a database may contain records of what time each user wants to receive a daily newsletter (for some subjective reason) - the code would at the right time of day send the newsletter to the correct person. But of course, all of this is running out on a Cloud server. Any help would be appreciated - even correcting my entire formulation of the problem! If you know how to do this in any other language - please reply with your solutions! Thanks!
0
python,cloud
2014-07-15T23:02:00.000
0
24,769,574
Here are two approaches to this problem, both of which require shell access to the cloud server. 1) Write the program to handle the scheduling itself: for example, sleep and wake up periodically to perform the necessary checks. You would then transfer this file to the server using a tool like scp, log in, and start it in the background using something like python myscript.py &. 2) Write the program to do a single run only, and use the scheduling tool cron to start it up every minute of the day.
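A minimal sketch of the first approach; get_subscribers_due and send_newsletter are placeholder stubs standing in for your database query and mail-sending code:

    import time
    from datetime import datetime

    def get_subscribers_due(now):
        """Placeholder: query the database for users whose send time matches now."""
        return []

    def send_newsletter(user):
        """Placeholder: actually deliver the email."""
        pass

    while True:
        now = datetime.utcnow()
        for user in get_subscribers_due(now):
            send_newsletter(user)
        time.sleep(60)   # waking once a minute is plenty for a daily newsletter

For the second approach, the cron entry would simply invoke the same check once per minute and the script would exit after one pass.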
0
262
false
1
1
Uploading code to server and run automatically
24,769,619
2
2
0
1
1
0
0.099668
0
I'm fairly competent with Python but I've never 'uploaded code' to a server before and have it run automatically. I'm working on a project that would require some code to be running 24/7. At certain points of the day, if a criteria is met, a process is started. For example: a database may contain records of what time each user wants to receive a daily newsletter (for some subjective reason) - the code would at the right time of day send the newsletter to the correct person. But of course, all of this is running out on a Cloud server. Any help would be appreciated - even correcting my entire formulation of the problem! If you know how to do this in any other language - please reply with your solutions! Thanks!
0
python,cloud
2014-07-15T23:02:00.000
0
24,769,574
It took a few days, but I finally worked this out. The most practical way to get this working is to use a VPS that runs the script. The confusing part of my code was that each user would activate the script at a different time. To handle this, say at midnight, the VPS runs the Python script (using scheduled tasking or something similar); the script then pulls the times from a database and processes the code at those scheduled times. Thanks for your time anyway!
0
262
false
1
1
Uploading code to server and run automatically
24,983,741
1
1
0
2
0
0
1.2
0
I have a Matlab script that sends a number in hexadecimal representation to a Python socket server. Then Python sends the same message back. Python receives: 3ff0000000000000. But Matlab receives (using fread): 51 102 102 48 48 48 48 48 48 48 48 48 48 48 48 48. What does this mean? I can't figure out from Matlab's documentation what to do with those numbers. I've tried converting them to hexadecimal using mat2str and num2str but none of the results make sense to me.
0
python,matlab,tcp,hex
2014-07-16T14:12:00.000
0
24,783,069
These numbers are the ASCII codes for the characters of the string '3ff0000000000000'. Basically, what you are sending over the wire is a string; you need to interpret it as a hexadecimal number first.
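A quick Python 3 sketch showing both halves of that statement: the byte values Matlab's fread reports are just the ASCII codes of the hex characters, and the hex string itself encodes the IEEE 754 double 1.0:

    import struct

    payload = b'3ff0000000000000'   # what the Python server echoes back

    print(list(payload)[:4])        # [51, 102, 102, 48] - the codes fread shows
    value = struct.unpack('>d', bytes.fromhex(payload.decode()))[0]
    print(value)                    # 1.0 - the double that hex string encodes

On the Matlab side the equivalent is to treat the received bytes as characters and convert the hex string, rather than using the raw byte values.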
0
174
true
0
1
Matlab fread from Python Socket
24,783,138
1
1
0
0
0
0
0
0
I currently have a Raspberry Pi running Iperf non stop and collecting results. After collecting results it uploads the bandwidth tests to MySQL. Is there a way to automatically refresh the table to which the data is added?
1
python,mysql
2014-07-16T21:57:00.000
0
24,791,510
Is your goal to use MySQL Workbench to build a live view of your data? If so, I don't think you're using the right tools. You could use Elasticsearch to store your data and Kibana to display it; this way you'll get free graphs and charts of your stored data, plus auto-refresh (based on an interval, not on events). You may also take a look at Grafana, an even more specialized tool for storing and representing graphs of values. But if you really want to store your data in MySQL, you may not want to use MySQL Workbench as a user interface; it's a developer tool for building your database. You could, however, build a graphical interface from scratch and send it an event when you update your tables so it refreshes itself, but that's a lot of work that Kibana/Grafana does for you.
0
726
false
0
1
MySQL WorkBench How to automatically re run query?
24,795,785
2
6
0
1
72
1
0.033321
0
I know this is an electrical engineering convention, but I'm still wondering why it was chosen for Python. I don't know other programming languages with complex-number literals, so I don't have anything to compare against, but does anyone know any that do use i?
0
python
2014-07-17T19:59:00.000
0
24,812,444
j (not J) is used in electrical engineering, as mentioned before. As for i for current: yes, both I (DC) and i (AC) are used for current.
0
38,708
false
0
1
Why are complex numbers in Python denoted with 'j' instead of 'i'?
37,527,223
2
6
0
1
72
1
0.033321
0
I know this is an electrical engineering convention, but I'm still wondering why it was chosen for Python. I don't know other programming languages with complex-number literals, so I don't have anything to compare against, but does anyone know any that do use i?
0
python
2014-07-17T19:59:00.000
0
24,812,444
In electrical engineering, i is typically used for i(t), the instantaneous current, while I is for steady-state DC (non-complex) or RMS values of AC current. In addition, spatial coordinates are generally expressed as i, j, k, but for two-dimensional items i, j are all that are needed, and the "i" is dropped, so the perpendicular "j" is used, as in 4j3 vs 4+3i or 4i3 (note that 4j3 is not mistaken for 413 at a glance). J recognizes this notation in handling complex numbers. As a retired EE professor, I do like the use of "j". As for current density, "J" is used.
0
38,708
false
0
1
Why are complex numbers in Python denoted with 'j' instead of 'i'?
54,385,244
2
3
0
1
1
0
0.066568
0
I'm totally new to the Python world. I want to create a web application with some Python code behind it. I want to use Python to control Raspberry Pi inputs and outputs, etc. There are Python 2 and Python 3 available. I've read a bit about these versions, but I'm still not sure which one I should use.
0
python,raspberry-pi
2014-07-21T07:03:00.000
0
24,859,323
It depends on what web framework you are going to use. Some of them might have somewhat limited functionality on Python 3 but still be worth using. This could be the case for Flask, which is very lightweight and provides all you need, but according to heavy users lacks complete Python 3 support in a few small details. This situation is likely to be resolved in the near future, but if you want to develop now, it is better to use the version of Python which fits your web framework. Comments on a few (not all) web frameworks: Django - very popular, but will force you to do things in the Django style. The final solution can become a bit heavier than really necessary; this could be a problem on a Raspberry Pi, which has very limited resources available. Flask - also rather popular (even though not as much as Django). Gives you the freedom to use only what you need. Very good tutorials. Most applications run under Python 2 and Python 3; a few supporting libraries are said not to be completely ported yet (I cannot say exactly which ones). CherryPy - a minimalistic web framework, but with a very good built-in HTTP and WSGI server. Not so easy to find good tutorials; the best is the (now a bit old) book about programming in CherryPy. Note: by default, applications are developed in debug mode and code is autoreloaded from disk. This disk activity can slow things down on the RPi and consume some energy, so if that causes trouble, set the app to production mode. Conclusions: my current choice is Flask on Python 2.7, but this is partially due to a lot of legacy code I have developed in Python 2.7. You should make your own decision about which framework you are going to use and check the status of its Python 3 support.
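For scale, here is a minimal Flask application of the kind discussed above; the route, port and greeting are placeholders, and Flask is assumed to be installed (pip install flask):

    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'Hello from the Raspberry Pi'

    if __name__ == '__main__':
        # 0.0.0.0 makes the app reachable from other machines on the network
        app.run(host='0.0.0.0', port=8080)

The same handful of lines runs unchanged on Python 2.7 or Python 3, which is part of why Flask is a comfortable choice on the Pi.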
0
2,070
false
0
1
Which Python version should I use with Raspberry Pi running web applications?
24,862,812
2
3
0
1
1
0
0.066568
0
I'm totally new to the Python world. I want to create a web application with some Python code behind it. I want to use Python to control Raspberry Pi inputs and outputs, etc. There are Python 2 and Python 3 available. I've read a bit about these versions, but I'm still not sure which one I should use.
0
python,raspberry-pi
2014-07-21T07:03:00.000
0
24,859,323
Most of the books on the topic of Python and Raspberry Pi refer to Python 3.x. I'm finding a lot of online courses and books are focusing more on 3.x than 2.7. Unless you're working at a company that's on Python 2.x and doesn't plan on going to 3.x, you're better off learning Python 3.x.
0
2,070
false
0
1
Which Python version should I use with Raspberry Pi running web applications?
24,860,000
1
3
0
0
1
0
0
0
I have been looking for a few weeks now at how to make a .py file run on startup. I have had no luck with any of the methods I've tried; does anyone have any ideas? The file is reasonably small and will need GPIO input from a PIR movement sensor.
0
python,raspberry-pi,gpio
2014-07-21T22:33:00.000
0
24,875,955
Make sure your script runs fine from the command line first. Also, if you are dealing with the GPIO pins, make sure you are running your script with the proper permissions. I know when I access the GPIO pins on my pi, I need to use root/sudo to access them.
0
1,105
false
0
1
Autostart on raspberry pi
24,920,035
1
2
0
0
0
0
0
1
I want to establish one session at the start of the suite. That session should stay alive for a long time, across multiple test cases, and should only end at the last one. This should be implemented with Selenium WebDriver using the unittest framework in Python. Can anyone suggest methods or how to implement it?
0
python,session,selenium,webdriver,python-unittest
2014-07-22T14:47:00.000
0
24,890,579
The simplest way to achieve this is not to use the per-test setUp() and tearDown() methods for this, or more specifically not to create a new instance of the WebDriver object at the start of each test case and not to call quit() at the end of each test case. In your first test case create a new instance of the WebDriver object and use this object for all of your test cases. At the end of your last test case call quit() to close the browser.
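One way to express that in Python's unittest is with class-level fixtures, which create the driver once for the whole class; this is a sketch rather than the answerer's exact approach, and the URL and assertion are placeholders:

    import unittest
    from selenium import webdriver

    class SiteTests(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            # One browser session shared by every test in this class
            cls.driver = webdriver.Firefox()

        @classmethod
        def tearDownClass(cls):
            cls.driver.quit()

        def test_home_page(self):
            self.driver.get('http://example.com')       # placeholder URL
            self.assertIn('Example', self.driver.title)  # placeholder check

    if __name__ == '__main__':
        unittest.main()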
0
739
false
0
1
how to establish a session thourghout the selenium webdriver suite by using python in firefox
24,891,062
1
1
0
2
5
0
0.379949
0
In our company we are using Vagrant VMs so that everyone has the same environment. Is it possible to configure Visual Studio + PTVS (Python Tools for Visual Studio) to use a Vagrant-based Python interpreter, via SSH for example?
0
python,visual-studio,vagrant,ptvs
2014-07-23T14:22:00.000
0
24,913,100
There's no special support for remote interpreters in PTVS, like what PyCharm has. It's probably possible to hack something based on the existing constraints, but it would be some work... To register an interpreter that can actually run, it would have to have a local (well, CreateProcess'able - so e.g. SMB shares are okay) binary that accepts the same command line options as python.exe. It might be possible to use ssh directly by adding the corresponding command line options to project settings. Otherwise, a proxy binary that just turns around and invokes the remote process would definitely work. Running under debugger is much trickier. For that to work, the invoked Python binary would also have to be able to load the PTVS debugging bits (which is a bunch of .py files in PTVS install directory), and to connect to VS over TCP to establish a debugger connection. I don't see how this could be done without writing significant amounts of code to correctly proxy everything. Attaching to a remotely running process using ptvsd, on the other hand, would be trivial. For code editing experience, you'd need a local copy (or a share etc) of the standard library for that interpreter, so that it can be analyzed by the type inference engine.
0
1,465
false
0
1
Is it possible to use remote vagrant based python interpreter when coding Visual Studio + PTVS
24,916,542
1
1
0
3
0
1
1.2
0
I'm developing a module right now that requires a configuration file. The configuration file is optional and can be supplied when the module is launched, or (ideally) loaded from a defaults.json file located in the same directory as the application. The defaults.json file is also used to fill in missing keys with setdefault. The problem comes from where the module is launched... ...\Application = python -m application.web.ApplicationServer ...\Application\application = python -m web.ApplicationServer ...\Application\application\web = python ApplicationServer.py ....read as, "If I'm in the folder, I type this to launch the server." How can I determine where the program was launched from (possibly using os.getcwd()) to determine what file path to pass to json.load(open(<path>), 'r+b')) such that it will always succeed? Thanks. Note: I strongly prefer to get a best practices answer, as I can "hack" a solution together already -- I just don't think my way is the best way. Thanks!
0
python,configuration
2014-07-23T15:20:00.000
0
24,914,489
If you want the path to the file that contains your code, regardless of where it was launched from, it is stored in the module's __file__ attribute, which can be used if you don't want your module(s) to be installed with a setup.py/distutils scheme and want your code + configs contained in one location. So codeDir = os.path.dirname(os.path.abspath(__file__)) should always work. If you want to make an installer, I would say it is customary to place the code in one place and things like configs somewhere else, and that depends on your OS. On Linux, one common place is /home/user/.local/share/yourmodule or just directly under /home/user/.yourmodule. Windows has a similar place for app data. For both, os.environ['HOME'] / os.getenv('HOME') is a good starting point, and then you should probably detect the OS and place your stuff in the expected location with a nice folder name. I can't swear that these are best practices, but they seem rather hack-free at least.
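A small sketch tying that to the defaults.json from the question; the fallback logic and function name are illustrative, not a prescribed design:

    import json
    import os

    def load_config(explicit_path=None):
        """Load the supplied config, or defaults.json next to this module."""
        if explicit_path is None:
            here = os.path.dirname(os.path.abspath(__file__))
            explicit_path = os.path.join(here, 'defaults.json')
        with open(explicit_path, 'r') as f:
            return json.load(f)

Because the path is anchored to __file__ rather than os.getcwd(), all three launch styles in the question resolve to the same file.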
0
241
true
1
1
Best way to find the location of a config file based from launching directory?
24,915,117
1
3
0
0
3
0
0
0
I wrote a python script that uses win32com.client.Dispatch("Outlook.Application") to send automated emails through outlook. If I run the script myself everything works perfectly fine. But if I run it through Window's task scheduler it doesn't send the emails. Just to check if I am running the script properly I made the script output a random text file and that works but email doesn't. Why?
0
python,outlook,pywin32
2014-07-24T06:40:00.000
1
24,926,733
My similar issue has been cleared up. I used task scheduler to call a python script (via batch file) that has the pywin32com module. The python code opens excel and calls a macro. It will run fine from python, cmd and the batch file, but wasn't working when ran through task scheduler. It traced back to errors like: "EnsureDispatch disp = win32com.client.Dispatch(prog_id)" As noted on this thread, I changed the option to "Run only when user is logged on" and it ran successfully! The only drawback is that I schedule the task for a time that I'm away from the computer. I suppose I just have to not log off and hope that the cpu doesn't go into sleep mode, but that's not really a big deal in this case.
0
3,507
false
0
1
Sending automated email using Pywin32 & outlook in a python script works but when automating it through windows task scheduler doesn't work
54,351,677
2
3
0
7
10
1
1
0
Assume there is no particular memory-optimization problem in the script, so my question is about Python coding style. That also means: is it good and common Python practice to dereference an object as soon as possible? The scenario is as follows. Class A instantiates an object as self.foo and asks a second class B to store and share it with other objects. At a certain point A decides that self.foo should not be shared anymore and removes it from B. Class A still has a reference to foo, but we know this object to be useless from now on. As foo is a relatively big object, would you bother to delete the reference from A, and how? (e.g. del vs setting self.foo = None) How does this decision influence the garbage collector?
0
python,garbage-collection
2014-07-24T16:03:00.000
0
24,938,729
If, after deleting the attribute, the concept of accessing the attribute and seeing if it's set or not doesn't even make sense, use del. If, after deleting the attribute, something in your program may want to check that space and see if anything's there, use = None. The garbage collector won't care either way.
0
4,089
false
0
1
Should I delete large object when finished to use them in python?
24,938,815
2
3
0
1
10
1
0.066568
0
Assume there is no particular memory-optimization problem in the script, so my question is about Python coding style. That also means: is it good and common Python practice to dereference an object as soon as possible? The scenario is as follows. Class A instantiates an object as self.foo and asks a second class B to store and share it with other objects. At a certain point A decides that self.foo should not be shared anymore and removes it from B. Class A still has a reference to foo, but we know this object to be useless from now on. As foo is a relatively big object, would you bother to delete the reference from A, and how? (e.g. del vs setting self.foo = None) How does this decision influence the garbage collector?
0
python,garbage-collection
2014-07-24T16:03:00.000
0
24,938,729
So far, in my experience with Python, I haven't had any problems with garbage collection. However, I do take precautions, not only because I don't want to bother with any unreferenced objects, but also for organization reasons as well. To answer your questions specifically: 1) Yes, I would recommend deleting the object. This will keep your code from getting bulky and/or slow. This is an especially good decision if you have a long run-time for your code, even though Python is pretty good about garbage collection. 2) Either way works fine, although I would use del just for the sake of removing the actual reference itself. 3) I don't know how it "influences the garbage collector" but it's always better to be safe than sorry.
0
4,089
false
0
1
Should I delete large object when finished to use them in python?
24,938,858
1
2
0
0
1
1
0
0
My python script runs with several imports. On some systems where it needs to run, some of those modules may not be installed. Is there a way to distribute a standalone script that will automagically work? Perhaps by just including all of those imports in the script itself?
0
python
2014-07-24T16:57:00.000
0
24,939,723
Including all necessary modules in a standalone script is probably extremely tricky and not nice. However, you can distribute modules along with your script (by distributing an archive for example). Most modules will work if they are just installed in the same folder as your script instead of the usual site-packages. According to the sys.path order, the system's module will be loaded in preference to the one you ship, but if it doesn't exist the later will be imported transparently. You can also bundle the dependencies in a zip and add that zip to the path, if you think that approach is cleaner. However, some modules cannot be that flexible. One example are extensions that must first be compiled (like C extensions) and are thus bound to the platform. IMHO, the cleanest solution is still to package your script properly using distutils and proper dependency definition and write some installation routine that installs missing dependencies from your bundle or using pip.
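A minimal sketch of the ship-the-dependencies approach described above; deps.zip, vendor and the commented-out module name are placeholders for whatever you actually bundle next to the script:

    import os
    import sys

    # Look for bundled dependencies shipped alongside this script first.
    here = os.path.dirname(os.path.abspath(__file__))
    sys.path.insert(0, os.path.join(here, 'deps.zip'))   # zipped pure-Python packages
    sys.path.insert(0, os.path.join(here, 'vendor'))     # or an unpacked folder

    # import some_bundled_module   # now resolved from the bundle if not installed

As noted above, this only works for pure-Python dependencies; compiled extensions still have to match the target platform.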
0
686
false
0
1
Is it possible to bundle all import requirements with a python script?
24,939,902
2
2
0
2
3
1
0.197375
0
I'm just curious here, but I have been using bytes() to convert things to bytes ever since I learned Python. It wasn't until recently that I saw struct.pack(). I didn't bother learning how to use it because I thought it did essentially the same thing as bytes(). But it appears many people prefer to use struct.pack(). Why? What are the advantages of one over the other?
0
python,python-3.x
2014-07-24T21:51:00.000
0
24,944,626
They do two different things; compare bytes(1234) with struct.pack("!H", 1234). In Python 3, the first gives a bytes object containing 1,234 null bytes; the second gives a two-byte string holding the (big-endian) value of the integer.
0
1,956
false
0
1
Python-bytes() vs struct.pack()
24,944,710
2
2
0
3
3
1
1.2
0
I'm just curious here, but I have been using bytes() to convert things to bytes ever since I learned Python. It wasn't until recently that I saw struct.pack(). I didn't bother learning how to use it because I thought it did essentially the same thing as bytes(). But it appears many people prefer to use struct.pack(). Why? What are the advantages of one over the other?
0
python,python-3.x
2014-07-24T21:51:00.000
0
24,944,626
bytes() does literally what the name implies: Return a new "bytes" object, which is an immutable sequence of integers in the range 0 <= x < 256. struct.pack() does something very different: This module performs conversions between Python values and C structs represented as Python strings. While for some inputs these might be equivalent, they are not at all the same operation. struct.pack() is essentially producing a byte-string that represents a POD C-struct in memory. It's useful for serializing/deserializing data.
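A short Python 3 illustration of the difference discussed in both answers:

    import struct

    print(bytes(4))                            # b'\x00\x00\x00\x00' - 4 zero bytes
    print(struct.pack('!H', 1234))             # b'\x04\xd2' - 1234 as big-endian uint16
    print(struct.unpack('!H', b'\x04\xd2'))    # (1234,) - and back again

bytes(n) with an integer just allocates n zero bytes, whereas struct.pack encodes the value itself according to the format string, which is what you want for wire formats and binary files.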
0
1,956
true
0
1
Python-bytes() vs struct.pack()
24,944,749
1
2
0
0
0
0
0
0
Eclipse can run a Python project rather than just one .py file. Is it possible to run an entire project from a Python 3.x shell? I looked into it a little, but I didn't really find a way. I tried just running the .py file containing the main using exec(open('bla/blah/projMain.py')) like you would any Python file. All of my modules (including the main) are in one package, but when I ran the main I got "no module named 'blah'" (the package it is in). Also, as a side note, there is in fact an __init__.py and even a __pycache__ directory. Maybe I didn't structure it correctly with Eclipse (or rather maybe Eclipse didn't structure it properly), but Eclipse can run it, so how can I with a Python 3.4.1 shell? Do I have to put something in __init__.py, perhaps, and then run that file?
0
python,eclipse,shell,python-3.x,project
2014-07-25T01:44:00.000
1
24,946,778
Based on the current information, I would suggest running it this way on OS X: 1) bring up the Terminal app, 2) cd to the location where bla lives, 3) run python bla/blah/projMain.py. Show us the stack trace if the above fails.
0
880
false
0
1
Run Python project from shell
24,947,778
1
1
0
1
1
0
0.197375
0
Usually I use my .bashrc file to load some functions for my bash environment. I call these functions (which I created based on some frameworks I use) and play around with variables such as PATH and PYTHONPATH depending on the environment I'm working in. So far so good in the terminal. The problem is that when I use emacs, these functions and the environment variables that my functions set don't exist: .bashrc is not read by emacs, and therefore the functions loaded by .bashrc don't work. I would like them to work. Any ideas?
0
bash,emacs,path,pythonpath,.bash-profile
2014-07-26T14:51:00.000
1
24,972,238
The issue might be that emacs, like many other programs you run, reads your login shell rc files, such as ~/.bash_login or ~/.profile, but not ~/.bashrc, whereas your terminal also reads your user shell rc file: ~/.bashrc.
0
594
false
0
1
Emacs, bash, bashrc, functions and paths
24,980,728
1
1
0
1
4
0
0.197375
0
I've taken to creative coding on my iPad and iPhone using Codea, Procoding, and Pythonista. I really love the paper.js Javascript library, and I'm wondering how I might have the functionality that I find in paper.js when writing in Python. Specifically, I'd love to have the vector math and path manipulation that paper.js affords. Things like finding the intersection of two paths or binding events to paths (on click, mouse move, etc). There's an ImagePath module provided by Pythonista that does some path stuff but it's not as robust as paper.js (it seems). Any ideas?
0
python,processing,paperjs,codea,pythonista
2014-07-27T09:28:00.000
0
24,979,640
The ui module actually includes a lot of vector drawing functions, inside a ui.ImageContext. ui.ImageContext is a thin wrapper around part of one of the Objective-C APIs (maybe CALayer?) The drawing methods are designed to operate inside the draw method of a custom view class, but you can present these things in other contexts using a UIImageContext, from which you can get a static image.
0
901
false
0
1
A paperjs-equivalent for python (specifically, Pythonista for iOS)?
38,641,689
1
1
0
0
0
0
0
0
I have managed to use oauth authentication and add a Sign in with Twitter functionality to a Google App Engine web app. How should I verify, during the site navigation, if the user is still logged in Twitter?
0
python,google-app-engine,twitter,webapp2
2014-07-27T10:39:00.000
0
24,980,103
Are you talking about being logged in to Twitter.com or your app? If you have received oAuth access tokens by authenticating an app, then logging out of twitter.com won't 'log you out' of any apps, the tokens will remain valid until the user revokes the access.
0
36
false
1
1
Sign in with Twitter: how to verify the current user is stil logged in
25,003,619
1
1
0
2
2
0
1.2
0
I'm busy creating a login system for the admin area of my new personal website. The site backend is written entirely in Python, owing to my knowledge of the language. I have been looking at ways of tracking a user once they log in so that the rest of the site knows they are logged in. I can't get a definitive answer online (unless I'm not looking hard enough or am searching for the wrong thing) as to whether all cookies are the same and accessible from all languages. My tests have proved inconclusive; either they are not, or I am doing it wrong, but some clarification would be appreciated. For example, if I create a cookie in Python with cookie.SimpleCookie() in the http.cookies module, is there a way of loading and accessing the value of this cookie in PHP? Thanks in advance for any help, Ilmiont
0
php,python,cookies,web,login
2014-07-28T06:24:00.000
0
24,989,488
Cookies are just cookies, and browsers don't record (and possibly can't) how they were created. So when you create a cookie using PHP and then want to read the same cookie using any other language that supports cookies, you should be able to do it without a problem. Of course you need to keep the cookie domain and path in mind: they should be set properly if you want to access your cookie without a problem.
0
52
true
0
1
Are all cookies created equal?
24,989,747
1
1
0
2
1
0
0.379949
0
So I've been working on porting a Python tester to PHP, but I'm still fairly new to PHP. I understand there is a session mechanism within PHP, and I've read the documentation as well as other questions here on Stack Overflow that come close, but they are not quite what I'm looking for. So my question is whether there is something similar to sess = requests.Session() from Python in PHP, i.e. is there something I can pass around, just like I did in Python, that works in PHP as well? EDIT: So I've re-read the documentation for both the Python Requests package and PHP sessions. The meat of my question is whether there is a way to have a session object in PHP that holds persistent parameters across POST and GET requests. To explain further, my main problem is that certain POST and GET endpoints require a login, but even after using the login POST first, I still receive a 401 error code afterwards. Example code: $current->httpPost($accountLoginURL, $accountLoginPostData); $current->httpPost($followFriend, $followFriendData); Even after the first line gives me a 200, the second gives me a 401.
0
php,python,session
2014-07-28T18:35:00.000
0
25,001,953
You can assign to and read anything you want from the $_SESSION array. It's just a regular array like any other in PHP, except for two things: a) It's a superglobal b) IF you've called session_start(), then PHP will auto-populate the array from whatever's in session storage (files, db, etc...), and auto-save the contents of the array upon script exit or calling session_write_close().
0
212
false
0
1
Is it possible to pass in the session as a variable like in Python for PHP?
25,001,993
2
2
0
1
0
1
0.099668
0
I have a python script, which has multiple command line options. I want to make this script runnable without having to type "python myscript.py" and without having to be in the same directory as the script. For example, if one installs git on linux, regardless of which directory the user is in, the user can do "git add X, etc..". So, an example input I would like is "myscript -o a,b,c -i" instead of "python myscript.py -o a,b,c -i". I already added "#! /usr/bin/env python" to the top of my script's code, which makes it executable when I type "./myscript", however I don't want the ./, and I want this to work from any directory.
0
python,linux,shell,command,executable
2014-07-28T20:27:00.000
1
25,003,748
You should add the folder that contains the script to your system's $PATH variable (I assume you're on Linux). This variable contains all of the directories that are searched looking for a specific command. You can add to it by typing PATH=/path/to/folder:$PATH. Alternately, you need to move the script into a folder that's already in the $PATH variable (which is generally a better idea than messing with system variables).
0
4,406
false
0
1
Making python script executable from any directory
25,003,823
2
2
0
0
0
1
0
0
I have a python script, which has multiple command line options. I want to make this script runnable without having to type "python myscript.py" and without having to be in the same directory as the script. For example, if one installs git on linux, regardless of which directory the user is in, the user can do "git add X, etc..". So, an example input I would like is "myscript -o a,b,c -i" instead of "python myscript.py -o a,b,c -i". I already added "#! /usr/bin/env python" to the top of my script's code, which makes it executable when I type "./myscript", however I don't want the ./, and I want this to work from any directory.
0
python,linux,shell,command,executable
2014-07-28T20:27:00.000
1
25,003,748
Your script needs to be in a location searchable via your PATH. On Unix/Linux systems, the generally accepted location for locally-produced programs and scripts that are not part of the system is /usr/local/bin. So, make sure your script is executable by running chmod +x myscript, then move it to the right place with sudo mv myscript /usr/local/bin (while in the directory containing myscript). You'll need to enter an admin's password, then you should be all set.
0
4,406
false
0
1
Making python script executable from any directory
25,003,818
1
2
0
0
0
0
0
0
Is it possible to use the antlr4 Python runtime with Python 2.6, or is the minimum required version Python 2.7? I want to use it on CentOS 6.3, which comes with Python 2.6.6. If it is not possible, is it known which features of Python 2.7 are used?
0
python,antlr4
2014-07-29T14:51:00.000
0
25,018,343
The example you used was badly written. input is the name of a built-in Python function. Give the Lexer a FileStream and it might work.
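For concreteness, a hedged sketch of feeding the lexer a FileStream; MyGrammarLexer, MyGrammarParser and startRule are hypothetical names for classes ANTLR would generate from your grammar, and input.txt is a placeholder:

    from antlr4 import FileStream, CommonTokenStream

    # Hypothetical generated classes from your grammar
    from MyGrammarLexer import MyGrammarLexer
    from MyGrammarParser import MyGrammarParser

    stream = FileStream('input.txt')     # instead of passing `input` directly
    lexer = MyGrammarLexer(stream)
    tokens = CommonTokenStream(lexer)
    parser = MyGrammarParser(tokens)
    tree = parser.startRule()            # hypothetical start rule name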
0
805
false
0
1
use antlr4 python runtime with python 2.6
51,549,667
1
1
0
2
3
0
1.2
0
I have a series of unit tests that are meant to run in two contexts: 1) On a buildbot server 2) in developer's home environments In both our development procedure and in the buildbot server we use virtualenv. The tests run fine in the developer environments, but with buildbot the tests are being run from the python executable in the virtualenv without activating the virtualenv. This works out for most tests, but there are a few that shell out to run scripts, and I want them to run the scripts with the virtualenv's python executable. Is there a way to pull the path to the current python executable inside the tests themselves to build the shell commands that way?
0
python
2014-07-29T20:06:00.000
1
25,024,010
The current python executable is always available as sys.executable, which should give full path (but you can ensure this using os.path functions).
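So, in the tests that shell out, the command can be built around sys.executable; the helper script path and flag here are placeholders for whatever your tests actually invoke:

    import subprocess
    import sys

    # Run the helper with the same interpreter (and hence the same virtualenv)
    # as the currently running test process, whether or not it was "activated".
    subprocess.check_call([sys.executable, 'scripts/helper.py', '--verbose'])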
0
313
true
0
1
How can I get the path to the calling python executable
25,026,388
1
6
0
0
29
1
0
0
I have a large project that is unit tested using the Python unittest module. I have one small method that controls large aspects of the system's behaviour. I need this method to return a fixed result when running under the UTs to give consistent test runs, but it would be expensive for me to mock this out for every single UT. Is there a way that I can make this single method, unittest aware, so that it can modify its behaviour when running under the unittest?
0
python,unit-testing
2014-07-29T22:18:00.000
0
25,025,928
I am sure that there are other, better methods, but you could always set a global flag from your main (and not under unit test) and then check it in your method. The other way, of course, would be to override the method as part of the unit test set-up: if your method is called brian and you have a test_brian, then simply setting brian = test_brian during your pre-test setup will do the job; you may need to put module names into the preceding statement.
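A sketch of that override done with unittest's standard tooling; mymodule is a hypothetical module owning the method, and brian and the canned return value of 42 are placeholders (the name brian just follows the answer above). mock.patch.object is the library-supported version of the manual reassignment described:

    import unittest
    from unittest import mock   # on Python 2, the separate 'mock' backport

    import mymodule  # hypothetical module that owns the expensive method

    class BrianTests(unittest.TestCase):
        def setUp(self):
            # Replace the real method with a canned result for every test;
            # the cleanup restores the original afterwards.
            patcher = mock.patch.object(mymodule, 'brian', return_value=42)
            patcher.start()
            self.addCleanup(patcher.stop)

This keeps the production method itself unaware of the test harness, which is usually preferable to making it "unittest aware".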
0
11,612
false
0
1
How can a piece of python code tell if it's running under unittest
25,025,972
1
1
0
1
0
1
1.2
0
So I am supposed to open a text file with >>>yyy('yyy.txt'), and after that input Python should find my file (which it does, since it is in the same directory) and change every occurrence of the word 'hot' to the new phrase 'why not'. After editing the text file, the content of the entire file should be printed. It opens the file and replaces 'hot' with 'why not', but it duplicates the whole text in the text file, and it does not return anything in Python when I need the text to be displayed. Any help?
0
python
2014-07-30T15:50:00.000
0
25,041,311
Because you have already read the whole file, the file pointer is at the end, so file.write() appends to the end of the file. You never clear the file after reading the contents. The simplest thing to do, probably, would be to read the file once in 'r' mode, then open it again in 'w' mode (which will clear the file), and write out the edited content. The output doesn't print because you don't tell it to: calling infile.readlines() on its own just reads the file, then discards the result. The final line should be print infile.readlines().
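A minimal sketch of that read-then-rewrite flow, reusing the yyy name and the 'hot'/'why not' replacement from the question:

    def yyy(path):
        with open(path) as f:
            text = f.read()
        text = text.replace('hot', 'why not')
        with open(path, 'w') as f:   # 'w' truncates, so nothing gets duplicated
            f.write(text)
        print(text)                  # show the edited contents

    # yyy('yyy.txt')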
0
52
true
0
1
Python, opening the file, reading the file, editing the file, and priting the file string?
25,041,459
2
2
0
0
1
0
0
0
I am using mod_wsgi with apache to serve the python application. I have a directive in the VirtualHost entry as follows WSGIScriptAlias /app /home/ubuntu/www/app.wsgi. I also have DocumentRoot /home/ubuntu/www/. Therefore, if the user attempts to read /app.wsgi it gets the raw file. If I try to block access to it via .htaccess, the application becomes unusable. How do I fix this? Is there a way to do so without moving the file out of the DocumentRoot?
0
python,apache,.htaccess,wsgi
2014-07-30T16:33:00.000
1
25,042,205
You should not stick the WSGI file in the DocumentRoot directory in the first place. You have created the situation yourself. It doesn't need to be in that directory for WSGIScriptAlias to work.
0
77
false
1
1
How do I prevent the raw WSGI python file from being read?
25,056,888
2
2
0
0
1
0
0
0
I am using mod_wsgi with apache to serve the python application. I have a directive in the VirtualHost entry as follows WSGIScriptAlias /app /home/ubuntu/www/app.wsgi. I also have DocumentRoot /home/ubuntu/www/. Therefore, if the user attempts to read /app.wsgi it gets the raw file. If I try to block access to it via .htaccess, the application becomes unusable. How do I fix this? Is there a way to do so without moving the file out of the DocumentRoot?
0
python,apache,.htaccess,wsgi
2014-07-30T16:33:00.000
1
25,042,205
This is far from the best option, but it does seem to work: I added WSGIScriptAlias /app.wsgi /home/ubuntu/www/app.wsgi to the VirtualHost as well so that it will run the app on that uri instead of returning the raw file.
0
77
false
1
1
How do I prevent the raw WSGI python file from being read?
25,045,724
2
3
1
-1
0
0
-0.066568
0
Got a bit of weird request here, however it's one which I can't really figure out the answer to. I'm writing a python application that displays web pages and locally stored images. What I need is a way of displaying a web page using python that is really lightweight and quite fast. The reason for this is that it is running on a Raspberry Pi. Of course I have many options, I can run it through web browser installed on the Raspbian distribution and run it as a separate process in python, I can download an Arch-Linux compatible browser and run it as a separate process in python and finally I can write my own native python file using Gtk or PyQt. All of these approaches have their downsides as well as serious overheads. The web browser must also be full screen when I have a web page to display, and minimised when I'm displaying an image. The main issue I have had with Gtk and PyQt is the way they have to be executed on the main thread - which is impossible as it doesn't align with my multithreaded architecture. The downside to using the web browsers that are pre-installed on raspbian, is that from python you lack control and it's slow. And finally, the issue with using an Arch-Linux browser is that it ends up being messy and hard to control. What I would Ideally need is a web browser that loads a web page almost instantaneously, or a multithreaded web browser that can handle multiple instances. This way I can buffer one web page in the background whilst another browser is being displayed. Do you guys have any advice to point me in the right direction? I would've thought that there would be a neat multithreaded python based solution by now, and I would think that's either because no one needs to do what I'm doing (less likely) - or I'm missing something big (more likely)! Any advice would be appreciated. James.
0
python,multithreading,web,browser,python-webbrowser
2014-07-30T20:04:00.000
0
25,045,924
I have written winks-up in Vala. It's small and fast and compiles well on Raspbian. All the code was optimized to reduce memory usage. It isn't perfect, but it's better than nothing.
0
1,135
false
1
1
Lightweight Python Web Browser
31,276,859
2
3
1
0
0
0
0
0
Got a bit of weird request here, however it's one which I can't really figure out the answer to. I'm writing a python application that displays web pages and locally stored images. What I need is a way of displaying a web page using python that is really lightweight and quite fast. The reason for this is that it is running on a Raspberry Pi. Of course I have many options, I can run it through web browser installed on the Raspbian distribution and run it as a separate process in python, I can download an Arch-Linux compatible browser and run it as a separate process in python and finally I can write my own native python file using Gtk or PyQt. All of these approaches have their downsides as well as serious overheads. The web browser must also be full screen when I have a web page to display, and minimised when I'm displaying an image. The main issue I have had with Gtk and PyQt is the way they have to be executed on the main thread - which is impossible as it doesn't align with my multithreaded architecture. The downside to using the web browsers that are pre-installed on raspbian, is that from python you lack control and it's slow. And finally, the issue with using an Arch-Linux browser is that it ends up being messy and hard to control. What I would Ideally need is a web browser that loads a web page almost instantaneously, or a multithreaded web browser that can handle multiple instances. This way I can buffer one web page in the background whilst another browser is being displayed. Do you guys have any advice to point me in the right direction? I would've thought that there would be a neat multithreaded python based solution by now, and I would think that's either because no one needs to do what I'm doing (less likely) - or I'm missing something big (more likely)! Any advice would be appreciated. James.
0
python,multithreading,web,browser,python-webbrowser
2014-07-30T20:04:00.000
0
25,045,924
I'd use PyQt to display the page, but if the way PyQt uses threads does not fit your application, you could just write a minimalist (~10 lines of code) web browser using PyQt and fork it from your main application.
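Roughly what that minimalist browser looks like with PyQt4's QtWebKit (the binding available on Raspbian at the time); the URL is a placeholder, and launching it as a separate process is left to the parent application:

    import sys
    from PyQt4.QtGui import QApplication
    from PyQt4.QtCore import QUrl
    from PyQt4.QtWebKit import QWebView

    app = QApplication(sys.argv)
    view = QWebView()
    view.load(QUrl('http://example.com'))   # placeholder URL
    view.showFullScreen()                   # matches the full-screen requirement
    sys.exit(app.exec_())

Because it runs as its own process, the GUI main-thread restriction stays out of the main application's multithreaded architecture.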
0
1,135
false
1
1
Lightweight Python Web Browser
25,051,761
1
1
0
1
1
1
0.197375
0
I have a project on eclipse and I am wondering why I do not see the package symbol on the folders in the hierarchy. Do I have to choose an option to be able to see the folders appear as package on eclipse? I am using PyDev plugin here..
0
python,eclipse,package
2014-07-31T21:17:00.000
0
25,069,079
A few things spring to mind: Does the project have a pythonpath set? Right-click the project -> properties -> pythonpath. Add the root project directory, or whatever is appropriate for your project. Do your packages contain an __init__.py file? Have you got a Python interpreter configured in PyDev? Is your package explorer tab/window titled "PyDev Package Explorer"? If not, go to Window -> "Show View" -> "PyDev Package Explorer". Do you have the PyDev builder enabled? (PyDev Settings -> Builders)
0
254
false
0
1
How to get package icon in eclipse?
25,069,554
1
1
0
1
1
0
0.197375
0
I have checked out existing code from an SVN repo that uses full imports - by which I mean: -->projectdir -------->dira -------------->a1.py -------------->a2.py -------->dirb -------------->b1.py Suppose a1.py imports a method from a2.py: normally I would simply write: from a2 import xyz. Here they have written it as: from project_dir.dira.a2 import xyz. How do I make Eclipse recognize these imports? Basically I want to be able to Ctrl+click and Open Declaration. I need to browse through this massive project and I simply cannot do so until this works. PS: I have tried adding the projectdir to the PYTHONPATH, I have tried adding each and every sub-directory to the PYTHONPATH, and I have an __init__.py in every folder.
0
python,eclipse,pydev,importerror
2014-08-01T10:46:00.000
0
25,078,549
For that to work you need to have an __init__.py under 'project_dir', 'dira' and 'dirb', and then you need to set as a source folder the directory which is the parent of 'project_dir' (and not project_dir itself) -- and no other directories should be set as source folders. I.e.: the source folder is the directory added to the PYTHONPATH (so, for importing 'project_dir', its parent must be in the PYTHONPATH). Note: you may have to remove the project from Eclipse/PyDev and recreate it one level up for this to work, depending on how you created it the first time.
0
249
false
1
1
Import Error - Pydev Eclipse
25,087,270
1
2
0
3
1
1
1.2
0
Our application is composed of several modules grouped in two packages, plus one script. I am not sure, and haven't managed to google my way to an answer, whether the modules need a coding directive in order to handle our native French accented strings and, while we're at it, a shebang with the interpreter directive, or whether it is enough to put these two lines in the one script.
0
python
2014-08-01T20:15:00.000
0
25,088,006
Scripts need shebangs. Modules don't need shebangs -- they're sometimes used to make text editors' jobs easier (in detecting the programming language), or if a module can be run as a script to execute its tests, but they are not called for otherwise. All Python code, whether in modules or scripts, should have encoding directives when not using the default encoding for the Python version targeted (ASCII for 2.x, UTF-8 for 3.x).
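Concretely, the two lines in question look like this at the top of the entry-point script (modules can keep just the coding line, or neither if they stick to the default encoding):

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    """Entry-point script; accented strings like 'été' are fine with this header."""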
0
98
true
0
1
Do Python modules need shebangs or encoding directives?
25,088,132
1
1
0
1
0
0
1.2
0
I have installed a Python package, slimit, and I have cloned the source code from GitHub. I am making changes to this package in my local folders, which I want to test often, but I don't want to always run python setup.py install. My folder structure is: ../develop/slimit/src/slimit (contains package files) ../develop/test/test.py. I'm using Eclipse + PyDev + Python 2.7, on Linux. Should I run Eclipse with "sudo rights"? Even better, is there a way to import the local development package into my testing script?
0
python
2014-08-02T10:17:00.000
0
25,093,943
When you're working on a library you can use python setup.py develop instead of install. This will install the package into your local environment and keep it updated as you develop. To be clear, if you use develop you don't have to run it again when you change your source files.
0
227
true
0
1
how to test changes to an installed package
25,106,484
1
2
0
0
0
0
0
0
I have a working python3 program (a *.py file). I have a Digital Ocean (DO) droplet with Ubuntu 14.04. My program posts a message to my Twitter account. I just copy my *.py into some directory on the DO droplet and run it over ssh, and all works fine. But I need to post the message (run my program) automatically, every 15-30 min for example. I am a newbie with all this. What should I do? Step-by-step please!
0
python,ubuntu,digital-ocean
2014-08-02T21:54:00.000
1
25,099,749
First install and enable fcron. Then, sudo -s into root and run fcrontab -e. In the editor, enter */30 * * * * /path/to/script.py and save the file. Change 30 to 15 if every 15 minutes is what you're after.
0
492
false
0
1
Run my python3 program on remote Ubuntu server every 30 min
25,100,208
1
2
0
1
0
0
1.2
0
I am trying to run a cron script in python 3 so I had to setup a virtual environment (if there is an easier way, please let me know) and in order to run the script I need to be in the script's parent folder as it writes to text files there. So here is the long string of commands I have come up with and it works in console but does not work in cron (or I can't find the output..) I can't type the 5 asterisks without it turning into bullet points.. but I have them in the cron tab. cd usr/local/sbin/cronjobs && . virtualenv/secret_ciphers/bin/activate && cd csgostatsbot && python3 CSGO_STATS_BOT_TASK.py && deactivate
0
python,linux,unix,cron,raspberry-pi
2014-08-04T01:23:00.000
1
25,110,635
It looks like you may have a stray . in there that would likely cause an error in the command chain. Try this: cd usr/local/sbin/cronjobs && virtualenv/secret_ciphers/bin/activate && cd csgostatsbot && python3 CSGO_STATS_BOT_TASK.py && deactivate Assuming that the virtualenv directory is in the cronjobs directory. Also, you may want to skip the activate/deactivate, and simply run the python3 interpreter right out of the virtualenv. i.e. /usr/local/sbin/cronjobs/virtualenv/secret_ciphers/bin/python3 /usr/local/sbin/cronjobs/csgostatsbot/CSGO_STATS_BOT_TASK.py Edit in response to comments from OP: The activate call is what activates the virtualenv. Not sure what the . would do aside from cause shell command parsing issues. Both examples involve the use of the virtualenv. You don't need to explicitly call activate. As long as you invoke the interpreter out of the virtualenv's directory, you're using the virtualenv. activate is essentially a convenience method that tweaks your PATH to make python3 and other bin files refer to the virtualenv's directory instead of the system install. 2nd Edit in response to add'l comment from OP: You should redirect stderr, i.e.: /usr/local/sbin/cronjobs/virtualenv/secret_ciphers/bin/python3 /usr/local/sbin/cronjobs/csgostatsbot/CSGO_STATS_BOT_TASK.py > /tmp/botlog.log 2>&1 And see if that yields any additional info. Also, 5 asterisks in cron will run the script every minute 24/7/365. Is that really what you want? 3rd Edit in response to add'l comment from OP: If you want it to always be running, I'm not sure you really want to use cron. Even with 5 asterisks, it will run it once per minute. That means it's not always running. It runs once per minute, and if it takes longer than a minute to run, you could get multiple copies running (which may or may not be an issue, depending on your code), and if it runs really quickly, say in a couple seconds, you'll have the rest of the minute to wait before it runs again. It sounds like you want the script to essentially be a daemon. That is, just run the main script in a while (True) loop, and then just launch it once. Then you can quit it via <crtl>+c, else it just perpetually runs.
0
822
true
0
1
how to write a multi-command cronjob on a raspberry pi or any other unix system
25,110,706
1
5
0
3
4
1
1.2
0
I've read a number of SO threads about why Python doesn't have truly private variables, and I understand it for most applications. Here's my problem: I'm creating a class project. In this class project, students design an agent that takes a multiple choice test. We want to be able to grade the agents' answers immediately so that the agents can learn from their wrong answers to previous problems. So, we need the correct answer to each problem to be stored in the program. Students are running these projects on their local machine, so they can see all the tester's code. They can't modify it -- we overwrite any of their changes to the tester code when they turn it in. They just turn in a new file representing their agent. In Java, this is straightforward. In the Problem class, there is a private correctAnswer variable. correctAnswer always stores the correct answer to the problem. Agents can only read correctAnswer through a checkAnswer method, and in order to call checkAnswer, agents must actually give an answer to the question that cannot be changed later. I need to recreate this behavior in Python, and so far I'm at a loss. It seems that no matter where in the program correctAnswer is stored, agents can access it -- I'm familiar with the underscore convention, but in this problem, I need agents to be incapable of accessing the right answer. The only thing I can think of is to name correctAnswer something different when we're testing students' code so that their agents can't anticipate what it will be called, but that's an inelegant solution. Any suggestions? It's acceptable for agents to be able to read correctAnswer so long as we can detect when they read it (so we can set the 'agent's answer' variable to read-only after that... although then I'll need a way to do that, too, oof).
0
python,private
2014-08-05T02:34:00.000
0
25,130,345
You can change the name of the correctAnswer attribute in the code that you overwrite the tester with. This will instantly break any solution that relies on the name of correctAnswer. Furthermore, you could run both versions of the tester and diff the outputs. If there is a difference, then their code relies on the attribute name in some way.
0
120
true
0
1
Recreating "private" class variables in Python
25,130,467
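To make the Java-style pattern from the question concrete, here is a hedged Python sketch of a Problem class whose check_answer method locks in the agent's first answer; all names are illustrative, and the underscore prefix is only a convention, so a determined agent can still read _correct_answer:

class Problem:
    def __init__(self, prompt, correct_answer):
        self.prompt = prompt
        self._correct_answer = correct_answer
        self._given_answer = None          # locked in on the first check

    def check_answer(self, answer):
        # record the agent's first answer, then report whether it was right
        if self._given_answer is None:
            self._given_answer = answer    # later calls cannot change it
        return self._given_answer == self._correct_answer

problem = Problem("2 + 2 = ?", "4")
print(problem.check_answer("3"))   # False, and "3" is now locked in
print(problem.check_answer("4"))   # still False: the first answer stands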
1
2
0
27
24
0
1.2
0
Previously my testing library of choice was unittest. It worked with my favourite debugger - Pudb. Not Pdb!!! To use Pudb with unittest, I pasted import pudb;pudb.set_trace() between the lines of code. I then executed python -m unittest my_file_test, where my_file_test is the module representation of the my_file_test.py file. Simply using nosetests my_file_test.py won't work - AttributeError: StringIO instance has no attribute 'fileno' will be thrown. With py.test neither py.test my_file_test.py nor python -m pytest my_file_test.py works; both throw ValueError: redirected Stdin is pseudofile, has no fileno(). Any ideas about how to use Pudb with py.test?
0
python,pytest,pudb
2014-08-07T12:39:00.000
1
25,182,812
Simply adding the -s flag stops pytest from capturing stdin and stdout, so the debugger becomes accessible; i.e. pytest -s my_file_test.py will do the trick. In the documentation provided by ambi it is also said that regular pdb previously required passing -s explicitly too; now the -s flag is used implicitly with the --pdb flag. However, pytest does not implicitly support pudb, so setting -s is still needed. (A small test-file sketch follows below.)
0
4,263
true
0
1
Using Python pudb debugger with pytest
25,183,130
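As a hedged illustration of the answer above, a hypothetical my_file_test.py might look like the following; the compute function is a stand-in, and the file is meant to be run with pytest -s my_file_test.py so that pudb's console UI is not captured:

import pudb

def compute(x):
    return x * 2          # stand-in for the real code under test

def test_compute():
    pudb.set_trace()      # drops into the pudb debugger when -s is passed
    assert compute(3) == 6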
1
1
0
1
0
0
0.197375
0
Here is the problem: I have several python packages whose unittests require access to different online services in order to run, like connecting to a postgresql database or an LDAP/AD server. In many cases these are not going to execute successfully because the local network is fire-walled, allowing only basic outgoing traffic on ports like 22, 80, 443, 8080 and 8443. I know that the first thing coming to your mind is: build a VPN. This is not the solution I am looking for, and that's due to two important issues: it will affect other software running on the same machine, probably breaking it. Another solution I had in mind was SSH port forwarding, which I have used successfully, but it is very hard to configure and, worse, it requires me to re-configure the addresses the python script is trying to connect to, and I do not want to go this way. I am looking for a solution that could work like this: detect if there is a firewall preventing your access, set up the bypassing strategy (proxy?), run the script, restore settings. Is there a way to set this up in a transparent way, one that would not require me to make changes to the executed script/app?
0
python,unit-testing,firewall-access
2014-08-08T09:03:00.000
1
25,199,709
Is it possible to have the script itself run through these steps? By this I mean: have the setup phase of your unit tests probe for a firewall and, if one is detected, dynamically set up a proxy somehow, use it to run the unit tests, then tear down the proxy when done. That seems like it would achieve the transparency you're aiming for.
0
352
false
0
1
Transparent solution for bypassing local outgoing firewalls for python scripts
25,200,050
1
2
0
1
1
0
0.099668
0
I am making something which involves pycurl. Since pycurl depends on libcurl, I was reading through libcurl's documentation and came across the Multi interface, where you can perform several transfers using a single multi object. Is this faster or more memory-efficient than having multiple easy interfaces? I was wondering what the advantage of this approach is, since the site only says, "Enable multiple simultaneous transfers in the same thread without making it complicated for the application."
0
python,curl,pycurl
2014-08-10T12:31:00.000
0
25,228,637
Having multiple easy interfaces running concurrently in the same thread means building your own reactor and driving curl at a lower level. That's painful in C, and just as painful in Python, which is why libcurl offers, and recommends, multi. But that "in the same thread" is key here. You can also create a pool of threads and throw the easy instances into that. In C, that can still be painful; in Python, it's dead simple. In fact, the first example in the docs for using a concurrent.futures.ThreadPoolExecutor does something similar, but actually more complicated than you need here, and it's still just a few lines of code. If you're comparing multi vs. easy with a manual reactor, the simplicity is the main benefit. In C, you could easily implement a more efficient reactor than the one libcurl uses; in Python, that may or may not be true. But in either language, the performance cost of switching among a handful of network requests is going to be so tiny compared to everything else you're doing—especially waiting for those network requests—that it's unlikely to ever matter. If you're comparing multi vs. easy with a thread pool, then a reactor can definitely outperform threads (except on platforms where you can tie a thread pool to a proactor, as with Windows I/O completion ports), especially for huge numbers of concurrent connections. Also, each thread needs its own stack, which typically means about 1MB of memory pages allocated (although not all of them used), which can be a serious problem in 32-bit land for huge numbers of connections. That's why very few serious servers use threads for connections. But in a client making a handful of connections, none of this matters; again, the costs incurred by wasting 8 threads vs. using a reactor will be so small compared to the real costs of your program that they won't matter.
0
1,906
false
0
1
Is the Multi interface in curl faster or more efficient than using multiple easy interfaces?
25,228,720
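The answer above points at concurrent.futures.ThreadPoolExecutor as the simple route in Python; the following is a hedged sketch of that "pool of threads plus easy handles" idea, with placeholder URLs and no error handling:

from concurrent.futures import ThreadPoolExecutor
from io import BytesIO
import pycurl

def fetch(url):
    buf = BytesIO()
    c = pycurl.Curl()                  # one easy handle per transfer
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEDATA, buf)    # collect the response body in memory
    c.perform()                        # blocking transfer, runs in a worker thread
    c.close()
    return url, len(buf.getvalue())

urls = ["http://example.com/a", "http://example.com/b", "http://example.com/c"]
with ThreadPoolExecutor(max_workers=8) as pool:
    for url, size in pool.map(fetch, urls):
        print(url, size, "bytes")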
1
1
0
4
2
0
1.2
0
If I run a Python script (on Linux) which reads in a file (e.g. with open(inputfile) as infi:), will the file be in danger when I abort the script by pressing Ctrl-C?
0
python,file-io,abort
2014-08-10T14:43:00.000
1
25,229,703
Probably not. Python will release the file handle when the script stops running. Also, you typically only have to worry about corrupting a file when you kill a script that is writing to the file, since it could be interrupted mid-write. (A short sketch follows below.)
0
662
true
0
1
Will aborting a Python script corrupt file which is open for read?
25,229,783
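As a small, hypothetical illustration of the answer above (the filename is a placeholder), a read-only with-block closes the file whether the loop finishes or is interrupted by Ctrl-C, and since nothing is written the file's contents are untouched:

try:
    with open("inputfile.txt") as infi:
        for line in infi:
            pass                      # pretend to process each line
except KeyboardInterrupt:
    print("aborted by the user")
# the with-block has closed the file handle by this point either way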
1
2
0
-1
0
1
-0.099668
0
For example: EAX = 10101010 00001110 11001010 00100000. I want to shift EAX's high byte of AX (that is, AH) right by 7; what can I do in C or in Python? In asm: SHR ah,7. The result in EAX is: 10101010 00001110 00000001 00100000. And how about SHR ax,7? I have tried ((EAX & 0xff00) >> 8) >> 7, but I don't know how to add it back to EAX.
0
python,c,assembly,bit
2014-08-11T02:17:00.000
0
25,235,040
You can store the bits you want to keep unchanged in another variable and make those bits 0 in EAX, then do the shift right by 7, mask off anything that spilled out of the AH byte, and add the saved bits back to it. (A hedged Python sketch follows below.)
0
384
false
0
1
how to move a number's high 8 bits 7 times in c or python?
25,236,304
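Here is a hedged Python sketch of the masking approach, built from the example value in the question; it assumes AH means bits 8-15 of EAX and adds the extra mask needed so the shifted bits stay inside the AH byte:

eax = 0xAA0ECA20                      # 10101010 00001110 11001010 00100000

# SHR ah,7: extract AH, shift it, and merge it back into EAX
ah = (eax >> 8) & 0xFF                # pull out AH            -> 11001010
ah >>= 7                              # shift right by 7       -> 00000001
eax = (eax & 0xFFFF00FF) | (ah << 8)  # clear old AH, put the shifted byte back
print(format(eax, "032b"))            # 10101010000011100000000100100000

# SHR ax,7 works the same way on the low 16 bits
eax2 = 0xAA0ECA20
ax = (eax2 & 0xFFFF) >> 7             # shift the whole AX word right by 7
eax2 = (eax2 & 0xFFFF0000) | ax
print(format(eax2, "032b"))           # 10101010000011100000000110010100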
1
1
0
2
0
1
1.2
0
I have been trying to import modules in my Python Coderbyte challenges, but to no avail. I noticed that the C++ challenges allow includes, so I've been relying on includes for the C++ challenges. My question is: is there a way to successfully use other modules for challenges written in Python on Coderbyte?
0
python
2014-08-11T18:38:00.000
0
25,250,164
Coderbyte only has the standard Python 2.7.2. There is no way to import a package they have not set up for you to use in their environment.
0
552
true
0
1
CoderByte Python import statements
25,250,281
2
3
0
0
0
0
0
1
I may be being ignorant here, but I have been researching for the past 30 minutes and have not found how to do this. I was uploading a bunch of files to my server and, prior to them all finishing, I edited one of those files. How can I update the file on the server to match the file on my local computer? Bonus points if you tell me how I can link the file on my local computer to auto-update on the server when I connect (if possible, of course).
0
python,ssh
2014-08-12T13:08:00.000
0
25,265,148
Have you considered Dropbox or SVN?
0
2,518
false
0
1
update file on ssh and link to local computer
25,265,180
2
3
0
0
0
0
0
1
I may be being ignorant here, but I have been researching for the past 30 minutes and have not found how to do this. I was uploading a bunch of files to my server and, prior to them all finishing, I edited one of those files. How can I update the file on the server to match the file on my local computer? Bonus points if you tell me how I can link the file on my local computer to auto-update on the server when I connect (if possible, of course).
0
python,ssh
2014-08-12T13:08:00.000
0
25,265,148
I don't know your local computer's OS, but if it is Linux or OS X, you can consider LFTP. This is an FTP client which supports SFTP://. This client has the "mirror" functionality: with a single command you can mirror your local files against a server. Note: what you need is a reverse mirror; in LFTP this is mirror -r.
0
2,518
false
0
1
update file on ssh and link to local computer
25,265,274
1
3
0
0
11
1
0
0
I cannot find how to write an empty Python struct/dictionary in PHP. When I wrote "{}" in PHP, it gave me an error. What is the equivalent PHP programming structure to Python's dictionary?
0
php,python,struct
2014-08-14T22:09:00.000
0
25,318,344
If you are trying to pass a null value from PHP to a Python dictionary, you need to use an empty object rather than an empty array. You can define a new and empty object like $x = new stdClass();
0
6,548
false
0
1
What is the equivalent php structure to python's dictionary?
35,089,240
1
2
0
2
1
1
0.197375
0
I've recently started learning Lua. The only other programming language I have some experience in is Python. In Python there is the "pass" statement that does nothing. I was wondering what the equivalent (if any) of this would be in Lua.
0
python,lua
2014-08-15T04:48:00.000
0
25,321,391
Leave your conditional empty by doing this: if <condition> then end
0
3,469
false
0
1
Function that does nothing in Lua
25,321,438