Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
30,668,532 | 2015-06-05T14:02:00.000 | 1 | 0 | 0 | 1 | python,32bit-64bit,python-import,cx-oracle | 30,736,055 | 3 | false | 0 | 0 | I had the 32-bit version of the Oracle client installed. Once I installed the 64-bit version it worked fine. | 1 | 4 | 0 | I installed Python 2.7.7 :: Anaconda 2.0.1 (64-bit). Currently I am trying to run the command "import cx_Oracle". I ran easy_install, which successfully added the cx_Oracle egg to the "site-packages" folder of the Anaconda directory, getting the message "Installed c:\fast\anaconda\2.0.1\lib\site-packages\cx_oracle-5.1.3-py2.7-win-amd64".
Now whenever I try the command "import cx_Oracle" in the python terminal I get the error "ImportError: DLL load failed: %1 is not a valid Win32 application". I tried installing the specific 32-bit version of cx_Oracle but it still resulted in the same output "Installed c:\fast\anaconda\2.0.1\lib\site-packages\cx_oracle-5.1.3-py2.7-win-amd64".
Has anyone had success fixing this? | Python Anaconda - "import cx_Oracle" error in command window | 0.066568 | 0 | 0 | 6,717 |
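A DLL load failure like the one in the question is the classic symptom of a 32-/64-bit mismatch between the Python interpreter and a compiled extension or its client libraries, which is what the answer fixed. A quick, hedged diagnostic sketch (not part of the original answer) for checking the bitness of the running interpreter:

```python
import platform
import struct

# Pointer size in bits: 64 on a 64-bit interpreter, 32 on a 32-bit one.
bits = struct.calcsize("P") * 8
print(bits, platform.architecture()[0])
```

The interpreter, the cx_Oracle build, and the Oracle client libraries must all report the same bitness.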
30,669,269 | 2015-06-05T14:36:00.000 | 0 | 0 | 1 | 0 | python,django | 30,669,335 | 1 | false | 1 | 0 | Take the difference between two dates, which is a timedelta, and ask for the days attribute of that timedelta. | 1 | 0 | 0 | How can I subtract one datefield from another and get the result as an integer?
Like 11.05.2015-10.05.2015 will return 1
I tried
entry.start_devation= each.start_on_fact - timedelta(days=data.calendar_start) | Subtract one date from another and get int | 0 | 0 | 0 | 171 |
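Concretely, the answer's approach looks like this (a minimal sketch using plain datetime.date values in place of the asker's model fields):

```python
from datetime import date

start = date(2015, 5, 10)
end = date(2015, 5, 11)

# The difference between two dates is a timedelta; .days is already an int.
deviation = (end - start).days
print(deviation)  # 1
```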
30,669,507 | 2015-06-05T14:48:00.000 | 0 | 0 | 1 | 0 | python,rpm | 30,669,700 | 1 | true | 0 | 0 | There is no way to specify a custom .spec file when building an SRPM with setuptools. Either create a source tarball and build the SRPM using the custom .spec, or create a source package, extract it, modify the .spec file, and rebuild it. | 1 | 0 | 0 | Is there any way to use my own spec file when using setuptools and setup.py to create an RPM? I would like to find a way to install the scripts in my setup.py directory to /etc/init.d instead of the usual place where they are installed by RPM. I can't seem to find a command to let me use a custom spec file instead of the one setup.py generates.
30,669,568 | 2015-06-05T14:51:00.000 | 1 | 0 | 0 | 0 | python,sql,django,git,workflow | 30,669,779 | 4 | false | 1 | 0 | You should track migrations. The only thing that you must keep an eye out for is at branch merge. If everyone uses a feature branch and develops on his branch then the changes are applied once the branch is integrated. At that point (pull request time or integration time) you need to make sure that the migrations make sense and if not fix them. | 2 | 1 | 0 | Suppose you write a Django website and use git to manage the source code. Your website has various instances (one for each developer, at least).
When you perform a change on the model in a commit, everybody needs to update its own database. In some cases it is enough to run python manage.py migrate, in some other cases you need to run a few custom SQL queries and/or run some Python code to update values at various places.
How to automate this? Is there a clean way to bundle these "model updates" (for instance small shell scripts that do the appropriate actions) in the associated commits? I have thought about using git hooks for that, but as the code to be run changes over time, it is not clear to me how to use them for that purpose. | How to track Django model changes with git? | 0.049958 | 0 | 0 | 216 |
30,669,568 | 2015-06-05T14:51:00.000 | 4 | 0 | 0 | 0 | python,sql,django,git,workflow | 30,669,896 | 4 | true | 1 | 0 | All changes to models should be in migrations. If you "need to run a few custom SQL queries and/or run some Python code to update values" then those are migrations too, and should be written in a migration file. | 2 | 1 | 0 | Suppose you write a Django website and use git to manage the source code. Your website has various instances (one for each developer, at least).
When you perform a change on the model in a commit, everybody needs to update its own database. In some cases it is enough to run python manage.py migrate, in some other cases you need to run a few custom SQL queries and/or run some Python code to update values at various places.
How to automate this? Is there a clean way to bundle these "model updates" (for instance small shell scripts that do the appropriate actions) in the associated commits? I have thought about using git hooks for that, but as the code to be run changes over time, it is not clear to me how to use them for that purpose. | How to track Django model changes with git? | 1.2 | 0 | 0 | 216 |
30,672,259 | 2015-06-05T17:15:00.000 | 2 | 0 | 0 | 0 | python,sqlalchemy,eve | 30,680,531 | 1 | true | 0 | 0 | It's quite simple you just do:
http://127.0.0.1:5000/people?where={"city":"XX", "pop":"<1000"} | 1 | 1 | 0 | I can do http://127.0.0.1:5000/people?where={"lastname":"like(\"Smi%\")"} to get people.lastname LIKE "Smi%"
How do I concatenate two conditions, like where city=XX and pop<1000? | Eve SQLAlchemy query catenation | 1.2 | 1 | 0 | 113 |
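For completeness, the where document is plain JSON in the query string, so it can be built programmatically. A hedged sketch using only the standard library; the endpoint and field names come from the question and are not verified against a running Eve instance:

```python
import json
from urllib.parse import urlencode

# Multiple clauses inside one where document are combined with AND by Eve.
conditions = {"city": "XX", "pop": "<1000"}
query = urlencode({"where": json.dumps(conditions)})
url = "http://127.0.0.1:5000/people?" + query
print(url)
```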
30,674,548 | 2015-06-05T19:42:00.000 | 0 | 0 | 1 | 0 | python,travis-ci | 70,156,533 | 2 | false | 0 | 0 | The accepted answer is not complete if the extra-index-url contains a username and password and one does not want their credentials in the .travis.yml.
If you happen to have a private, password-protected PyPI running, you can use environment variables set on Travis to store the password and refer to them in the .travis.yml. | 1 | 4 | 0 | I want to set up Travis CI so that it can find Python dependencies in our own PyPI server.
I know I can put the --extra-index-url option into the requirements.txt file, but I would rather not hardcode the PyPI URL there; I'd prefer to keep requirements.txt generic and specify the PyPI URL just for Travis. Is this possible? | Provide alternate PyPI URL for Travis CI to install dependencies from? | 0 | 0 | 0 | 236 |
30,674,560 | 2015-06-05T19:43:00.000 | 1 | 0 | 1 | 0 | python,dictionary | 30,675,106 | 4 | true | 0 | 0 | Python can do pretty much anything in response to item access since any class can redefine __getitem__ (and, for dict subclasses, __missing__). If the documentation doesn't cover it, there is no well-defined way to discover what "hidden keys" are available in any given object, short of inspecting the source code. | 1 | 5 | 0 | I have a dictionary that is passed to me from a function whose code I do not have access to. In this dictionary there is a key called 'time'. I can print d['time'] and it prints the value I expect. However, when I iterate through the dictionary, this key is skipped. Also d.keys() does not include it. If it matters, the other keys are numerical.
How would I recreate this? How do you see hidden keys without knowing the name? Can this be undone?
print type(d) returns <type 'dict'> | Hidden Dictionary Key | 1.2 | 0 | 0 | 2,538 |
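To illustrate the answer's point, here is a minimal sketch of a dict subclass whose __missing__ hook serves a key that never appears in keys(). (Note it does not fully explain the asker's observation, since type() would report the subclass rather than dict.)

```python
class HiddenKeyDict(dict):
    def __missing__(self, key):
        # Called by dict.__getitem__ only when the key is absent.
        if key == "time":
            return "12:00"
        raise KeyError(key)

d = HiddenKeyDict({1: "a", 2: "b"})
print(d["time"])           # works
print("time" in d.keys())  # False -- the key is invisible to iteration
```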
30,674,893 | 2015-06-05T20:03:00.000 | 0 | 0 | 0 | 1 | python,zlib,centos5 | 30,797,283 | 1 | false | 0 | 0 | I thought guys in here are much more active.
anyway it ended up that I contacted my server company and they checked the hard disk and said that there is physical fault with the hard disk. I backed up my data on another server. they changed the hard disks and installed a fresh OS.
now every thing's fine | 1 | 0 | 0 | I faced "segmentation fault error" with "yum". Then I realised that there's a problem with zlib, but I messed it when trying to fix it. Now I need to reinstall zlib. since "yum" is no longer working I need to reinstall zlib through rpm or wget or some other way.
could any one tell me the step by step procedure of reinstalling zlib? | Reinstall zlib on centos | 0 | 0 | 0 | 79 |
30,677,413 | 2015-06-05T23:48:00.000 | 4 | 0 | 0 | 0 | python,flask,couchdb,ibm-cloud,cloudant | 30,679,043 | 1 | true | 1 | 0 | The main advantage of using a Cloudant/CouchDB library is that you write less code. This can be significant in languages like Java where Rest and JSON handling is very cumbersome. However working with Rest and JSON in python using standard libraries is very easy.
However, the main disadvantages of using a Cloudant/CouchDB library are:
you have less control over the interaction with Cloudant which may make things like session management and http connection pooling much harder.
You don't get to learn the Cloudant API as this is abstracted away from you by the library.
Some libraries allow you do to things which can be problematic for scalability such as py-couchdb's functionality for creating temporary views.
Libraries may not implement the full Cloudant API so you may end up having to make Rest/JSON calls to access these features not implemented by the library. | 1 | 2 | 0 | I am writing an app in Python Flask that makes use of the Python HTTP library Request to interface with Cloudant on Bluemix. It is an easy interface that allows me to directly access the Bluemix VCAP information for Cloudant and of course the Cloudant API. However it does not make use of the CouchDB package, which seems to be the most popular way to inteface to Cloudant.
Are there negatives in staying with Request as I scale up, and if so what would they be?i | Are there any known negatives to using Requests in Flask to interface to Cloudant on Bluemix? | 1.2 | 0 | 0 | 137 |
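The answer claims Rest and JSON handling with Python's standard libraries is easy; as a hedged illustration, building (but not sending) a Cloudant-style document request might look like this. The host and database names are placeholders; in a real Bluemix app they would come from VCAP_SERVICES:

```python
import json
import urllib.request

# Placeholder values -- a real app would read these from VCAP_SERVICES.
base_url = "https://example-account.cloudant.com"
doc = {"type": "reading", "value": 42}

req = urllib.request.Request(
    url=base_url + "/mydb/mydoc",
    data=json.dumps(doc).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",  # CouchDB-style create/update of a named document
)
print(req.get_method(), req.full_url)
```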
30,678,437 | 2015-06-06T02:51:00.000 | 1 | 0 | 1 | 0 | python | 30,678,520 | 3 | false | 0 | 0 | try del type(Q).an_attribute.
type(Q) will return QClass and then you use del with it. | 1 | 4 | 0 | I've a class say QClass and an instance Q of QClass.
I've somehow created an attribute an_attribute at run time in QClass.
How do I delete that an_attribute using del Q.an_attribute?
I know that deleting that attribute from class will make it inaccessible from all of its instances.
Update: Q is exposed to user and they can only go with del Q.an_attribute. I can only change code of Q or QClass. | Delete attribute of a class using its instance | 0.066568 | 0 | 0 | 4,261 |
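The answer deletes the attribute with del type(Q).an_attribute. Since the update says end users will only ever type del Q.an_attribute, one possible sketch (my own, not from the answers) is to intercept that with __delattr__ and forward class-level deletes to the class:

```python
class QClass:
    def __delattr__(self, name):
        # No instance attribute of this name: forward the delete to the class.
        if name in type(self).__dict__ and name not in self.__dict__:
            delattr(type(self), name)
        else:
            super().__delattr__(name)

QClass.an_attribute = "created at run time"
Q = QClass()

del Q.an_attribute                  # the user-facing syntax from the update
print(hasattr(Q, "an_attribute"))   # False -- removed from the class itself
```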
30,680,330 | 2015-06-06T07:48:00.000 | 2 | 0 | 1 | 0 | python,redis,multiprocessing | 30,681,111 | 3 | false | 0 | 0 | An answer has to consist of at least 30 characters so mine is: "yes". | 2 | 2 | 0 | Would it be possible to execute multiple redis pipelines in parallel using the python multiprocessing module to increase redis throughput? | Redis pipelines and python multiprocessing | 0.132549 | 0 | 0 | 2,326 |
30,680,330 | 2015-06-06T07:48:00.000 | 2 | 0 | 1 | 0 | python,redis,multiprocessing | 30,715,561 | 3 | false | 0 | 0 | In order to use it with python's multiprocessing model you will need to create a new connection in each subprocess to ensure each process has it's own connection. Otherwise you can run into contention issues on the client side.
That said, if there are commands you need to run as a transaction you will want to use multi/exec yourself as pipelining is not the same thing and does not call it. The simplest way with py-redis is by setting the transaction flag to True when calling pipeline. But only do this if you really need every other client to wait for that pipeline to finish executing. If you do that you've essentially made your application non-threaded as it works like a lock on the database - all other clients can't operate on the database while a MULTI/EXEC is in play.
If you must use MULTI/EXEC and still want the concurrency you will need to isolate groups of keys on different servers and run a server per connection needing to lock the DB. If your operations are on keys which have overlap in various processes, this will require either accepting the effects of MULTI/EXEC on the overall performance or redesigning the client code to eliminate the contention. | 2 | 2 | 0 | Would it be possible to execute multiple redis pipelines in parallel using the python multiprocessing module to increase redis throughput? | Redis pipelines and python multiprocessing | 0.132549 | 0 | 0 | 2,326 |
30,681,408 | 2015-06-06T09:54:00.000 | -1 | 1 | 0 | 1 | python,cygwin,nose,nosetests | 34,595,846 | 1 | false | 0 | 0 | If you only have a single file with tests, you can launch it like this: nosetests tests.py | 1 | 1 | 0 | I have a set of unit tests files created in python with unittest as the import.
Running nosetests on both the terminal of MacOSX and on the cmd.exe of Windows 7, it finds the tests and runs them.
Trying to execute nosetests under Cygwin does not find any tests to run.
All three cases use the same version of Python (3.4) and the same version of nose(1.3.6). Also, none of the files are marked as executable
I suspect that it is something environmental in Cygwin. Does anyone know what I need to do? | nosetest not finding tests on cygwin | -0.197375 | 0 | 0 | 218 |
30,682,311 | 2015-06-06T11:32:00.000 | 0 | 0 | 1 | 0 | python,sql,arrays,database,nosql | 30,684,782 | 2 | false | 0 | 0 | It seems not so big with numpy arrays, if your integers are 8 bits. a=numpy.ones((17e6,128),uint8) is created in less than a second on my computer. but ones((17e6,128),uint16) is difficult, and ones((17e6,128),uint64) crashed. | 1 | 2 | 1 | I'm working on a project where I have to store about 17 million 128-dimensional integer arrays e.g [1, 2, 1, 0, ..., 2, 6, 4] and I'm trying to figure out what's the best way to do it.
The perfect solution would be one that makes it fast to both store and retrieve the arrays, since I need to access ALL of them to make calculations. With such a vast amount of data, I obviously can't store them all in memory in order to make calculations, so accessing batches of arrays should be as fast as possible.
I'm working in Python.
What do you recommend? Using a DB (SQL vs. NoSQL?), storing it in a text file, or using Python's pickle? | Fastest way to store and retrieve arrays | 0 | 1 | 0 | 1,207 |
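One option the question does not list is NumPy's own binary format plus memory-mapping, which allows reading batches of rows without loading the whole array into RAM. A hedged sketch on a small stand-in array (the real one would be about 17M x 128):

```python
import os
import tempfile

import numpy as np

rows, cols = 1000, 128  # stand-in for 17 million x 128
data = np.random.randint(0, 256, size=(rows, cols), dtype=np.uint8)

path = os.path.join(tempfile.mkdtemp(), "features.npy")
np.save(path, data)               # fast binary dump to disk

# Memory-map for batched access without reading everything into RAM.
mm = np.load(path, mmap_mode="r")
batch = mm[100:200]               # only this slice is actually read
print(batch.shape)
```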
30,683,672 | 2015-06-06T13:53:00.000 | 1 | 1 | 0 | 0 | python,iis,cgi | 30,683,908 | 1 | false | 1 | 0 | When using Windows authentication on IIS, the server variables should contain the username in two variables: AUTH_USER and REMOTE_USER
CGI offers access to all server variables, check your Python docs on how to access them. | 1 | 0 | 0 | I have a very basic CGI based front end hosted on an IIS server.
I'm trying to find the users within my shop that have accessed this site.
All users on the network sign on with their LAN (Windows) credentials and the same session would be used to access the site.
The python getpass module (obviously) returns only the server name so is there a way to find the user names of the visitors to the site?
The stack is Python 2.7 on IIS 8.0, Windows Server 2012 | Python/CGI on IIS - find user ID | 0.197375 | 0 | 0 | 591 |
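In a CGI script the IIS server variables mentioned in the answer arrive as environment variables. A hedged sketch of the lookup (which variable is populated depends on the site's authentication configuration):

```python
import os

# IIS passes server variables to CGI scripts via the environment.
user = os.environ.get("AUTH_USER") or os.environ.get("REMOTE_USER", "anonymous")
print(user)
```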
30,684,113 | 2015-06-06T14:39:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,concurrency | 30,684,683 | 2 | false | 1 | 0 | What you can do is lock the slot for the booking once the payment is initiated. That way you only "lose" availability for a few moments. The lock can be done on the same centralized system that holds the rest of the information. For instance you can scale up the application servers but keep a single entry point to the data source.
You release the lock once the payment is declined or confirm. | 1 | 1 | 0 | I have a Django Model class for a booking slot. Once we have all the auxiliary data and payment is ready to be made, we get a live availability and allocate a booking. This whole last step of the process takes a couple of seconds (mostly waiting on the payment provider to clear). In theory two payments coming through at once could double-book a slot.
That's all handled by a single function Booking.book(). Is there any sane way I can limit it so that only one instance can run at once while others are queued?
The deployment design is initially pretty simple but there could be scale to multiple servers eventually.
What's the proper way of doing this and what are its downsides? | Stopping a Django Model function being run more than once, at once | 0 | 0 | 0 | 49 |
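The answer's "lock the slot once payment is initiated" can be sketched as an atomic compare-and-set in the central database. This is a hedged illustration using sqlite3 as a stand-in store; in Django itself you would express the same UPDATE through the ORM (for example a filtered queryset update or select_for_update):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE slot (id INTEGER PRIMARY KEY, locked INTEGER DEFAULT 0)")
conn.execute("INSERT INTO slot (id) VALUES (1)")
conn.commit()

def try_lock(conn, slot_id):
    # Atomically claim the slot; only one caller can flip locked 0 -> 1.
    cur = conn.execute(
        "UPDATE slot SET locked = 1 WHERE id = ? AND locked = 0", (slot_id,))
    conn.commit()
    return cur.rowcount == 1

print(try_lock(conn, 1))  # True  -- payment may proceed
print(try_lock(conn, 1))  # False -- someone else already holds the slot
```

On payment failure you would flip locked back to 0 to release the slot.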
30,685,632 | 2015-06-06T17:27:00.000 | 1 | 0 | 1 | 1 | python,python-3.x | 30,685,708 | 2 | false | 0 | 0 | 1) sys.stdin is a TextIOWrapper, its purpose is to read text from stdin. The resulting strings will be actual strs. sys.stdin.buffer is a BufferedReader. The lines you get from this will be byte strings
2) They read all the lines from stdin until hitting eof or they hit the limit you give them
3) If you're trying to read a single line, you can use .readline() (note: no s). Otherwise, when interacting with the program on the command line, you'd have to give it the EOF signal (Ctrl+D on *nix)
Is there a reason you are doing this rather than just calling input() to get one text line at a time from stdin? | 1 | 1 | 0 | I have code that uses sys.stdin.readlines().
What is the difference between that and sys.stdin.buffer.readlines()?
What exactly do they do?
If they read lines from the command line, how do I stop reading at a certain point and let the rest of the program proceed? | python - sys.stdin.readlines(), stop reading lines in command line | 0.099668 | 0 | 0 | 2,456 |
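The text-versus-bytes distinction from the answer can be reproduced without a real terminal by wrapping in-memory buffers (a sketch; sys.stdin and sys.stdin.buffer behave the same way):

```python
import io

raw = io.BytesIO(b"first line\nsecond line\n")
text = io.TextIOWrapper(io.BytesIO(b"first line\nsecond line\n"))

byte_lines = raw.readlines()     # like sys.stdin.buffer.readlines(): bytes
one_text_line = text.readline()  # like sys.stdin.readline(): a single str
print(byte_lines)
print(one_text_line)
```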
30,687,178 | 2015-06-06T20:17:00.000 | 2 | 0 | 0 | 0 | python,pyqt | 30,687,563 | 1 | false | 0 | 1 | Finally found the solution after scouring the documentation and trying different options. I think I was looking for something along the lines of "toolBarRow" so I missed it.
The solution is to insert a toolBarBreak. The same way a separator can be added to a toolbar itself, a "break" simply breaks up one of the four areas provided for tool bars: either top, bottom, left, or right. It is added with similar functions to the way separators are added to toolbars, with:
QMainWindow.addToolBarBreak() which adds to the "end" of the toolbar area, which really means the most inward position.
or
QMainWindow.insertToolBarBreak(toolBarBefore) which adds right before the passed in toolbar reference. | 1 | 1 | 0 | I'm adding a simple toolbar to my PyQt application and trying to get the toolbar to start by default in the top position, but in the 2nd row beneath another toolbar.
I have called:
self.addToolBar(Qt.TopToolBarArea, navBar)
This puts the toolbar into the same row as my first toolbar, which is much shorter. Is there a way to force these toolbars to be in separate rows? | PyQt toolbar in 2nd row by default | 0.379949 | 0 | 0 | 714 |
30,688,827 | 2015-06-07T00:21:00.000 | 1 | 0 | 0 | 1 | python-2.7,twisted | 30,689,289 | 1 | false | 0 | 0 | Python data structures can change at runtime, so Eclipse can only guess what methods are available. In the case of twisted.internet.reactor, it is a singleton whose type may change depending on how things are initialized, so it appears to Eclipse as a blank module.
Since PyDev for Eclipse does not provide a way for libraries to tell it that it's wrong about what methods it has detected, if your Python code does not match the subset of Python it can guess correctly about, then you get spurious errors like this. Sorry! If PyDev ever adds a way to override its built-in guessing logic, we will distribute something that says what methods twisted.internet.reactor likely provides. Please file a bug against PyDev for this. | 1 | 1 | 0 | I am working with twisted and...well these two methods keep coming up in Eclipse as undefined. Cannot find any reference to this.
I tried # @UndefinedVariable (which solved the reactor.run() issue I had) but it does not work in this case.
Running Eclipse Kepler on Mac Yosemite with twisted-15.2.1 zope.interface-4.1.2. | twisted.reactor callWhenRunning and callLater undefined? | 0.197375 | 0 | 0 | 436 |
30,688,828 | 2015-06-07T00:21:00.000 | 1 | 0 | 1 | 0 | python,wing-ide | 30,711,755 | 1 | false | 0 | 0 | It is probably due to your code being in a file named numpy.py If you do this then 'import numpy' may import your module and not numpy. This depends on what's on the Python Path and possibly current directory, which probably explains why it works outside of Wing. | 1 | 0 | 1 | I am new to Python and am having trouble loading numpy in Wing IDE. I can load the module and use it fine in the command line but not in Wing IDE. Below is what I am seeing:
code:
import numpy as np
a=np.arange(15)
result:
[evaluate numpy.py]
Traceback (most recent call last):
File "C:\Users[my name]\Documents\Python\practice\numpy.py", line 2, in <module>
builtins.NameError: name 'arange' is not defined
I have also tried to use the help() command:
code:
help(np)
result:
Help on module numpy:
NAME
numpy
FILE
c:\users[my name]\documents\python\practice\numpy.py | Load Modules in Wing IDE | 0.197375 | 0 | 0 | 1,987 |
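A quick way to confirm this kind of shadowing from any interpreter is to check a module's __file__ attribute, which is exactly what the FILE line of help() revealed above. A sketch using the stdlib json module as a stand-in for numpy:

```python
import json  # stand-in: the same check works for numpy

# If __file__ points at a file inside your own project (e.g. practice\numpy.py)
# instead of the installed library, your script is shadowing the module:
# rename your file and delete any stale .pyc next to it.
print(json.__file__)
```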
30,690,901 | 2015-06-07T06:50:00.000 | 14 | 0 | 1 | 0 | python,eclipse,pydev | 30,696,555 | 2 | true | 0 | 0 | Coloring in eclipse is tricky but it can be done:
Window -> Preferences -> PyDev -> Editor, then scroll down in the inset box to "Comments".
Good luck! | 1 | 12 | 0 | I've really looked everywhere for a lead on this.
I'm using Eclipse (Kepler) and Pydev 4.0.0.
The default syntax coloring for Pydev is driving me crazy. But what's driving me more crazy is that I cannot find an obvious source that explains how to adjust syntax coloring.
In specific, what I want to do is simply adjust the color used for # comments. Right now it's a very pale gray on white, and almost invisible to me.
It's easy to see how you can monkey with colors for Java editors, but for editing Python, it just seems that I'm stuck with the hardcoded syntax color choices.
What am I missing? | Eclipse and Pydev - how to edit syntax coloring choices | 1.2 | 0 | 0 | 7,090 |
30,691,591 | 2015-06-07T08:35:00.000 | 0 | 0 | 0 | 0 | python,django | 30,691,728 | 1 | false | 1 | 0 | I don't think you need either of those. A simple DetailView would be easier; just override the post method and do the update there. | 1 | 0 | 0 | I want to create a view having a form, But the form should not show any fields.
In the view I has to be able to confirm/accept the object and thus change the status field of the object.
I guess I can make a simple view inheriting from FormView without creating any input fields, find the object in the dispatch method, and change the status field in the form_valid method.
But I wondered if it's better to use UpdateView since it has already implemented get_object, etc.
I have to use this approach many times, so I want to do it right the first time. | Update object in form view without any fields in Django | 0 | 0 | 0 | 448 |
30,695,739 | 2015-06-07T16:15:00.000 | 1 | 0 | 0 | 0 | django,python-2.7,pycharm | 30,695,913 | 1 | true | 1 | 0 | Try PyCharm preferences.
Project: --> Project Interpreter --> -->
Ensure the correct python environment
Also double check your settings under
Build, Execution, Deployment --> Console --> Python Console | 1 | 1 | 0 | I can run Django shell directly from terminal in PyCharm, but I can't run it from manage.py shortcut (tools -> manage.py task -> shell). But I can do any other operations using this shortcut.
When I try to run it the second way, nothing happens.
I am using Django 1.7 and PyCharm 4.0.2. | Can't open Django shell from PyCharm using manage.py shortcut | 1.2 | 0 | 0 | 3,553 |
30,695,803 | 2015-06-07T16:22:00.000 | 3 | 0 | 1 | 0 | python | 30,695,869 | 2 | false | 0 | 0 | Callback, I believe. Not necessary though. | 1 | 1 | 0 | What does cb in cb_kwargs (one entry in parameter list of python) stand for?
I believe kwargs means keyword arguments, but I have no idea about cb. | What does 'cb' in cb_kwargs stand for? | 0.291313 | 0 | 0 | 992 |
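The cb/cb_kwargs pairing typically shows up in APIs that accept a callback plus the keyword arguments to call it with. A generic sketch (not tied to any particular library):

```python
def run_later(cb, cb_kwargs=None):
    # Invoke the callback (cb) with its stored keyword arguments (cb_kwargs).
    return cb(**(cb_kwargs or {}))

def greet(name, punctuation="!"):
    return "Hello, " + name + punctuation

result = run_later(greet, cb_kwargs={"name": "world"})
print(result)  # Hello, world!
```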
30,698,999 | 2015-06-07T22:13:00.000 | 0 | 0 | 0 | 0 | python,django | 30,699,316 | 1 | false | 1 | 0 | You need to make a new form class that is a subclass of ModelForm. Then the __init__ method of the ModelForm class change one of the things in self.fields after calling the superclass constructor. | 1 | 0 | 0 | I have an IntegerField with choices.
The list of choices consists of 10 different choices. I have different ModelForm using this integerfield.
In some of the modelforms I don't want to display all of the choices.
Can I in the ModelForm reduce the number of available choices? | Reduce number of choices in Django | 0 | 0 | 0 | 64 |
30,699,683 | 2015-06-07T23:56:00.000 | 0 | 0 | 1 | 1 | python,exe,py2exe | 30,699,958 | 2 | false | 0 | 0 | Py2exe is a tool that produces an exe application which can be run without installing the Python interpreter; after packaging you find your exe, the interpreter DLLs, and all the modules in the dist folder. It does not produce an all-in-one exe; use PyInstaller instead. | 1 | 0 | 0 | After I make an exe file, there are many files, such as .pyd files, that my exe depends on.
I want to make a program with only one exe file, which will be handier.
Please help me. | PYTHON py2exe makes too many files.. how do I execute only one .EXE file? | 0 | 0 | 0 | 280 |
30,700,603 | 2015-06-08T02:44:00.000 | 2 | 0 | 1 | 0 | python,brackets | 30,700,684 | 3 | false | 0 | 0 | () parentheses are used for order of operations, or order of evaluation, and are referred to as tuples.
[] brackets are used for lists. List contents can be changed, unlike tuple content.
{} are used to define a dictionary in a "list" called a literal. | 1 | 38 | 0 | I am curious, what do the 3 different brackets mean in Python programming? Not sure if I'm correct about this, but please correct me if I'm wrong:
[] - Normally used for dictionaries, list items
() - Used to identify params
{} - I have no idea what this does...
Or if these brackets can be used for other purposes, any advice is welcomed! Thanks! | Different meanings of brackets in Python | 0.132549 | 0 | 0 | 113,169 |
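A compact illustration of all three bracket types (a sketch expanding on the answer above):

```python
point = (3, 4)         # () -> tuple literal (also grouping: (1 + 2) * 3)
scores = [10, 20, 30]  # [] -> list literal; also indexing: scores[0]
ages = {"bob": 31}     # {} -> dict literal; {1, 2, 3} would be a set

scores[0] = 99         # lists are mutable; tuples are not
print(point, scores, ages)
```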
30,701,698 | 2015-06-08T05:20:00.000 | 0 | 0 | 0 | 1 | python,jenkins | 30,706,914 | 1 | false | 0 | 0 | I think you're using relative path names. In that case it will default to the current working directory, which works when you run it manually, but may fail when Jenkins runs your code using a different working directory. The solution is to make sure that both the src and dst args to os.rename() are absolute paths, or alternatively to chdir() to the correct directory first. | 1 | 0 | 0 | I am getting the following error when I launched my build script from jenkins
os.rename(str1,str)
OSError: [Errno 13] Permission denied
Build step 'Execute shell' marked build as failure
I am able to rename the file manually. I have rwx permissions on that file. But I could not do the same thing when the Python script is launched from Jenkins. Any ideas? | os.rename() gives permission denied when python script launched from jenkins | 0 | 0 | 0 | 1,541 |
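The answer's fix, anchoring both paths so they do not depend on the working directory Jenkins happens to use, looks roughly like this (a sketch using a temporary directory as the base; a real script might anchor to os.path.dirname(os.path.abspath(__file__)) instead):

```python
import os
import tempfile

base = tempfile.mkdtemp()            # stand-in for a known base directory
src = os.path.join(base, "old.txt")  # absolute, independent of the CWD
dst = os.path.join(base, "new.txt")

open(src, "w").close()
os.rename(src, dst)                  # safe regardless of where Jenkins runs us
print(os.path.exists(dst))           # True
```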
30,701,734 | 2015-06-08T05:23:00.000 | 4 | 0 | 1 | 0 | python,python-2.7,event-driven | 30,702,008 | 1 | true | 0 | 0 | The term "helper function" has no official definition. It is just a function that helps or assists other functions.
A "handler" is a callable object (a function for example) that is registered to an event. If that event is triggered, the handler will get called automatically.
You did not give any code and did not name any framework, so I can give this overview without examples only. The implementation differs on the frameworks / libraries used. | 1 | 1 | 0 | What is helper function in python and what is difference between helper function and handler in event driven programming? | Difference between helper function and handler in event driven programming? | 1.2 | 0 | 0 | 2,059 |
30,704,214 | 2015-06-08T08:18:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,python-3.x,compatibility | 30,704,359 | 1 | true | 0 | 0 | When you install Python packages using apt-get, you're relying on the distribution package manager. The Ubuntu convention is to prefix Python 2 packages with python-, and Python 3 packages with python3-.
This distinction is necessary because Python 3 introduced some incompatible changes from Python 2. It's thus not possible to simply recompile (most) packages for Python 3, meaning both need to be made available.
Alternatively, you can use the Python package manager, pip (or pip3). The catch with this is that some packages (like scipy) require certain compiler toolchains which you might not have installed.
It's generally a good idea to stick to either apt-get or pip for a particular machine. You probably won't have issues if you mix them, but it's better to be consistent. | 1 | 0 | 1 | I am new to Python. I am running Ubuntu 14.04, and I have both Python 2.7 and 3.4 on it.
I want to use the newer 3.x version, with the NumPy, SciPy, and NLTK libraries. I set the Python REPL path to Python 3.x in the ~/.bash_aliases file like so:
alias python=python3
After this I installed several libs, including python-numpy, python-scipy, and python-matplotlib.
$ sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose
Unfortunately, I am facing issues since I am guessing that the libraries got installed for the older 2.7 version of Python; I am unable to access the libraries using the 3.4 REPL.
import numpy
ImportError: No module named 'numpy'
However, I am able to access the libraries using the older version:
$ /usr/bin/python2.7
How do I get this this work? | Python 2.7 and 3.4: Libraries inaccessible across versions | 1.2 | 0 | 0 | 154 |
30,707,120 | 2015-06-08T10:46:00.000 | 2 | 0 | 1 | 0 | python,ms-word,python-docx | 30,773,088 | 3 | false | 0 | 0 | Okay, I found a solution to this:
1. make a Document object
2. add some paragraphs
3. take sections[0] and get its sectPr XML element
4. query the XPath for the existing w:cols element using cols = sectPr.xpath('./w:cols')[0] (xpath() returns a list, so take the first match)
5. then set the 'num' property on w:cols using cols.set(qn('w:num'), "2")
Worked for me... | 1 | 2 | 0 | I need to implement a design for a Word document. I have to programmatically set the page layout to a 2-column layout for that document using the python-docx library. Please help. | How to programmatically implement columns in page layout as in MS Word using python-docx | 0.132549 | 0 | 0 | 2,824 |
30,710,144 | 2015-06-08T13:14:00.000 | 1 | 0 | 0 | 1 | python,background-process | 30,710,470 | 2 | false | 0 | 0 | If you have the #!/usr/bin/env python shebang, and its permissions are set correctly, you can try something like nohup /path/to/test.py & | 1 | 4 | 0 | I have Python code, part of which needs to run in the foreground since it tells whether the socket connection was proper or not. After running that part, the output is written into a file.
Is there a way to move the running Python code (process) automatically from the foreground to the background after running some definite steps in the foreground, so that I can continue my work in the terminal?
I know using screen is an option, but is there any other way to do so? Since after running the foreground part there won't be any output shown in the terminal, I don't want to run screen unnecessarily. | Move python code to background after running a part of it | 0.099668 | 0 | 0 | 2,517 |
30,710,725 | 2015-06-08T13:41:00.000 | 4 | 0 | 1 | 0 | python,process,background,hide,pyinstaller | 41,129,006 | 1 | false | 0 | 0 | I think this will help you.
pyinstaller "filename.filetype" -w -F | 1 | 6 | 0 | I used pyinstaller -F in order to create one .exe file to run.
I would like it to run as a background process.
That means that if one clicks the .exe file, he can only close it from the "PROCESSES".
I want the program to run in the background and not be seen. (As opposed to now, where I see the black console.) | How to hide the python console window in Pyinstaller | 0.664037 | 0 | 0 | 5,785 |
30,713,062 | 2015-06-08T15:20:00.000 | 8 | 0 | 0 | 0 | python,mysql,arrays,numpy | 30,713,767 | 2 | true | 0 | 0 | You could use ndarray.dumps() to pickle it to a string, then write it to a BLOB field. Recover it using numpy.loads(). | 1 | 9 | 1 | My use case is simple: I have performed some kind of operation on an image and the resulting feature vector is a numpy object of shape rowX1000 (what I mean to say is that the row number can be variable but the column number is always 1000)
I want to store this numpy array in mysql. No kind of operation is to be performed on this array. The query will be simple: given an image name, return the whole feature vector. So is there any way in which the array can be stored (something like a magic container which encapsulates the array and is then put in the table, and on retrieval it retrieves the magic container and pops out the array)?
I want to do this in python. If possible, support with a short code snippet of how to put the data in the mysql database. | store numpy array in mysql | 1.2 | 1 | 0 | 7,366 |
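The round trip behind that answer, sketched with pickle. Note that ndarray.dumps()/numpy.loads() were thin wrappers around pickle and have been removed in recent NumPy versions, so the plain pickle calls are the safer spelling today; the actual INSERT/SELECT against the BLOB column is elided:

```python
import pickle
import numpy as np

arr = np.arange(6.0).reshape(2, 3)  # stand-in for the rowX1000 feature vector

blob = pickle.dumps(arr)   # bytes, ready to store in a MySQL BLOB column
# ... INSERT blob into the table, later SELECT it back by image name ...
restored = pickle.loads(blob)

print(np.array_equal(arr, restored))  # -> True
```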
30,713,927 | 2015-06-08T15:58:00.000 | 0 | 0 | 1 | 0 | python,terminal,ipython | 30,714,067 | 1 | false | 0 | 0 | You can't make os.system or subprocess unavailable, and users could use these to build themselves terminals even if you disable the built-in terminals. However, if you run the ipython instance in a sandbox then it won't matter that they have command line access. | 1 | 0 | 0 | I am using iPython notebook to build an online interactive teaching website, but I don't want users to run command-line commands. Any idea how to remove the iPython notebook command line function? Is there any configuration or something? I have been stuck here for 3 days! | How to stop the iPython notebook to run the command line, run only python code | 0 | 0 | 0 | 750 |
30,714,388 | 2015-06-08T16:23:00.000 | 0 | 0 | 1 | 0 | python,opencv,ffmpeg,codec | 30,737,353 | 1 | false | 0 | 0 | Ok, it was my own fault. All the above steps were ok. I made the error to define cv2.VideoWriter(fname,fourcc,2,(w,h),1) with (w,h) different to the actual frame size (I thought it rescales automaticly). Unfortunately there is no appropriate error message.
So my problem is solved. | 1 | 0 | 0 | I know this question was asked hundred of times, nevertheless I got problems.
I'm working on a new windows (2010 server) systen, installed Python 2.7.9 and OpenCV 2.4.10. I copied opencv_ffmpeg.dll to Python27\opencv_ffmpeg2410.dll. I also installed K-Lite video codecs. If I try to save a video with VideoWriter (MJPG), I get always a file with size 5682 bytes which is not playable. On my old system the same python code works, but over the years I installed several versions of drivers and ffmpeg and whatever. So is there a systematic way to get VideoWriter working if you are on a freshly installed system? | OpenCV VideoWriter ffmpeg again and again | 0 | 0 | 0 | 650 |
30,716,541 | 2015-06-08T18:32:00.000 | 1 | 0 | 0 | 0 | python,filtering,time-series,signal-processing,noise-reduction | 30,721,305 | 1 | true | 0 | 0 | Load the data using any method you prefer. I see that your file can be treated as csv format, therefore you could use numpy.genfromtxt('file.csv', delimiter=',') function.
Use the scipy function for median filtering: scipy.signal.medfilt(data, window_len). Keep in mind that the window length must be an odd number.
Save the results to a file. You can do it for example by using the numpy.savetxt('out.csv', data, delimiter=',') function. | 1 | 1 | 1 | I have a time series in a log file having the following form (timestamp, value) :
1433787443, -60
1433787450, -65
1433787470, -57
1433787483, -70
Is there any available python code/library that takes as input the log file and a window size, applies a median filter to the time series to remove noise and outliers, and outputs the filtered signal to a new file? | Simple Python Median Filter for time series | 1.2 | 0 | 0 | 4,246 |
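The three steps of the answer, end to end on the sample values (an in-memory stand-in replaces the log file; note that medfilt zero-pads the edges by default):

```python
import io
import numpy as np
from scipy.signal import medfilt

# Stand-in for np.genfromtxt('file.csv', delimiter=',') on the log file
raw = "1433787443, -60\n1433787450, -65\n1433787470, -57\n1433787483, -70\n"
data = np.genfromtxt(io.StringIO(raw), delimiter=',')

# Median-filter the value column; the window length must be odd
data[:, 1] = medfilt(data[:, 1], 3)

# Write the filtered series back out
np.savetxt('out.csv', data, delimiter=',')
print(data[:, 1])  # edges reflect medfilt's zero padding
```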
30,721,173 | 2015-06-09T00:27:00.000 | 1 | 0 | 1 | 0 | python | 30,721,227 | 2 | false | 1 | 0 | Either __repr__ or __str__ will do it. | 1 | 0 | 0 | Assume I have a class that inherits from object. I create an instance of it and pass that to print. It will display something like <__main__.ObjName object at 0xxxxxx>. Is there an object method that can be overridden to provide a return value when the object is accessed this way? | What method gets called when you display an object? | 0.099668 | 0 | 0 | 47 |
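A minimal illustration (the class name is borrowed from the question):

```python
class ObjName(object):
    def __repr__(self):
        # Used by the interactive prompt, and by print() when __str__ is absent
        return "ObjName(answer=42)"

print(ObjName())  # -> ObjName(answer=42), not <__main__.ObjName object at 0x...>
```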
30,722,732 | 2015-06-09T03:57:00.000 | 8 | 1 | 1 | 0 | python,algorithm | 30,722,751 | 7 | false | 0 | 0 | I don't think there is anything better than a single pass over the string, counting the current sequence length (and updating the maximum) as you go along.
If by "binary string" you mean raw bits, you can read them one byte at a time and extract the eight bits in there (using bit shifting or masking). That does not change the overall algorithm or its complexity. | 1 | 5 | 0 | I am looking for an efficient algorithm to find the longest run of zeros in a binary string. My implementation is in Python 2.7, but all I require is the idea of the algorithm.
For example, given '0010011010000', the function should return 4. | Efficient algorithm to find the largest run of zeros in a binary string? | 1 | 0 | 0 | 3,277 |
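The single pass described in the answer, as a small sketch:

```python
def longest_zero_run(bits):
    """Count the current run of zeros, tracking the best seen so far."""
    best = cur = 0
    for ch in bits:
        cur = cur + 1 if ch == '0' else 0
        best = max(best, cur)
    return best

print(longest_zero_run('0010011010000'))  # -> 4
```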
30,723,301 | 2015-06-09T04:56:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,hpc,sungridengine,drmaa | 30,723,512 | 2 | true | 0 | 0 | Created a ~/.sge_request file with a -p parameter set to 0. | 1 | 0 | 0 | When I try to submit a job using the python drmaa wrapper, I get a DeniedByDrmException: code 17: job rejected: positive submission priority requires operator privileges.
How do I change the priority of jobs that I submit using the Python DRMAA wrapper? | Changing priority of job in SGE using python drmaa wrapper | 1.2 | 0 | 0 | 144 |
30,724,143 | 2015-06-09T06:05:00.000 | 0 | 0 | 0 | 0 | python,mysql,sql,sql-server,csv | 30,724,975 | 1 | true | 0 | 0 | My answer is to work with bulk-insert.
1. Make sure you have bulk-admin permission on the server.
2. Use an SQL authentication login (for me, Windows authentication login hasn't worked most of the time) for the bulk-insert operation. | 1 | 0 | 1 | I have tried to import a CSV file using bulk insert but it failed; is there another way, in a query, to import a CSV file without using bulk insert?
so far this is my query but it uses bulk insert:
bulk insert [dbo].[TEMP] from
'C:\Inetpub\vhosts\topimerah.org\httpdocs\SNASPV0374280960.txt' with
(firstrow=2,fieldterminator = '~', rowterminator = ' '); | how to import file csv without using bulk insert query? | 1.2 | 1 | 0 | 777 |
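If BULK INSERT stays off the table, a plain row-by-row import works too: read the file with the csv module (note the '~' field terminator) and hand the rows to executemany. The sketch below uses the stdlib sqlite3 driver purely for illustration — with SQL Server the same pattern applies through pyodbc or pymssql, and the table/column names here are made up:

```python
import csv
import io
import sqlite3

# Stand-in for open(r'C:\...\SNASPV0374280960.txt'); '~' is the field separator
raw = "col_a~col_b\n1~foo\n2~bar\n"
rows = list(csv.reader(io.StringIO(raw), delimiter='~'))[1:]  # skip header (firstrow=2)

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE temp (col_a INTEGER, col_b TEXT)")
conn.executemany("INSERT INTO temp VALUES (?, ?)", rows)

print(conn.execute("SELECT COUNT(*) FROM temp").fetchone()[0])  # -> 2
```

This is slower than a server-side bulk load for large files, but it needs no special permissions.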
30,724,520 | 2015-06-09T06:30:00.000 | 2 | 0 | 1 | 0 | python,multithreading,multiprocessing,python-multithreading,python-multiprocessing | 30,725,250 | 2 | false | 0 | 0 | My guess would be: the thread-oriented Queue module was implemented very early, while the multiprocessing module only came in version 2.6.
The queue design was slightly corrected for multiprocessing and offers better flexibility than the threading one, because you can choose between Queue, SimpleQueue and JoinableQueue depending on your use cases (speed vs reliability).
Now modifying the threading Queue to match would have caused backwards incompatibility, since the join and task_done methods would have to be removed. Imagine the code that would need to be refactored, the new tests that would have to be written, the API broken - for me, clearly no benefits. | 1 | 5 | 0 | Why does Queue.Queue have a task_done method while multiprocessing.Queue has no such method? | Why does multiprocessing.Queue have no task_done method | 0.197375 | 0 | 0 | 3,014 |
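The API split is easy to verify (Python 3 module names shown; Queue.Queue became queue.Queue):

```python
import queue
import multiprocessing.queues as mpq

# The thread-oriented Queue carries the task_done/join bookkeeping...
print(hasattr(queue.Queue, 'task_done'))        # -> True
# ...multiprocessing.Queue does not define it...
print(hasattr(mpq.Queue, 'task_done'))          # -> False
# ...that API lives on JoinableQueue instead
print(hasattr(mpq.JoinableQueue, 'task_done'))  # -> True
```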
30,736,698 | 2015-06-09T15:38:00.000 | 3 | 0 | 0 | 0 | python-2.7,fortran | 30,748,701 | 2 | false | 0 | 0 | I would interface directly between Python and Fortran. It is relatively straightforward to allocate memory using Numpy and pass a pointer through to Fortran. You use iso_c_binding to write C-compatible wrapper routines for your Fortran routines and ctypes to load the Fortran .dll and call the wrappers. If you're interested I can throw together a simple example (but I am busy right this moment). | 1 | 2 | 1 | We are developing a scientific application which has its interface in python 2.7 and the computation routines written in Intel Visual Fortran. Reading the source files is done using python; then only the required data for computations has to be passed to standalone Fortran algorithms. Once the computations are done, the data has to be read by python once again.
Using formatted text files seems to be taking too long and is not efficient. Further, we would like to have a standard intermediate format. There can be about 20 arrays and those are huge (if written to formatted text, the file is about 500 MB).
Q1. In a similar situation where Python and Fortran data exchange is necessary. What would be recommended way of interaction? (e.g.: writing an intermediate data to be read by the other or calling Fortran from within Python or using numpy to create compatible arrays or etc.)
Q2. If writing intermediate structures is recommended, What format is good for data exchange? (We came across CDF, NETCdf, binary streaming, but didn't try any so far.) | Data exchange - Python and Fortran | 0.291313 | 0 | 0 | 325 |
30,736,765 | 2015-06-09T15:41:00.000 | 2 | 0 | 1 | 0 | python,multithreading,numpy,intel,intel-mkl | 30,737,140 | 1 | true | 0 | 0 | Upon further investigation it looks like you are able to set the environment variable MKL_NUM_THREADS to achieve this. | 1 | 2 | 1 | I just installed an Intel MKL optimized version of scipy and when running my benchmarks, I got remarkable speedup with it. I then looked closer and found out it was running on 20 cores ... how do I restrict it to single threaded mode? Is there a way I could have installed it to single threaded mode by default, while leaving the option open to run on a specified number of cores? | Restrict MKL optimized scipy to single thread | 1.2 | 0 | 0 | 225 |
30,738,083 | 2015-06-09T16:43:00.000 | 1 | 1 | 1 | 0 | python,aes,pycrypto | 42,974,085 | 4 | false | 0 | 0 | Solved when i installed pycrypto rather then crypto
pip2 install pycrypto | 1 | 19 | 0 | I am just starting to explore Python. I am trying to run an AES algorithm code and I am facing the:
ImportError: No module named Crypto.
How do you solve this? | ImportError: No module named Crypto | 0.049958 | 0 | 0 | 53,824 |
30,739,230 | 2015-06-09T17:46:00.000 | 1 | 1 | 0 | 0 | dronekit-python | 30,833,309 | 2 | false | 0 | 0 | Root password for both 3drobotics Solo and Artoo is "TjSDBkAu" aka Tj (Tijuana) SD (San Diego) Bk (Back) A (At) u (You).
./src/com/o3dr/solo/android/service/artoo/ArtooLinkManager.java: private static final SshConnection sshLink = new SshConnection("10.1.1.1", "root", "TjSDBkAu");
./src/com/o3dr/solo/android/service/sololink/SoloLinkManager.java: private static final SshConnection sshLink = new SshConnection(getSoloLinkIp(), "root", "TjSDBkAu");
./src/com/o3dr/solo/android/service/update/UpdateService.java: public static final String SSH_PASSWORD = "TjSDBkAu"; | 2 | 1 | 0 | The getting started documentation I can find is helpful in loading up a dronekit script in the simulator, but I can't figure out how to then translate the process to transfer scripts onto my Solo for real world execution. Perhaps I'm misunderstanding somewhere. Please help, and thanks a bunch in advance! | Running DroneKit Air on my 3DR Solo | 0.099668 | 0 | 0 | 403 |
30,739,230 | 2015-06-09T17:46:00.000 | 0 | 1 | 0 | 0 | dronekit-python | 30,855,173 | 2 | false | 0 | 0 | We are working on a new firmware release together with some guides/docs that will allow for a better development experience; right now all you can do is scp your files, add them to .mavinit.scr in the home directory, and kill the mavproxy process to make sure it reloads this configuration.
That said, I would be very careful in flying with untested code as an exception might cause other parts of Solo to fail and cause you to crash, please test thoroughly before flying. | 2 | 1 | 0 | The getting started documentation I can find is helpful in loading up a dronekit script in the simulator, but I can't figure out how to then translate the process to transfer scripts onto my Solo for real world execution. Perhaps I'm misunderstanding somewhere. Please help, and thanks a bunch in advance! | Running DroneKit Air on my 3DR Solo | 0 | 0 | 0 | 403 |
30,740,731 | 2015-06-09T19:07:00.000 | 0 | 0 | 1 | 0 | python,string,docx | 30,769,699 | 2 | false | 0 | 0 | Your best bet is to unzip the docx, which will create a directory called word. Within that directory is document.xml; from there you would need to learn the XML structure and keywords to be able to read just the italicized text. Once you complete that, all you have to do is pull the text string from the XML file. | 1 | 4 | 0 | How should I go about reading a .docx file with Python and being able to recognize the italicized text and storing it as a string?
I looked at the docx python package but all I see is features for writing to a .docx file.
I appreciate the help in advance | Reading docx files, recognizing and storing italicized text | 0 | 0 | 0 | 2,363 |
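A sketch of that approach with only the stdlib: a .docx is a zip archive, so the example below writes a hypothetical minimal word/document.xml into one, unzips it again, and keeps the text of runs whose run properties contain <w:i/>:

```python
import io
import xml.etree.ElementTree as ET
import zipfile

W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
NS = {"w": W}

# A made-up minimal document part: one plain run, one italic run
doc_xml = (
    '<w:document xmlns:w="%s"><w:body><w:p>'
    '<w:r><w:t>plain </w:t></w:r>'
    '<w:r><w:rPr><w:i/></w:rPr><w:t>italic text</w:t></w:r>'
    '</w:p></w:body></w:document>' % W
)

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:  # a .docx is just a zip archive
    z.writestr('word/document.xml', doc_xml)

with zipfile.ZipFile(buf) as z:       # "unzip" and parse the document part
    root = ET.fromstring(z.read('word/document.xml'))

# Keep runs whose run properties (w:rPr) contain the italic flag (w:i)
italics = [run.find('w:t', NS).text
           for run in root.iter('{%s}r' % W)
           if run.find('w:rPr/w:i', NS) is not None]
print(italics)  # -> ['italic text']
```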
30,742,533 | 2015-06-09T20:47:00.000 | 2 | 1 | 0 | 1 | python,unix,ubuntu | 30,742,647 | 4 | false | 0 | 0 | I am not sure how "best practice" this is but you could do:
Add the program to:
/etc/rc.d/rc.local
This will have the program run at startup.
If you add an '&' to the end of the line it will be run in the background.
If you want to run the program manually (not at startup), you could switch to another tty by pressing ctrl + alt + f1 (this opens tty1, and it will work with f1 - f6), then run the command. This terminal does not have to be open in a window, so you don't have to worry about it getting closed. To return to the desktop use ctrl + alt + f7. | 1 | 15 | 0 | I have a Python script which processes data off of an HTTP data stream, and I need this script to in theory be running at all times and forever unless I kill it off manually to update it and run it again.
I was wondering what the best practice to do this was on a Unix (Ubuntu in particular) so that even if Terminal is closed etc the script continues to run in the background unless the process or server are shut down?
Thanks | Unix: Have Python script constantly running best practice? | 0.099668 | 0 | 0 | 40,621 |
30,742,572 | 2015-06-09T20:50:00.000 | 2 | 0 | 0 | 0 | python,scipy,sparse-matrix | 43,092,480 | 6 | true | 0 | 0 | From scipy version 0.19, both csr_matrix and csc_matrix support argmax() and argmin() methods. | 1 | 13 | 1 | scipy.sparse.coo_matrix.max returns the maximum value of each row or column, given an axis. I would like to know not the value, but the index of the maximum value of each row or column. I haven't found a way to make this in an efficient manner yet, so I'll gladly accept any help. | Argmax of each row or column in scipy sparse matrix | 1.2 | 0 | 0 | 4,988 |
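A quick sketch of the per-row and per-column variants (requires scipy >= 0.19, as the answer notes):

```python
import numpy as np
from scipy.sparse import csr_matrix

A = csr_matrix(np.array([[0, 2, 1],
                         [3, 0, 0]]))

row_argmax = A.argmax(axis=1)  # index of each row's maximum
col_argmax = A.argmax(axis=0)  # index of each column's maximum

print(np.asarray(row_argmax).ravel())  # -> [1 0]
print(np.asarray(col_argmax).ravel())  # -> [1 0 0]
```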
30,743,683 | 2015-06-09T22:08:00.000 | 0 | 0 | 1 | 0 | python,wxpython,anaconda | 36,034,036 | 2 | false | 0 | 1 | I believe simply using pythonw vs. python helped when I was using a MacBook. | 2 | 0 | 0 | I am using the anaconda python distribution (python version 2.7) and I would like to be able to use wxpython either in a notebook or at least in an ipython console through the anaconda spyder app (their IDE). I am running into what is apparently a common problem which is due to the anaconda python environment not being recognized as a framework with GUI access.
In fact, I am able to launch a wxpython app when working directly in ipython when launched from the command line. However, when trying to get an app to run from either the spyder IDE ipython console or an ipython notebook I get this error:
This program needs access to the screen.
Please run with a Framework build of python, and only when you are
logged in on the main display of your Mac.
If anyone knows a workaround for this I would very much appreciate your advice.
Thanks! | How to use wxpython in an ipython notebook or console | 0 | 0 | 0 | 2,244 |
30,743,683 | 2015-06-09T22:08:00.000 | 0 | 0 | 1 | 0 | python,wxpython,anaconda | 30,744,213 | 2 | false | 0 | 1 | In an ipython notebook before you run a cell invoking wxpython functions you have to insert the %gui magic:
%gui wx | 2 | 0 | 0 | I am using the anaconda python distribution (python version 2.7) and I would like to be able to use wxpython either in a notebook or at least in an ipython console through the anaconda spyder app (their IDE). I am running into what is apparently a common problem which is due to the anaconda python environment not being recognized as a framework with GUI access.
In fact, I am able to launch a wxpython app when working directly in ipython when launched from the command line. However, when trying to get an app to run from either the spyder IDE ipython console or an ipython notebook I get this error:
This program needs access to the screen.
Please run with a Framework build of python, and only when you are
logged in on the main display of your Mac.
If anyone knows a workaround for this I would very much appreciate your advice.
Thanks! | How to use wxpython in an ipython notebook or console | 0 | 0 | 0 | 2,244 |
30,743,952 | 2015-06-09T22:31:00.000 | 2 | 0 | 1 | 0 | python,string,unicode,ascii,python-2.x | 30,744,052 | 3 | true | 0 | 0 | Try doing myString = u"███ ███ J ██". This will make it a Unicode string instead of the python 2.x default of an ASCII string.
If you are reading it from a file or a file-like object, instead of doing file.read(), do file.read().decode('utf-8-sig'). | 2 | 0 | 0 | I'm trying to get the index of 'J' in a string that is similar to myString = "███ ███ J ██" so I use myString.find('J') but it returns a really high value and if I replace '█' by 'M' or another character of the alphabet I get a lower value. I don't really understand what's the cause of that. | █ character string indexed in python | 1.2 | 0 | 0 | 129 |
30,743,952 | 2015-06-09T22:31:00.000 | 0 | 0 | 1 | 0 | python,string,unicode,ascii,python-2.x | 30,744,159 | 3 | false | 0 | 0 | Check the settings of the console/ssh client you are using. Set it to be UTF-8. | 2 | 0 | 0 | I'm trying to get the index of 'J' in a string that is similar to myString = "███ ███ J ██" so I use myString.find('J') but it returns a really high value and if I replace '█' by 'M' or another character of the alphabet I get a lower value. I don't really understand what's the cause of that. | █ character string indexed in python | 0 | 0 | 0 | 129 |
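The effect is easy to reproduce: each '█' occupies 3 bytes in UTF-8, so .find() on the raw bytes returns a byte offset, while on a unicode string it returns a character index (Python 3 syntax below; in Python 2 the same contrast holds between str and u'' literals):

```python
s = u"\u2588\u2588 J"                  # "██ J" as a unicode string

print(s.find(u"J"))                    # -> 3  (character index)
print(s.encode("utf-8").find(b"J"))    # -> 7  (byte offset: 3 + 3 + 1)
```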
30,745,184 | 2015-06-10T00:37:00.000 | 0 | 0 | 0 | 0 | python,topic-modeling,gensim | 47,763,498 | 3 | false | 0 | 0 | The default number of iterations = 50 | 1 | 3 | 1 | I wish to know the default number of iterations in gensim's LDA (Latent Dirichlet Allocation) algorithm. I don't think the documentation talks about this. (Number of iterations is denoted by the parameter iterations while initializing the LdaModel ). Thanks ! | Gensim LDA - Default number of iterations | 0 | 0 | 0 | 6,280 |
30,749,053 | 2015-06-10T06:51:00.000 | 7 | 0 | 1 | 0 | python,linux,plone | 30,749,377 | 1 | true | 0 | 0 | You are right that you also need to transfer files and images. They are stored as BLOBs on the file system.
I guess that you will find a directory named blobstorage, close to the filestorage directory where you found Data.fs.
You need to transfer this blobstorage directory and all of its content. | 1 | 3 | 0 | To create another exact copy of the running Plone install along with its data, is it sufficient to copy buildout.cfg and Data.fs with the same version of Plone on the other install? Does that restore the uploaded PDF and image files from the first server?
Using plone 4.2.1 standalone install on linux | Create copy of plone installed onto another server with data | 1.2 | 0 | 0 | 270 |
30,750,849 | 2015-06-10T08:19:00.000 | 0 | 0 | 0 | 0 | python,csv,binary,data-conversion,hexdump | 30,751,250 | 3 | false | 0 | 0 | To turn the binary into meaningful strings, we must know the binary code protocol. We can't resolve the binary out of thin air. | 2 | 0 | 0 | Hi folks, I have been working on a python module which will convert a binary string into a CSV record. A 3rd party application does this usually; however, I'm trying to build this logic into my code. The records before and after conversion are as follows:
CSV Record After Conversion
0029.6,000.87,002.06,0029.2,0010.6,0010.0,0002.1,0002.3,00120,00168,00054,00111,00130,00000,00034,00000,00000,00039,00000,0313.1,11:09:01,06-06-2015,00000169
I'm trying to figure out the conversion logic that has been used by the 3rd party tool; if anyone can help me with some clues regarding this, it would be great! One thing I have analysed is that each CSV value corresponds to an unsigned short in the byte stream. TIA, cheers! | Binary to CSV record Conversion | 0 | 0 | 0 | 3,548 |
30,750,849 | 2015-06-10T08:19:00.000 | 1 | 0 | 0 | 0 | python,csv,binary,data-conversion,hexdump | 30,752,427 | 3 | true | 0 | 0 | As already mentioned, without knowing the binary protocol it will be difficult to guess the exact encoding that is being used. There may be special case logic that is not apparent in the given data.
It would be useful to know the name of the 3rd party application or at least what field this relates to. Anything to give an idea as to what the encoding could be.
The following are clues as you requested on how to proceed:
The end of the CSV shows a date, this can be seen at the start
31 08 11 06 06 15 20 AA A8 00 00 00 28 01 57 00 CE 00 24 01 6A 00
64 00 15 00 17 00 78 00 A8 00 36 00 6F 00 82 00 00 00 22 00 00 00 00
00 27 00 00 00
The end value 169 (hex A9) is suspiciously in between the next two hex values
31 08 11 06 06 15 20 AA A8 00 00 00 28 01 57 00 CE 00 24 01 6A 00
64 00 15 00 17 00 78 00 A8 00 36 00 6F 00 82 00 00 00 22 00 00 00 00
00 27 00 00 00
"00039," could refer to the last 4 digits
31 08 11 06 06 15 20 AA A8 00 00 00 28 01 57 00 CE 00 24 01 6A 00 64
00 15 00 17 00 78 00 A8 00 36 00 6F 00 82 00 00 00 22 00 00 00 00 00
27 00 00 00
or:
31 08 11 06 06 15 20 AA A8 00 00 00 28 01 57 00 CE 00 24 01 6A 00 64
00 15 00 17 00 78 00 A8 00 36 00 6F 00 82 00 00 00 22 00 00 00 00 00
27 00 00 00 ....or 27 00 00 00
...you might guess two bytes are used, so perhaps the others are separate 0-value fields.
"00034," could refer to:
31 08 11 06 06 15 20 AA A8 00 00 00 28 01 57 00 CE 00 24 01 6A 00 64
00 15 00 17 00 78 00 A8 00 36 00 6F 00 82 00 00 00 22 00 00 00 00
00 27 00 00 00
and so on... simply convert some of the decimal numbers into hex and search for possible locations in the data. Consider that fields might be big or little endian or a combination thereof.
You should take a look at the struct python library which can be useful in dealing with such conversions once you know the formatting that is being used.
With more data examples the above theories could then be tested. | 2 | 0 | 1 | Hi folks, I have been working on a python module which will convert a binary string into a CSV record. A 3rd party application does this usually; however, I'm trying to build this logic into my code. The records before and after conversion are as follows:
CSV Record After Conversion
0029.6,000.87,002.06,0029.2,0010.6,0010.0,0002.1,0002.3,00120,00168,00054,00111,00130,00000,00034,00000,00000,00039,00000,0313.1,11:09:01,06-06-2015,00000169
I'm trying to figure out the conversion logic that has been used by the 3rd party tool; if anyone can help me with some clues regarding this, it would be great! One thing I have analysed is that each CSV value corresponds to an unsigned short in the byte stream. TIA, cheers! | Binary to CSV record Conversion | 1.2 | 0 | 0 | 3,548 |
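Following the struct suggestion above: reading the dump bytes A8 00 36 00 6F 00 as little-endian unsigned shorts gives 168, 54 and 111 — which happen to match 00168, 00054 and 00111 in the CSV record (a sketch, assuming a '<H' little-endian layout):

```python
import struct

chunk = bytes([0xA8, 0x00, 0x36, 0x00, 0x6F, 0x00])
values = struct.unpack('<3H', chunk)   # three little-endian unsigned shorts
print(values)  # -> (168, 54, 111)
```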
30,754,748 | 2015-06-10T11:10:00.000 | 1 | 0 | 0 | 0 | python,spyder | 30,756,777 | 1 | true | 0 | 0 | Spyder 2.2.5 is an older version (the last version is 2.3.4). When it is started, it automatically imports numpy and matplotlib. The regular Python interpreter needs an explicit import numpy as np in order to define an array, A=np.array([[1,2,3], [4,5,6]]). | 1 | 0 | 1 | This problem may seem a little strange, but a while (about 1-2 weeks) ago I wrote a little Python script which I tested and everything worked just fine. Today when I was taking lines from this latter script, the lines would run without any error in the Spyder IDE Python console, but when I try to put those same lines in a new .py file, Spyder gives me errors!
So I tried to compile the old script again, then I got errors!
A few examples to maybe clear things up:
I load an image using PIL Image: im = Image.open("test.jpg")
then in the Spyder console I can do: im.layers which gives me the number of color channels. Even though this attribute doesn't exist in PIL Image docs!
But using this same attribute in a python file would give an error!
Using: a = array( [ [ 1, 2, 3], [4, 5, 6], [...] ] ) I can create a 2D array (or matrix). This is possible through Spyder, but not the regular Python interpreter (which leads to NameError: global name 'array' is not defined)!
And a few more examples like these.
Could anyone help me understand what is going on, knowing that I'm sort of a Python noob?
Python version: 2.7.6 | GCC 4.8.2 | Spyder 2.2.5 | Compilation difference in Spyder IDE and Python interpreter | 1.2 | 0 | 0 | 1,345 |
30,757,708 | 2015-06-10T13:24:00.000 | 3 | 0 | 0 | 0 | python,user-interface,py2exe,pyqt5 | 31,620,544 | 3 | true | 0 | 1 | I've had this issue as well. After a lot of digging I found the following solution:
Copy the following file next to your main .exe:
libEGL.dll
Copy the following file into a folder "platforms" next to your main .exe:
qwindows.dll
Putting the qwindows.dll in the subfolder is the important part, I think. Hope this helps | 2 | 1 | 0 | I know there are many posts about this problem (I've read them all).
But I still have a problem with my exe; it still cannot be opened.
I've tried to put the qwindows.dll (I tried with 3 different qwindows.dll) in the dist folder with my exe but it doesn't change anything.
I've tried with libEGL.dll, nothing.
Any suggestions ? Is there a way to avoid having this problem ? | Qt platform plugin 'windows' - py2exe | 1.2 | 0 | 0 | 1,637 |
30,757,708 | 2015-06-10T13:24:00.000 | 0 | 0 | 0 | 0 | python,user-interface,py2exe,pyqt5 | 41,159,237 | 3 | false | 0 | 1 | For me it was enough to copy qwindows.dll to the platforms folder, like @Inktvisje wrote.
And don't repeat my mistake: don't download this dll from the Internet! Copy it from your Python libs folder: YourPythonFolder\Lib\site-packages\PyQt5\plugins\platforms. | 2 | 1 | 0 | I know there are many posts about this problem (I've read them all).
But I still have a problem with my exe; it still cannot be opened.
I've tried to put the qwindows.dll (I tried with 3 different qwindows.dll) in the dist folder with my exe but it doesn't change anything.
I've tried with libEGL.dll, nothing.
Any suggestions ? Is there a way to avoid having this problem ? | Qt platform plugin 'windows' - py2exe | 0 | 0 | 0 | 1,637 |
30,759,749 | 2015-06-10T14:45:00.000 | 1 | 0 | 1 | 1 | python,user-interface,console,pyside,py2exe | 30,759,966 | 1 | false | 0 | 1 | This is not a py2exe limitation, but a Windows limitation. On Windows, applications are compiled either as Console Applications or GUI Applications. The difference is that Console Applications always open a console window, whilst GUI Applications never do.
As far as I can tell, it's not possible to have an application with dual functionality. As a workaround, I suggest that you simply compile two executables: one for console use and one for GUI use. | 1 | 0 | 0 | I have a python program using PySide. When run normally, it opens up a PySide GUI, but when run with some flags in the command line, it spits some things out in the console window.
I'd like to retain this dual functionality, but it seems with py2exe you have to choose whether to have a console window or not when compiling, with no option for choosing during program execution.
Is what I want to do possible with py2exe, or even with some other python "compiler?" | py2exe: Allow a console window to be either shown or hidden with a sys.argv | 0.197375 | 0 | 0 | 66 |
30,761,095 | 2015-06-10T15:41:00.000 | 0 | 0 | 0 | 0 | python,proxy,phantomjs | 30,762,932 | 2 | true | 0 | 0 | No. Nothing about this is documented and I see no indication of getting this information in the code.
As a workaround, simply run Wireshark or tcpdump to capture the traffic and look into it to see where the requests go. It should be easy to see whether they go to the server or to the proxy server, provided you know their IP addresses (or you can look into the DNS query in Wireshark to see which IP address it is). | 1 | 0 | 0 | Can someone point me in the right direction? I just need some documentation. I manually input a proxy, but I think it might be bypassing it. I want to test my script to see if it's actually going through my proxy with phantom. It looks like I successfully went through it, but I'm still getting a few bugs. Is there a way to print out the proxy it's using in the command line? | Using phantomjs print proxy it used to access website | 1.2 | 0 | 1 | 230 |
30,761,147 | 2015-06-10T15:43:00.000 | 0 | 0 | 0 | 0 | python,django,views | 30,764,221 | 2 | false | 1 | 0 | This question can be answered by another question:
What is the difference between procedural and object oriented programming?
Class based views provide you the power of OOP. Code becomes reusable and abstract. The modularity improves.
Performance wise, I guess there is no difference between a class-based and a function-based view. It all depends on which you are more comfortable with. Both are just meant for different styles of programming. | 1 | 0 | 0 | I am just curious which one is better: Django's class-based view or function-based view, and why.
I personally feel the function-based view is quite easy, but it's lengthy, and a class-based view can work with few lines of code.
Is there any performance issue with these views?
Can anyone guide me on why to use Django's CBVs? Will function-based views be deprecated later on?
Thank you | python django class based view and functional view | 0 | 0 | 0 | 163 |
30,761,322 | 2015-06-10T15:52:00.000 | 1 | 0 | 0 | 1 | python,command,jmeter | 30,803,435 | 2 | false | 1 | 0 | Hi, you just have to print the data you need to pass to JMeter, and then use one (or more) regular expressions to extract the values. | 1 | 0 | 0 | Is it possible to collect the output of a python script using the "OS Process Sampler"?
My python script does a database query and returns "r1=123 r2=456 r3=789"
Is there a way to collect the r1, r2, r3 values and graph them? | Using Jmeter OS Process Sampler to collect script data | 0.099668 | 0 | 0 | 1,555 |
30,766,187 | 2015-06-10T20:01:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,boto,amazon-rds,rds | 30,766,635 | 4 | true | 1 | 0 | No, that's probably the best you can do. The RDS API does not support the Stop/Start functionality of EC2 instances. | 2 | 2 | 0 | I had a question about Amazon RDS. I wanted to start/stop AWS RDS instances as needed. The AWS Console does not allow me to do so.
The only method I know is to take a snapshot of the RDS instance and delete it, and then, when I need it, create an RDS instance using that snapshot.
Is there any better way to achieve the same using Boto? | How do I Start/Stop AWS RDS Instances using Boto? | 1.2 | 0 | 0 | 4,092 |
30,766,187 | 2015-06-10T20:01:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,boto,amazon-rds,rds | 53,375,932 | 4 | false | 1 | 0 | You can only start or stop AWS RDS instances that are single availability zone instances. For your case, you will have to check if the multi-AZ option is enabled or disabled.
In case your database is in multiple availability zones, the best way would be to take a snapshot and restore it. | 2 | 2 | 0 | I had a question about Amazon RDS. I wanted to start/stop AWS RDS instances as needed. The AWS Console does not allow me to do so.
The only method I know is to take a snapshot of the RDS instance and delete it, and then, when I need it, create an RDS instance using that snapshot.
Is there any better way to achieve the same using Boto? | How do I Start/Stop AWS RDS Instances using Boto? | 0 | 0 | 0 | 4,092 |
30,767,702 | 2015-06-10T21:34:00.000 | -1 | 0 | 0 | 0 | python,module,installation | 30,767,756 | 2 | false | 1 | 0 | You will often have to use sudo on mac, eg sudo python ...
This has been asked multiple times though, try searching before asking. | 2 | 0 | 0 | I've downloaded Beautiful Soup 4.3.2 and CD'ed to the right location on my disk. When I use 'python setup.py install' a load of lines run but then I get this problem:
error: could not create '/Library/Python/2.7/site-packages/bs4': Permission denied
Anyone know why this is?
Thanks a lot! | Python - Can't install modules on mac | -0.099668 | 0 | 0 | 176 |
30,767,702 | 2015-06-10T21:34:00.000 | -1 | 0 | 0 | 0 | python,module,installation | 30,770,660 | 2 | false | 1 | 0 | Apparently, your limits of authority is lower a little to install the python.
First, you should change your users to root. commnad:su root
Second,execute commands to install python.
I hope this can help you. | 2 | 0 | 0 | I've downloaded Beautiful Soup 4.3.2 and CD'ed to the right location on my disk. When I use 'python setup.py install' a load of lines run but then I get this problem:
error: could not create '/Library/Python/2.7/site-packages/bs4': Permission denied
Anyone know why this is?
Thanks a lot! | Python - Can't install modules on mac | -0.099668 | 0 | 0 | 176 |
30,768,182 | 2015-06-10T22:10:00.000 | 0 | 0 | 0 | 0 | python,hidden-markov-models | 30,769,647 | 1 | true | 0 | 0 | This sounds like the standard HMM scaling problem. Have a look at "A Tutorial on Hidden Markov Models ..." (Rabiner, 1989), section V.A "Scaling".
Briefly, you can rescale alpha at each time to sum to 1, and rescale beta using the same factor as the corresponding alpha, and everything should work. | 1 | 0 | 1 | I have implemented the baum-welch algorithm in python but I am now encountering a problem when attempting to train HMM (hidden markov model) parameters A,B, and pi. The problem is that I have many observation sequences Y = (Y_1=y_1, Y_2=y_2,...,Y_t=y_t). And each observation variable Y_t can take on K possible values, K=4096 in my case. Luckily I only have two states N=2, but my emission matrix B is N by K so 2 rows by 4096 columns.
Now when you initialize B, each row must sum to 1. Since there are 4096 values in each of the two rows, the numbers are very small. So small that when I go to compute alpha and beta their rows eventually approach 0 as t increases. This is a problem because you cannot compute gamma as it tries to compute x/0 or 0/0. How can I run the algorithm without it crashing and without permanently altering my values? | Baum-Welch many possible observations | 1.2 | 0 | 0 | 869 |
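The scaling recipe from Rabiner's section V.A can be sketched in NumPy (a minimal illustration, not the asker's code; `pi`, `A`, `B` are the usual HMM parameters and `obs` is a sequence of observation indices):

```python
import numpy as np

def forward_scaled(pi, A, B, obs):
    """Scaled forward pass: each alpha[t] is renormalized to sum to 1,
    and the scaling factors c[t] are kept to recover the log-likelihood."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]          # keeps values in a safe numeric range
    log_likelihood = np.log(c).sum()
    return alpha, c, log_likelihood
```

Because beta is rescaled by the same c[t], the gammas computed from scaled alpha and beta are unchanged, and the likelihood is recovered as sum(log(c)) instead of an underflowing product.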
30,768,970 | 2015-06-10T23:20:00.000 | 0 | 0 | 1 | 1 | python,parsing,directory | 30,769,638 | 1 | true | 0 | 0 | len(os.listdir('path')) will give you the total number of entries at 'path' (this includes files and directories). As far as I know, the os module is platform independent, however the strings used for paths are different for windows and unix-based environments (and maybe others, I don't know). Windows path strings should look like 'C:\\directory\\subdir\\file' because backslashes need to be escaped. I can't remember off the top of my head but I think linux path strings are just like you'd expect them to be: 'directory/subdir/file'. | 1 | 0 | 0 | I am fairly familiar with Python and coding in general but I do not have much experience with the parsing and directory navigation in Python. I have a file that contains some data that I wish to extract and I want to be able to count the number of files and directories in given directory. I have done a little research and I think the sys and os.path modules will be useful. Also do the commands for os.path vary across platforms(Yosemite vs. Linux. Windows) | Navigating Directories and Counting files in a directory with Python | 1.2 | 0 | 0 | 57 |
30,769,851 | 2015-06-11T00:59:00.000 | 2 | 0 | 0 | 0 | python,lambda,tkinter,command | 30,770,368 | 2 | true | 0 | 1 | A good way to look at it is to imagine the button or binding asking you the question "what command should I call when the button is clicked?". If you give it something like self.red(), you aren't telling it what command to run, you're actually running the command. Instead, you have to give it the name (or more accurately, a reference) of a function.
I recommend this rule of thumb: never use lambda. Like all good rules of thumb, it only applies for as long as you have to ask the question. Once you understand why you should avoid lambda, it's OK to use it whenever it makes sense. | 1 | 1 | 0 | I'm confused as to the difference between using a function in commands of tkinter items. say I have self.mb_BO.add_radiobutton(label= "Red", variable=self.BO, value=2, command=self.red)
what is the difference in how the add statement works from this:
self.mb_BO.add_radiobutton(label= "Red", variable=self.BO, value=2, command=self.red())
where func red(self) changes the color to red.
And self.mb_BO.add_radiobutton(label= "Red", variable=self.BO, value=2, command=lambda: self.red())
Essentially I don't understand what these commands are doing and when to use the callback or function reference. I've spent hours looking online for an easy to follow summary to no avail and I am still just as confused. | commands in tkinter when to use lambda and callbacks | 1.2 | 0 | 0 | 881 |
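The distinction is independent of tkinter and can be shown with a plain stand-in for a widget (`FakeButton` is hypothetical, mimicking only how a widget stores `command` and invokes it later on a click):

```python
class FakeButton:
    """Stands in for a tkinter widget: stores `command`, calls it on click."""
    def __init__(self, command):
        self.command = command
    def click(self):
        return self.command()

def red():
    return "red"

# command=red passes a *reference*; the widget calls it later, on click
b1 = FakeButton(command=red)

# command=red() would call red() immediately and store its result ("red"),
# which is not callable -- clicking would then raise TypeError.
# command=lambda: red() wraps the call in a fresh function; this form is
# mainly useful when the callback needs arguments, e.g. lambda: paint("red")
b2 = FakeButton(command=lambda: red())
```

Clicking either b1 or b2 returns "red"; constructing a button with `command=red()` is the mistake that the rule of thumb guards against.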
30,770,219 | 2015-06-11T01:45:00.000 | 1 | 0 | 1 | 0 | python,qt,pyside,cx-freeze | 43,317,798 | 1 | true | 0 | 1 | In Python there's no static linking. All imports require the correct dependencies to be installed on the target machine, and the choice of version for those libraries is up to the user.
Now let's come to the binary builders for Python. In this case, we have to determine the linking type based on the GNU definitions. If the user can replace the dependency as he likes, it's dynamic linking. If the dependency is attached to the binary itself, it's static linking. In the case of cx_freeze or PyInstaller, if we build a one-file executable, it's static linking; if we build in normal mode, where all the dependencies are collected as separate files, it's dynamic linking. The idea is whether we can replace the dependency on the target machine or not.
If I freeze my Python application with cx_freeze by excluding a specific library, is it a kind of dynamic linking? Because, users have to download and install that library by themselves in order to run my application.
Actually my problem is, I'm using the PySide library (with LGPL v2.1) to develop a Python GUI application. The library says I should dynamically link to the library to obey their legal terms (same as Qt). In this case, how do I link to PySide dynamically? | What does it mean for statically linking and dynamically linking in Python? | 1.2 | 0 | 0 | 2,509 |
30,771,244 | 2015-06-11T03:51:00.000 | 1 | 0 | 1 | 0 | python,multithreading | 30,771,348 | 1 | false | 0 | 0 | Python (the language runtime, therefore the DLL) is architected in such a way (global variables everywhere) that it's currently impossible to have more than one Python VM running in the same process.
So no, you can't do that (not in Python, at least -- Lua allows multiple independent VMs running in the same process).
And even if you could, to share data between threads (which is a bad idea to begin with, but that's another topic) without compromising the runtime's integrity, you need the GIL. That's why it exists in the first place. | 1 | 0 | 0 | I am a python novice, I gradually came to love python, but I'm not satisfied with its concurrent performance.
Multithreading is slow. Multiprocessing loads slowly and wastes resources.
So I thought: why not use python.dll for true multithreading?
It would load faster, run faster, and save resources.
Moreover, sharing data within a single process is quicker and safer.
I am familiar with another scripting language that uses this method: its VMs can control each other and share variables, yet remain independent of each other, which is true multithreading.
If you have similar experience, you're welcome to share. | python Use python.dll Achieve Multithreading | 0.197375 | 0 | 0 | 108 |
30,771,325 | 2015-06-11T03:58:00.000 | 0 | 0 | 1 | 0 | python,common-lisp,argparse | 30,776,685 | 2 | false | 0 | 0 | I have used apply-argv for our bundle-generator. | 1 | 1 | 0 | What is a common-lisp analogue of python's argparse library for parsing command-line arguments? | What is a common-lisp analogue of python's argparse? | 0 | 0 | 0 | 193 |
30,776,719 | 2015-06-11T09:24:00.000 | 0 | 1 | 0 | 1 | python,linux,shell,unix,linux-kernel | 30,956,053 | 4 | false | 0 | 0 | In a pinch, you can use at(1). Make sure the program you run reschedules the at job. Warning: this goes to heck if the machine is down for any length of time. | 3 | 0 | 0 | I have a shell script that I want to run automatically every day at 08 AM, and I am not authorised to use the crontab because I don't have root permission
My home directory is /home/user1/.
Any suggestions? | schedule automate shell script running not as ROOT | 0 | 0 | 0 | 164 |
30,776,719 | 2015-06-11T09:24:00.000 | 0 | 1 | 0 | 1 | python,linux,shell,unix,linux-kernel | 30,777,116 | 4 | false | 0 | 0 | I don't think root permission is required to create a cron job. Editing a cronjob that's not owned by you - there's where you'd need root. | 3 | 0 | 0 | I have a shell script that I want to run automatically every day at 08 AM, and I am not authorised to use the crontab because I don't have root permission
My home directory is /home/user1/.
Any suggestions? | schedule automate shell script running not as ROOT | 0 | 0 | 0 | 164 |
30,776,719 | 2015-06-11T09:24:00.000 | 0 | 1 | 0 | 1 | python,linux,shell,unix,linux-kernel | 30,776,928 | 4 | false | 0 | 0 | Even if you dont have root permission you can set cron job. Chcek these 2 commands as user1, if you can modify it or its throwing any error.
crontab -l
If that lists your crontab, then try this as well:
crontab -e
If you can open and edit it, then you can run your script with cron by adding this line:
0 8 * * * /path/to/your/script
My home directory is /home/user1/.
Any suggestions? | schedule automate shell script running not as ROOT | 0 | 0 | 0 | 164 |
30,778,437 | 2015-06-11T10:35:00.000 | 0 | 1 | 0 | 0 | django,python-3.x,unicode,pycharm | 30,778,955 | 1 | false | 1 | 0 | Ok so I found the problem. as patrys sugested in a comment the file didn't use UTF-8 as encoding. To change that in pycharm I had to go to file->settings->editor->file encodings and change the file encoding for tests to utf-8. After I did that I had to go into the file and re eddit the § as they have now turned into question marks. However it still didn't work. I found out that I also have to change it to UTF-8 down in the right corner of pycharm. For some reason tests is the only .py file that was affected by this (even though I deleted the original tests.py file and remade it). | 1 | 0 | 0 | When I run my tests I get a syntax error: SyntaxError: (unicode error) 'utf-8' codec can't decode byte 0xa7 in position 0: invalid start byte
The cause of this seems to be that I use a § in a string on line 62. I'm using python 3.4.2 for the project and have used § elsewhere without getting a error. I got a friend to open the project as well, on his screen the § in tests.py showed up as question marks, but this was only in the test files, in the other places it had been used it showed up as normal. I got him to change the § that were showing up as question marks to § on his pc and it worked, which is really weird. How would I go about fixing something like this on my computer though? I can't really get him to load up the file and insert special character every time I want to use them in tests.
edit: So I found out pycharm for some reason had set only tests.py to a encoding other than utf-8. I changed this to utf-8 and it then showed the § I had written as question marks. However swapping them out for § did not work for me. The reason is that for some reason even though the encoding is set to utf-8, pycharm still displays latin1 for me and type latin1 characters instead of utf-8. I've tested on 2 other computers (1 mac, and 1 windows 8.1 same as the one I have problems with) where it correctly displays utf-8. On those computers my § still appear as question marks, but if i change it on the other computer it now appears as § on the computer with the problem. So my problem now is to get pycharm to properly use UTF-8 instead of latin 1. | Django testing won't run because of syntax error | 0 | 0 | 0 | 110 |
30,783,725 | 2015-06-11T14:29:00.000 | 1 | 0 | 1 | 0 | python-green | 35,518,882 | 3 | false | 0 | 0 | green can now be run directly as a module. To do this, use /path/to/python -m green | 1 | 1 | 0 | I have both Python 2.7 and 3.3 installed in my box. How would I change python-green configuration to use one or the other without changing /usr/bin/python symbolic link? | How to change python version used by python-green | 0.066568 | 0 | 0 | 93 |
30,786,979 | 2015-06-11T16:58:00.000 | 0 | 0 | 0 | 0 | android,python,django,networking,android-networking | 30,787,714 | 2 | false | 1 | 0 | Django doesn't care what the client is, and Android's HttpClient doesn't care whether the URLs are served by Django, Tomcat, Rails, Apache or whatever. It's only HTTP. IOW:
learn to write a Django app (it's fully documented)
learn to use Android's HttpClient (it's fully documented too, IIRC)
connect the dots... | 1 | 0 | 0 | I am working on a sensor app in android and I've to store the accelerometer readings on a django server and then retrieve them on my device. I am new to django and I don't know how to communicate with Android's HttpClient and a django server. | I want to write a django application that can respond to the HttpClient requests from an android device. Can I have an example code? | 0 | 0 | 0 | 131 |
30,787,397 | 2015-06-11T17:20:00.000 | 0 | 0 | 1 | 0 | python,pycharm,kivy | 30,808,862 | 1 | false | 0 | 0 | I found a way to set the environmental variables found in the kivy.bat. I simply created a new .bat that sets the environmental variables and then runs pycharm from the command line. This allows the variables to persist between projects. | 1 | 1 | 0 | I'm moving to pycharm from sublime text and can't get it working with kivy and virtualenv. I've created a virtualenv with a new project in pycharm but I can't figure out how to get kivy working. The kivy help shows using the kivy.bat as the python interpreter but I want to use the virtualenv. One possible option would be to add all the environmental variables from the kivy.bat, but this doesn't sound like fun to do with multiple virtualenvs. Any help or tips would be greatly appreciated. | Pycharm, virtualenv and kivy setup | 0 | 0 | 0 | 295 |
30,790,073 | 2015-06-11T19:46:00.000 | 0 | 0 | 1 | 1 | python,windows,importerror,pythonpath,sys.path | 30,790,972 | 2 | false | 0 | 0 | So for reference the source of the issue was residue from an old Enthought Canopy installation. The computer was using that installation of python (which didn't have the 3rd party libraries installed) instead of the one in Python27. I deleted that install from the system path and restarted the command prompt and now all is well. | 1 | 1 | 0 | I have an odd variation of the common "ImportError: DLL load failed: %1 is not a valid Win32 application" error. I only get this error when I try to import a 3rd party library while run a python script outside of the python27 directory. For instance, if I do "import numpy" while inside python27, it works fine, but if I try to import numpy while in any other directory, I get the above error. Essentially I can run "python" in any directory, but can only import 3rd party libraries if I run it from the python27 directory. If anyone has any ideas as to why this would be, I'd be very appreciative. Here's some information about my system paths.
Applicable Windows System Paths:
PYTHONPATH = C:\Python27\Lib
PYTHONHOME = C:\Python27
sys.path is equal to:
['', 'C:\Python27\Lib', 'C:\WINDOWS\SYSTEM32\python27.zip', 'C:\Python27\DLLs',
'C:\Python27\lib\plat-win',
'C:\Python27\lib\libtk',
'C:\Python27',
'C:\Python27\lib\site-packages', '
C:\Python27\lib\site-packages\win32',
'C:\Python27\lib\site-packages\win32\lib',
'C:\Python27\lib\site-packages\Pythonwin']
And if I run win_add2path.py I get:
No path was added
PATH is now:
C:\Users\Mike\AppData\Local\Enthought\Canopy\User;C:\Users\Mike\AppData\Local\Enthought\Canopy\User\Scripts;C:\Python27;C:\Python27\Scripts
Expanded:
C:\Users\Mike\AppData\Local\Enthought\Canopy\User;C:\Users\Mike\AppData\Local\Enthought\Canopy\User\Scripts;C:\Python27;C:\Python27\Scripts
Part of me feels that the Enthought Canopy path is screwing it up (that directory no longer exists), but the Python27 path is also there so it shouldn't be an issue...
EDIT: I believe I now know what is causing the problem, but not how to fix it. So apparently there was a python.exe in the enthought canopy folder, and this is the one my computer was using, not the one in python27 (which is weird because I uninstalled enthought canopy). However, my computer now can't find the python.exe in python27 even though that directory is added to my system path... It gives me the old "python is not recognized as an internal or external command" shindig.
Edit: Well, I restarted the command prompt and now it works... I guess the removal of the Enthought Canopy path variable hadn't taken effect yet. | ImportError: DLL load failed: %1 is not a valid Win32 application Only When Outside Python27 Directory | 0 | 0 | 0 | 1,173 |
30,791,066 | 2015-06-11T20:42:00.000 | 1 | 1 | 0 | 1 | python,nginx,fastcgi | 30,792,360 | 1 | false | 0 | 0 | Nginx talks to a fastcgi process over a socket connection.
If the fastcgi process blocks, that means that it won't be sending data over the socket.
This won't block nginx as such, because it keeps processing events (data from other connections). It uses non-blocking techniques like select, poll or equivalent OS-dependent functions (with a timeout) to query sockets without blocking.
But it will stall whatever client is waiting for the fastcgi output. | 1 | 0 | 0 | The main advantage of Nginx is cited as it not needing to spawn separate threads for each request it receives.
Now, if we were to run a python based web application using FastCGI, and this web application has blocking calls, would this create a bottleneck?
Since there are only a limited number of workers running (1 per processor?), won't a blocking call by a Python script make it cooperative multiprocessing? | Nginx + FastCGI with blocking calls | 0.197375 | 0 | 0 | 345 |
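The non-blocking polling described in the answer can be sketched with Python's selectors module (a toy stand-in using a socketpair, not nginx itself; the two sockets play the roles of the client-facing and fastcgi connections):

```python
import selectors
import socket

sel = selectors.DefaultSelector()
client, backend = socket.socketpair()   # stand-in for nginx<->fastcgi sockets
client.setblocking(False)
sel.register(client, selectors.EVENT_READ)

# nothing to read yet: select() returns after the timeout instead of blocking,
# which is how the event loop keeps serving other connections
assert sel.select(timeout=0.1) == []

backend.send(b"response")               # the backend finally produces data
events = sel.select(timeout=1.0)        # now the socket is reported ready
data = events[0][0].fileobj.recv(1024)
print(data)
```

While the fastcgi side stalls, only the one client waiting on it is delayed; the loop itself never blocks.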
30,794,757 | 2015-06-12T03:11:00.000 | 1 | 0 | 0 | 0 | python,ibm-cloud,ibm-watson,personality-insights | 30,801,734 | 2 | false | 0 | 0 | You are not limited to 100 API calls a month, just over 100 you have to pay for the API calls. | 1 | 0 | 0 | I am using Python to program a script for IBM Watson's Personality Insights service. I am using the results as training data for a Machine Learning project.
Since the service is so limited (100 calls/month), is it possible to get multiple personality insights with only one API call? | Can one get multiple results from one API call in IBM Watson? | 0.099668 | 0 | 0 | 563 |
30,797,767 | 2015-06-12T07:37:00.000 | 3 | 0 | 1 | 0 | python,ubuntu,numpy,scipy | 30,802,293 | 2 | false | 0 | 0 | The majority of Linux distributions have a package manager that installs pre-compiled binary packages. In the case of numpy/scipy they would thus install Python source code with the precompiled C/Fortran extensions. No C/Fortran compilers are necessary for the install.
PyPI on the other hand, is a package manager for Python that is very roughly a wrapper around the python setup.py install command. It will in particular compile the necessary C/Fortran extensions from sources. It thus requires the gcc, gfortran compilers to be present on the system. This takes longer (~15 min for numpy) but has the advantage that it could be potentially optimized with compilation flags to the current CPU architecture and therefore marginally faster (that shouldn't matter much in practice though). | 1 | 3 | 0 | This is probably a trivial question and maybe even a duplicate.
What is the difference between numpy/scipy as installed from PyPI and as opposed to the one installed from a distribution's repository, say Ubuntu using apt-get? I think I have a vague idea- numpy as installed from PyPI requires a lot of other tools like gcc, gfortran before it can build. I am guessing a distro's version of numpy package comes with all these tools? Not sure if this is the right picture.
If so, using PyPI depending on which python I am pointing to I can install numpy and scipy for a particular version of python. Using apt-get, can you install numpy and scipy for a specific version of python? Does the package manager apt-get use the version of python I am pointing to? | Numpy installing via PyPI vs distro package manager | 0.291313 | 0 | 0 | 551 |
30,799,303 | 2015-06-12T09:07:00.000 | 2 | 0 | 1 | 0 | python,math,floating-point,nan,exponentiation | 30,813,777 | 3 | false | 0 | 0 | Floating-point arithmetic is not real-number arithmetic. Notions of "correct" informed by real analysis do not necessarily apply to floating-point.
In this case, however, the trouble is just that pow fundamentally represents two similar but distinct functions:
Exponentiation with an integer power, which is naturally a function RxZ --> R (or RxN --> R).
The two-variable complex function given by pow(x,y) = exp(y * log(x)) restricted to the real line.
These functions agree for normal values, but differ in their edge cases at zero, infinity, and along the negative real axis (which is traditionally the branch cut for the second function).
These two functions are sometimes divided up to make the edge cases more reasonable; when that's done the first function is called pown and the second is called powr; as you have noticed pow is a conflation of the two functions, and uses the edge cases for these values that come from pown. | 2 | 1 | 0 | Why is 1**Inf == 1 ?
I believe it should be NaN, just like Inf-Inf or Inf/Inf.
How is exponentiation implemented on floats in python?
exp(y*log(x)) would get correct result :/ | Why is the value of 1**Inf equal to 1, not NaN? | 0.132549 | 0 | 0 | 186 |
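A quick check in any Python shell confirms the edge cases above, including why the exp(y*log(x)) route the asker expected behaves differently:

```python
import math

inf = float("inf")

print(1.0 ** inf)            # 1.0 -- pow's pown-style edge case: 1**y is 1 for any y
print(1.0 ** float("nan"))   # 1.0 as well
print(inf - inf)             # nan
print(inf / inf)             # nan

# the exp(y*log(x)) route yields nan instead, since inf * log(1.0) == inf * 0.0 == nan
print(math.exp(inf * math.log(1.0)))  # nan
```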
30,799,303 | 2015-06-12T09:07:00.000 | 1 | 0 | 1 | 0 | python,math,floating-point,nan,exponentiation | 30,799,372 | 3 | false | 0 | 0 | Technically 1^inf is defined as limit(1^x, x->inf). 1^x = 1 for any x >1, so it should be limit(1,x->inf) = 1, not NaN | 2 | 1 | 0 | Why is 1**Inf == 1 ?
I believe it should be NaN, just like Inf-Inf or Inf/Inf.
How is exponentiation implemented on floats in python?
exp(y*log(x)) would get correct result :/ | Why is the value of 1**Inf equal to 1, not NaN? | 0.066568 | 0 | 0 | 186 |
30,801,879 | 2015-06-12T11:25:00.000 | 11 | 0 | 0 | 0 | android,python,navigation,ui-automation,appium | 40,546,232 | 8 | false | 1 | 0 | I guess maybe it depends on what version of the client library you are using, because in Java driver.navigate().back() works well. | 4 | 9 | 0 | I am working on test automation for a hybrid mobile application on Android using Appium(python client library). I haven't been able to figure out any means to automate or create a gesture for using the Phone back button to go back to the previous page of the app. Is there any driver function that can be used? I tried my luck with self.driver.navigate().back() [hoping this would simulate the same behaviour as in Selenium to navigate to the previous URL] but to no avail. Can anyone suggest a way out? | How to automate the android phone back button using appium | 1 | 0 | 1 | 25,243 |
30,801,879 | 2015-06-12T11:25:00.000 | 13 | 0 | 0 | 0 | android,python,navigation,ui-automation,appium | 31,707,929 | 8 | false | 1 | 0 | Yes,try the driver.back(), it simulates the system back function. | 4 | 9 | 0 | I am working on test automation for a hybrid mobile application on Android using Appium(python client library). I haven't been able to figure out any means to automate or create a gesture for using the Phone back button to go back to the previous page of the app. Is there any driver function that can be used? I tried my luck with self.driver.navigate().back() [hoping this would simulate the same behaviour as in Selenium to navigate to the previous URL] but to no avail. Can anyone suggest a way out? | How to automate the android phone back button using appium | 1 | 0 | 1 | 25,243 |
30,801,879 | 2015-06-12T11:25:00.000 | 0 | 0 | 0 | 0 | android,python,navigation,ui-automation,appium | 38,353,103 | 8 | false | 1 | 0 | driver.sendKeyEvent(AndroidKeyCode.BACK);
does the job in Java | 4 | 9 | 0 | I am working on test automation for a hybrid mobile application on Android using Appium(python client library). I haven't been able to figure out any means to automate or create a gesture for using the Phone back button to go back to the previous page of the app. Is there any driver function that can be used? I tried my luck with self.driver.navigate().back() [hoping this would simulate the same behaviour as in Selenium to navigate to the previous URL] but to no avail. Can anyone suggest a way out? | How to automate the android phone back button using appium | 0 | 0 | 1 | 25,243 |
30,801,879 | 2015-06-12T11:25:00.000 | 1 | 0 | 0 | 0 | android,python,navigation,ui-automation,appium | 50,558,943 | 8 | false | 1 | 0 | For appium-python-client, to go back you should call this method:
driver.press_keycode(4) | 4 | 9 | 0 | I am working on test automation for a hybrid mobile application on Android using Appium(python client library). I haven't been able to figure out any means to automate or create a gesture for using the Phone back button to go back to the previous page of the app. Is there any driver function that can be used? I tried my luck with self.driver.navigate().back() [hoping this would simulate the same behaviour as in Selenium to navigate to the previous URL] but to no avail. Can anyone suggest a way out? | How to automate the android phone back button using appium | 0.024995 | 0 | 1 | 25,243 |
30,802,608 | 2015-06-12T12:02:00.000 | 1 | 0 | 1 | 0 | python,multiprocessing | 30,803,007 | 1 | true | 0 | 0 | Well, one advantage of multiprocessing is that there are tools for interprocess communication and you can have shared variables with different sorts of restrictions. But if you don't need that, your approach is perfectly viable. What you are doing is basically what most map-reduce systems automate. I am sure there is some slight performance penalty in running an entire other interpreter but it's probably insignificant. | 1 | 1 | 0 | Is there any benefit in using multiprocessing versus running several python interpreters in parallel for long-running, embarrassingly parallel tasks?
At the moment, I'm just firing up several python interpreters that run the analysis over slices of input data, each of them dumping the results into a separate pickle file. It is trivial to slice the input data as well as to combine the results. I'm using python 3.4 on OS X and linux for that.
Is rewriting the code with the multiprocessing module worth the effort? It seems to me that it isn't, but then I'm far from being an expert... | Multiprocessing vs running several Python interpreters | 1.2 | 0 | 0 | 210 |
30,803,511 | 2015-06-12T12:48:00.000 | -1 | 0 | 0 | 0 | python,python-3.x,odoo,odoo-8 | 37,959,512 | 5 | false | 1 | 0 | Just try this, may help you
'res_model': 'your.model.to.reload', | 1 | 5 | 0 | I want to reload a page in odoo on a click of a button. I tried this:
object_name.refresh()
return {'tag': 'reload'}
but it's not working.
How can I get it? | Odoo Reload on button click | -0.039979 | 0 | 0 | 10,689 |
30,804,783 | 2015-06-12T13:50:00.000 | 0 | 0 | 0 | 0 | python,django,django-south | 30,805,141 | 1 | false | 1 | 0 | I figured out the last migration files were accidentally corrupted and caused the KeyError. | 1 | 0 | 0 | I am trying to do a schemamigration in Django with south using the following command where core is the app I would like to migrate.
$ python manage.py schemamigration core --auto
Unfortunately this throws the following KeyError:
KeyError: u"The model 'externaltoolstatus' from the app 'core' is not available in this migration."
Does anybody know how to figure out what went wrong, or where/when this error was thrown during the migration? | Django south schemamigration KeyError | 0 | 0 | 0 | 92 |
30,807,297 | 2015-06-12T15:51:00.000 | 2 | 0 | 1 | 0 | python,dictionary,exception-handling,stack-trace | 30,807,357 | 1 | true | 0 | 0 | No, the exception doesn't retain a reference to the dictionary that threw the exception. As such, you cannot enumerate the keys that do exist from just the exception. | 1 | 0 | 0 | If I catch an KeyError exception in python, I can easily get the key that was failed. Is there a way to access the keys that are in the dictionary?
I know the exception itself doesn't have the information, but is there a way to find it from the stack trace? | access to original dict after a KeyError | 1.2 | 0 | 0 | 74 |
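A short sketch (hypothetical dict) of what is and isn't recoverable after the exception: the failed key rides along in the exception's args, but listing the existing keys still requires having the dict itself in scope:

```python
d = {"a": 1, "b": 2}
try:
    d["missing"]
except KeyError as e:
    failed_key = e.args[0]   # the key that was looked up and not found
    # e carries no reference to d, so to enumerate the keys that do exist
    # you must still have the dict in scope:
    existing = list(d)

print(failed_key, existing)
```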
30,810,029 | 2015-06-12T18:36:00.000 | 0 | 0 | 0 | 0 | python,django,naming-conventions | 30,811,433 | 1 | true | 1 | 0 | You can use relative imports like from .. import cool_project as long as your modules are in a package. I would suggest renaming your app to something else, though, since the name clash creates unnecessary complexity. | 1 | 0 | 0 | I have developed a backend library, say, cool_project. Now I want to build a web interface for it. So I create a Django project. Of course, I want to name it cool_project: my mother told me that Hungarian notation is bad and that the name cool_project is much better than any of cool_project_web etc.
But now I have a collision. Whenever I try importing cool_project (the backend one) from django/cool_project/views.py (the views.py of the main Django app), the frontend package is imported instead.
Is there any way to import the backend project in this case? I tried adding the full path to the backend package (sys.path.insert(0, "/home/.../...")) but it didn't help.
Or maybe there is some well-known naming convention which helps avoiding such collisions? | Django package naming issues | 1.2 | 0 | 0 | 49 |
30,810,908 | 2015-06-12T19:35:00.000 | 0 | 0 | 0 | 0 | python,flask,flask-wtforms | 30,814,109 | 1 | true | 1 | 0 | As @dirn stated, that is the nature of file uploads. You have two options to get around this.
Save the uploaded file temporarily (especially if its large) while you prompt the user to fix the input error (as suggested by @dirn). This would require extra logic to purge files (assuming the user decides they don't want to submit form anymore or they go to a different page, etc)
Validate your form using javascript so that the file only uploads when the form is actually valid (wtforms doesn't really help you much with this option) | 1 | 2 | 0 | I'm having an issue where the contents of an uploaded file, via a FileField, are lost when the user resubmits form. I'm guessing the easy answer is to force the user to re-upload the file however I was wondering if there might be a workaround that can avoid having the user re-upload. | Flask-WTF File contents lost when form fails validation and user resubmits form | 1.2 | 0 | 0 | 115 |
30,810,963 | 2015-06-12T19:39:00.000 | 2 | 0 | 0 | 0 | python,excel | 30,811,295 | 3 | false | 0 | 0 | Based on your description, this seems easy enough to do in Excel:
Assume row 1 contains column headers, and data begin in row 2. If column A contains your values (starting in A2), in cell B2 use the formula =IF(ISBLANK(A2), B1, A2) and fill down. This formula will return the value of A2 if it is not blank, and will return the previous value in column B if the current value in column A is blank.
Note that this requires that the first cell in each group contains the value that you want to fill down.
A post-script for general reference: Excel has a hard time with blank cells resulting from formulas, so the formula ="" (or the result of something like =IFERROR(..., "")) is not blank, but does have a length of 0. Changing ISBLANK(A2) to LEN(A2)<1 accounts for these situations. | 1 | 0 | 0 | I have a couple thousand lines of data in excel. In one column, however, only every fifth line is filled. What I'm trying to do is fill in the four empty lines below each filled line with the data from the line above. I have a beginner's grasp of python, so if someone could steer me in the right direction, it would be a great help. Thanks a lot. | Filling in missing data in excel | 0.132549 | 1 | 0 | 1,002 |
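Since the asker knows some Python, the same fill-down is one call in pandas (a sketch with made-up data; real data would come from e.g. pandas.read_excel):

```python
import pandas as pd

# toy stand-in for the sparse Excel column: a value every few rows, blanks between
df = pd.DataFrame({"group": ["x", None, None, "y", None]})

# forward-fill copies the last non-empty value down into the blanks below it
df["group"] = df["group"].ffill()
print(df["group"].tolist())  # ['x', 'x', 'x', 'y', 'y']
```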