Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
26,429,023 | 2014-10-17T16:03:00.000 | 1 | 0 | 1 | 0 | python,mongodb,pymongo,bulkinsert,gridfs | 26,662,382 | 3 | true | 0 | 0 | I read and researched all the answers, but unfortunately they didn't fulfill my requirements. The data that I needed to use for specifying the _id of the JSON documents in GridFS was actually stored inside the JSON itself. It sounds like the worst idea ever, redundancy included, but unfortunately it's a requirement.
What I did was write an insert worker for parallel insertion into GridFS and insert all the data with several threads (2 GridFS threads were enough to get proper performance). | 1 | 1 | 0 | Is it possible? If so, then how?
Currently I'm inserting strings >16MB into GridFS one by one, but it's very slow when dealing not with 1 string but with thousands. I checked the documentation, but didn't find a single line about bulk inserts into GridFS storage, as opposed to a simple collection.
I'm using PyMongo for communication with MongoDB. | Bulk insert to GridFS in MongoDB | 1.2 | 1 | 0 | 1,705 |
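A minimal sketch of the multi-threaded GridFS insertion the accepted answer describes, assuming PyMongo's gridfs module; the worker count, queue hand-off, and the `documents` variable are illustrative assumptions, not the asker's actual code:

```python
import threading
import Queue  # Python 2 name; use `queue` on Python 3
import gridfs
from pymongo import MongoClient

def worker(q):
    fs = gridfs.GridFS(MongoClient().mydb)  # one GridFS handle per thread
    while True:
        doc = q.get()
        if doc is None:          # poison pill: stop the worker
            break
        # the _id to use is stored inside the JSON document itself
        fs.put(doc["payload"], _id=doc["_id"])

q = Queue.Queue()
threads = [threading.Thread(target=worker, args=(q,)) for _ in range(2)]
for t in threads:
    t.start()
for doc in documents:            # `documents` is a placeholder for your data
    q.put(doc)
for t in threads:
    q.put(None)
for t in threads:
    t.join()
```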
26,433,057 | 2014-10-17T20:33:00.000 | 1 | 0 | 1 | 0 | python,stream,newline | 26,433,289 | 1 | true | 0 | 0 | A python file-like object typically supports the ".fileno()" method. That returns the underlying file handle. Once you have the file handle, you should be able to use os.fdopen(file_handle, "rU") to obtain a new file object with universal newline semantics. | 1 | 0 | 0 | I have a file-like object representing a potentially endless stream. I want to read from this stream and count the lines, among other things, and I want to use universal newlines.
I don't have access to the statement that opens the file, so I can't just add mode='rU' to the open statement or equivalent thereof.
Nor can I read the entire file into memory and use splitlines() or io.StringIO(unicode(mystream.read()), newline=None)
Does anyone know of a way to accomplish this? | How can I read from an already opened file in universal newline mode? | 1.2 | 0 | 0 | 159 |
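A short sketch of the answer's fdopen approach; note that the 'rU' mode is a Python 2 idiom (on Python 3 you would wrap the descriptor with io.open(..., newline=None) instead), and `stream` and `process` are placeholders:

```python
import os

fd = stream.fileno()                 # `stream` is the already-open file-like object
universal = os.fdopen(fd, 'rU')      # new file object with universal newlines
for line in universal:
    process(line)                    # placeholder for the per-line handling
```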
26,433,138 | 2014-10-17T20:40:00.000 | 8 | 0 | 1 | 0 | python,string,syntax,concatenation,python-internals | 26,433,175 | 2 | false | 0 | 0 | The strings are concatenated by the Python parser before anything is executed, so it's not really like 'y' + 'z' or ''.join(('y', 'z')), although it has the same effect. | 1 | 37 | 0 | If you run x = 'y' 'z' in Python, you get x set to 'yz', which means that some kind of string concatenation is occurring when Python sees multiple strings next to each other.
But what kind of concatenation is this?
Is it actually running 'y' + 'z' or is it running ''.join(('y', 'z')) or something else? | What is under the hood of x = 'y' 'z' in Python? | 1 | 0 | 0 | 1,783
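You can check the answer's claim with the dis module: the two adjacent literals are folded into a single constant at compile time, so no concatenation happens at run time. The output below is from CPython 2.7; exact offsets vary by version:

```python
>>> import dis
>>> dis.dis(compile("x = 'y' 'z'", '<test>', 'exec'))
  1           0 LOAD_CONST               0 ('yz')
              3 STORE_NAME               0 (x)
              6 LOAD_CONST               1 (None)
              9 RETURN_VALUE
```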
26,433,963 | 2014-10-17T21:54:00.000 | 1 | 0 | 1 | 0 | python | 26,434,055 | 1 | true | 0 | 0 | When double-clicking a .py file in Windows, the registered program handler is launched to execute that file. This will be the Python interpreter, which runs in a console window. As soon as the process finishes, successfully or not, the window closes again, which doesn't allow you to see the error messages.
In order to keep it around, run your program from within the command line, e.g. by opening cmd.exe first and executing the file there. That way, even when the Python process exits, the output will still be visible. | 1 | 0 | 0 | I write Python code in Notepad++, save it to a .py file and then run it by double-clicking on the file in Windows.
However if there are any syntax errors in the file, the program simply shuts down. I get no error message.
How can I receive error messages regarding problems in the program? Also, what's the best way to code and run Python scripts? | How can I receive error messages from the Python interpreter? | 1.2 | 0 | 0 | 166
26,435,360 | 2014-10-18T01:08:00.000 | 0 | 0 | 0 | 0 | python,mysql,heroku | 26,444,744 | 2 | false | 1 | 0 | Heroku dynos are ephemeral. Even if you manually figure out your dyno's IP, it's going to change frequently. IP whitelisting isn't a viable scheme for securing connections between Heroku apps and other services. You should use TLS. | 1 | 0 | 0 | I've pushed my first app to heroku and everything looks good except for an issue w/ MySQL.
I'm pushing/pulling a remote server that asks me to whitelist specific IP addresses. I can't find the IP address for the Heroku app I've deployed, so all of the queries return errors.
Any thoughts on the best way to get that?
Thanks | heroku - get app IP to whitelist queries | 0 | 0 | 0 | 388 |
26,438,022 | 2014-10-18T09:04:00.000 | 0 | 0 | 0 | 0 | javascript,python,websocket | 26,441,656 | 1 | true | 1 | 0 | Cookies are available with WebSockets. Just log in and store a session/cookie for the user as normal. Then you will know who it is.
Or, just send the cookie as the first message after connecting. | 1 | 1 | 0 | I've created a websocket avatar chat application where a user is given an avatar and they can move around with this avatar and send messages.
I want to design a login which connects to my database (already has several accounts stored). When a user has logged in with the correct details, I'd like for their username to be shown on a chatlog i.e. "Damien has logged in". Of course there'd be several more features I'd be able to finally work on when I implement the login with the application but I'm not sure how I can.
I'm presuming it will perhaps involve adding a user list to the room? The WebSocket server is written in Python, the client in HTML5 and JavaScript.
Any suggestions? | Creating login for a websocket application? | 1.2 | 0 | 1 | 83 |
26,438,498 | 2014-10-18T09:59:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 26,438,528 | 1 | true | 0 | 0 | Yes. Call reload(file) in your application code every time after you change something. That will reflect the changes in your application. | 1 | 0 | 0 | I have written Python code in file.py, which is in the directory containing the Python application.
Python version is Python 2.7.4. Platform is Windows 7.
I imported file.py to python application and made changes to file.py while keeping the python application window on. But changes are not reflected. Every time changes are made in file.py, I will have to close python application and import file.py again for the changes to be reflected.
Is there a way to solve this problem? | Python application not recognizing updates made in python file | 1.2 | 0 | 0 | 46
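A tiny sketch of the accepted answer's reload approach (Python 2's built-in reload; on Python 3 it lives in importlib):

```python
import file        # first import, happens once

# ... after editing file.py on disk ...
reload(file)       # re-executes file.py so the new definitions take effect
```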
26,439,428 | 2014-10-18T11:49:00.000 | 0 | 0 | 0 | 0 | python,keypress | 26,440,943 | 1 | false | 0 | 1 | There are tutorials on how to create a key-logger with Python. They should help. But I do not know if that is the right way to go.
Also you could register shortcuts under a key combination on Windows.
You should be aware that Ctrl+Shift+Alt are handled independently of the keyboard layout, while Q changes with the language.
With pywin32 you should be able to do that, using Ctrl+Shift+Alt+F1 for example. | 1 | 0 | 0 | I am writing a long-running program with a simple GUI. 99% of the time I would like the program to run only as a background process, but sometimes I want to check the status, so is it possible to capture a keypress event in Python?
For example, I want to show the program window when I press Ctrl+Shift+Alt+Q. I expect to use the app on Windows.
Thank you | Python 3, capturing key combinations | 0 | 0 | 0 | 145 |
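A minimal ctypes sketch of registering a global hotkey, an alternative to pywin32 for the same Win32 RegisterHotKey API the answer alludes to; the constants are Win32 values and the Q key code is ASCII 'Q':

```python
import ctypes
from ctypes import wintypes

MOD_ALT, MOD_CONTROL, MOD_SHIFT = 0x0001, 0x0002, 0x0004
WM_HOTKEY = 0x0312
VK_Q = 0x51  # layout-dependent, as the answer warns

user32 = ctypes.windll.user32
# id=1, Ctrl+Shift+Alt+Q, registered for this thread's message queue
if not user32.RegisterHotKey(None, 1, MOD_CONTROL | MOD_SHIFT | MOD_ALT, VK_Q):
    raise RuntimeError("hotkey registration failed")

msg = wintypes.MSG()
while user32.GetMessageA(ctypes.byref(msg), None, 0, 0) != 0:
    if msg.message == WM_HOTKEY:
        print("hotkey pressed - show the program window here")
```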
26,440,093 | 2014-10-18T13:10:00.000 | 1 | 0 | 0 | 0 | python,django,django-templates,project-structure,filestructure | 26,440,891 | 1 | false | 1 | 0 | There are a few options, and the answer depends on the specifics of your software and hardware.
Are the images small and can they be generated quickly? Generate the images on-the-fly from a Django view. Do not store them anywhere. Have a url such as /user/some-important-widget-image/5/ which outputs a PNG of the some-important-widget image for user with the ID 5.
Are the files big, do they take a long time to generate, or will generating them on the fly not work because the server cannot handle it? Store them in the media directory. Have a cron job which every day, week, or month deletes images that were generated more than X hours ago. | 1 | 0 | 0 | I want to render an HTML page with images dynamically generated on the user's request by a Python script in Django. I do not need to store them permanently, only to generate the response HTML page from a template. Where should I store these images (or should I at all)? When should I delete the images if I store them somewhere on the server? | Where to store software generated user-specific image files in Python Django? | 0.197375 | 0 | 0 | 50
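A hedged sketch of the first option (generate on the fly, store nothing), using Pillow in a Django view; the view name, image size, and drawn text are illustrative assumptions:

```python
from io import BytesIO
from django.http import HttpResponse
from PIL import Image, ImageDraw

def widget_image(request, user_id):
    img = Image.new("RGB", (200, 50), "white")
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), "user %s" % user_id, fill="black")
    buf = BytesIO()
    img.save(buf, "PNG")
    # nothing touches the filesystem; the bytes go straight into the response
    return HttpResponse(buf.getvalue(), content_type="image/png")
```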
26,442,403 | 2014-10-18T17:10:00.000 | 1 | 0 | 0 | 0 | python,numpy,random-sample,normal-distribution,mixture-model | 26,565,108 | 2 | false | 0 | 0 | Since for sampling only the relative proportion of the distribution matters, the scaling prefactor can be thrown away. For a diagonal covariance matrix, one can just use the covariance submatrix and mean subvector that have the dimensions of the missing data. For a covariance with off-diagonal elements, the mean and covariance of the sampling Gaussian need to be adjusted (see the sketch after this question). | 1 | 1 | 1 | I want to sample only some elements of a vector from a sum of Gaussians that is given by their means and covariance matrices.
Specifically:
I'm imputing data using gaussian mixture model (GMM). I'm using the following procedure and sklearn:
impute with mean
get means and covariances with GMM (for example 5 components)
take one of the samples and sample only the missing values; the other values stay the same.
repeat a few times
There are two problems that I see with this. (A) how do I sample from the sum of gaussians, (B) how do I sample only part of the vector. I assume both can be solved at the same time. For (A), I can use rejection sampling or inverse transform sampling but I feel that there is a better way utilizing multivariate normal distribution generators in numpy. Or, some other efficient method. For (B), I just need to multiply the sampled variable by a gaussian that has known values from the sample as an argument. Right?
I would prefer a solution in python but an algorithm or pseudocode would be sufficient. | Sampling parts of a vector from gaussian mixture model | 0.099668 | 0 | 0 | 1,040 |
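A sketch covering both (A) and (B), assuming a fitted sklearn mixture with weights_, means_, and full covariance matrices (named covars_ in the sklearn of that era; newer versions use covariances_). Pick a component with probability proportional to its weight times the likelihood of the observed entries, then sample the missing entries from the standard conditional-Gaussian formula; the index handling is illustrative:

```python
import numpy as np
from scipy.stats import multivariate_normal

def sample_missing(x, miss, gmm):
    """x: 1-D sample with placeholder values at indices `miss`."""
    obs = np.setdiff1d(np.arange(x.size), miss)
    # (A) pick a mixture component given the observed part
    w = np.array([pi * multivariate_normal.pdf(x[obs], m[obs], C[np.ix_(obs, obs)])
                  for pi, m, C in zip(gmm.weights_, gmm.means_, gmm.covars_)])
    k = np.random.choice(len(w), p=w / w.sum())
    m, C = gmm.means_[k], gmm.covars_[k]
    # (B) conditional Gaussian p(x_miss | x_obs) for component k
    Coo = C[np.ix_(obs, obs)]
    Cmo = C[np.ix_(miss, obs)]
    mu = m[miss] + Cmo.dot(np.linalg.solve(Coo, x[obs] - m[obs]))
    cov = C[np.ix_(miss, miss)] - Cmo.dot(np.linalg.solve(Coo, Cmo.T))
    out = x.copy()
    out[miss] = np.random.multivariate_normal(mu, cov)
    return out
```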
26,444,590 | 2014-10-18T21:09:00.000 | 0 | 0 | 0 | 0 | python,rest,content-management-system,hosting | 26,446,258 | 1 | true | 1 | 0 | The beauty of REST is precisely that it doesn't matter where your API is, as long as its accessible from your Drupal server, or from the client if you have a javascript API client.
If it's a simple application and you have admin access to your Drupal server, there's nothing preventing you from hosting the Python webservice side-by-side. They may even share the same HTTP Server, like Apache or Nginx, although depending on your demands on each one it might be better to keep them separate.
If you're new to Python, the Flask framework is a decent option to write a simple REST-like API interfacing with a Python module. | 1 | 1 | 0 | I need to create a REST server of a python module/API of a BCI, so that the application can be accessed on my Drupal website. Will I need to create and host the REST server on a python-based website or CMS, so that it can be accessed by my Drupal website, or is the api and rest server uploaded and hosted directly on my web hosting server? If so, what is the simplest python CMS that for creating a REST server for a python module/API already available? | Do rest servers need to be hosted on a website or CMS? | 1.2 | 0 | 1 | 141 |
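A minimal Flask sketch of the kind of API the answer suggests; the endpoint name and the bci_module wrapper are hypothetical placeholders for your BCI's Python API:

```python
from flask import Flask, jsonify

import bci_module  # hypothetical wrapper around the BCI Python API

app = Flask(__name__)

@app.route("/signal", methods=["GET"])
def signal():
    return jsonify(value=bci_module.read_signal())  # placeholder call

if __name__ == "__main__":
    app.run()
```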
26,446,532 | 2014-10-19T02:00:00.000 | 1 | 0 | 1 | 0 | python | 26,446,568 | 2 | false | 0 | 0 | The immutability of a tuple is shallow: you cannot change what objects the tuple refers to. But if those objects are themselves mutable, you can mutate them. | 1 | 4 | 0 | I had read that Python tuples cannot be modified after they are created. For example, item assignment is not allowed for tuple objects. However, if I have list objects inside a tuple, then I am allowed to append to that list. So, shouldn't Python disallow that, as we are basically modifying a tuple? | Modifying lists within tuples | 0.099668 | 0 | 0 | 133
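A quick demonstration of the shallow immutability the answer describes:

```python
t = ([1, 2], 'a')
t[0].append(3)   # fine: mutates the list the tuple refers to
print(t)         # ([1, 2, 3], 'a')
t[0] = [9]       # TypeError: 'tuple' object does not support item assignment
```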
26,448,733 | 2014-10-19T08:25:00.000 | 0 | 0 | 0 | 0 | python,django,django-haystack,whoosh | 26,449,620 | 1 | false | 1 | 0 | is it not the label=_("Field_name") parameter in the checkbox field?
If it's about the verbose name, there is also verbose_name_plural, which can be set on the model | 1 | 0 | 0 | I installed django-haystack using Whoosh. Everything works great, but I want to alter the names displayed next to the check boxes. I know they are generated using the verbose name set on the models, but I still have an issue with the 's' being added at the end of the names. I know there are custom forms and custom views, but I am new to programming and some of the concepts do not make sense. I have also tried to search for ideas but have had no luck. Any suggestions/advice?
Thanks in advance!
:) | Django haystack. How to alter names of check boxes? | 0 | 0 | 0 | 36 |
26,454,624 | 2014-10-19T19:45:00.000 | 70 | 0 | 1 | 0 | python,pycharm | 26,454,706 | 2 | true | 0 | 0 | Renaming files in PyCharm is simple. You simply select Refactor > Rename when right-clicking on a file in the tree.
This will open a popup where you can type in the new filename. There are additional options when renaming, such as searching for references and in comments, strings, etc.
NOTE: While PyCharm is indexing files, the option is unavailable. Once indexing is finished (can take a while), it becomes available again (thanks @Eric_Sven_Puudist). | 2 | 49 | 0 | In PyCharm 3.4, I want to rename a file on the file tree that appears on the left of the IDE. If I right-click on a file, there is an option to delete it, but not to rename it. Similarly, there is no way of renaming it from the File or Edit menus. Is there a fundamental reason why PyCharm does not allow this from within the IDE, or have I missed the correct way of doing it? | Renaming a file in PyCharm | 1.2 | 0 | 0 | 31,905 |
26,454,624 | 2014-10-19T19:45:00.000 | 12 | 0 | 1 | 0 | python,pycharm | 26,454,745 | 2 | false | 0 | 0 | You can just choose the file and hit shift+F6 rename it then hit refactor | 2 | 49 | 0 | In PyCharm 3.4, I want to rename a file on the file tree that appears on the left of the IDE. If I right-click on a file, there is an option to delete it, but not to rename it. Similarly, there is no way of renaming it from the File or Edit menus. Is there a fundamental reason why PyCharm does not allow this from within the IDE, or have I missed the correct way of doing it? | Renaming a file in PyCharm | 1 | 0 | 0 | 31,905 |
26,456,543 | 2014-10-19T23:29:00.000 | 0 | 1 | 0 | 1 | python,windows,permissions,file-permissions | 26,456,745 | 1 | false | 0 | 0 | If I understand correctly, you want your program to both run and edit files in its current folder. Programs that users invoke run using the user credentials by default.
If you want to prevent users from editing those application config files there are a few tricks:
Wrap your app in a DOS batch file. In that batch file use "runas" to start your app using a different account that has both execute and write permissions to those configs. Ensure the invoking user does not have write permissions. That should solve your problem.
Instead of flat text files for configs, how about using SQLite? Or encrypt the file. Either way the result is the same: the user may be able to open the file but will not understand what they are looking at in a typical text editor. | 1 | 0 | 0 | I made a Python program and froze it to make an executable. The only problem I can see is that it cannot read/write the contents of several support files. I know that this is a permission error because the Program Files (x86) folder is protected. I would prefer to keep my supporting files in the same folder as my executable, so that users cannot alter them, and so my Python program can look for them locally.
I have tried changing the permissions, but I'm not sure which one controls whether my executable can read/write to the local folder. | Executable read permissions for "Program Files (x86)" | 0 | 0 | 0 | 1,108 |
26,460,151 | 2014-10-20T07:08:00.000 | 3 | 0 | 0 | 0 | python,django,rest,django-rest-framework | 32,542,169 | 1 | false | 1 | 0 | Yes, you can call YourViewSet.as_view()(self.request) in your Django view.
Make sure you call the ViewSet like below:
YourViewSet.as_view({'get': 'list'})(self.request)
Else it will raise an exception
The actions argument must be provided when calling .as_view() on a ViewSet. For example .as_view({'get': 'list'}) | 1 | 6 | 0 | I know I can use a DRF serializer from Django views, but the queryset and pagination settings are all duplicated between the DRF viewset and the Django view.
Can I reuse the viewset to generate JSON data and include it in a regular Django response?
Update:
i.e., can I call ViewSet.as_view()(self.request) from a Django view?
It's not a documented way, so I'm wondering about the downsides of this approach, and whether it's doable. | Django Rest Framework, can I use ViewSet to generate a json from django view function? | 0.53705 | 0 | 0 | 6,147
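A hedged sketch of calling the ViewSet from a plain Django view as the answer shows; since this is not a documented pattern, treat it as an experiment rather than an endorsed API:

```python
def my_view(request):
    response = YourViewSet.as_view({'get': 'list'})(request)
    data = response.data   # the serialized payload, available before rendering
    # ... embed `data` in a regular Django template context or response ...
```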
26,472,868 | 2014-10-20T18:53:00.000 | 1 | 1 | 0 | 1 | php,python,python-2.7,background-process,python-daemon | 26,473,069 | 2 | true | 0 | 0 | The obvious answer here is to either:
Run analyze.py once per filename, instead of running it as a daemon.
Pass analyze.py a whole slew of filenames at startup, instead of passing them one at a time.
But there may be a reason neither obvious answer will work in your case. If so, then you need some form of inter-process communication. There are a few alternatives:
Use the Python script's standard input to pass it data, by writing to it from the (PHP) parent process. (I'm not sure how to do this from PHP, or even if it's possible, but it's pretty simple from Python, sh, and many other languages, so …) A sketch of the Python side of this option follows this question.
Open a TCP socket, Unix socket, named pipe, anonymous pipe, etc., giving one end to the Python child and keeping the other in the PHP parent. (Note that the first one is really just a special case of this one—under the covers, standard input is basically just an anonymous pipe between the child and parent.)
Open a region of shared memory, or an mmap-ed file, or similar in both parent and child. This probably also requires sharing a semaphore that you can use to build a condition or event, so the child has some way to wait on the next input.
Use some higher-level API that wraps up one of the above—e.g., write the Python child as a simple HTTP service (or JSON-RPC or ZeroMQ or pretty much anything you can find good libraries for in both languages); have the PHP code start that service and make requests as a client. | 1 | 0 | 0 | I have a python script (analyze.py) which takes a filename as a parameter and analyzes it. When it is done with analysis, it waits for another file name. What I want to do is:
Send file name as a parameter from PHP to Python.
Run analyze.py in the background as a daemon with the filename that came from PHP.
I can pass the parameter from PHP as a command-line argument to Python, but I cannot send a parameter to a Python script that is already running in the background.
Any ideas? | Send Parameter to a Python Script Running at Background From PHP | 1.2 | 0 | 0 | 1,520 |
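A sketch of the standard-input option from the answer: the Python side reads filenames line by line from stdin, so the PHP parent can keep the process open (e.g. via proc_open) and write one filename per line. The analyze function is a placeholder for the existing code:

```python
import sys

def analyze(filename):
    pass  # placeholder for the existing analysis code

for line in sys.stdin:       # blocks until the parent writes a line
    filename = line.strip()
    if not filename:
        continue
    analyze(filename)
```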
26,473,197 | 2014-10-20T19:13:00.000 | 0 | 0 | 0 | 1 | python,macos,pip,libxml2 | 45,097,599 | 2 | false | 0 | 0 | I had the same problem and installing the Command Line Tools fixed the problem for me.
I just wanted to note that calling xcode-select -p
and getting the output /Applications/Xcode.app/Contents/Developer
does not tell if Xcode Command Line Tools are installed (like stated in the comments on the question)!
For me it returned the same output but xcode-select --install started the installation. After the installation xcode-select --install printed xcode-select: error: command line tools are already installed, use "Software Update" to install updates
So to check if command line tools are installed better use xcode-select --install. | 2 | 8 | 0 | I have installed both libxml2 and libxslt with homebrew, but it doesn't want to install libxml2-dev or libxslt-dev:
Error: No available formula for libxml2-dev
I have pip, port, and everything else I could find. I even installed the Xcode Command Line Tools,
but with no luck. What is the way to install libxml2-dev & libxslt-dev on Mac 10.10? | How to install libxml2-dev libxslt-dev on Mac os | 0 | 0 | 0 | 10,943 |
26,473,197 | 2014-10-20T19:13:00.000 | 11 | 0 | 0 | 1 | python,macos,pip,libxml2 | 26,490,857 | 2 | true | 0 | 0 | Try adding STATIC_DEPS, like this
STATIC_DEPS=true sudo pip install lxml | 2 | 8 | 0 | I have installed both libxml2 and libxslt with homebrew, but it doesn't want to install libxml2-dev or libxslt-dev:
Error: No available formula for libxml2-dev
I have pip, port, and everything else I could find. I even installed the Xcode Command Line Tools,
but with no luck. What is the way to install libxml2-dev & libxslt-dev on Mac 10.10? | How to install libxml2-dev libxslt-dev on Mac os | 1.2 | 0 | 0 | 10,943 |
26,477,915 | 2014-10-21T02:20:00.000 | 0 | 0 | 1 | 0 | python,twitter,streaming,tweepy | 68,458,970 | 2 | false | 0 | 0 | Yes it is, you have to create a separate Listener class per stream | 1 | 4 | 0 | For example, I'd like to collect data related to three keywords:
keyword1
keyword2
keyword3
I understand that I could collect them all at one time using: set track=[keyword1,keyword2,keyword3]. Is it possible to run three different Python processes to collect data for those keywords separately? | Does Tweepy support running multiple Streams to collect data? | 0 | 0 | 0 | 2,395 |
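A sketch of one-process-per-keyword with the tweepy streaming API of that era (StreamListener/Stream); the credentials and keyword list are placeholders:

```python
import multiprocessing
import tweepy

class Listener(tweepy.StreamListener):
    def on_status(self, status):
        print(status.text)

def track(keyword):
    auth = tweepy.OAuthHandler("KEY", "SECRET")        # placeholders
    auth.set_access_token("TOKEN", "TOKEN_SECRET")
    tweepy.Stream(auth=auth, listener=Listener()).filter(track=[keyword])

for kw in ["keyword1", "keyword2", "keyword3"]:
    multiprocessing.Process(target=track, args=(kw,)).start()
```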
26,478,208 | 2014-10-21T02:58:00.000 | 0 | 0 | 0 | 1 | python,shell,subprocess,psutil | 26,627,865 | 1 | false | 0 | 0 | If the process "dies" or gets reaped, how are you supposed to interact with it? Of course you can't, because it's gone. If, on the other hand, the process is a zombie, then you might be able to extract some info from it, like the parent PID, but not CPU or memory stats. | 1 | 0 | 0 | Apparently I can't get a process's resource usage on Mac OS X with psutil after the process has been reaped, i.e. after p.wait() where p is a psutil.Popen() instance. So, for example, if I try ps.cpu_times().system where ps is a psutil.Process() instance, I get a "no such process" error. What are the other options for measuring resource usage on a Mac (elapsed time, memory, and CPU usage)? | Python psutil collect process resources usage on Mac OS X | 0 | 0 | 0 | 605
26,479,903 | 2014-10-21T06:04:00.000 | 1 | 0 | 0 | 0 | python,web.py | 26,498,875 | 1 | false | 1 | 0 | Use web.data(). | 1 | 0 | 0 | My situation is: a server sends a request to me; the request's contentType is 'text/xml', and the request content is XML. First I need to get the request content. But when I use web.input() in the POST function, I can't get any message; the result is just ''. I know web.py can get form data from a request, so how can I get the message from the request when the contentType is 'text/xml' in the POST function? Thanks! | web.py how to get message from request when the contentType is 'text/xml' | 0.197375 | 0 | 1 | 86
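A small sketch of the answer inside a web.py handler: web.data() returns the raw request body regardless of Content-Type, so the XML arrives intact; the handler class name is an assumption:

```python
import web
import xml.etree.ElementTree as ET

class Handler(object):
    def POST(self):
        raw = web.data()            # raw bytes of the 'text/xml' body
        root = ET.fromstring(raw)   # parse it as XML
        return "received %s" % root.tag
```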
26,479,928 | 2014-10-21T06:06:00.000 | 1 | 1 | 1 | 0 | python | 26,480,045 | 2 | true | 0 | 0 | If you're doing from os import environ, then you'll reference it as environ.
If you do import os, it's os.environ.
So depending on your needs, the second option might be better. The first will look better and read easier, whereas the second avoids namespace pollution. | 1 | 0 | 0 | In myModule.py I am importing environ from os, like
from os import environ, since I am only using environ. But when I do dir(myModule) it shows environ as publicly visible; however, should it be imported as protected, assuming some other project may also have its own environ function? | when importing functions from inside builtins like os or sys is it good practice to import as protected? | 1.2 | 0 | 0 | 114
26,480,008 | 2014-10-21T06:12:00.000 | 2 | 0 | 0 | 0 | python,sockets,flask,twisted,pythonanywhere | 26,503,901 | 2 | false | 0 | 0 | It depends what sort of connection your clients need to make to the server. PythonAnywhere supports WSGI, which means "normal" HTTP request/response interactions -- GET, POST, etc. That works well for "traditional" web pages or web apps.
If your client side needs dynamic, two-way connections using non-HTTP protocols, raw sockets, or even websockets, PythonAnywhere doesn't support that at present.
26,480,009 | 2014-10-21T06:12:00.000 | -2 | 0 | 0 | 0 | android,python | 44,345,713 | 3 | false | 0 | 1 | You need a Python Interpreter to run Python scripts,Pydroid Gives you access to compile Python scipts and comes with pip,So you could Install new Python Modules,it's the only app that gives a real Experience of Programming using python. | 1 | 1 | 0 | I have some python scripts which are doing image processing work using its own numpy and scipy libraries. How can I use/call these scripts in Android application providing image input from camera captures and saving the images after processed. Is there some native support for Python like C++. What performance implications would be there if I compare with using C++ as a native support. Any help would be greatly appreciated. | Using Python scripts in Android for image processing | -0.132549 | 0 | 0 | 3,179 |
26,486,808 | 2014-10-21T12:31:00.000 | 0 | 0 | 0 | 0 | python,flask,jinja2 | 26,513,516 | 3 | false | 1 | 0 | One way I can think of is to use a decorator that provides extra context variables to each view's result. | 1 | 2 | 0 | When using jinja2, base "skeleton" template are often extended by many other templates.
One of my base templates require certain variables in the context, and everywhere I use this base template I have to duplicate the setting up procedure.
For example, I may need to read some category names from DB and render them as a list in the header, now I have to write this query everywhere I use the base template.
What are some good ways to avoid duplicating this kind of code when using jinja2? | How to avoid duplicating context-setting-up procedure when using base template? | 0 | 0 | 0 | 81
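The answer's decorator idea can be realized with Django's built-in mechanism for exactly this, a template context processor, which injects variables into every template render; the Category model is an illustrative assumption:

```python
# myapp/context_processors.py
# Register in settings, e.g. TEMPLATE_CONTEXT_PROCESSORS (older Django)
# or TEMPLATES[0]['OPTIONS']['context_processors'] (newer Django).
from myapp.models import Category   # hypothetical model

def base_context(request):
    # available in every template, so the base "skeleton" can render it
    return {"categories": Category.objects.all()}
```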
26,487,648 | 2014-10-21T13:14:00.000 | 1 | 0 | 0 | 1 | python,django,django-chronograph | 26,487,761 | 3 | false | 1 | 0 | I would suggest you to configure cron to run your command at specific times/intervals. | 2 | 0 | 0 | I need to run a specific manage.py commands on an EC2 instance every X minutes. For example: python manage.py some_command.
I have looked up django-chronograph. Following the instructions, I've added chronograph to my settings.py but on runserver it keeps telling me No module named chronograph.
Is there something I'm missing to get this running? And after running how do I get manage.py commands to run using chronograph?
Edit: It's installed in the EC2 instance's virtualenv. | Run specific django manage.py commands at intervals | 0.066568 | 0 | 0 | 368 |
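A hedged example of the cron approach from the first answer; the paths and interval are assumptions for your EC2 instance (add the line with crontab -e, pointing at the virtualenv's Python):

```
*/15 * * * * /home/ubuntu/venv/bin/python /home/ubuntu/project/manage.py some_command
```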
26,487,648 | 2014-10-21T13:14:00.000 | 0 | 0 | 0 | 1 | python,django,django-chronograph | 26,488,221 | 3 | false | 1 | 0 | First, install it by running pip install django-chronograph. | 2 | 0 | 0 | I need to run a specific manage.py commands on an EC2 instance every X minutes. For example: python manage.py some_command.
I have looked up django-chronograph. Following the instructions, I've added chronograph to my settings.py but on runserver it keeps telling me No module named chronograph.
Is there something I'm missing to get this running? And after running how do I get manage.py commands to run using chronograph?
Edit: It's installed in the EC2 instance's virtualenv. | Run specific django manage.py commands at intervals | 0 | 0 | 0 | 368 |
26,487,774 | 2014-10-21T13:20:00.000 | 1 | 1 | 0 | 0 | python,soap-client,suds | 26,581,414 | 1 | false | 1 | 0 | While I couldn't find an alternate lib, I was able to run suds over multiprocessing using the pathos.multiprocessing package. | 1 | 1 | 0 | I am using suds-jurko==0.6, but it's very slow when I try to connect to a remote SOAP server with caching and a local WSDL.
Can anyone suggest a faster, more active/recent SOAP client for Python? | python '14 fast SOAP Client | 0 | 0 | 0 | 342
26,488,595 | 2014-10-21T14:00:00.000 | 0 | 0 | 1 | 0 | python,64-bit,32bit-64bit,py2exe,32-bit | 26,672,009 | 1 | true | 0 | 0 | You should install the 32-bit python (in a separate directory, you can do it on the same machine). Install 32-bit py2exe for this 32-bit Python installation plus all Python packages that you need. Then you can build a 32-bit eecutable. | 1 | 1 | 0 | I need to make an 32bit exe file using py2exe. The problem is that my machine and Python are 64bit. Is there some simple way how to make 32bit using 64bit Python and py2exe?
I heard that I should uninstall py2exe and install new py2exe 32bit, can this help me?
EDIT: If 32bit py2exe works, can I install 32bit py2exe next to my 64bit py2exe? | 32bit exe on 64bit Python using py2exe | 1.2 | 0 | 0 | 803 |
26,489,103 | 2014-10-21T14:23:00.000 | 0 | 0 | 1 | 0 | python,uninstallation,py2exe | 26,674,641 | 1 | false | 0 | 0 | py2exe ist a Python library. It uses the Python it is installed for.
So, it seems you need to install 32-bit Python (NOT into the same directory as ths 64-bit Python!), and then install the py2exe in it, and you should be ready to go. | 1 | 0 | 0 | I'm trying to switch my 64bit py2exe to 32bit version. Since I haven't found any uninstall command for py2exe, I've decided to just install 32bit version without uninstalling previous 64bit version.
Is this process harmless?
EDIT: I want to install 32-bit py2exe to make a 32-bit program. | Switch py2exe 64bit to 32bit | 0 | 0 | 0 | 263
26,490,096 | 2014-10-21T15:10:00.000 | 4 | 0 | 1 | 0 | python,main | 26,490,345 | 1 | true | 0 | 0 | You do not have to have a main function in Python and writing separate files without a main function, to be imported into other programs, is the normal and correct way of doing Python programming.
When a Python file is loaded (either using import or by getting executed from the command line), each statement in the program is executed at that time. Statements that are def or class statements create a function or class definition for later use. Statements that are not inside a def or class will be executed right away.
Therefore, the equivalent of a main() function in other languages is actually the set of executable statements found in your file. If you limit these to def and/or class statements, you will get the effect you want. | 1 | 3 | 0 | Can I write Python code with a bunch of functions but without a main function? The purpose of this script is for other scripts to import some of the functions from. I will call it setvar_general.py or something, which will be imported by a series of other setvar_x scripts. While these setvar_x scripts do more specific things, setvar_general does nothing other than provide building blocks. Therefore there is no need to define a main function in setvar_general.py.
I guess it all comes down to the question "do I have to have a main function"? | do I have to have main function in my python code? | 1.2 | 0 | 0 | 6,215
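A tiny illustration of the answer's point: everything at module top level runs at import time, so a module of nothing but def statements is a perfectly normal "main-less" Python file:

```python
# setvar_general.py -- no main function needed
print("runs once, at import time")   # executable statement

def block_a():                        # just stored for later use
    return 1

# elsewhere: `import setvar_general` prints the line above and defines block_a
```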
26,493,207 | 2014-10-21T18:00:00.000 | 0 | 1 | 0 | 0 | python,amazon-web-services,amazon-ec2,boto | 26,495,168 | 2 | false | 1 | 0 | Here's the method I have come up with:
To look up all IPs to see if they are EIPs associated with our AWS account
Get a list of all our EIPs
Get a list of all instances
Build list of all public IPs of instances
Merge lists/use same list
Check desired IPs against this list.
Comments welcome. | 1 | 1 | 0 | So I have a list of public IP addresses and I'd like to see if they are a public IP that is associated with our account. I know that I can simply paste each IP into the search box in the AWS EC2 console. However I would like to automate this process via a Python program.
I'm told anything you can do in the console, you can do via CLI/program, but which function do I use to simply either return a result or not, based on whether it's a public IP that's associated with our account?
I understand I may have to do two searches, one of instances (which would cover non-EIP public IPs) and one of EIPs (which would cover disassociated EIPs that we still have).
But how? | How to tell if my AWS account "owns" a given IP address in Python boto | 0 | 0 | 1 | 1,555 |
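A sketch of that method using boto 2's EC2 API; the region and the list of IPs to check are assumptions, while get_all_addresses and get_only_instances are boto 2 calls:

```python
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")   # assumed region
account_ips = set(a.public_ip for a in conn.get_all_addresses())       # EIPs
account_ips.update(i.ip_address for i in conn.get_only_instances()
                   if i.ip_address)              # instance public IPs

for ip in ips_to_check:                          # placeholder list of IPs
    print(ip, ip in account_ips)
```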
26,493,370 | 2014-10-21T18:10:00.000 | 1 | 0 | 0 | 0 | javascript,python,ghost.py | 26,505,780 | 1 | true | 1 | 0 | I solved my problem. There is an optional parameter to the Ghost class' click() method called expect_loading and when set to true it sets an internal boolean self.loaded = False and then calls wait_for_page_loaded() which then works, I guess because of the loaded boolean. | 1 | 0 | 0 | I'm having an issue with Ghost.py. The site I am trying to crawl has links for a paginated list that work with javascript, rather than direct hrefs. When I click the links, I can't really wait for selectors because the selectors are the same on each page, so ghost doesn't wait since the selector is already present. I can't assume I know what text will be on the next page, so waiting for text will not work. And waiting for page loaded won't work either. It's almost as though the javascript is not being executed.
Ghost.py seems to have minimal documentation (if you can call the examples on the website documentation) so it is really difficult to work out what I can do, and what tools are available to me. Can anybody with more experience help me out? | Ghost.py links through javascript | 1.2 | 0 | 1 | 565 |
26,496,226 | 2014-10-21T21:00:00.000 | 1 | 0 | 1 | 0 | python,performance,optimization,dictionary | 26,496,326 | 2 | false | 0 | 0 | Python dictionaries are implemented as hash-maps in the background. The key length might have some impact on the performance if, for example, the hash-functions complexity depends on the key-length. But in general the performance impacts will be definitely negligable.
So I'd say there is little to no benefit for the added complexity. | 1 | 6 | 0 | I'm not clear on what goes on behind the scenes of a dictionary lookup. Does key size factor into the speed of lookup for that key?
Current dictionary keys are between 10-20 long, alphanumeric.
I need to do hundreds of lookups a minute.
If I replace those with smaller key IDs of between 1 & 4 digits will I get faster lookup times? This would mean I would need to add another value in each item the dictionary is holding. Overall the dictionary will be larger.
Also I'll need to change the program to lookup the ID then get the URL associated with the ID.
Am I likely just adding complexity to the program with little benefit? | Optimizing Python Dictionary Lookup Speeds by Shortening Key Size? | 0.099668 | 0 | 0 | 3,023 |
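A quick way to test the claim with timeit. Note that hashing a fresh string is proportional to its length, but CPython caches the hash on each string object, so repeated lookups with the same key are barely affected; at hundreds of lookups per minute the difference is noise either way:

```python
import timeit

setup = "d = {'k' * n: 1 for n in (4, 20)}"
short = timeit.timeit("d['kkkk']", setup=setup, number=10**6)
long_ = timeit.timeit("d['%s']" % ("k" * 20), setup=setup, number=10**6)
print(short, long_)   # expect a negligible difference at these sizes
```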
26,496,539 | 2014-10-21T21:19:00.000 | 0 | 0 | 1 | 1 | windows,python-2.7,python-3.x | 26,502,745 | 2 | false | 0 | 0 | Create a python2.bat and a python3.bat file somewhere on your path (could in your main python folder). That file only contains the location of the relavant python.exe, e.g.
C:\Programs\Python26\python.exe %* | 1 | 1 | 0 | How do I set permanent paths for both Python 2 and 3 in command prompt such that I can invoke either every time I open the command window ie 'python2' for python 2 interpreter or 'python3' for python 3 interpreter | Python dual install | 0 | 0 | 0 | 395 |
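For example, python2.bat could contain just the single line below (adjust the install path to match your machine); a python3.bat pointing at the Python 3 folder works the same way, and %* forwards all arguments to the interpreter:

```
@C:\Python27\python.exe %*
```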
26,496,708 | 2014-10-21T21:30:00.000 | 0 | 0 | 1 | 1 | python,shell,terminal | 26,496,759 | 3 | false | 0 | 0 | You can use the sys module's stdin attribute as a file like object. | 2 | 0 | 0 | Is it possible to run a python script and feed in a file as an argument using <? For example, my script works as intended using the following command python scriptname.py input.txt and the following code stuffFile = open(sys.argv[1], 'r').
However, what I'm looking to do, if possible, is use this command line syntax: python scriptname.py < input.txt. Right now, running that command gives me only one argument, so I likely have to adjust my code in my script, but am not sure exactly how.
I have an automated system processing this command, so it needs to be exact. If that's possible with a Python script, I'd greatly appreciate some help! | How to Accept Command Line Arguments With Python Using < | 0 | 0 | 0 | 299 |
26,496,708 | 2014-10-21T21:30:00.000 | 1 | 0 | 1 | 1 | python,shell,terminal | 26,496,756 | 3 | true | 0 | 0 | < file is handled by the shell: the file doesn't get passed as an argument. Instead it becomes the standard input of your program, i.e., sys.stdin. | 2 | 0 | 0 | Is it possible to run a python script and feed in a file as an argument using <? For example, my script works as intended using the following command python scriptname.py input.txt and the following code stuffFile = open(sys.argv[1], 'r').
However, what I'm looking to do, if possible, is use this command line syntax: python scriptname.py < input.txt. Right now, running that command gives me only one argument, so I likely have to adjust my code in my script, but am not sure exactly how.
I have an automated system processing this command, so it needs to be exact. If that's possible with a Python script, I'd greatly appreciate some help! | How to Accept Command Line Arguments With Python Using < | 1.2 | 0 | 0 | 299 |
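A sketch that handles both invocations, the filename-argument form and the redirected form python scriptname.py < input.txt, following both answers:

```python
import sys

if len(sys.argv) > 1:
    stuff_file = open(sys.argv[1], 'r')   # python scriptname.py input.txt
else:
    stuff_file = sys.stdin                # python scriptname.py < input.txt

for line in stuff_file:
    pass  # existing processing goes here
```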
26,499,051 | 2014-10-22T01:32:00.000 | 0 | 1 | 1 | 0 | python,mbox | 26,501,853 | 1 | false | 0 | 0 | Consider splitting the mailbox manually. The format is fairly easy to process (as long as you only need read-only access) by reading it line-per-line; and you can use the existing classes for the actual parsing of individual messages.
Look up the definition of the mbox format: lines beginning with "From " (note the trailing space) start a new mail. You can split the huge file at these markers, then use the mailbox package to read only one file at a time. | 1 | 4 | 0 | I am using the Python package mailbox, and I am trying to extract the messages and clean the data. I am running into the problem that for large databases, I can call the constructor with my sample file, but when I try to print any messages my program hangs. I assume it is because the file I am trying to read is over 7GB. How can I deal with this problem? | Python mailbox on large mbox datasets | 0 | 0 | 0 | 654
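A sketch of the manual split the answer describes, streaming one message at a time instead of loading 7 GB into memory; here each chunk is handed to the standard email parser rather than written to a separate file:

```python
import email

def iter_messages(path):
    buf = []
    with open(path) as f:
        for line in f:
            if line.startswith("From "):    # envelope line: a new message begins
                if buf:
                    yield email.message_from_string("".join(buf))
                buf = []
                continue                    # drop the envelope line itself
            buf.append(line)
        if buf:
            yield email.message_from_string("".join(buf))
```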
26,500,184 | 2014-10-22T04:07:00.000 | 0 | 0 | 1 | 1 | python,windows,subprocess | 26,500,297 | 3 | false | 0 | 0 | You need to call results.kill() or results.terminate() (they are aliases on Windows) to end your subprocesses before exiting your main script. | 1 | 0 | 0 | Hi I am writing a python script in windows and using subprocess
I have a line like
results=subprocess.Popen(['xyz.exe'],stdout=subprocess.PIPE)
After the script ends and I get back to the prompt in cmd, I see more output from the script being printed out.
I'm seeing stuff like
Could Not Find xxx_echo.txt
Being printed out repeatedly.
How do I properly close the subprocess in windows? | Python windows script subprocess continues to output after script ends | 0 | 0 | 0 | 390 |
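A hedged sketch of the cleanup the answer suggests; communicate() also drains the pipe so the child cannot block on a full stdout buffer:

```python
import subprocess

results = subprocess.Popen(['xyz.exe'], stdout=subprocess.PIPE)
try:
    out, _ = results.communicate()   # drain stdout and wait for exit
finally:
    if results.poll() is None:       # child still alive: end it explicitly
        results.terminate()          # same as kill() on Windows
```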
26,500,725 | 2014-10-22T05:16:00.000 | 0 | 0 | 1 | 1 | python,windows,command-line,environment-variables | 70,948,379 | 2 | false | 0 | 0 | One quick solution for those who are still struggling with the environment-variable setup issue: just uninstall the existing Python version and reinstall it, making sure to enable the "Add Python 3.10 to PATH" checkbox. | 1 | 0 | 0 | I've been using Python for some time now, but I have never been able to properly run it from the Windows command line. The error shown is:
C:\Windows\system32>python
'python' is not recognized as an internal or external command, operable program or batch file.
I've tried to solve the problem many times. I understand it's a matter of editing the environment variables, but this hasn't fixed the problem. My System Path variable is currently
C:\Python27;C:\Python27\Lib;C:\Python27\DLLs;C:\Python27\Lib\lib-tk
This is the correct location of Python in my directory. I've tried adding this to my User Path, and I've tried creating a PYTHONPATH variable containing them.
I should note that running python.exe does work.
C:\Windows\system32>python.exe
Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
I've tried a variety of solutions to no avail. Any help is greatly appreciated. | Adding Python to Windows environmental variables | 0 | 0 | 0 | 1,498 |
26,500,750 | 2014-10-22T05:19:00.000 | 0 | 0 | 1 | 0 | python | 26,500,795 | 4 | false | 0 | 0 | Define a comparison function that increments a counter variable (global or an instance variable for the object that owns the comparison function). Pass it as the cmp argument in Python 2, or wrap it with functools.cmp_to_key for the key argument in Python 3. After the sort, query the counter. | 1 | 1 | 0 | I have a list in Python. I need to find the count of comparisons that the sort() function performs when I use it. How can I do this?
Thank you. | How to find count of comparisons in method sort() | 0 | 0 | 0 | 106 |
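A version-safe sketch of that idea, wrapping a counting comparison function with functools.cmp_to_key (available on Python 2.7 and 3.2+):

```python
import functools

count = [0]                    # mutable so the closure can update it

def counting_cmp(a, b):
    count[0] += 1
    return (a > b) - (a < b)   # standard three-way comparison

data = [5, 2, 9, 1]
data.sort(key=functools.cmp_to_key(counting_cmp))
print(count[0], data)          # number of comparisons, then the sorted list
```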
26,503,035 | 2014-10-22T08:17:00.000 | 1 | 0 | 1 | 0 | python,ruby,scripting,powershell-2.0 | 26,507,747 | 1 | true | 0 | 0 | The best answer is "write tests". For purely syntactic checking with some code-correctness checks, such as catching a call to a function that does not exist, as you are describing, pylint is probably the best tool. Install it with pip install pylint. | 1 | 0 | 0 | This question applies to dynamically interpreted code, I guess
In detail
Say I have a set of data processing projects that depend on a common module called tools. Down the road of development, I find out that I want to change the interface of one of the functions or methods in tools.
This interface-change might not be totally backwards compatible, it might break a subset of my data processing projects.
If all the software involved had to be compiled, I could simply re-compile everything and the compiler would point me to the spots where I have to adapt the calling code to the new signature. But how can this be done in an interpreted situation?
TL;DR
A set of script programs depend on a script module. After changing the interface of the module in a possibly not backwards-compatible way, how do I check the dependent programs and make them compliant with the new interface? | Checking all code paths in a project written in a scripting language for syntax-correctness | 1.2 | 0 | 0 | 83
26,503,826 | 2014-10-22T09:01:00.000 | 1 | 0 | 0 | 0 | python,django,database-migration,django-1.7,django-1.9 | 57,603,582 | 8 | false | 1 | 0 | From Django 2.X, using ugettext_lazy instead of ugettext or gettext fixes it. | 1 | 54 | 0 | When I change help_text or verbose_name for any of my model fields and run python manage.py makemigrations, it detects these changes and creates a new migration, say, 0002_xxxx.py.
I am using PostgreSQL and I think these changes are irrelevant to my database (I wonder if a DBMS for which these changes are relevant exists at all).
Why does Django generate migrations for such changes? Is there an option to ignore them?
Can I apply the changes from 0002_xxxx.py to the previous migration (0001_initial.py) manually and safely delete 0002_xxxx.py?
Is there a way to update previous migration automatically? | Why does Django make migrations for help_text and verbose_name changes? | 0.024995 | 0 | 0 | 7,359 |
26,509,319 | 2014-10-22T14:02:00.000 | 2 | 0 | 0 | 0 | python,arrays,numpy,svm,libsvm | 26,509,674 | 1 | true | 0 | 0 | The svmlight format is tailored to classification/regression problems. Therefore, the array X is a matrix with as many rows as data points in your set, and as many columns as features. y is the vector of instance labels.
For example, suppose you have 1000 objects (images of bicycles and bananas, for example), featurized in 400 dimensions. X would be 1000x400, and y would be a 1000-vector with a 1 entry where there should be a bicycle, and a -1 entry where there should be a banana. | 1 | 1 | 1 | I have a numpy array for an image and am trying to dump it into the libsvm format of LABEL I0:V0 I1:V1 I2:V2..IN:VN. I see that scikit-learn has a dump_svmlight_file and would like to use that if possible since it's optimized and stable.
It takes parameters of X, y, and file output name. The values I'm thinking about would be:
X - numpy array
y - ????
file output name - self-explanatory
Would this be a correct assumption for X? I'm very confused about what I should do for y though.
It appears it needs to be a feature set of some kind. I don't know how I would go about obtaining that however. Thanks in advance for the help! | How to convert numpy array into libsvm format | 1.2 | 0 | 0 | 2,922 |
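A sketch of the call, assuming X is your (n_samples, n_features) numpy array and y is one label per row; if the images are unlabeled, a dummy zero vector works:

```python
import numpy as np
from sklearn.datasets import dump_svmlight_file

X = np.random.rand(10, 400)        # placeholder: one row per image
y = np.zeros(X.shape[0])           # dummy labels if you have none yet
dump_svmlight_file(X, y, "images.svmlight")
```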
26,510,101 | 2014-10-22T14:36:00.000 | 3 | 0 | 1 | 0 | python-3.x,import | 26,511,121 | 1 | false | 0 | 0 | winsound only exists in Python installed under Windows. Do not attempt to import it if you are not running under Windows. | 1 | 1 | 0 | When I import winsound and then try to run the program, it returns an error message saying:
ImportError: No module named 'winsound'.
Are there any settings I need to change? | Trying to import winsound | 0.53705 | 0 | 0 | 2,205 |
26,512,324 | 2014-10-22T16:24:00.000 | 0 | 0 | 0 | 1 | python,task,celery,chord | 26,513,120 | 2 | false | 1 | 0 | Instead of chording the tasks themselves you may want to consider having the chords tasks that watch the A tasks. What I mean by this is the chord would contain tasks that check the running tasks(A) every so often to see if they are done or revoked. When all of those return successfully the chord with then chain into task B | 1 | 4 | 0 | I use the following setup with a Redis broker and backend:
chord([A, A, A, ...])(B)
Task A does some checks. It uses AbortableTask as a base and regularly checks the task.is_aborted() flag.
Task B notifies the user about the result of the calculation
The user has the possibility to abort the A tasks. Unfortunately, when calling AbortableAsyncResult(task_a_id).abort() on all the task A instances, only the active ones are being aborted. The status for tasks that have not been received yet by a worker are changed to ABORTED, but they're still processed and the is_aborted() flag returns False.
I could of course revoke() the pending tasks instead of abort()-ing them, but the problem is that in that case the chord body (task B) is not executed anymore.
How can all pending and running task A instances be stopped, while still ensuring that task B runs? | Celery: Abort or revoke all tasks in a chord | 0 | 0 | 0 | 3,818 |
26,517,504 | 2014-10-22T21:33:00.000 | 0 | 0 | 0 | 0 | python,django,django-registration | 26,518,127 | 1 | true | 1 | 0 | I changed django-registration's template views to specific views and it works now. | 1 | 0 | 0 | I have moved login form into my base.html and I added this line to my template tag to make the login form work.:
login(request, template_name='base.html')
It works in my links and the auth links but it doesn't work with django-registration's links such as /accounts/registration/complete/. I want to make them work but I couldn't figure out why it's not working. How can I fix it? Thanks. | django-registration POST 405 | 1.2 | 0 | 0 | 51 |
26,518,355 | 2014-10-22T22:41:00.000 | 0 | 0 | 0 | 0 | javascript,python,ruby-on-rails,angularjs,node.js | 26,518,519 | 3 | false | 1 | 0 | Welcome to the world of development.
In general, JavaScript is only used to enhance the user's experience on the site (e.g., visual effects).
As you're starting out, I advise you to start by studying the server-side part of the login. For security purposes, it is always the server that confirms whether the user is logged in.
Some developers prefer PHP, other developers love Ruby on Rails. Maybe your best friend prefers Python. It is your choice; they are all easy.
Also, when the user is granted access to the site, such user will be required to upload files that will be written to the local filesystem of that server.
Can this be done with Javascript or some sort of Javascript Framework, or is it better just for me to learn PHP and do it in a normal LAMP stack? Or perhaps Ruby on Rails?
I have been searching online but the majority of results are leaning towards PHP & MySql.
Thanks a lot! | User Registration and Authentication to a Database using Javascript | 0 | 1 | 0 | 959 |
26,526,365 | 2014-10-23T10:43:00.000 | 0 | 1 | 1 | 0 | python,python-2.7,amazon-ec2,module,snappy | 26,531,727 | 1 | true | 0 | 0 | I just found python-snappy on github and installed it via python. Not a permanent solution, but at least something. | 1 | 0 | 0 | I downloaded Snappy library sources for working with compression and everything was great on one machine, but it didn't work on another machine. They have completely same configurations of hardware/OS + python 2.7.3.
All I was doing is "./configure && make && make install".
There were 0 errors during any of these processes, and it installed successfully to the default lib directory, but Python can't see it at all. help('modules') and pip freeze don't show snappy on the second machine, and as a result I can't import it.
I even tried to "break" the structure and install it to different lib directories, but even that didn't work. I don't think it's related to system environment variables, since Python should have a completely identical configuration on both machines (Amazon EC2).
Anyone knows how to fix this issue? | Python cant see installed module | 1.2 | 0 | 0 | 95 |
26,528,019 | 2014-10-23T12:24:00.000 | 0 | 0 | 0 | 0 | python,statistics,statsmodels,logistic-regression | 29,172,738 | 2 | false | 0 | 0 | If the response is on the unit interval interpreted as a probability, in addition to loss considerations, the other perspective which may help is looking at it as a Binomial outcome, as a count instead of a Bernoulli. In particular, in addition to the probabilistic response in your problem, is there any counterpart to numbers of trials in each case? If there were, then the logistic regression could be reexpressed as a Binomial (count) response, where the (integer) count would be the rounded expected value, obtained by product of the probability and the number of trials. | 1 | 3 | 1 | So I'm trying to do a prediction using python's statsmodels.api to do logistic regression on a binary outcome. I'm using Logit as per the tutorials.
When I try to do a prediction on a test dataset, the output is in decimals between 0 and 1 for each of the records.
Shouldn't it be giving me zero and one? Or do I have to convert these using a round function or something?
Excuse the noobiness of this question. I am starting my journey. | Python statsmodel.api logistic regression (Logit) | 0 | 0 | 0 | 10,827
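A sketch of the usual conversion: Logit's predict() returns probabilities, and you threshold them (commonly at 0.5) to get 0/1 labels; the training and test arrays are placeholders:

```python
import statsmodels.api as sm

model = sm.Logit(y_train, X_train).fit()   # placeholders for your data
probs = model.predict(X_test)              # values in [0, 1]
labels = (probs >= 0.5).astype(int)        # hard 0/1 predictions
```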
26,528,833 | 2014-10-23T13:07:00.000 | 0 | 0 | 0 | 0 | python,openerp,odoo | 37,538,805 | 2 | false | 1 | 0 | I had the same experience as you. I think most of the time this happens because of a permission issue for certain files, or a database issue.
You cannot drop the database on a production environment, right? So the best way to solve it is to remove the last updated module's source from the addons folder and restart the Odoo service.
Then update your addon source from working copy and install them. | 2 | 2 | 0 | Maybe someone could help me.
I've installed Odoo (OpenERP) on localhost and I've installed many basic modules (15). I have a problem, after install Expense Tracker module the web client displays a blank page, in any section. I can't see also the Settings page, just the menu bar at the top.
If I install just the Expense Tracker, it works. So it isn't a module problem.
Thank you! :) | Blank page in OpenERP after module install | 0 | 0 | 0 | 1,658 |
26,528,833 | 2014-10-23T13:07:00.000 | 0 | 0 | 0 | 0 | python,openerp,odoo | 41,838,775 | 2 | false | 1 | 0 | It is likely that you installed a corrupt module. Go to the addons and delete the module that you installed last. This should sort out your problem, as it did on my end | 2 | 2 | 0 | Maybe someone could help me.
I've installed Odoo (OpenERP) on localhost and I've installed many basic modules (15). I have a problem: after installing the Expense Tracker module, the web client displays a blank page in every section. I also can't see the Settings page, just the menu bar at the top.
If I install just the Expense Tracker, it works. So it isn't a module problem.
Thank you! :) | Blank page in OpenERP after module install | 0 | 0 | 0 | 1,658 |
26,529,779 | 2014-10-23T13:56:00.000 | 3 | 0 | 0 | 0 | doxygen,python-sphinx,documentation-generation,dymola | 26,543,595 | 1 | true | 0 | 0 | If you mean the Modelica model code, how does the HTML export in Dymola work for you? What's missing?
If you mean the C code generated by Dymola, the source code generation option enables more comments in the code. | 1 | 1 | 0 | since I could not find an answer to my question neither here nor in other forums, I decided to ask it to the community:
Does anybody know if and how it is possible to realize automatic documentation generation for code generated with Dymola?
The background for this e. g. is that I want/need to store additional information within my model files to explain the concepts of my modelling and to store and get the documentation directly from the model code, which I would later like to be in a convenient way displayable not only from within Dymola, but also by a html and LaTeX documentation.
I know that there exist several tools for automatic documentation generation like e. g. DoxyGen and Python Sphinx, but I could not figure out if the can be used with Dymola code. Plus, I am pretty new to this topic, so that I do not really know how to find out if they will work out.
Thank you people very much for your help!
Greetings, mindm49907 | Automatic documentation generation for Dymola code | 1.2 | 1 | 0 | 274 |
26,532,824 | 2014-10-23T16:28:00.000 | 0 | 0 | 1 | 0 | python,logging,unit-testing | 37,944,519 | 1 | false | 0 | 0 | I fixed the same issue by setting 'disable_existing_loggers' to False when reconfiguring the Logger: the previous logger was disabled and it was preventing it from propagating the logs to the RootLogger. | 1 | 3 | 0 | I have a multi-module package in python. One of the modules is essentially a command line application. We'll call that one the "top level" module. The other module has three classes in it, which are essentially the backend of the application.
The top-level module, in the init for its class, calls logging.basicConfig to log debug to a file, then adds a console logger for info and above. The backend classes just use getLogger(classname), because when the application runs in full, the backend will be called by the top-level command-line frontend, so logging will already be configured.
In the Test class (subclassed from unittest.TestCase and run via nose), I simply run testfixtures.LogCapture() in setup, and testfixtures.LogCapture.uninstall_all() in tearDown, and all the logging is captured just fine, no effort.
In the backend test file, I tried to do the same thing. I run testfixtures.LogCapture in the setup, uninstall_all in the teardown. However, all the "INFO" level logmessages still print when I'm running unittests for the backend.
Any help on
1) why log capture works for the frontend but not backend
2) an elegant way to be able to log and capture logs in my backend class without explicitly setting up logging in those files.
would be amazing. | python testfixtures.LogCapture not capturing logs | 0 | 0 | 0 | 843 |
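For reference, a minimal sketch of the fix described in the answer above, assuming the test (or app) reconfigures logging via logging.config.dictConfig; the handler layout is made up for illustration:

import logging.config

LOGGING = {
    "version": 1,
    # Keep loggers created earlier (e.g. by the module under test) enabled,
    # so their records still propagate to the root logger:
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler", "level": "DEBUG"},
    },
    "root": {"handlers": ["console"], "level": "DEBUG"},
}

logging.config.dictConfig(LOGGING)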
26,534,348 | 2014-10-23T17:57:00.000 | 3 | 1 | 0 | 0 | python,kerberos,mit-kerberos | 26,548,064 | 2 | true | 0 | 0 | What I learned when working with Kerberos (although in my work I used C) is that you can hardly replace kinit. There are two ways you can simulate kinit's behaviour programmatically: calling the kinit shell command from Python with the appropriate arguments, or (as I did) calling one function that pretty much does everything:
krb5_get_init_creds_password(k5ctx, &cred, k5princ, password, NULL, NULL, 0, NULL, NULL);
So this is a C primitive, but you should find one for Python (I assume) that will do the same. Basically this function receives the Kerberos context, a principal (built from the username), and the password. In order to fully replace kinit's behaviour you do need a bit more than this, though (start the session, build the principal, etc.). Sorry, since I did not work with Python my answer may not be exactly what you want, but I hope it sheds some light. Feel free to ask any conceptual question about how kerberized applications work. | 1 | 3 | 0 | Is there a way to create a Kerberos ticket in Python if you know the username/password?
I have MIT Kerberos in place and I can do this interactively through kinit, but I want to do it from Python. | How can I get a Kerberos ticket in Python | 1.2 | 0 | 1 | 9,619 |
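A hedged sketch of the first approach the answer mentions, shelling out to kinit from Python. It assumes MIT kinit reads the password from stdin when stdin is not a tty; the principal and password are hypothetical:

import subprocess

def kinit(principal, password):
    # MIT kinit prompts on stdin; feed it the password followed by a newline.
    proc = subprocess.Popen(["kinit", principal],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate(password.encode() + b"\n")
    if proc.returncode != 0:
        raise RuntimeError("kinit failed: %s" % err.decode())

kinit("alice@EXAMPLE.COM", "secret")  # hypothetical principal/password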
26,534,848 | 2014-10-23T18:28:00.000 | 2 | 1 | 1 | 0 | python,email,password-encryption | 26,679,300 | 2 | true | 0 | 0 | I've faced this issue before as well. I think that ultimately, if your app must be able to produce a plain-text password, then all of the artifacts needed to produce that password must be accessible to the app.
I don't think there is any encryption magic to be done here. Rely on file-system permissions to prevent anyone else from accessing the data in the first place. Notice that your SSH private key isn't encrypted in your home dir: it is just sitting there, and you count on the fact that Linux won't let just anyone read it.
So, make a user for this app and put the passwords in a directory that only that user can access. | 1 | 3 | 0 | I know the best practice is to hash user passwords, and I do that for all my other web apps, but this case is a bit different.
I'm building an app that sends email notifications to a company's employees.
The emails will be sent from the company's SMTP servers, so they'll need to give the app email/password credentials for an email account they allocate for this purpose.
Security is important to me, and I'd rather not store passwords that we can decrypt, but there seems to be no other way to do this.
If it makes any difference, this is a multi-tenant web app.
What's the best way in Python to encrypt these passwords, since hashing them will do us no good in trying to authenticate with the mail server?
Thanks!
Note: The mailserver is not on the same network as the web app | Safest way in python to encrypt a password? | 1.2 | 0 | 0 | 603 |
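A minimal sketch of the file-permission approach from the accepted answer; the path is hypothetical and a POSIX filesystem is assumed:

import os

CRED_PATH = "/home/mailerapp/.smtp_credentials"  # hypothetical location

def save_credentials(user, password):
    # Create the file readable/writable by the owner only (mode 0600),
    # the same protection an unencrypted SSH private key relies on.
    fd = os.open(CRED_PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write("%s\n%s\n" % (user, password))

def load_credentials():
    with open(CRED_PATH) as f:
        user, password = f.read().splitlines()[:2]
    return user, password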
26,538,004 | 2014-10-23T21:45:00.000 | 1 | 1 | 0 | 1 | python,python-3.x,usb,pyusb | 26,553,765 | 1 | true | 1 | 0 | Solved my own issue. After running my code on a full Linux machine, capturing the data, and comparing it to the Wireshark trace I took of the Windows application, I realized the length of the read was not the issue. The results were very similar: the Windows app was actually requesting 4096 bytes back instead of 2 or 7, and the device was just giving back whatever it had. The problem actually had to do with my Tx message not being in the correct format before it was sent out. | 1 | 0 | 0 | So I'm relatively new to USB and PyUSB. I am trying to communicate with a Bluetooth device using PyUSB. To initialize it, I need to send a command and read back some data from the device. I do this using dev.write(0x02, msg) and ret = dev.read(0x81, 0x07). I know the command structure and the format of a successful response. The response should have 7 bytes, but I only get back 2.
There is a reference program for this device that runs on Windows only. I have run this and used USBPcap/Wireshark to monitor the traffic. From this I can see that after my command is sent, the device responds several times with a 2-byte response, and then eventually with the full 7-byte response. I'm doing the Python work on a Raspberry Pi, so I can't monitor the traffic as easily.
I believe the issue is that the read expects 7 bytes and then returns the result after the default timeout is reached, without receiving the follow-up responses. I can set the size to 2 and do multiple reads, but I have no way of knowing if the message had more bytes that I am missing. So what I'm looking for is a way to check the length of the message in the buffer before requesting the read, so I can specify the size.
Also, is there a buffer or anything to clear to make sure I am reading the next message coming through? It seems that no matter how many times I run the read command, I get the same response back. | PyUSB read multiple frames from bulk transfer with unknown length | 1.2 | 0 | 0 | 2,500 |
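A hedged PyUSB sketch of what the self-answer found, i.e. request a large read and take whatever the device returns. The endpoint addresses come from the question; the vendor/product IDs, command bytes, and timeout are assumptions:

import usb.core

dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)  # hypothetical IDs
msg = b"\x01\x00"                                       # hypothetical command
dev.write(0x02, msg)
try:
    # Ask for far more bytes than the expected 7; a bulk-IN read returns
    # whatever the device has queued, so no exact length check is needed.
    data = dev.read(0x81, 4096, timeout=1000)
except usb.core.USBError:
    data = None  # timed out with nothing queued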
26,541,264 | 2014-10-24T04:01:00.000 | 0 | 0 | 0 | 0 | python,openerp,odoo | 26,581,186 | 1 | false | 1 | 0 | You can add a groups attribute to that button in the view's .xml file, like:
E.g.:
groups="group_eg" | 1 | 0 | 0 | In Odoo / OpenERP 8, how do I make the Attachment button visible only for a specific group on a specific model's form? Thanks very much! | In Odoo / OpenERP 8, how to make the Attachment button visible for a specific group on a specific model form? | 0 | 0 | 0 | 1,038 |
26,541,595 | 2014-10-24T04:40:00.000 | 0 | 0 | 0 | 0 | python,dns,unpack | 26,562,361 | 1 | false | 0 | 0 | The DNS wireformat can (and very often does) contain internal pointers within a packet, which falls well outside what the Python struct module is intended to do. On top of that, every single type of resource record needs to be unpacked according to its own specification.
Parsing wireformat DNS packets is a great way of learning how DNS really works, but if your goal is to actually get something done I would strongly suggest finding a library to do it for you. It's not a difficult task, but it's a lot of work. | 1 | 2 | 0 | I am trying to unpack binary data I get from my DNS (Unbound).
The format is like this (for example):
'\x00\x10\x03ns1\x06google\x03com\x00'
'\x00\x16\x00\n\x05aspmx\x01l\x06google\x03com\x00'
'\x00\x1b\x002\x04alt4\x05aspmx\x01l\x06google\x03com\x00'
I am doing this in Python and I have been trying to do that with the unpack method of the struct module.
Yet, I couldn't find a proper way to express the format. Can I have some help on that? | Unpack DNS Wireformat with Python | 0 | 0 | 0 | 497 |
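To illustrate why struct alone is awkward here, a small hand-rolled parser for the length-prefixed label format, including the 0xC0 compression pointers the answer mentions. This is a sketch (Python 3 byte indexing), not a full resource-record decoder:

def read_name(packet, offset):
    # Decode a DNS name starting at `offset` in `packet` (a bytes object).
    labels = []
    while True:
        length = packet[offset]
        if length == 0:                       # root label: end of name
            offset += 1
            break
        if length & 0xC0 == 0xC0:             # compression pointer (2 bytes)
            pointer = ((length & 0x3F) << 8) | packet[offset + 1]
            labels.append(read_name(packet, pointer)[0])
            offset += 2
            break
        labels.append(packet[offset + 1:offset + 1 + length].decode("ascii"))
        offset += 1 + length
    return ".".join(labels), offset

# One of the question's examples, skipping its 2-byte prefix:
print(read_name(b"\x00\x10\x03ns1\x06google\x03com\x00", 2)[0])  # ns1.google.com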
26,543,890 | 2014-10-24T08:10:00.000 | 2 | 0 | 1 | 0 | python | 26,543,916 | 2 | false | 0 | 0 | There is no difference. Assigning to a name in Python is the same whether or not the name already existed. | 1 | 0 | 0 | x = 3
x = 4
Is the second line an assignment statement or a new variable binding? | Python Assignment or Variable binding? | 0.197375 | 0 | 0 | 326 |
26,545,188 | 2014-10-24T09:28:00.000 | 0 | 0 | 1 | 0 | python,pytest | 26,611,314 | 2 | false | 0 | 0 | You can just keep printing to stdout and simply not use -s. If you do this py.test will put the details you printed next to the assertion failure message when the test fails, in a "captured stdout" section.
When using -s things get worse since they are also printed to stdout even if the test passes and it also displays during the test run instead of nicely in a section of a failure report. | 1 | 0 | 0 | I have some details I have to print out for a failed test. Right now I'm just outputting this information to STDOUT and I use the -s to see this information. But I would like to append this information to the test case details when it failed, and not need to use the -s option. | Append information to failed tests | 0 | 0 | 0 | 307 |
26,550,430 | 2014-10-24T14:50:00.000 | 0 | 0 | 0 | 0 | python,algorithm,memory-efficient | 26,559,152 | 1 | false | 0 | 0 | Nothing is going to be superfast, and there's a lot of data there (half a million results, to start with), but the following should fit in your time and space budget on modern hardware.
If possible, start by sorting the lists by length, from longest to shortest. (I don't mean sort each list; the order of elements within a list is irrelevant. I mean sort the collection of lists so that you can process the longest list first.) The only point of doing this is to allow the similarity metrics to be stored in a half-diagonal matrix instead of a full matrix, which saves half the matrix space. So if you don't know the lengths of the lists before you start, it's not a crisis; it just means that you'll need a bit more space.
Note 1: The important thing is that the metric you propose is completely symmetric as long as no list has repeated elements. Without repeated elements, the metric is simply |A⋂B|, regardless of whether A or B is longer, so when you compute the size of the intersection of A and B you can fill in the similarity matrix for both (A,B) and (B,A).
Note 2: The description of the algorithm seemed confusing to me when I reread it, so to be clear: "input list" below refers to one of the thousand input lists, while "list" means an ordinary Python list. Because lists can't be keys in Python dictionaries, and working on the assumption that the input lists are implemented as lists, it's necessary to identify each input list with an identifier which can be used as a key. I hope that's clear.
The Algorithm:
We need two auxiliary structures: one is the (half-diagonal) result matrix, keyed by pairs of list identifiers, which we initialize to all 0s. The other one is a dictionary keyed by unique data element, mapping onto a list of list identifiers.
Then, taking each list in turn, for each element in that list we do the following:
If the element is not yet present in the dictionary, add it, mapping to a single-element list consisting of the current list's identifier.
If the element is present in the dictionary but the last element in the corresponding list of ids is the current id, then we've found a repeated element. Since we don't expect repeated elements, either ignore it or issue an error message.
Otherwise, we've seen the element before and we have a list of identifiers of lists in which the element appears. For each such identifier, increment the similarity count between the current identifier and the identifier in the list. (Note that if we scan lists in reverse order by length, all the identifiers in the list correspond to lists which are at least as long as the current list, which is why we sorted the lists in the first place.) Finally, append the current identifier to the end of the list, so that the next time that data element is found, the current list will be present.
That's it. The space requirement is O(N² + M), where N is the number of lists and M is the total size of all the lists. The time requirement is essentially O(M²) in the worst case -- that being the case where every list has exactly one element and they are all the same element. (More precisely, it's the sum of the squares of the frequencies of each unique element.) | 1 | 2 | 1 | I wish to compare around 1000 lists of varying size. Each list might have thousands of items. I want to compare each pair of lists, so potentially around 500000 comparisons. Each comparison consists of counting how many elements of the smaller list exist in the larger list (if same size, pick either list). Ultimately I want to cluster the lists using these counts. I want to be able to do this for two types of data:
any textual data
strings of binary digits of the same length.
Is there an efficient way of doing this in Python? I've looked at LSHash and other clustering-related algorithms, but they seem to require same-length lists. TIA.
An example to try to clarify what I am aiming to do:
List A: car, dig, dog, the.
List B: fish, the, dog.
(No repeats in any list. Not sorted although I suppose they could be fairly easily. Size of lists varies.)
Result: 2, since 'dog' and 'the' are in both lists.
In reality the length of each list can be thousands and there are around 1000 such lists, each having to be compared with every other.
Continuing the example:
List C: dog, the, a, fish, fry.
Results:
AB: 2
AC: 2
BC: 3 | efficient different sized list comparisons | 0 | 0 | 0 | 159 |
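A Python sketch of the algorithm from the answer above, using an element-to-lists index instead of 500,000 explicit intersections. It assumes the input is a list of lists with no repeated elements; the half-diagonal matrix is replaced by a dict keyed on (i, j) with i < j:

from collections import defaultdict

def pairwise_intersections(lists):
    counts = defaultdict(int)     # (i, j), i < j  ->  |lists[i] ∩ lists[j]|
    seen_in = defaultdict(list)   # element -> indices of lists containing it
    for j, lst in enumerate(lists):
        for element in lst:
            for i in seen_in[element]:   # every earlier list with this element
                counts[(i, j)] += 1
            seen_in[element].append(j)
    return counts

counts = pairwise_intersections([["car", "dig", "dog", "the"],
                                 ["fish", "the", "dog"],
                                 ["dog", "the", "a", "fish", "fry"]])
print(dict(counts))  # {(0, 1): 2, (0, 2): 2, (1, 2): 3}, matching AB/AC/BC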
26,553,392 | 2014-10-24T17:45:00.000 | 0 | 0 | 0 | 0 | python-2.7,simpy,traffic-simulation | 27,251,344 | 1 | false | 0 | 0 | I guess you can just request the other resources (B and C; maybe using preemption) once you get resource A, and release all three resources once you are done with A. | 1 | 0 | 0 | I am modelling a train station (using SimPy, with Python 2.7) where there are some incoming routes, some outgoing routes, and some platforms. Now, when one of these resources is occupied, I can't assign a train to certain other resources.
When a train engages a route, i.e. traverses it, some other routes in the station's area become unusable for some time. If I were to model a route as a resource, then a request yielded at that resource would need to affect/engage other resources as well.
Is there some way of modelling resources such that engaging resource_A puts resource_B and resource_C out of action for some predetermined amount of time?
Aseem Awad | Interrelated resources | 0 | 0 | 0 | 88 |
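A minimal SimPy sketch of the answer's suggestion: the process that takes route A also holds B and C for the duration, so nobody else can use them. Resource names and the traversal time are made up; the three requests are awaited together to avoid holding a partial set:

import simpy

def traverse_route_a(env, route_a, route_b, route_c):
    # Engaging route A also takes routes B and C out of action.
    with route_a.request() as ra, route_b.request() as rb, route_c.request() as rc:
        yield ra & rb & rc          # wait until all three are free
        yield env.timeout(5)        # hypothetical traversal time
    # all three resources are released here

env = simpy.Environment()
a, b, c = (simpy.Resource(env, capacity=1) for _ in range(3))
env.process(traverse_route_a(env, a, b, c))
env.run()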
26,554,519 | 2014-10-24T18:58:00.000 | 2 | 0 | 1 | 0 | python,multiprocessing | 26,554,752 | 5 | false | 0 | 0 | I've used Celery and Redis for real-time multiprocessing in high memory applications, but it really depends on what you're trying to accomplish.
The biggest benefits I've found in Celery over built-in multiprocessing tools (Pipe/Queue) are:
Low overhead. You call a function directly, no need to serialize data.
Scaling. Need to ramp up worker processes? Just add more workers.
Transparency. Easy to inspect tasks/workers and find bottlenecks.
For really squeezing out performance, ZMQ is my go-to. It's a lot more work to set up and fine-tune, but it's as close to bare sockets as you can safely get.
Disclaimer: This is all anecdotal. It really comes down to what your specific needs are. I'd benchmark different options with sample data before you go down any path. | 1 | 3 | 0 | I'm trying to find a reasonable approach in Python for a real-time application, multiprocessing and large files.
A parent process spawns 2 or more children. The first child reads data and keeps it in memory, and the others process it in a pipeline fashion. The data should be organized into an object, sent to the following process, processed, sent, processed, and so on.
Available mechanisms such as Pipe, Queue, and Managers seem inadequate due to overhead (serialization, etc.).
Is there an adequate approach for this? | How to share objects and data between python processes in real-time? | 0.07983 | 0 | 0 | 3,727 |
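A hedged sketch of the ZMQ pipeline pattern mentioned in the answer, using pyzmq PUSH/PULL between stages. Note that send_pyobj/recv_pyobj still pickle the payload, so ZMQ reduces transport overhead rather than eliminating serialization; addresses and the payload shape are arbitrary:

import zmq

def stage_worker(pull_addr, push_addr):
    ctx = zmq.Context()
    source = ctx.socket(zmq.PULL)
    source.connect(pull_addr)           # e.g. "tcp://127.0.0.1:5557"
    sink = ctx.socket(zmq.PUSH)
    sink.connect(push_addr)             # e.g. "tcp://127.0.0.1:5558"
    while True:
        obj = source.recv_pyobj()       # pickled Python object from upstream
        obj["processed"] = True         # do the real work here
        sink.send_pyobj(obj)            # hand off to the next pipeline stage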
26,554,857 | 2014-10-24T19:22:00.000 | 0 | 0 | 0 | 1 | python,asynchronous,rabbitmq,tornado,rpc | 27,069,239 | 1 | false | 1 | 0 | Use nginx with embedded Perl. It works extremely well; we are using this for our analytics tool. | 1 | 0 | 0 | I have a web server listening for clients, and when someone hits a handler, the server sends an RPC message to RabbitMQ and waits for the response while keeping the connection open. When the response comes back from RMQ, the server passes it to the client as the response to the request.
All the async examples in the Tornado docs work with their own methods like http.fetch_async(), and I understand that I have to wait for/read from RMQ asynchronously... But how? And even worse: sometimes I have to send several messages at one moment (I create a pool of threads and each thread sends one message).
Right now I cannot rebuild the architecture to get rid of waiting for the answer from RMQ, so my web server is blocked. We don't have a lot of requests yet, and RMQ usually responds quickly enough, but sometimes it can keep the server waiting for up to a minute.
So for now we are just using Gunicorn with A LOT of workers on BIG SERVERS, but I feel there should be a better solution, so I am investigating different options.
We are on Python 3.4, so we cannot use the pika RMQ adapter and instead work with py-amqp from Celery. | Make Tornado able to handle new requests while waiting response from RPC over RabbitMQ | 0 | 0 | 0 | 199 |
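One common shape for this problem, heavily hedged since the AMQP client is abstracted away (pika was ruled out above): publish with a correlation id, park a Future per request, and let the consumer callback resolve it. The helpers new_correlation_id and publish_to_rabbit are hypothetical; this sketches Tornado 4's gen.coroutine style:

from tornado import gen, web
from tornado.concurrent import Future

pending = {}  # correlation_id -> Future awaiting the RMQ reply

class RpcHandler(web.RequestHandler):
    @gen.coroutine
    def get(self):
        corr_id = new_correlation_id()                 # hypothetical helper
        future = Future()
        pending[corr_id] = future
        publish_to_rabbit(self.request.body, corr_id)  # hypothetical helper
        reply = yield future        # IOLoop keeps serving other requests
        self.write(reply)

def on_rabbit_reply(corr_id, body):
    # Called by the AMQP consumer; resolves the parked Future.
    # (If the consumer runs on its own thread, wrap this call in
    # IOLoop.current().add_callback to hop back onto the IOLoop.)
    future = pending.pop(corr_id, None)
    if future is not None:
        future.set_result(body)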
26,556,120 | 2014-10-24T20:55:00.000 | 1 | 0 | 0 | 0 | python,json,django,api,rest | 26,557,044 | 2 | true | 1 | 0 | I'm answering myself to the question:
I just found out that the creator of Tastypie made a second Python API framework named Restless.
In spite of the name, it's still a RESTful framework, but the philosophy is very different from the former framework.
In building Tastypie, I tried to create something extremely complete & comprehensive. The result was writing a lot of hook methods (for easy extensibility) & a lot of (perceived) bloat, as I tried to accommodate everything people might want/need in a flexible/overridable manner. But in reality, all I really ever personally want are the RESTful verbs, JSON serialization & the ability to override behavior.
This framework lets you hardcode the behaviour of each REST method (thank you!!!!).
The only thing now is to try to bypass the "REST behaviour" in an elegant way... | 1 | 0 | 0 | There are debates about the pro's and con's of REST.
I personally don't need it in my project, and this topic is not meant to debate whether I actually need it ^^
Just note that I used Tastypie in "REST mode" and decided to switch to a non-REST mode because my app is not CRUD-based at all. My API is an application API, not a user API. In my case, using REST forces me to do dirty and foolish things.
In my project, what I'd like to do can't be simpler:
custom URL #1 executes some custom Django code
custom URL #2 executes other custom Django code
etc... That's it!
The only things I need are:
GET and POST requests.
when a user calls a URL, I want to know who the user is (request.user) or whether they are not authenticated. I use classic HTTP authentication.
return results as JSON so that my clients understand it.
The intrusive API stuff I don't need (but am pretty much forced to use in REST mode) is:
Split logic by resources -> when a request needs to deal with many models, split resources just drive me crazy when trying to reach my goal.
Authentication -> let me handle it myself with my Django code itself! My models actually DO know who can do what.
So, how do I create this non-REST API in an easy way? Which framework should I use?
Thanks a lot. | What is the simplest way to create a NON-REST API in Django? | 1.2 | 0 | 0 | 914 |
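For comparison, the "no framework at all" route is also viable in plain Django: one view per custom URL, returning JSON, with request.user checked inside. A minimal sketch (JsonResponse requires Django 1.7+; the URL and logic are made up, and is_authenticated is a method in Django 1.x):

# views.py
from django.http import JsonResponse

def my_custom_action(request):
    if not request.user.is_authenticated():     # method call in Django 1.x
        return JsonResponse({"error": "auth required"}, status=401)
    # ... run whatever custom Django code this URL is for ...
    return JsonResponse({"ok": True, "user": request.user.username})

# urls.py
# url(r'^api/custom-action/$', my_custom_action),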
26,564,698 | 2014-10-25T16:35:00.000 | 1 | 0 | 0 | 0 | python,amazon-ec2,boto,amazon-cloudwatch | 26,566,582 | 1 | true | 1 | 0 | An alarm can only publish to an SNS topic but there are a number of ways to subscribe to that topic. You can get an SMS message, get email, or you can have your own program called via HTTP or HTTPS. You would have to write a small web application that listens for the SNS messages and then perform whatever action you want. Or you could subscribe an SQS queue to the SNS topic and then have your program poll the SQS queue waiting for messages. | 1 | 0 | 0 | I am playing around with boto and Amazon EC2 instances. I am able to create an alarm on a metric for cpu utilisation that sends an email via an SNS Topic. However what I would like to do is call a function in my code when the alarm is triggered to launch a new instance. I don't see a way of placing anything besides an ARN string on an alarm action? Does anyone have any ideas? Thanks | Custom metric alarm function Amazon Cloudwatch | 1.2 | 0 | 0 | 356 |
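A hedged sketch of the last option in the answer above: subscribe an SQS queue to the SNS topic and poll it with boto 2. The region, queue name, and launch_new_instance function are assumptions:

import json
import boto.sqs

conn = boto.sqs.connect_to_region("us-east-1")    # assumed region
queue = conn.get_queue("cloudwatch-alarms")        # assumed queue name

while True:
    for msg in queue.get_messages(wait_time_seconds=20):  # long polling
        body = json.loads(msg.get_body())          # the SNS envelope
        alarm = json.loads(body["Message"])        # the CloudWatch alarm JSON
        launch_new_instance(alarm)                 # your function goes here
        queue.delete_message(msg)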
26,566,155 | 2014-10-25T19:06:00.000 | 0 | 0 | 1 | 0 | python | 26,566,369 | 1 | false | 0 | 0 | Short answer - no, not to a significant degree. Quite apart from the fact that PEP8 says use import and use it at the top of your file rather than at the start of the if __name__ == '__main__' block, there is no difference between using it at the head of your file versus under the if __name__ conditional. | 1 | 1 | 0 | When linking scripts, are there performance considerations when using os.system versus import? Next, when the imported script has imports that must be shared with the calling script, is it okay to put those imports in the if __name__ == '__main__': block? | What is the best practice for linking python scripts | 0 | 0 | 0 | 69 |
26,570,453 | 2014-10-26T06:34:00.000 | 0 | 0 | 1 | 0 | python,ide,rstudio,data-analysis | 52,906,423 | 7 | false | 0 | 0 | PyCharm is perfect for those who already have experience using another JetBrains IDE, since the interface and features are similar. Also, if you like IPython or the Anaconda distribution, it's nice to know that PyCharm integrates their tools and libraries such as NumPy and Matplotlib, allowing you to work with array viewers and interactive plots.
In addition to plain Python support, PyCharm provides a Scientific mode, which supports interactive scientific computing and data visualization.
Features: just like other IDEs, PyCharm has useful features such as a code editor, error highlighting, and a powerful debugger with a graphical interface, as well as Git, SVN, and Mercurial integration. You can also customize your IDE by choosing between different themes, color schemes, and key bindings. Additionally, you can expand PyCharm's features by adding plugins; you can take a look at the PyCharm Plugins Library. | 3 | 0 | 0 | Python can be used for many tasks. I want to use Python for data analysis. Which Python IDEs are particularly good for data analysis tasks?
As a reference for a data analysis specific IDE, please see RStudio for the R language. | Python IDE for Data Analysis | 0 | 0 | 0 | 5,709 |
26,570,453 | 2014-10-26T06:34:00.000 | 0 | 0 | 1 | 0 | python,ide,rstudio,data-analysis | 42,297,150 | 7 | false | 0 | 0 | I suggest Jupyter Notebook; it's best for data analysis.
Second preference: Spyder.
Just install Anaconda Python.
You'll get the built-in Jupyter Notebook and Spyder IDEs. | 3 | 0 | 0 | Python can be used for many tasks. I want to use Python for data analysis. Which Python IDEs are particularly good for data analysis tasks?
As a reference for a data analysis specific IDE, please see RStudio for the R language. | Python IDE for Data Analysis | 0 | 0 | 0 | 5,709 |
26,570,453 | 2014-10-26T06:34:00.000 | 0 | 0 | 1 | 0 | python,ide,rstudio,data-analysis | 26,571,259 | 7 | false | 0 | 0 | PyCharm works fine for me. It has plug-ins for database access and supports multiple languages. There is a plug-in for R, but I haven't used it so far.
The integrated shells (Python and bash) are also good for quickly trying something.
IPython Notebook is fine, especially for exploratory work, but the editing support is not that great IMHO. There is also no source control or other functionality for developing software. | 3 | 0 | 0 | Python can be used for many tasks. I want to use Python for data analysis. Which Python IDEs are particularly good for data analysis tasks?
As a reference for a data analysis specific IDE, please see RStudio for the R language. | Python IDE for Data Analysis | 0 | 0 | 0 | 5,709 |
26,572,932 | 2014-10-26T12:21:00.000 | 0 | 0 | 1 | 0 | python-3.x | 26,572,955 | 1 | false | 0 | 1 | IDLE combines several functionalities. It contains an interactive interpreter (the window in which the >>> prompt appears, and in which you can execute code immediately), and it's a small-scale IDE (integrated development environment), which means you can load, edit, and save Python files and launch them conveniently. The latter functionality is what is meant by "Editor". Just go to the menu and pick something like "New File". | 1 | 0 | 0 | So I've been playing around with IDLE. Then Lesson 2 tells me to open the editor window, not the shell window. I'm not sure which one is the editor. I have IDLE, Python Launcher (downloaded from python.org) and TextWrangler... maybe I misunderstood something? :'( | Python Launcher is Editor? | 0 | 0 | 0 | 36 |
26,575,729 | 2014-10-26T17:18:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,boto,emr,amazon-emr | 72,334,675 | 2 | false | 0 | 0 | The bootstrap script is executed only once, when the cluster starts (the first time, at the beginning). However, AWS provides SSH access to the master and the other nodes, where you can run shell scripts, install libs and packages, run Python programs, git clone your repo, etc.
Hope this may be helpful.
Amit | 2 | 17 | 0 | I have one EMR cluster which is running 24/7. I can't turn it off and launch a new one.
What I would like to do is to perform something like bootstrap action on the already running cluster, preferably using Python and boto or AWS CLI.
I can imagine doing this in 2 steps:
1) run the script on all the running instances (it would be nice if that were somehow possible, for example from boto)
2) add the script to the bootstrap actions for the case where I'd like to resize the cluster.
So my question is: Is something like this possible using boto or at least AWS CLI? I am going through the documentation and source code on github, but I am not able to figure out how to add new "bootstrap" actions when the cluster is already running. | AWS EMR perform "bootstrap" script on all the already running machines in cluster | 0 | 0 | 0 | 2,264 |
26,575,729 | 2014-10-26T17:18:00.000 | 6 | 0 | 0 | 0 | python,amazon-web-services,boto,emr,amazon-emr | 35,529,652 | 2 | false | 0 | 0 | Late answer, but I'll give it a shot:
That is going to be tough.
You could install Amazon SSM Agent and use the remote command interface to launch a command on all instances. However, you will have to assign the appropriate SSM roles to the instances, which will require rebuilding the cluster AFAIK. However, any future commands will not require rebuilding.
You would then be able to use the CLI to run commands on all nodes (probably boto as well, haven't checked that). | 2 | 17 | 0 | I have one EMR cluster which is running 24/7. I can't turn it off and launch a new one.
What I would like to do is to perform something like bootstrap action on the already running cluster, preferably using Python and boto or AWS CLI.
I can imagine doing this in 2 steps:
1) run the script on all the running instances (it would be nice if that were somehow possible, for example from boto)
2) add the script to the bootstrap actions for the case where I'd like to resize the cluster.
So my question is: Is something like this possible using boto or at least AWS CLI? I am going through the documentation and source code on github, but I am not able to figure out how to add new "bootstrap" actions when the cluster is already running. | AWS EMR perform "bootstrap" script on all the already running machines in cluster | 1 | 0 | 0 | 2,264 |
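A hedged sketch of the SSM route from the later answer, using boto3 (which postdates the question's boto 2 but is the usual client now); the instance ids and the command are placeholders:

import boto3

ssm = boto3.client("ssm")
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],                  # placeholder node ids
    DocumentName="AWS-RunShellScript",                    # built-in SSM document
    Parameters={"commands": ["sudo pip install mylib"]},  # placeholder script
)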
26,576,229 | 2014-10-26T18:07:00.000 | 1 | 1 | 0 | 1 | python,python-2.7,python-3.x,installation | 26,827,960 | 1 | true | 0 | 0 | The problem can be fixed by doing a system restore to a point before the uninstall, then installing again. | 1 | 1 | 0 | I uninstalled Python 2.7.2 today, as I have had both Python 2 and 3 on my computer when it is only Python 3 that I use. After I uninstalled it, all of my Python files became associated with Notepad, and Windows would not allow me to change the association back to Python; there is no error message, it just won't register the change. I tried rebooting, but that did not work, so I decided to reinstall Python 3.4 as well. I did this, and now I've found that whilst I can open the python file, I cannot open the pythonw file and therefore am unable to open the IDLE window to do anything. I have rebooted the system several times since then and tried another install, but nothing happens and I am unable to use Python at the moment.
A fix to both problems would be greatly appreciated, but I am more worried about Python not being able to be opened.
Thanks in advance | Installation Errors + File Association Errors Python 3.4 | 1.2 | 0 | 0 | 36 |
26,576,432 | 2014-10-26T18:30:00.000 | 2 | 0 | 1 | 1 | python,pip,package,homebrew | 26,588,821 | 1 | true | 0 | 0 | Use the /usr/local/bin/python instead of the system installed python.
brew doctor should tell you that /usr/local/bin is not early enough in your path. By putting /usr/local/bin first (or earlier than /usr/bin) in your path, your shell will find homebrew versions of executables before system versions.
If you don't want to adjust your path, you can explicitly invoke the Python you want to run: /usr/local/bin/python instead of just python at the shell prompt. | 1 | 0 | 0 | I have different versions of Python installed on my Mac. My system default python is ($ which python)
"/Library/Frameworks/Python.framework/Versions/2.7/bin/python".
And if I install something with pip command such as pip install numpy, the package will be installed in the system python's site-package "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages"
However, I want to set up an IPython & Qt working environment, so I ran brew install pyqt and brew install PySide, and these packages were installed under my Homebrew Python. My Homebrew Python's packages live in "/usr/local/lib/python2.7/site-packages".
Now my python just can't import any Qt or PySide...
Any suggestions? How can I fix this? | system default python can't use homebrew installed package | 1.2 | 0 | 0 | 3,822 |
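A quick way to check which interpreter and site-packages you are actually getting, regardless of what the shell prompt suggests:

import sys
print(sys.executable)   # e.g. /usr/local/bin/python for the Homebrew one
print(sys.path)         # shows which site-packages directories are searched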
26,577,017 | 2014-10-26T19:30:00.000 | 1 | 0 | 1 | 0 | python | 26,577,143 | 3 | false | 0 | 0 | Calling sys.exit() simply raises SystemExit. It's the exception handling being done on the way back up the stack that does cleanup, so both techniques will perform the same clean up. But you should use sys.exit instead of SystemExit anyway though: there's no point trying to save one import of sys, that doesn't cost anything. Calling sys.exit is the conventional way to end the Python process.
That said, are you sure you want to be exiting the process explicitly in your code? That makes your functions very difficult to re-use. Often, people use sys.exit when simply returning from the function would work just as well, because it's the main function of the program. | 1 | 2 | 0 | I've read several threads here on SO and the python docs on the SystemExit exception.
This thread is not intended to be a duplicate, as I did not find the answer in the similar threads.
Do both calling sys.exit() and raising SystemExit do cleanups? I know sys.exit raises SystemExit, but if you just raise the SystemExit exception, does it do any cleanups for you? The official Python docs weren't very clear on that. The reason I'm asking is that a colleague of mine thought that SystemExit was clearer to write in the code, and you don't need to import the sys module. But just raising the exception, I'm not sure that any cleanups are being done compared to calling sys.exit, which does cleanup before it raises SystemExit, from what I know. | Confusion about sys.exit and SystemExit | 0.066568 | 0 | 0 | 737 |
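A tiny demonstration of the answer's point: both spellings unwind the stack the same way, so try/finally blocks (and context managers) run either way. Only one call can run per process, since it exits:

import sys

def main(use_raise):
    try:
        if use_raise:
            raise SystemExit(1)
        else:
            sys.exit(1)
    finally:
        print("cleanup runs in both cases")  # executes before the process ends

main(use_raise=True)   # or use_raise=False: same behaviour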
26,577,908 | 2014-10-26T21:00:00.000 | 6 | 0 | 1 | 0 | python | 26,577,937 | 1 | true | 0 | 0 | You need to put whatever you're repeating in a list, and then multiply by an integer to get a repeating sequence.
x = [False] * 12 should work. | 1 | 2 | 0 | If I want to make a list consisting of 12 False Boolean values, is there a shortcut to do this without typing out all 12? I know that 'string ' * 3 returns 'string string string'. But True * 3 just returns 3.
Hopefully this question isn't too simple, but I was having a hard time finding the answer by searching. | Shortcut for creating a list with many repeating Boolean values | 1.2 | 0 | 0 | 910 |
26,579,670 | 2014-10-27T00:42:00.000 | 6 | 1 | 0 | 0 | python,python-2.7,python-3.x,web.py,nosetests | 26,580,687 | 1 | false | 0 | 0 | As @dano suggested in his comment:
Try python2.7 -m nose instead of running nosetests. | 1 | 6 | 0 | I've been learning Python using version 3.4. I recently started learning Web.py, so I have been using Python 2.7 for that, since web.py is not supported on Python 3.4. I have the nose 1.3.4 module installed for both Python 3.4 and 2.7. I need to run the nosetests command on some Python code written in 2.7 that uses the Web.py module. However, when I type the nosetests command it automatically uses Python 3.4, so it throws an error about being unable to import the Web.py module in my Python code. Is there a way to force nosetests to use Python 2.7? MacBook Pro running OS X Yosemite. | Force Nosetests to Use Python 2.7 instead of 3.4 | 1 | 0 | 0 | 2,718 |
26,581,397 | 2014-10-27T04:57:00.000 | 1 | 0 | 1 | 0 | python,list,python-3.x | 26,581,720 | 6 | false | 0 | 0 | You said: "From what I can tell, calling the constructor list removes the most outer braces (tuple or list) and replaces them with []. Is this true?"
IMHO, this is not a good way to think about what list() does. True, square brackets [] are used to write a list literal, and are used when you tell a list to represent itself as a string, but ultimately, that's just notation. It's better to think of a Python list as a particular kind of container object with certain properties, e.g. it's ordered, indexable, iterable, mutable, etc.
Thinking of the list() constructor in terms of it performing a transformation on the kind of brackets of a tuple that you pass it is a bit like saying adding 3 to 6 turns the 6 upside down to make 9. It's true that a '9' glyph looks like a '6' glyph turned upside down, but that's got nothing to do with what happens on the arithmetic level, and it's not even true of all fonts. | 2 | 5 | 0 | I know that the list() constructor creates a new list but what exactly are its characteristics?
What happens when you call list((1,2,3,4,[5,6,7,8],9))?
What happens when you call list([[[2,3,4]]])?
What happens when you call list([[1,2,3],[4,5,6]])?
From what I can tell, calling the constructor list removes the most outer braces (tuple or list) and replaces them with []. Is this true? What other nuances does list() have? | What does the list() function do in Python? | 0.033321 | 0 | 0 | 18,537 |
26,581,397 | 2014-10-27T04:57:00.000 | 1 | 0 | 1 | 0 | python,list,python-3.x | 26,581,511 | 6 | false | 0 | 0 | Yes it is true.
It's very simple. list() takes an iterable object as input and adds its elements to a newly created list. Elements can be anything. An element can also be another list or another iterable object, and it will be added to the new list as it is.
i.e. no nested processing will happen. | 2 | 5 | 0 | I know that the list() constructor creates a new list but what exactly are its characteristics?
What happens when you call list((1,2,3,4,[5,6,7,8],9))?
What happens when you call list([[[2,3,4]]])?
What happens when you call list([[1,2,3],[4,5,6]])?
From what I can tell, calling the constructor list removes the most outer braces (tuple or list) and replaces them with []. Is this true? What other nuances does list() have? | What does the list() function do in Python? | 0.033321 | 0 | 0 | 18,537 |
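A quick demonstration of the point both answers make: list() copies only the top level, and nested containers are kept as the same objects, not copied:

inner = [5, 6, 7, 8]
new = list((1, 2, 3, 4, inner, 9))
print(new)              # [1, 2, 3, 4, [5, 6, 7, 8], 9]
print(new[4] is inner)  # True: the nested list is the same object
print(list([[[2, 3, 4]]]))           # [[[2, 3, 4]]]
print(list([[1, 2, 3], [4, 5, 6]]))  # [[1, 2, 3], [4, 5, 6]]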
26,581,496 | 2014-10-27T05:10:00.000 | 1 | 0 | 1 | 0 | python | 26,581,529 | 4 | false | 0 | 0 | 2,2,2,2 in range(20)
constructs a tuple with four elements, the first three being 2, and the last is the Boolean expression 2 in range(20) which evaluates to True.
In Python, the comma creates a tuple, which is a little bit confusing.
So 2, would create a new single-valued tuple,
while (2) would just be the integer 2. | 2 | 0 | 0 | Could someone explain why a search as follows
2,2,2,2 in range(20)
yields a result as (2, 2, 2, True)
and 5,4,3,19 in range(20)
yields a result as (5, 4, 3, True)
A search like "tab" in "batman" gives False, whereas "bat" in "batman" gives True. Likewise, why is the order not preserved in the above searches? Also, I'd like an explanation of the results it gives. | searching within string python iterables vs list iterables | 0.049958 | 0 | 0 | 52 |
26,581,496 | 2014-10-27T05:10:00.000 | 1 | 0 | 1 | 0 | python | 26,581,530 | 4 | false | 0 | 0 | in is a Boolean expression. So when you enter 2,2,2,2 in range(20), what you are doing is creating a tuple with three 2's and the result of the Boolean expression 2 in range(20), which is True. | 2 | 0 | 0 | Could someone explain why a search as follows
2,2,2,2 in range(20)
yields a result as (2, 2, 2, True)
and 5,4,3,19 in range(20)
yields a result as (5, 4, 3, True)
A search like "tab" in "batman" gives False, whereas "bat" in "batman" gives True. Likewise, why is the order not preserved in the above searches? Also, I'd like an explanation of the results it gives. | searching within string python iterables vs list iterables | 0.049958 | 0 | 0 | 52 |
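To spell out both answers: the commas bind looser than in, so the expression parses as a tuple; to test all values, use all():

print((2, 2, 2, 2 in range(20)))   # (2, 2, 2, True): only the last 2 is tested
print(all(x in range(20) for x in (2, 2, 2, 2)))    # True: tests every value
print(all(x in range(20) for x in (5, 4, 3, 19)))   # True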
26,584,752 | 2014-10-27T09:46:00.000 | 0 | 0 | 0 | 1 | c#,python,mono,zeromq | 26,587,560 | 2 | false | 0 | 0 | You can always open up a TCP or UDP socket and communicate through that. | 1 | 1 | 0 | I've got a C# app running under Windows and Linux. I would like to implement a way to communicate with it through a Python script.
I've already tried using the ZeroMQ library, and it was working fine when the C# app was running on Windows: I could send/receive messages on both ends. But I failed miserably when I tried to use it on Linux/Mono; the app crashed with a kernel32 exception. I tried recompiling libzmq.dll using the tutorials, but I can't get it right.
Is there any other way to do this, or should I stick with ZeroMQ and try to get it running on Linux/Mono? | Simple way to communicate between C# app and Python app | 0 | 0 | 0 | 773 |
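For the plain-socket route the answer suggests, the Python side can be as small as this (the C# side would connect a TcpClient to the same port); the port and the ack framing are arbitrary:

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9000))   # arbitrary port
server.listen(1)
conn, addr = server.accept()       # blocks until the C# app connects
data = conn.recv(4096)             # message from the C# app
conn.sendall(b"ack:" + data)       # reply back over the same connection
conn.close()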
26,591,805 | 2014-10-27T16:06:00.000 | 1 | 0 | 1 | 0 | python,pandas | 26,591,932 | 3 | false | 0 | 0 | If you don't want to convert to datetime but still want to do math with them, you'd most likely be best off converting them to seconds in a separate column while retaining the string format, or creating a function that converts back to the string format and applying it after any computations. | 1 | 0 | 1 | Currently I am storing duration in a pandas column using strings.
For example '12:05' stands for 12 minutes and 5 seconds.
I would like to convert this pandas column from string to a format that allows arithmetic, while retaining the MM:SS format.
I would like to avoid storing day, hour, dates, etc. | how to store duration in a pandas column in minutes:seconds format that allows arithmetic? | 0.066568 | 0 | 0 | 173 |
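A sketch of the answer's suggestion: keep a parallel seconds column for arithmetic and format back to 'MM:SS' for display. The column names and data are made up:

import pandas as pd

df = pd.DataFrame({"duration": ["12:05", "03:30"]})
parts = df["duration"].str.split(":", expand=True).astype(int)
df["seconds"] = parts[0] * 60 + parts[1]      # arithmetic happens on this column

total = df["seconds"].sum()
print("%02d:%02d" % divmod(total, 60))        # back to MM:SS, prints 15:35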
26,595,519 | 2014-10-27T19:39:00.000 | 0 | 0 | 0 | 0 | python,eclipse,numpy,pydev | 26,625,834 | 1 | false | 0 | 0 | I recommend either using the setup.py from the downloaded archive or downloading the "superpack" executable for Windows, if you work on Windows anyway.
In PyDev, I overcame problems with new libraries by using the auto-config button. If that doesn't work, another solution could be deleting and reconfiguring the Python interpreter. | 1 | 1 | 1 | Although I've been doing things with Python by myself for a while now, I'm completely new to using Python with external libraries. As a result, I seem to be having trouble getting numpy to work with PyDev.
Right now I'm using PyDev in Eclipse, so I first tried to go to My Project > Properties > PyDev - PYTHONPATH > External Libraries > Add zip/jar/egg, similar to how I would add libraries in Eclipse. I then selected the numpy-1.9.0.zip file that I had downloaded. I tried importing numpy and using it, but I got the following error message in Eclipse:
Undefined variable from import: array.
I looked this up, and tried a few different things. I tried going into Window > Preferences > PyDev > Interpreters > Python Interpreters. I selected Python 3.4.0, then went to Forced Builtins > New, and entered "numpy". This had no effect, so I tried going back to Window > Preferences > PyDev > Interpreters > Python Interpreters, selecting Python 3.4.0, and then, under Libraries, choosing New Egg/Zip(s), then adding the numpy-1.9.0.zip file. This had no effect. I also tried the String Substitution Variables tab under Window > Preferences > PyDev > Interpreters > Python Interpreters (Python 3.4.0). This did nothing.
Finally, I tried simply adding # @UndefinedVariable to the broken lines. When I ran it, it gave me the following error:
ImportError: No module named 'numpy'
What can I try to get this to work? | Using numpy with PyDev | 0 | 0 | 0 | 2,141 |
26,596,297 | 2014-10-27T20:30:00.000 | 0 | 0 | 1 | 0 | python,regex | 26,596,340 | 7 | false | 0 | 0 | You can use this: ^[A-Za-z_][A-Za-z0-9_]*$ | 1 | 7 | 0 | How do I create a regex that matches all alphanumerics without a number at the beginning?
Right now I have "^[0-9][a-zA-Z0-9_]"
For example, 1ab would not match, ab1 would match, 1_bc would not match, bc_1 would match. | Regex not beginning with number | 0 | 0 | 0 | 21,840 |
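Applying the accepted answer's pattern in Python; re.match anchors at the start of the string and the $ handles the end:

import re

pattern = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")
for s in ["1ab", "ab1", "1_bc", "bc_1"]:
    print(s, bool(pattern.match(s)))   # ab1 and bc_1 match, the others don't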
26,596,396 | 2014-10-27T20:37:00.000 | 0 | 1 | 0 | 0 | python,api,email,automation,surveymonkey | 26,597,085 | 1 | true | 0 | 0 | There is currently not a way to do this via the SurveyMonkey API - it sounds like your solution is the best way to do things.
I think your best bet is to go with your solution and email [email protected] and ask them about the feasibility of adding this functionality in future. It sounds like what you need is a get_collector_details method that returns specific details from a collector, which doesn't currently exist. | 1 | 0 | 0 | I have a survey that went out to 100 recipients via the built-in email collector. I am trying to design a solution in python that will show only the list of recipients (email addresses) who have not responded (neither "Completed" nor "Partial"). Is there any way to get this list via the SurveyMonkey API?
One possible solution is to store the original list of 100 recipients in my local database, get the list of recipients who have already responded using the get_respondent_list API, and then do matching to find the people who have not responded. But I would prefer not to approach it this way, since it involves storing the original list of recipients locally.
Thanks for the help! | Getting list of all recipients in an email collector for a survey via SurveyMonkey API | 1.2 | 0 | 1 | 165 |
26,598,571 | 2014-10-27T23:28:00.000 | 3 | 0 | 0 | 0 | python,html,css,django | 26,598,616 | 5 | false | 1 | 0 | You can have a single Django project serve many screen sizes by using a responsive front-end framework such as Bootstrap or Foundation. | 1 | 0 | 0 | I'm making a website in Django, but I want to make two sites: one for the phone and one for the computer.
How do you instruct phones to load my phone-friendly page instead of the normal website? | How does a mobile phone know to use a different page? | 0.119427 | 0 | 0 | 115 |
26,599,137 | 2014-10-28T00:40:00.000 | 2 | 0 | 0 | 0 | python,csv,merge | 41,861,621 | 5 | false | 0 | 0 | For those of us using 2.7, this adds an extra linefeed between records in "out.csv". To resolve this, just change the file mode from "w" to "wb". | 1 | 14 | 1 | I have hundreds of large CSV files that I would like to merge into one. However, not all CSV files contain all columns. Therefore, I need to merge files based on column name, not column position.
Just to be clear: in the merged CSV, values should be empty for a cell coming from a line which did not have the column of that cell.
I cannot use the pandas module, because it makes me run out of memory.
Is there a module that can do that, or some easy code? | Merge CSVs in Python with different columns | 0.07983 | 0 | 0 | 16,477 |
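The "easy code" version with the standard library only: gather the union of headers in a first pass, then stream rows through csv.DictWriter, which writes blanks for missing columns. Python 3 shown; the input file pattern is assumed (and, per the first answer, use mode "wb" without newline="" on Python 2):

import csv
import glob

files = sorted(glob.glob("*.csv"))   # assumed input pattern

fieldnames = []                      # union of all headers, in order first seen
for path in files:
    with open(path, newline="") as f:
        for name in csv.DictReader(f).fieldnames or []:
            if name not in fieldnames:
                fieldnames.append(name)

with open("out.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    for path in files:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                writer.writerow(row)   # missing keys become empty cells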
26,603,204 | 2014-10-28T07:49:00.000 | 4 | 0 | 1 | 0 | python,extract,7zip | 26,603,477 | 5 | false | 0 | 0 | You can use the PyLZMA and py7zlib libraries to extract a 7z file, or try executing shell commands to extract the archive using Python's subprocess module. | 2 | 4 | 0 | How do I extract a 7z file in Python? Please, someone let me know if there is any library for that.
I have installed the libarchive library in Python 2.7.3, but I am not able to use that library. | How to extract 7z zip file in Python 2.7.3 version | 0.158649 | 0 | 0 | 22,120 |
26,603,204 | 2014-10-28T07:49:00.000 | -2 | 0 | 1 | 0 | python,extract,7zip | 51,999,073 | 5 | false | 0 | 0 | !apt-get install p7zip-full
!p7zip -d file_name.tar.7z
Try the above steps. | 2 | 4 | 0 | How do I extract a 7z file in Python? Please, someone let me know if there is any library for that.
I have installed the libarchive library in Python 2.7.3, but I am not able to use that library. | How to extract 7z zip file in Python 2.7.3 version | -0.07983 | 0 | 0 | 22,120 |
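A hedged sketch of the subprocess route mentioned in the first answer, assuming the 7z (or 7za) command-line tool is installed on the system; file names are hypothetical:

import subprocess

def extract_7z(archive, dest_dir):
    # "x" extracts with full paths; "-o<dir>" sets the output directory
    # (no space after -o is the 7z CLI convention); "-y" answers yes to prompts.
    subprocess.check_call(["7z", "x", archive, "-o" + dest_dir, "-y"])

extract_7z("file_name.7z", "extracted")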
26,603,456 | 2014-10-28T08:07:00.000 | 15 | 0 | 1 | 0 | python,kernel,ipython,reload | 35,554,510 | 7 | false | 0 | 0 | Even though it would be handy if %reset would clear the namespace and the cache for the imports (as in the notebook) one can explicitly reload a previously imported module using importlib.reload in python3.4 or imp.reload in python3.0-3.3 (and if needed reset the kernel in a second step). | 3 | 41 | 0 | I was wondering if there is a way to restart the ipython kernel without closing it, like the kernel restart function that exists in the notebook. I tried %reset but that doesn't seem to clear the imports. | Reset ipython kernel | 1 | 0 | 0 | 74,727 |
26,603,456 | 2014-10-28T08:07:00.000 | 4 | 0 | 1 | 0 | python,kernel,ipython,reload | 51,645,130 | 7 | false | 0 | 0 | If you have installed Spyder with anaconda, then open Spyder window.
Then Consoles (menu bar) -> Restart Consoles.
Or you can use Ctrl+., which is a shortcut key to restart the console. | 3 | 41 | 0 | I was wondering if there is a way to restart the ipython kernel without closing it, like the kernel restart function that exists in the notebook. I tried %reset but that doesn't seem to clear the imports. | Reset ipython kernel | 0.113791 | 0 | 0 | 74,727 |
26,603,456 | 2014-10-28T08:07:00.000 | 1 | 0 | 1 | 0 | python,kernel,ipython,reload | 35,557,576 | 7 | false | 0 | 0 | In the qt console you could hit ctrl- | 3 | 41 | 0 | I was wondering if there is a way to restart the ipython kernel without closing it, like the kernel restart function that exists in the notebook. I tried %reset but that doesn't seem to clear the imports. | Reset ipython kernel | 0.028564 | 0 | 0 | 74,727 |
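The first answer's reload trick, spelled out (importlib.reload on Python 3.4+, imp.reload on 3.0-3.3); the module name is hypothetical:

import importlib
import mymodule          # hypothetical module that was edited on disk

mymodule = importlib.reload(mymodule)   # re-executes the module's code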
26,615,576 | 2014-10-28T18:16:00.000 | 14 | 0 | 0 | 0 | python,sublimetext2 | 26,617,291 | 2 | true | 0 | 0 | To show the build results panel, select Tools -> Build Results -> Show Build Results. There are also options in that menu to move back and forth in the build results history. | 1 | 6 | 0 | I'm running the Python code in Sublime - it works fine.
The only quirk I noticed is that while the code is executing, if I, for example, do a search in the code, the output window disappears, and I haven't found a way to bring it back.
How do I show/hide the Python output window? | Python Sublime Text output window show/hide | 1.2 | 0 | 0 | 3,302 |
26,615,835 | 2014-10-28T18:29:00.000 | -2 | 0 | 0 | 0 | python,theano | 26,640,860 | 1 | true | 0 | 0 | In a rather simplified way I've managed to find a nice solution. The trick was to create one model, define its function and then create the other model and define the second function. Works like a charm | 1 | 0 | 1 | I'd like to have 2 separate networks running in Theano at the same time, where the first network trains on the results of the second. I could embed both networks in the same structure but that would be a real mess in the entire forward pass (and probably won't even work because of the shared variables etc.)
The problem is that when I define a Theano function I don't specify the model it's applied to, meaning that if I have a predict and a train function, they'll both work on the first model I define.
Is there a way to overcome that issue? | Multiple networks in Theano | 1.2 | 0 | 0 | 111 |
26,617,865 | 2014-10-28T20:27:00.000 | 0 | 0 | 0 | 0 | python,django,cas | 26,826,648 | 1 | false | 1 | 0 | It turns out django-cas handles the TGT using Django sessions. However, for validation of the service ticket, you have to manually make a validation request including the ST (service ticket) granted after login and the service being accessed. | 1 | 0 | 0 | I'm using CAS to provide authentication for a number of secure services in my stack. The authentication front-end is implemented using Django 1.6 and the django-cas module. However, I'm reading around and I can't seem to find information on how django-cas handles Ticket Granting Tickets or how it validates service tickets.
Does anyone know how the aspects mentioned are handled by django-cas? | Django CAS and TGT(Ticket Granting Tickets) and service ticket validation | 0 | 0 | 0 | 477 |