Dataset columns (name, dtype, observed min/max or string-length range):

Q_Id                               int64          2.93k .. 49.7M
CreationDate                       stringlengths  23 .. 23
Users Score                        int64          -10 .. 437
Other                              int64          0 .. 1
Python Basics and Environment      int64          0 .. 1
System Administration and DevOps   int64          0 .. 1
DISCREPANCY                        int64          0 .. 1
Tags                               stringlengths  6 .. 90
ERRORS                             int64          0 .. 1
A_Id                               int64          2.98k .. 72.5M
API_CHANGE                         int64          0 .. 1
AnswerCount                        int64          1 .. 42
REVIEW                             int64          0 .. 1
is_accepted                        bool           2 classes
Web Development                    int64          0 .. 1
GUI and Desktop Applications       int64          0 .. 1
Answer                             stringlengths  15 .. 5.1k
Available Count                    int64          1 .. 17
Q_Score                            int64          0 .. 3.67k
Data Science and Machine Learning  int64          0 .. 1
DOCUMENTATION                      int64          0 .. 1
Question                           stringlengths  25 .. 6.53k
Title                              stringlengths  11 .. 148
CONCEPTUAL                         int64          0 .. 1
Score                              float64        -1 .. 1.2
API_USAGE                          int64          1 .. 1
Database and SQL                   int64          0 .. 1
Networking and APIs                int64          0 .. 1
ViewCount                          int64          15 .. 3.72M
26,142,468
2014-10-01T13:23:00.000
0
0
0
0
0
python,django,geodjango,django-countries
0
26,143,098
0
2
0
false
1
0
Personally, I would set a generic flag as a fallback (a Django flag, perhaps?) if the user does not set their country; to be fair, there is probably a reason the user didn't. Or you can simply make the country field mandatory. Assuming a person is in a country based on their current IP just doesn't sound right: imagine working remotely via VPN or behind a proxy server... you get the idea. Geolocation by IP makes more sense if you just want to redirect people to a sub-domain (different-language) site.
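A minimal sketch of the fallback idea, assuming django-countries' CountryField on a hypothetical Profile model (the "US" default is an arbitrary illustration, not the asker's code):

```python
# models.py -- a sketch, not the asker's actual code
from django.conf import settings
from django.db import models
from django_countries.fields import CountryField

class Profile(models.Model):
    user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    # Keep the field optional and fall back when rendering,
    # or set blank=False to make it mandatory instead.
    country = CountryField(blank=True)

    @property
    def display_country(self):
        return self.country or "US"  # arbitrary generic fallback
```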
1
0
0
0
I am using django-countries to get a flag from the CountryField and it works. However, some users don't have a country set on their profiles, but I still have to display a flag. My idea was to get a country from IP and then get a flag from the country or anything similar to that. Unfortunately, this is not happening in the view, so I don't have the request. I can set the IP in the template and then, in the backend, check if no flag was found (if a country is not provided) and then somehow use the IP that I got from the template to get a flag. Is something like that possible? If needed, GeoDjango is already included in the project. Perhaps there is a totally different way of doing this, but I am new to Django and still learning. Thanks.
Python/Django get country flag from IP
1
0
1
0
0
1,719
26,157,625
2014-10-02T09:07:00.000
1
0
0
0
0
mysql,django,python-2.7
0
26,158,170
0
1
0
false
1
0
It doesn't make sense to create a new database for each organization. Even if the number of customers or organizations grows to the hundreds or thousands, keeping data in a single database is your best option. Edit: Your original concern was that an increase in the number of organizations would impact performance. Well then, imagine if you had to create an entire database for each organization. You would have the same tables, indexes, views, etc replicated n-times. Each database would need system resources and disk space. Not to mention the code needed to make Django aware of the split. You would have to get data from n databases instead of just one table in one database. Also keep in mind that robust databases like PostgreSQL or MySQL are very capable of managing thousands or millions of records. They will do a better job out of the box than any code you may come up with. Just trust them with the data and if you notice a performance decline then you can find plenty of tips online to optimize them.
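A minimal sketch of the single-database layout the answer recommends (model and field names follow the question; they are illustrative):

```python
from django.db import models

class Organisation(models.Model):
    name = models.CharField(max_length=200)

class Customer(models.Model):
    # One CUSTOMER table for everyone; the indexed foreign key
    # is what keeps per-organisation lookups fast.
    organisation = models.ForeignKey(Organisation, on_delete=models.CASCADE,
                                     related_name="customers")
    name = models.CharField(max_length=200)

# One organisation's customers come from a single indexed column:
# Customer.objects.filter(organisation_id=some_id)
```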
1
0
0
0
In my project there are two models, ORGANISATION and CUSTOMER. While adding a new customer to an organisation, I save the organisation_id in the CUSTOMER table. But now I am worried about the performance of my project when the database becomes huge, so I am planning to create a new database for every newly created organisation and to save all of that organisation's information in its own database. But I don't know how to create a new database for every newly created organisation, and I'd like to know which method performs better. Please correct the question if needed.
django multiple database for multiple organisation in a single project
0
0.197375
1
1
0
181
26,178,035
2014-10-03T11:19:00.000
0
0
0
0
0
python,machine-learning,scikit-learn
0
38,604,808
0
2
0
false
0
0
The above answer was too short, outdated, and might be misleading. Using the score method only gives accuracy (it's in BaseEstimator). If you want the loss function, you can either call the private function _get_loss_function (defined in BaseSGDClassifier), or access the BaseSGDClassifier.loss_functions class attribute, which gives you a dict whose entries are the callables for the loss functions (with default settings). Also, using sklearn.metrics might not give the exact loss used for minimization (due to regularization and to what exactly is being minimized), but you can compute it by hand anyway. The exact code for the loss functions is defined in Cython (sgd_fast.pyx; you can look up the code in the scikit-learn GitHub repo). I'm looking for a good way to plot the minimization progress; I will probably redirect stdout and parse the output. BTW, I'm using 0.17.1, hence this update to the answer.
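A hedged sketch of monitoring loss during partial_fit; it assumes the logistic loss so that predict_proba and sklearn.metrics.log_loss line up (with other losses you would compute your chosen loss by hand), and the data here is synthetic:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import log_loss

rng = np.random.RandomState(0)
X, y = rng.randn(1000, 20), rng.randint(0, 2, 1000)          # toy training data
X_held, y_held = rng.randn(200, 20), rng.randint(0, 2, 200)  # toy held-out data

clf = SGDClassifier(loss="log_loss")  # named "log" in older scikit-learn versions
classes = np.unique(y)

for epoch in range(100):
    clf.partial_fit(X, y, classes=classes)
    if epoch % 10 == 0:
        # predict_proba is available for the logistic loss
        held = log_loss(y_held, clf.predict_proba(X_held))
        print("epoch %d: held-out log loss = %.4f" % (epoch, held))
```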
1
0
1
1
I'm using an SGDClassifier in combination with the partial fit method to train with lots of data. I'd like to monitor when I've achieved an acceptable level of convergence, which means I'd like to know the loss every n iterations on some data (possibly training, possibly held-out, maybe both). I know this information is available if I pass verbose=1 in the constructor of the classifier, but I'd like to query it programmatically rather than visually. I also know I can use the score method to get accuracy, but I'd like actual loss as measured by my chosen loss function. Does anyone know how to do this?
Way to compute the value of the loss function on data for an SGDClassifier?
0
0
1
0
0
1,528
26,178,633
2014-10-03T12:07:00.000
0
0
0
0
0
python,django,database,sqlite
0
26,179,396
0
2
0
false
1
0
You could also use fixtures, and generate fixtures for your app; it depends on what you're planning to do with them. You would just run loaddata after that.
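For illustration, the fixture round-trip via Django's management API (equivalent to running dumpdata/loaddata from the terminal; the app label "myapp" is an assumption):

```python
from django.core.management import call_command

# Dump one app's data to a JSON fixture
with open("myapp_fixture.json", "w") as out:
    call_command("dumpdata", "myapp", stdout=out)

# ...later, load it back into a (possibly fresh) database
call_command("loaddata", "myapp_fixture.json")
```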
1
0
0
0
How can I create a Django sqlite3 dump file (*.sql) using the terminal? There is a fabric fabfile.py with certain dump scripts, but when I try to use the fab command the following message shows up: The program 'fab' is currently not installed. To run fab please ask your administrator to install the package 'fabric'. But there are fabric files in /python2.7/site-packages/fabric/. I'm not good at Django or Python at all; the guy who was responsible for our Django project just left without any explanation. In general, I need to know how to create a Django sqlite3 dump file (*.sql) via the terminal. Help? :)
Django sqlite3 database dump
0
0
1
1
0
1,114
26,199,115
2014-10-05T02:35:00.000
0
0
0
0
0
python,windows,console,pyqt,exe
0
26,208,237
0
1
0
false
0
1
The best solution is in the post linked from a comment on your question, but on Windows you can also set a property on the shortcut that starts the app so the console starts minimized. It still shows in the taskbar, there if you need it, which can be handy in some situations (for example, during development, to use the same script the user will, without having to manually minimize the console every time you start the app). In general, it is best to use pythonw.exe or the .pyw extension for your script.
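Since the question is about a built exe, here is a hedged sketch of the usual fix on the py2exe side: list the script under windows= rather than console= so the generated exe never opens a console (the script name is illustrative):

```python
# setup.py -- minimal py2exe sketch
from distutils.core import setup
import py2exe  # noqa: F401  (registers the py2exe command)

setup(
    # `windows` builds a GUI executable without a console window;
    # `console` would build one that opens a console.
    windows=["myapp.py"],
)
```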
1
0
0
0
I created an exe of my application made in PyQt, and everything went well, but when I run the exe in Windows, a console opens before the application does and stays open while the application is open. I need someone to tell me how to make the console not appear, or at least not be visible. I've seen some answers to this problem for C++, but I have not seen anything for Python.
The exe executable of my application, also opens a console
0
0
1
0
0
50
26,211,308
2014-10-06T07:04:00.000
0
0
0
0
0
python,twitter,machine-learning,classification,nltk
0
29,718,799
0
1
1
false
0
0
'The thing is that how do I even generate/create training data for such huge data?' I would suggest finding a training data set that could help you with the categories you are interested in. So, let's say, for price-related articles, you might want to find a training data set that is all about price-related articles and then perhaps expand it by using synonyms for keywords like 'cheap'. And perhaps look into sentence structure to find out whether the structure of a sentence helps your classifier algorithm. 'If not, then what is the best approach to create training data for multi-class classification of text/comments?' Keywords, and pulling articles that are all about the related categories, and going from there. Lastly, I suggest getting very familiar with NLTK's corpus library; this might also help you with retrieving training data. As for your last question, I'm a bit confused about what you mean by 'multiple categories to classify the comments into': do you mean having multiple classes a particular comment can belong to? So a comment can belong to one or more classes?
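A toy sketch of the NLTK route mentioned above (the categories and training sentences are made up purely for illustration):

```python
import nltk

train = [
    ("this phone lasted me five years", "longevity"),
    ("really cheap for what you get", "price"),
    ("grabbed it on sale for half off", "discount"),
    ("still works after years of daily use", "longevity"),
]

def features(text):
    # Simple bag-of-words features
    return {word: True for word in text.lower().split()}

classifier = nltk.NaiveBayesClassifier.train(
    [(features(text), label) for text, label in train])
print(classifier.classify(features("so cheap and on discount")))
```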
1
1
1
0
So I have about 1 million lines of Twitter comment data in CSV format. I need to classify them into certain categories, for example whether somebody is talking about: "product longevity", "cheap/costly", "on sale/discount", etc. As you can see, I have multiple classes to classify these tweets into. The thing is: how do I even generate/create training data for such huge data? Silly question, but I was wondering whether or not there is already pre-classified/tagged comment data to train a model with. If not, what is the best approach to creating training data for multi-class classification of text/comments? While I have tried and tested NaiveBayes for sentiment classification on a smaller dataset, could you please suggest which classifier I should use for this problem (multiple categories to classify the comments into)? Thanks!
preclassified trained twitter comments for categorization
0
0
1
0
0
149
26,280,838
2014-10-09T14:19:00.000
0
0
0
0
0
python,math,signal-processing
0
26,286,979
0
3
0
false
0
0
Assuming that you've loaded multiple readings of the PSD from the signal analyzer, try averaging them before attempting to find the band edges. If the signal isn't changing too dramatically, the averaging process will smooth away the peaks, valleys, and noise within the passband, making it easier to find the edges; this is what many spectrum analyzers do to produce a smoother PSD. In case that wasn't clear: assume each reading gives you 128 (frequency, power) tuples and that you capture 100 of these buffers of data. Now average the 100 samples of bin 0, then of bins 1, 2, ..., 127, and try to locate the passband on this averaged data; it should be easier than on any single buffer. Note that I used 100 as an example: if your data is very noisy, it may require more; if there isn't much noise, fewer. Be careful when doing the averaging: your data is in dB, so to add the samples together in order to find an average, you must first convert the dB data back to linear, do the additions, divide to find the average, and then convert the averaged power back into dB.
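A minimal numpy sketch of the dB-aware averaging described above (the capture count and noise level are illustrative):

```python
import numpy as np

def average_psd_db(psd_db):
    # psd_db: shape (n_captures, n_bins), repeated PSD readings in dB
    linear = 10.0 ** (np.asarray(psd_db) / 10.0)  # dB -> linear power
    return 10.0 * np.log10(linear.mean(axis=0))   # average per bin, back to dB

rng = np.random.default_rng(0)
captures = -60 + 10 * rng.standard_normal((100, 128))  # 100 noisy 128-bin readings
smoothed = average_psd_db(captures)
```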
1
0
1
0
The data that I have is stored in a 2D list where one column represents a frequency and the other column is its corresponding dB. I would like to programmatically identify the frequencies of the 3 dB points on either end of the passband. I have two ideas on how to do this, but they both have drawbacks: (1) find the maximum point, then the average of points in the passband, then find points about 3 dB lower; (2) use the sympy library to perform numerical differentiation and identify the critical points/inflection points, then use a histogram/bin function to find the amplitude of the passband. Drawbacks: (1) is sensitive to spikes, and I'm not quite sure how to do it; (2) I don't understand the math involved, the data is noisy (which could lead to a lot of false positives), and correlating the amplitude values with list index values could be tricky. Can you think of better ideas and/or ways to implement what I have described?
How can I find the break frequencies/3dB points from a bandpass filter frequency sweep data in python?
0
0
1
0
0
1,113
26,282,986
2014-10-09T16:04:00.000
4
0
1
1
0
python
0
36,884,696
0
2
0
false
0
0
Python 2 and 3 can safely be installed together. They install most of their files in different locations: if the prefix is /usr/local, you'll find the library files in /usr/local/lib/pythonX.Y/, where X.Y are the major and minor version numbers. The only point of contention is generally the file python itself, which is usually a symbolic link. Currently it seems most operating systems still use Python 2 as the default, which means that python is a symbolic link to python2; this is also what the Python documentation recommends, and it is best to leave it like that for now, since some programs in your distribution may depend on it and might not work with Python 3. So install Python 3 (3.5.1 is the latest version at this time) using your favorite package manager, or compile it yourself, and then use it by starting python3 or by putting #!/usr/bin/env python3 as the first line in your Python 3 scripts and making them executable (chmod +x <file>).
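For instance, a script that pins itself to Python 3 as the answer suggests (make it executable with chmod +x):

```python
#!/usr/bin/env python3
# Runs under python3 no matter what the bare `python` symlink points to.
import sys

print("running under", sys.version)
```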
1
4
0
0
My OS is CentOS 7.0. Its bundled Python version is 2.7, and I want to update it to Python 3.4. When I input print sys.path, the output is: ['', '/usr/lib/python2.7/site-packages/setuptools-5.8-py2.7.egg', '/usr/lib64/python27.zip', '/usr/lib64/python2.7', '/usr/lib64/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk', '/usr/lib64/python2.7/lib-old', '/usr/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7/site-packages', '/usr/lib64/python2.7/site-packages/gtk-2.0', '/usr/lib/python2.7/site-packages'] So, if I download Python 3.4 and then run ./configure, make, make install, will it override all the Python-related files? Or, if I use ./configure --prefix=***(some path), is it then safe to remove all the old Python files and directories? In a word, I hope someone can give me instructions on how to update to Python 3 on Linux. Thanks a lot.
how to update python 2.7 to python 3 in linux?
0
0.379949
1
0
0
20,066
26,302,718
2014-10-10T15:18:00.000
0
0
0
1
0
python,apache,deployment,openerp,mod-wsgi
0
26,330,930
0
1
0
false
1
0
The schedulers don't work when running through WSGI because your Odoo instances are just workers. AFAIK, you just run a standalone instance on a 127.0.0.1 port and that instance runs your scheduled tasks.
1
0
0
0
When I deploy OpenERP/Odoo using mod_wsgi, my schedulers stop working. Can anyone help me get my cron/schedulers working? Deploying with mod_proxy solves the issue, but I want to deploy using mod_wsgi.
odoo mod_wsgi schedulers not working
0
0
1
0
0
246
26,304,413
2014-10-10T16:57:00.000
0
0
0
0
0
python,django
0
26,305,048
0
1
0
false
1
0
When you say "procedures" i guess you're talking about pages (or views in Django). So I would implement a single "app" to do that. Remember a project is composed of apps. When you create a project a main app (with the same name of the project) is created. This is a good place to code the procedures you said. Think of apps as indepent sections of your project (site); maybe some forum, a blog, a custom administration panel, a game and stuff like that. Every one of them could be an independent app. A project is mostly intended as a single website, so there's no need to create another project on the example you mentioned.
1
1
0
0
I'm learning Django, but it's difficult for me to see how I should divide a project into apps. I've worked on some Java EE systems, mostly for government procedures and the like, but I just can't see how to create a Django project for these purposes. For example, say you had to build a web app to simplify three processes: the procedure to get a passport, the procedure to get a driver's license, and the procedure to get a social security number. The three procedures have steps in common: personal information, contact information, health information. Would you create a project for each procedure, an app for each procedure, or an app for each step? I'm sorry if I'm posting this on the wrong Stack Exchange site. Thank you.
Django apps structure
0
0
1
0
0
73
26,311,022
2014-10-11T04:20:00.000
0
0
0
0
0
android,python,adb,monkeyrunner
0
26,311,559
0
1
0
true
1
0
You are trying to install an APK which is intended for a higher version onto API level 8 and thus the package manager refuses to install it. It has nothing to do with adb or monkeyrunner.
1
0
0
0
I've downloaded an APK onto a Velocity Cruz Tablet running Android 2.2.1 (API Level 8), and I'm trying to install it via whatever I can manage to make work. I already had ADT on my computer (Windows 8.1 if this helps) for API Level 19 for use with my phone. So I used the SDK Manager to get API Level 8. I can't for the life of me figure out how to make adb or monkeyrunner target API Level 8. I've got the paths right but the problem I'm having is making it target the proper API Level. I've gone through the adb commands, pm commands and MonkeyRunner API Documentation, but I don't see anything helpful. I've decided to come here to see if anyone knows what to do. Thanks.
Android - ADB/MonkeyRunner Setting API Levels
0
1.2
1
0
0
225
26,315,848
2014-10-11T14:44:00.000
1
0
0
0
0
python,button,user-interface,python-3.x,tkinter
0
26,315,912
0
2
0
false
0
1
You need a variable in a global or class instance scope and a function that has access to the scope of the variable that increments the variable when called. Set the function as the command attribute of the Button so that the function is called when the button is clicked.
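A minimal tkinter sketch of that pattern (the widget labels are illustrative):

```python
import tkinter as tk

root = tk.Tk()
click_count = 0  # module-level state; an instance attribute works the same way

def on_click():
    global click_count
    click_count += 1
    label.config(text="Clicked %d times" % click_count)

button = tk.Button(root, text="New question", command=on_click)
label = tk.Label(root, text="Clicked 0 times")
button.pack()
label.pack()
root.mainloop()
```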
1
0
0
0
So I am writing a program for school, and I have to make a maths quiz; the quiz needs to be out of 10 questions. I have made a button whose command generates a new question, clears the text box, gets the answer from a dictionary, and inserts the new question into the textbox. At the moment the user can press the button as many times as they want, and I don't actually know how to count or monitor how many times a tkinter button has been pressed. I would be very grateful if someone could provide me with some code for Python (3.1.4) that I could use to count the number of times the button has been pressed.
How to count the number of times a button is clicked Python(tkinter)
0
0.099668
1
0
0
17,843
26,316,273
2014-10-11T15:34:00.000
0
1
0
0
0
python,heroku,importerror,psycopg2
1
26,317,020
0
1
0
false
1
0
You could keep the psycopg2 directory in the same directory as your apps, but that's really a hack; you should instead try to fix the installation of psycopg2 on Heroku.
1
1
0
0
I have deployed my local Python web service project to Heroku. I have all my dependencies in the requirement.txt file, but one module, psycopg2, is not getting installed properly and I am getting an installation error. So I removed it from requirement.txt, thinking I would push everything to Heroku first and then manually copy the psycopg2 module folder into the /app/.heroku/python/lib/python2.7/site-packages folder. But I don't know how to access this folder! Can you please help?
How to manually copy folder to /app/.heroku/python/lib/python2.7/site-packages/?
0
0
1
0
0
248
26,318,326
2014-10-11T19:16:00.000
1
0
0
0
1
android,segmentation-fault,android-browser,qpython,pyjnius
0
26,428,421
0
2
0
true
1
1
Apparently, this happens only in console mode; in the other QPython modes it works fine.
1
3
0
0
I'm a new developer in QPython (experienced with Python), and I want to open a URL with the user's default browser. I tried AndroidBrowser().open("...") but, to my surprise, I got a segmentation fault! So I said OK, let's try to open it manually as an activity; then I tried to import jnius and got a segmentation fault as well. Any suggestions on how to fix this, or other ways to open the browser?
Open a URL with default browser?
0
1.2
1
0
0
1,277
26,319,382
2014-10-11T21:22:00.000
3
0
1
0
0
python,fonts,truetype
0
26,319,577
0
1
0
false
0
1
It's pretty tricky to do analytically. One way is trial and error: choose a large font size and render the layout to see whether it fits, then use a bisection algorithm to converge on the largest font that fits.
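A hedged Pillow sketch of that bisection (assumes Pillow 8+ for textbbox; the font path is an assumption and the 512 pt upper bound is arbitrary):

```python
from PIL import Image, ImageDraw, ImageFont

def largest_fitting_size(text, box_w, box_h, font_path="DejaVuSans.ttf"):
    draw = ImageDraw.Draw(Image.new("RGB", (box_w, box_h)))

    def fits(size):
        font = ImageFont.truetype(font_path, size)
        left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
        return (right - left) <= box_w and (bottom - top) <= box_h

    lo, hi = 1, 512
    while lo < hi:  # bisect on the largest size that still fits
        mid = (lo + hi + 1) // 2
        if fits(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

print(largest_fitting_size("140 characters of sample text", 512, 512))
```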
1
0
0
0
I'm trying to fit the font size of my text into a specific width/height context. For instance, say I have an image (512x512 pixels) and 140 characters of text: what would be the ideal font size? In the above case, a 50-pixel font size seems to be OK, but what happens if there's a lot more text? The text will not fit into the picture, so the size needs to be reduced. I've been trying to calculate this on my own without success. What I've tried is this: get the total pixels (for a 512x512 picture, 262144) and divide by the length of the text. But that gives a big number, even if I divide it by 4 (thinking of a box-pixel model for the font). Do you have any solutions for this? PS. I've been using TrueType fonts (if this is somehow useful), Python, and PIL for image manipulation. Thank you in advance.
Calculate fontsize
0
0.53705
1
0
0
376
26,323,942
2014-10-12T10:03:00.000
3
0
1
0
1
python
1
26,324,214
0
1
0
true
0
1
Do not use import to implement application logic. In your use case, a room is the classic example of an object in object-oriented programming. You should have a class Room which defines the functionality for rooms. Individual rooms are instances of that class (later you can add subclasses, but I would not worry about that initially). Your application will have a "current room" as a variable. It will ask the room about its description and display that to the user. When the user types "go Kitchen", your application will ask the current room "hey, do you have a room named 'Kitchen' as a neighbor?" This method will return the appropriate room object, which your application then can set as the current room. From the above, you can see two functionalities (methods) rooms should have: "Give me your description" and "give me the adjacent room named 'X', if any". This should get you started.
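A minimal sketch of the Room class described above (the names are illustrative):

```python
class Room:
    def __init__(self, name, description):
        self.name = name
        self.description = description
        self.neighbors = {}  # name -> Room

    def connect(self, other):
        # Two-way connection between adjacent rooms
        self.neighbors[other.name] = other
        other.neighbors[self.name] = self

    def neighbor(self, name):
        # "Give me the adjacent room named X, if any"
        return self.neighbors.get(name)

hall = Room("Hall", "A dusty entrance hall.")
kitchen = Room("Kitchen", "Smells faintly of old coffee.")
hall.connect(kitchen)

current = hall
print(current.description)
target = current.neighbor("Kitchen")  # e.g. the user typed "go Kitchen"
if target:
    current = target
    print(current.description)
```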
1
0
0
0
I was thinking of making a text-based game about detectives, case-solving, with complete freedom, loads of variables, etc. But before I get serious with it I need to know how to make rooms. E.g. you start in the hall and you type in "Go kitchen" and you go to the kitchen. I have achieved this by using import file when you type in "Go kitchen" (the file is the kitchen file), but if I want to go back and forth between them it gives an error. Is there something I am missing about this method? Is there a better way to do it? The simpler, the better, please.
How best to implement rooms for a text-based game?
0
1.2
1
0
0
486
26,349,503
2014-10-13T21:56:00.000
0
0
0
0
0
python,emacs,pdb
0
26,373,563
0
1
0
true
0
0
Normally there is no "=>" inserted at all; instead, there is a "=>" that is displayed but is not part of the buffer's contents. Are you sure it's really in the code, and are you sure you can delete it as if it were normal text?
1
0
0
0
I'm trying to learn how to use pdb in emacs. I run emacs in console mode and in my python file I have something like import pdb and pdb.set_trace() at the beginning of the file. I use C-c C-c to execute the buffer and pdb starts running. It works fine except that I end up with a => inserted into my code on the line that pdb is looking at. When pdb ends, the => characters remain in my code on the last line and I have to manually delete it. How do I prevent this from happening?
Using pdb in emacs inserts => into my code
0
1.2
1
0
0
105
26,355,152
2014-10-14T07:35:00.000
0
0
0
0
0
python-2.7,scrapy,web-crawler
0
26,378,448
0
2
0
false
1
0
For this, we have to make a list for fields_to_export (an attribute of the BaseItemExporter class), like field_iter = ['Offer', 'SKU', 'VendorCode'], and then pass this list to the fields_to_export field of the exporter.
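A hedged sketch of that idea: subclass the CSV exporter and set fields_to_export so the columns come out in the declared order (module paths are illustrative; in older Scrapy versions the exporter lives under scrapy.contrib.exporter):

```python
from scrapy.exporters import CsvItemExporter

class OrderedCsvItemExporter(CsvItemExporter):
    def __init__(self, *args, **kwargs):
        # Force the column order to match the Item declaration
        kwargs["fields_to_export"] = ["title", "link", "desc"]
        super().__init__(*args, **kwargs)

# settings.py (illustrative):
# FEED_EXPORTERS = {"csv": "myproject.exporters.OrderedCsvItemExporter"}
```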
1
0
0
0
I have to crawl data from a web page in the specific order in which I declared the fields in my item class, and then put them in a CSV file. The problem is that it does not store the data in that order: it scrapes any field and puts it in the CSV file, but I want it to store the data as declared in my item class. I am a newbie in Python; can you tell me how to do this? For example, my item class is: class DmozItem(Item): title = Field() link = Field() desc = Field(). Now when it stores data in the CSV file, it stores desc first, then link, and then title: "desc": [], "link": ["/Computers/Programming/"], "title": ["Programming"]}
How to store data crawled in scrapy in specific order?
0
0
1
0
0
262
26,358,511
2014-10-14T10:26:00.000
1
0
0
0
0
python,django,templates,variables
0
26,358,812
0
2
0
false
1
0
You can add variables to the template context; to make them exist in all templates, including those rendered by third-party apps, use a custom context processor.
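A hedged sketch of such a context processor (the module path is illustrative, and the setting name depends on your Django version):

```python
# myapp/context_processors.py
from myapp.models import Menu

def menu_items(request):
    # The returned dict is merged into the context of every template
    # rendered with a RequestContext, third-party apps included.
    return {"menu_items": Menu.objects.all()}

# settings.py: add "myapp.context_processors.menu_items" to
# TEMPLATE_CONTEXT_PROCESSORS (Django 1.x) or to
# TEMPLATES[0]["OPTIONS"]["context_processors"] (newer versions).
```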
1
0
0
0
I have a Django project and I wrote some views. I know how to pass my variables to a template, but I also have some external modules with their own views, which I won't modify. Please help me understand how I can make one of my objects, Menu.objects.all(), exist in all templates. For example, I have django-registration and I want all my menu items to appear at the top when someone visits a URL that is not in my app: it will be a registration-app URL, which returns a TemplateResponse (and there I don't have my variable).
Make an object exist in all templates
0
0.099668
1
0
0
58
26,358,543
2014-10-14T10:28:00.000
-3
0
1
0
0
javascript,python
0
26,358,654
0
2
0
false
1
0
If you use an IDE such as PhpStorm, it can easily find variables for you. I don't see the point in programming something in Python to do this.
1
1
0
0
I have a directory with many JavaScript files. I want to scan each file and replace each JavaScript variable with a string like 'str1', 'str2', 'str3', ..., 'strn' and so on. My question is: how do I identify a JavaScript variable? Doubts: if I say the keyword after var is a variable, there is no compulsion to use var when declaring a variable; if I say the keyword before = is a variable, the file also contains HTML code, and inside HTML tags there is an = sign between an attribute and its value. So how can I identify the variables I have to replace?
How to search for JavaScript variables in a file, using Python?
0
-0.291313
1
0
0
1,278
26,375,763
2014-10-15T06:21:00.000
2
1
1
0
1
python,z3,z3py
0
26,385,910
0
1
1
true
0
0
Using Z3 over Python is generally pretty slow. It includes parameter checks and marshaling (_coerce_expr, among others). For scalability you will be better off using one of the other bindings, or bypassing the Python runtime where possible.
1
4
0
0
I am using Z3 Python bindings to create an And expression via z3.And(exprs) where exprs is a python list of 48000 equality constraints over boolean variables. This operation takes 2 seconds on a MBP with 2.6GHz processor. What could I be doing wrong? Is this an issue with z3 Python bindings? Any ideas on how to optimize such constructions? Incidentally, in my experiments, the constructions of such expressions is taking more time than solving the resulting formulae :)
Why is z3.And() slow?
0
1.2
1
0
0
283
26,391,208
2014-10-15T20:09:00.000
1
0
0
0
0
javascript,python,templates,pyramid
0
26,392,168
0
1
0
false
1
0
This was a red herring: the URL was wrong, but the log file mentioned a missing template, so I was focused in the wrong direction. I got a custom redirection piece of code from one of the developers on this project, and I have it working now.
1
0
0
0
I'm adding a page to a complex Pyramid-based app that uses Handlebar templates. I need a file download URL that doesn't need a template, but the system is giving me a 404 code for missing template anyway. How do I tell a view at configuration time "do not use a handlebar template with this one?"
Pyramid app with handlebar.js: I don't need a template for this view; how to disable?
0
0.197375
1
0
0
68
26,393,540
2014-10-15T23:00:00.000
0
0
0
1
1
python,parallel-processing,rabbitmq,celery,django-celery
0
26,732,448
0
1
0
false
1
0
If queuing up your tasks takes longer than running them, how about increasing the scope of each task so it operates on N files at a time? Instead of queuing up 1000 tasks for 1000 files, queue up 10 tasks that operate on 100 files each: make your task take a list of files rather than a single file as input, and then, when you loop through your list of files, loop 100 at a time.
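A hedged sketch of the batching idea (the broker URL, the task body, and the all_file_keys list are illustrative placeholders):

```python
from celery import Celery

app = Celery("tasks", broker="amqp://localhost")

def do_work(key):
    ...  # placeholder for the real per-file processing

@app.task
def process_files(file_keys):
    # One task handles a whole batch of S3 keys
    return [do_work(key) for key in file_keys]

def chunks(seq, n):
    for i in range(0, len(seq), n):
        yield seq[i:i + n]

# Instead of one .delay() per file, queue one task per 100-file batch:
all_file_keys = [...]  # your S3 listing goes here
results = [process_files.delay(batch) for batch in chunks(all_file_keys, 100)]
```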
1
0
0
0
I'm having a major problem in my celery + rabbitmq app: queuing up my jobs takes longer than the time my workers need to perform them. No matter how many machines I spin up, my queuing time will always overtake my task time. This is because I have one celery_client script on one machine doing all the queuing (calling task.delay()) sequentially; it iterates through a list of files stored in S3. How can I parallelize the queuing process? I imagine this is a widespread, basic problem, yet I cannot find a solution. EDIT: to clarify, I am calling task.delay() inside a for loop that iterates through a list of S3 files (a huge number of small files). I need to get the results back so I can return them to the client, so after the loop above I iterate through a list of results to see if each one is completed; if it is, I append it to a result file. One solution I can think of immediately is some kind of multithreading in my for loop, but I am not sure whether .delay() would work with that. Is there no built-in celery support for this problem? EDIT2, more details: I am using one queue in my celeryconfig; my tasks are all the same. EDIT3: I came across "chunking", where you can group a lot of small tasks into one big one. I'm not sure if this can help my problem: although I can transform a large number of small tasks into a small number of big ones, my for loop is still sequential. I could not find much information in the docs.
The time to queue tasks in celery bottlenecks my application - how to parallelize .delay()?
0
0
1
0
0
349
26,413,505
2014-10-16T20:39:00.000
0
1
0
0
0
python,c++,linux,signals,paramiko
0
26,413,679
0
1
0
false
0
0
The best way I can think of to do this is to run both of them under a web server. Use something like Windows Web Services for C++ or a native CGI implementation, and use that to signal the Python script. If that's not a possibility, you can use COM to create COM objects on both sides, one in Python and one in C++, to handle your IPC, but that gets messy with all the marshaling of types and such.
1
1
0
0
I'm working on developing a test automation framework. I need to start a process (a C++ application) on a remote Linux host from a Python script; I use the Python module paramiko for this. However, my C++ application takes some time to run and complete the task assigned to it, so until the application completes processing, I cannot close the connection to the paramiko client. I was thinking of something like "the C++ application executing a callback (or some kind of signalling mechanism) and informing the script on completion of the task". Is there a way I can achieve this? I'm new to Python, so any help would be much appreciated. Thanks! Update: Is it not possible to have an event.wait() and event.set() mechanism between the C++ application and the Python script? If yes, can somebody explain how it can be achieved? Thanks in advance!
Can a "C++ application signal python script on completion"?
1
0
1
0
0
331
26,442,403
2014-10-18T17:10:00.000
1
0
0
0
0
python,numpy,random-sample,normal-distribution,mixture-model
0
26,565,108
0
2
0
false
0
0
Since for sampling only the relative proportion of the distribution matters, the scaling prefactor can be thrown away. For a diagonal covariance matrix, one can just use the covariance submatrix and mean subvector restricted to the dimensions of the missing data. For a covariance with off-diagonal elements, the mean and standard deviation of the sampling Gaussian will need to be changed (conditioned on the observed values).
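A numpy sketch of the diagonal-covariance case (all numbers are illustrative; strictly, the component should be drawn from the posterior responsibilities given the observed entries rather than from the prior weights):

```python
import numpy as np

rng = np.random.default_rng(0)

weights = np.array([0.3, 0.7])             # mixture weights
means = np.array([[0.0, 1.0, 2.0],
                  [5.0, 4.0, 3.0]])        # component means
variances = np.array([[1.0, 0.5, 2.0],
                      [0.2, 1.0, 0.3]])    # diagonal covariances

x = np.array([0.1, np.nan, 1.8])           # one sample with a missing entry
missing = np.isnan(x)

k = rng.choice(len(weights), p=weights)    # pick a component
# Diagonal covariance => conditional equals marginal, so sample
# only the missing dimensions from that component's marginal.
x[missing] = rng.normal(means[k, missing], np.sqrt(variances[k, missing]))
```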
1
1
1
0
I want to sample only some elements of a vector from a sum of gaussians that is given by their means and covariance matrices. Specifically: I'm imputing data using gaussian mixture model (GMM). I'm using the following procedure and sklearn: impute with mean get means and covariances with GMM (for example 5 components) take one of the samples and sample only the missing values. the other values stay the same. repeat a few times There are two problems that I see with this. (A) how do I sample from the sum of gaussians, (B) how do I sample only part of the vector. I assume both can be solved at the same time. For (A), I can use rejection sampling or inverse transform sampling but I feel that there is a better way utilizing multivariate normal distribution generators in numpy. Or, some other efficient method. For (B), I just need to multiply the sampled variable by a gaussian that has known values from the sample as an argument. Right? I would prefer a solution in python but an algorithm or pseudocode would be sufficient.
Sampling parts of a vector from gaussian mixture model
0
0.099668
1
0
0
1,040
26,479,903
2014-10-21T06:04:00.000
1
0
0
0
0
python,web.py
0
26,498,875
0
1
0
false
1
0
Use web.data().
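A minimal web.py sketch (the URL pattern and handler name are illustrative):

```python
import web
import xml.etree.ElementTree as ET

urls = ("/endpoint", "Endpoint")

class Endpoint:
    def POST(self):
        raw = web.data()           # raw bytes of the request body
        root = ET.fromstring(raw)  # parse the text/xml payload
        return "received <%s>" % root.tag

if __name__ == "__main__":
    web.application(urls, globals()).run()
```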
1
0
0
0
My situation is: a server sends a request to me; the request's Content-Type is 'text/xml', and the request content is XML. First I need to get the request content, but when I use web.input() in the POST function I don't get any message; the result is just ''. I know web.py can get form data from a request, so how can I get the message from the request in the POST function when the Content-Type is 'text/xml'? Thanks!
web.py how to get message from request when the contentType is 'text/xml'
0
0.197375
1
0
1
86
26,479,928
2014-10-21T06:06:00.000
1
1
1
0
0
python
0
26,480,045
0
2
0
true
0
0
If you're doing from os import environ, then you'll reference it as environ. If you do import os, it's os.environ. So depending on your needs, the second option might be better. The first will look better and read easier, whereas the second avoids namespace pollution.
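The two styles side by side, for illustration:

```python
from os import environ  # referenced bare: environ
import os                # referenced qualified: os.environ

print(environ.get("HOME"))
print(os.environ.get("HOME"))
```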
1
0
0
0
In myModule.py I am importing environ from os, like from os import environ, since I am only using environ. But when I do dir(myModule) it shows environ as publicly visible. Should it be imported as protected instead, assuming some other project may also have its own environ function?
when importing functions from inside builtins like os or sys is it good practice to import as protected?
0
1.2
1
0
0
114
26,480,008
2014-10-21T06:12:00.000
2
0
0
0
0
python,sockets,flask,twisted,pythonanywhere
0
26,503,901
0
2
0
false
0
0
It depends on what sort of connection your clients need to make to the server. PythonAnywhere supports WSGI, which means "normal" HTTP request/response interactions: GET, POST, etc. That works well for "traditional" web pages or web apps. If your client side needs dynamic, two-way connections using non-HTTP protocols or raw sockets, or even websockets, PythonAnywhere doesn't support that at present.
1
3
0
1
I'm building a turn-based game and I'm hoping to implement client-server style networking. I really just need to send the position of a couple of objects and some other easily encodable data. I'm pretty new to networking, although I've coded some basic stuff in socket and twisted. Now, though, I need to be able to send the data to a computer that isn't on my local network, and I can't do port forwarding since I don't have admin access to the router and I'm also not totally sure that would do the trick anyways since I've never done it. So, I was thinking of running some Flask or Bottle or Django, etc. code off PythonAnywhere. The clients would then send data to the server code on PythonAnywhere, and when the turn passed, the other client would just go look up the information it needed on the server. I guess then the server would act as just a data bank with some simple getter and setter methods. My question is how can this be implemented? Can my Socket code on my client program talk to my Flask code on PythonAnywhere?
Using PythonAnywhere as a game server
0
0.197375
1
0
0
889
26,487,648
2014-10-21T13:14:00.000
1
0
0
1
1
python,django,django-chronograph
0
26,487,761
0
3
0
false
1
0
I would suggest configuring cron to run your command at specific times/intervals.
2
0
0
0
I need to run specific manage.py commands on an EC2 instance every X minutes, for example: python manage.py some_command. I have looked up django-chronograph. Following the instructions, I've added chronograph to my settings.py, but on runserver it keeps telling me No module named chronograph. Is there something I'm missing to get this running? And after that, how do I get manage.py commands to run using chronograph? Edit: It's installed in the EC2 instance's virtualenv.
Run specific django manage.py commands at intervals
0
0.066568
1
0
0
368
26,487,648
2014-10-21T13:14:00.000
0
0
0
1
1
python,django,django-chronograph
0
26,488,221
0
3
0
false
1
0
First, install it by running pip install django-chronograph.
2
0
0
0
I need to run specific manage.py commands on an EC2 instance every X minutes, for example: python manage.py some_command. I have looked up django-chronograph. Following the instructions, I've added chronograph to my settings.py, but on runserver it keeps telling me No module named chronograph. Is there something I'm missing to get this running? And after that, how do I get manage.py commands to run using chronograph? Edit: It's installed in the EC2 instance's virtualenv.
Run specific django manage.py commands at intervals
0
0
1
0
0
368
26,488,595
2014-10-21T14:00:00.000
0
0
1
0
0
python,64-bit,32bit-64bit,py2exe,32-bit
0
26,672,009
0
1
0
true
0
0
You should install 32-bit Python (in a separate directory; you can do it on the same machine). Install the 32-bit py2exe for this 32-bit Python installation, plus all the Python packages that you need. Then you can build a 32-bit executable.
1
1
0
0
I need to make a 32-bit exe file using py2exe. The problem is that my machine and Python are 64-bit. Is there some simple way to make a 32-bit exe using 64-bit Python and py2exe? I heard that I should uninstall py2exe and install the 32-bit py2exe; can this help me? EDIT: If the 32-bit py2exe works, can I install it next to my 64-bit py2exe?
32bit exe on 64bit Python using py2exe
0
1.2
1
0
0
803
26,503,035
2014-10-22T08:17:00.000
1
0
1
0
0
python,ruby,scripting,powershell-2.0
0
26,507,747
0
1
0
true
0
0
The best answer is 'write tests'. For purely syntactical checking plus some code-correctness checks, like calling a function which does not exist as you are describing, pylint is probably the best tool. Install it with pip install pylint.
1
0
0
0
This question applies to dynamically interpreted code, I guess. In detail: say I have a set of data processing projects that depend on a common module called tools. Down the road of development, I find out that I want to change the interface of one of the functions or methods in tools. This interface change might not be totally backwards compatible; it might break a subset of my data processing projects. If all the software involved had to be compiled, I could simply re-compile everything and the compiler would point me to the spots where I have to adapt the calling code to the new signature. But how can this be done in an interpreted setting? TL;DR: A set of script programs depends on a script module. After changing the interface of the module in a possibly not backwards-compatible way, how do I check the dependent programs and make them compliant with the new interface?
Checking all code paths in a project written in a scripting language for syntax-correctness
0
1.2
1
0
0
83
26,509,319
2014-10-22T14:02:00.000
2
0
0
0
0
python,arrays,numpy,svm,libsvm
0
26,509,674
0
1
0
true
0
0
The svmlight format is tailored to classification/regression problems. Therefore, the array X is a matrix with as many rows as data points in your set, and as many columns as features. y is the vector of instance labels. For example, suppose you have 1000 objects (images of bicycles and bananas, for example), featurized in 400 dimensions. X would be 1000x400, and y would be a 1000-vector with a 1 entry where there should be a bicycle, and a -1 entry where there should be a banana.
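A minimal sketch of dump_svmlight_file with the shapes from the bicycles/bananas example (random data stands in for real features):

```python
import numpy as np
from sklearn.datasets import dump_svmlight_file

rng = np.random.default_rng(0)
X = rng.random((1000, 400))         # 1000 objects, 400 features each
y = rng.choice([-1, 1], size=1000)  # +1 = bicycle, -1 = banana

dump_svmlight_file(X, y, "data.svmlight")  # writes "LABEL i:v i:v ..." lines
```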
1
1
1
0
I have a numpy array for an image and am trying to dump it into the libsvm format of LABEL I0:V0 I1:V1 I2:V2..IN:VN. I see that scikit-learn has a dump_svmlight_file and would like to use that if possible since it's optimized and stable. It takes parameters of X, y, and file output name. The values I'm thinking about would be: X - numpy array y - ???? file output name - self-explanatory Would this be a correct assumption for X? I'm very confused about what I should do for y though. It appears it needs to be a feature set of some kind. I don't know how I would go about obtaining that however. Thanks in advance for the help!
How to convert numpy array into libsvm format
0
1.2
1
0
0
2,922
26,526,365
2014-10-23T10:43:00.000
0
1
1
0
1
python,python-2.7,amazon-ec2,module,snappy
0
26,531,727
0
1
0
true
0
0
I just found python-snappy on GitHub and installed it from there. Not a permanent solution, but at least something.
1
0
0
0
I downloaded the Snappy library sources for working with compression, and everything was great on one machine, but it didn't work on another machine. They have identical hardware/OS configurations plus Python 2.7.3. All I was doing was ./configure && make && make install. There were zero errors during any of these steps, and it installed successfully to the default lib directory, but Python can't see it at all: help('modules') and pip freeze don't show snappy on the second machine, and as a result I can't import it. I even tried 'breaking' the structure and installing it to different lib directories, but even that didn't work. I don't think it's related to system environment variables, since Python should have exactly the same configuration on both of these machines (Amazon EC2). Does anyone know how to fix this issue?
Python cant see installed module
0
1.2
1
0
0
95
26,529,779
2014-10-23T13:56:00.000
3
0
0
0
0
doxygen,python-sphinx,documentation-generation,dymola
0
26,543,595
0
1
0
true
0
0
If you mean the Modelica model code, how does the HTML export in Dymola work for you? What's missing? If you mean the C code generated by Dymola, the source code generation option enables more comments in the code.
1
1
0
0
Since I could not find an answer to my question here or in other forums, I decided to ask the community: does anybody know if, and how, it is possible to set up automatic documentation generation for code generated with Dymola? The background is, e.g., that I want/need to store additional information within my model files to explain my modelling concepts, and to keep and retrieve the documentation directly from the model code, which I would later like to display conveniently, not only from within Dymola but also as HTML and LaTeX documentation. I know that several tools for automatic documentation generation exist, e.g. Doxygen and Python Sphinx, but I could not figure out whether they can be used with Dymola code. Plus, I am pretty new to this topic, so I do not really know how to find out if they will work. Thank you very much for your help! Greetings, mindm49907
Automatic documentation generation for Dymola code
1
1.2
1
1
0
274
26,575,729
2014-10-26T17:18:00.000
0
0
0
0
0
python,amazon-web-services,boto,emr,amazon-emr
0
72,334,675
0
2
0
false
0
0
A bootstrap script executes once, when the cluster starts (the first time, at the beginning). However, AWS provides SSH access to the master and the other nodes; there you can run shell scripts, install libs and packages, run Python programs, git clone your repo, etc. Hope this may be helpful. Amit
2
17
0
0
I have one EMR cluster which is running 24/7. I can't turn it off and launch a new one. What I would like to do is perform something like a bootstrap action on the already-running cluster, preferably using Python and boto or the AWS CLI. I can imagine doing this in 2 steps: 1) run the script on all the running instances (it would be nice if that were somehow possible, for example, from boto); 2) add the script to the bootstrap actions in case I'd like to resize the cluster. So my question is: is something like this possible using boto, or at least the AWS CLI? I am going through the documentation and the source code on GitHub, but I am not able to figure out how to add new "bootstrap" actions while the cluster is already running.
AWS EMR perform "bootstrap" script on all the already running machines in cluster
1
0
1
0
0
2,264
26,575,729
2014-10-26T17:18:00.000
6
0
0
0
0
python,amazon-web-services,boto,emr,amazon-emr
0
35,529,652
0
2
0
false
0
0
Late answer, but I'll give it a shot: that is going to be tough. You could install the Amazon SSM Agent and use its remote command interface to launch a command on all instances. However, you will have to assign the appropriate SSM roles to the instances, which will require rebuilding the cluster AFAIK; any future commands will not require rebuilding, though. You would then be able to use the CLI to run commands on all nodes (probably boto as well; I haven't checked that).
2
17
0
0
I have one EMR cluster which is running 24/7. I can't turn it off and launch a new one. What I would like to do is perform something like a bootstrap action on the already-running cluster, preferably using Python and boto or the AWS CLI. I can imagine doing this in 2 steps: 1) run the script on all the running instances (it would be nice if that were somehow possible, for example, from boto); 2) add the script to the bootstrap actions in case I'd like to resize the cluster. So my question is: is something like this possible using boto, or at least the AWS CLI? I am going through the documentation and the source code on GitHub, but I am not able to figure out how to add new "bootstrap" actions while the cluster is already running.
AWS EMR perform "bootstrap" script on all the already running machines in cluster
1
1
1
0
0
2,264
26,595,519
2014-10-27T19:39:00.000
0
0
0
0
0
python,eclipse,numpy,pydev
1
26,625,834
0
1
0
false
0
0
I recommend you either use the setup.py from the downloaded archive or, if you work on Windows anyway, download the "superpack" executable for Windows. In PyDev, I overcame problems with new libraries by using the auto-config button. If that doesn't work, another solution could be deleting and reconfiguring the Python interpreter.
1
1
1
0
Although I've been doing things with python by myself for a while now, I'm completely new to using python with external libraries. As a result, I seem to be having trouble getting numpy to work with PyDev. Right now I'm using PyDev in Eclipse, so I first tried to go to My Project > Properties > PyDev - PYTHONPATH > External Libraries > Add zip/jar/egg, similar to how I would add libraries in Eclipse. I then selected the numpy-1.9.0.zip file that I had downloaded. I tried importing numpy and using it, but I got the following error message in Eclipse: Undefined variable from import: array. I looked this up, and tried a few different things. I tried going into Window > Preferences > PyDev > Interpreters > Python Interpreters. I selected Python 3.4.0, then went to Forced Builtins > New, and entered "numpy". This had no effect, so I tried going back to Window > Preferences > PyDev > Interpreters > Python Interpreters, selecting Python 3.4.0, and then, under Libraries, choosing New Egg/Zip(s), then adding the numpy-1.9.0.zip file. This had no effect. I also tried the String Substitution Variables tab under Window > Preferences > PyDev > Interpreters > Python Interpreters (Python 3.4.0). This did nothing. Finally, I tried simply adding # @UndefinedVariable to the broken lines. When I ran it, it gave me the following error: ImportError: No module named 'numpy' What can I try to get this to work?
Using numpy with PyDev
0
0
1
0
0
2,141
26,617,865
2014-10-28T20:27:00.000
0
0
0
0
0
python,django,cas
0
26,826,648
0
1
0
false
1
0
It turns out django-cas handles the TGT using Django sessions. However, for validation of the service ticket, you have to manually make a validation request that includes the ST (service ticket) granted after login and the service being accessed.
1
0
0
0
I'm using CAS to provide authentication for a number of secure services in my stack. The authentication front-end is implemented using Django 1.6 and the django-cas module. However, I'm reading around and I don't seem to get information on how django-cas handles Ticket Granting Tickets and also validation of service tickets. Does anyone know how the aspects mentioned are handled by django-cas?
Django CAS and TGT(Ticket Granting Tickets) and service ticket validation
0
0
1
0
0
477
26,649,495
2014-10-30T09:38:00.000
-1
0
0
0
0
python,web.py
0
26,651,096
0
2
0
false
1
0
web.py runs CherryPy as the web server and it has support for handling requests with chunked transfer coding. Have you misread the documentation?
1
1
0
0
I am using web.py to run a server. I need to handle a request from a remote server; however, the request sends me data with chunked transfer coding. I can use web.ctx.env['wsgi.input'].read(1000) to get the data, but this is not what I need, since I don't know the length of the data (because it is chunked); and if I use web.ctx.env['wsgi.input'].read(), the server crashes. Can anybody tell me how to get the chunked data from a request?
Python: how to read 'Chunked Transfer Coding' from a request in web.py server
0
-0.099668
1
0
1
599
26,657,334
2014-10-30T15:39:00.000
0
0
0
0
1
python,numpy,scipy,windows64
1
62,499,396
0
14
0
false
0
0
Follow these steps: (1) open CMD as administrator; (2) cd to the Scripts directory of your Python installation, e.g. cd C:\Program Files\Python38\Scripts (write your own Python version instead of "38"); (3) download the wheel of the package you want and put it in the Python38\Scripts folder; (4) run pip install packagename.whl. Done.
2
31
1
0
I found out that it's impossible to install NumPy/SciPy via installers on Windows 64-bit; that's only possible on 32-bit. Because I need more memory than a 32-bit installation gives me, I need the 64-bit version of everything. I tried to install everything via pip and most things worked, but when I came to SciPy, it complained about a missing Fortran compiler, so I installed Fortran via MinGW/MSYS. But you can't install SciPy right away after that; you need to reinstall NumPy first. I tried that, but now it doesn't work anymore via pip or via easy_install. Both give these errors: there are a lot of errors about LNK2019 and LNK1120, plus a lot of C compiler errors (C2065, C2054, C2085, C2143, etc.), which I believe belong together; no Fortran linker is found, but I have no idea how to install that and can't find anything on it; and many more errors which have already scrolled out of the visible part of my cmd window... The fatal error is about LNK1120: build\lib.win-amd64-2.7\numpy\linalg\lapack_lite.pyd : fatal error LNK1120: 7 unresolved externals error: Setup script exited with error: Command "C:\Users\me\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\BLAS /LIBPATH:C:\Python27\libs /LIBPATH:C:\Python27\PCbuild\amd64 /LIBPATH:build\temp.win-amd64-2.7 lapack.lib blas.lib /EXPORT:initlapack_lite build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_litemodule.obj /OUT:build\lib.win-amd64-2.7\numpy\linalg\lapack_lite.pyd /IMPLIB:build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_lite.lib /MANIFESTFILE:build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_lite.pyd.manifest" failed with exit status 1120 What is the correct way to install the 64-bit versions of NumPy and SciPy on a 64-bit Windows machine? Did I miss anything? Do I need to specify something somewhere? There is no information for Windows on these problems that I can find, only for Linux or Mac OS X, but they don't help me, as I can't use their commands.
Installing NumPy and SciPy on 64-bit Windows (with Pip)
0
0
1
0
0
131,918
26,657,334
2014-10-30T15:39:00.000
0
0
0
0
1
python,numpy,scipy,windows64
1
44,685,941
0
14
0
false
0
0
For Python 3.6, the following worked for me: launch cmd.exe as administrator, then pip install numpy-1.13.0+mkl-cp36-cp36m-win32.whl and pip install scipy-0.19.1-cp36-cp36m-win32.whl (the .whl wheel files, downloaded beforehand).
2
31
1
0
I found out that it's impossible to install NumPy/SciPy via installers on Windows 64-bit; that's only possible on 32-bit. Because I need more memory than a 32-bit installation gives me, I need the 64-bit version of everything. I tried to install everything via pip and most things worked, but when I came to SciPy, it complained about a missing Fortran compiler, so I installed Fortran via MinGW/MSYS. But you can't install SciPy right away after that; you need to reinstall NumPy first. I tried that, but now it doesn't work anymore via pip or via easy_install. Both give these errors: there are a lot of errors about LNK2019 and LNK1120, plus a lot of C compiler errors (C2065, C2054, C2085, C2143, etc.), which I believe belong together; no Fortran linker is found, but I have no idea how to install that and can't find anything on it; and many more errors which have already scrolled out of the visible part of my cmd window... The fatal error is about LNK1120: build\lib.win-amd64-2.7\numpy\linalg\lapack_lite.pyd : fatal error LNK1120: 7 unresolved externals error: Setup script exited with error: Command "C:\Users\me\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\BLAS /LIBPATH:C:\Python27\libs /LIBPATH:C:\Python27\PCbuild\amd64 /LIBPATH:build\temp.win-amd64-2.7 lapack.lib blas.lib /EXPORT:initlapack_lite build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_litemodule.obj /OUT:build\lib.win-amd64-2.7\numpy\linalg\lapack_lite.pyd /IMPLIB:build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_lite.lib /MANIFESTFILE:build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_lite.pyd.manifest" failed with exit status 1120 What is the correct way to install the 64-bit versions of NumPy and SciPy on a 64-bit Windows machine? Did I miss anything? Do I need to specify something somewhere? There is no information for Windows on these problems that I can find, only for Linux or Mac OS X, but they don't help me, as I can't use their commands.
Installing NumPy and SciPy on 64-bit Windows (with Pip)
0
0
1
0
0
131,918
26,698,805
2014-11-02T11:30:00.000
0
0
1
0
0
python,c++,windows,boost-python
1
26,714,852
0
2
0
false
0
0
I just got an answer from one of my colleagues, who told me he had the exact same problem. The solution was indeed downloading and installing a version of vcredist_x86.exe, but the trick is to find exactly the right one. Apparently you can get to a page somewhere from which you can choose the right version. Sorry for not being able to give more exact information; I just have the file now and it works, but it doesn't even say the version number in the file name. This is all very obscure for my taste, but then I'm not a Windows guy.
1
1
0
0
I've downloaded pythonxy (2.7.6.1) on my new 64 bit Windows machine (Windows 7 Enterprise, SP1). When I try to run python, I get an error saying the side-by-side configuration was incorrect. WinPython 32 bit (2.7.6.3) shows the same behavior, WinPython 64 bit is fine. However, I badly need to compile Python modules with boost and found myself taking the first few steps into what I believe will be searching-the-internet/configuration/compilation hell for 64 bit, so I'd rather try to make the 32-bit python work, for which I have my whole MinGW procedure set up and working. Does anybody know what I need to do in order to fix the side-by-side error? Install some redristributable package or something like that?
32 bit python on 64 bit windows machine
0
0
1
0
0
1,442
26,700,204
2014-11-02T14:08:00.000
0
0
0
0
0
python,sockets,port,zeromq
0
26,716,452
0
2
0
false
0
0
Having read the details about zerorpc-python's current state, the safest option to solve the task would be to create a central lottery singleton that would, over REQ/REP, send the next free port number upon an instance's request. This approach is isolated from ZeroRPC-dev/mod(s) modifications of the otherwise stable ZeroMQ API, and it gives you full control over the port numbers pre-configured to be included in or excluded from the singleton's draws. The other way around would be to try to bypass the ZeroRPC layer and ask ZeroMQ directly for the next random port, but the ZeroRPC documentation discourages bypassing its own controls imposed on the (otherwise pure) ZeroMQ framework elements (which is quite reasonable to emphasise, as such a bypass erodes the consistency of the ZeroRPC layer's add-on operations and services, so it should rather be obeyed than challenged by trial and error...).
1
3
0
0
I am using ZeroRPC for a project, where there may be multiple instances running on the same machine. For this reason, I need to abe able to auto-assign unused port numbers. I know how to accomplish this with regular sockets using socket.bind(('', 0)) or with ZeroMQ using the bind_to_random_port method, but I cannot figure out how to do this with ZeroRPC. Since ZeroRPC is based on ZeroMQ, it must be possible. Any ideas?
ZeroRPC auto-assign free port number
1
0
1
0
1
346
26,722,127
2014-11-03T20:00:00.000
2
0
0
1
0
python,google-app-engine,google-bigquery,google-cloud-datastore
0
26,722,516
0
2
0
false
1
0
There is no full working example (as far as I know), but I believe the following process could help you: 1- You'd need to add a "last time changed" timestamp to your entities and keep it updated. 2- Every hour you can run a MapReduce job where your mapper has a filter to check the last-updated time and only picks up entities that were updated in the last hour. 3- Manually add what needs to be added to your backup. As I said, this is pretty high level, but the actual answer will require a bunch of code. I don't think it is suited to Stack Overflow's format, honestly.
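A rough sketch of steps 1 and 2 with an ndb model; the model and property names here are hypothetical:

    import datetime
    from google.appengine.ext import ndb

    class Record(ndb.Model):
        payload = ndb.StringProperty()
        updated_at = ndb.DateTimeProperty(auto_now=True)  # step 1: track changes

    def changed_last_hour():
        cutoff = datetime.datetime.utcnow() - datetime.timedelta(hours=1)
        return Record.query(Record.updated_at >= cutoff)  # step 2: only recent rows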
1
2
0
0
Currently, I'm using Google's 2-step method to back up the datastore and then import it to BigQuery. I also reviewed the code using pipeline. Both methods are inefficient and have high cost, since all data is imported every time. I need only to add the records added since the last import. What is the right way of doing it? Is there a working example of how to do it in Python?
Import Data Efficiently from Datastore to BigQuery every Hour - Python
0
0.197375
1
1
0
541
26,726,950
2014-11-04T02:49:00.000
-2
0
0
0
0
python,numpy,matrix,indexing,pygame
0
26,727,002
0
2
0
false
0
0
Set a bool that checks every turn whether someone has won. If it returns true, then whoever's turn it is has won. So, for instance, it is x's turn; he plays the winning move; the bool check returns true; you print out "(player whose turn it is) has won!" and end the game.
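A minimal sketch of the check itself with numpy, where board is the 3x3 0/1 matrix for one player (xPlayer or oPlayer):

    import numpy as np

    def has_won(board):
        rows = board.sum(axis=1)              # three row sums
        cols = board.sum(axis=0)              # three column sums
        diags = [np.trace(board),             # main diagonal
                 np.trace(np.fliplr(board))]  # anti-diagonal
        return 3 in rows or 3 in cols or 3 in diags

Call it with the current player's matrix after every move; a draw is when both matrices together fill the grid and neither check returns True.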
1
1
1
0
My assignment is Tic-Tac-Toe using pygame and numpy. I have almost all of the program done; I just need help understanding how to find whether there is a winner. A winner is found if the summation of ANY row, column, or diagonal is equal to 3. I have two 3x3 matrices filled with 0's; let's call them xPlayer and oPlayer. The matrices get a 1 every time player x or player o chooses a location. So if player x selects [0,0], the matrix location at [0,0] gets the value 1. This should continue until the summation of any row, column, or diagonal is 3. If all the places in both matrices are 1, then there is no winner. I need help finding the winner. I'm really new to Python, so I don't know much about indexing through a matrix. Any help would be greatly appreciated! EDIT: Basically, how do you find the summation of every row, column, and diagonal to check whether ANY of them are equal to 3?
Summation of every row, column and diagonal in a 3x3 matrix numpy
0
-0.197375
1
0
0
1,887
26,733,418
2014-11-04T10:57:00.000
2
0
0
0
0
python,algorithm,classification,extraction
0
29,713,740
0
1
0
false
0
0
One approach would be to take the RMS energy value of the signal as a parameter for classification. You should use a music segment rather than the whole music file for classification. Theoretically, the part of the track from 30 sec to 59 sec (i.e., starting after the first 30 seconds) is the best representative for genre classification. So instead of taking the whole array, you can consider the part which corresponds to this time window, 30sec-59sec. Calculate the RMS energy of the signal separately for every music file, averaged over the whole time. You may also take other features into account, e.g., MFCC. In order to use MFCC, you may go for the value averaged over all signal windows for a particular music file. Make a feature vector out of it. You may use the difference between the features as the distance between data points for classification.
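A small sketch of the window-averaging idea in numpy; mfcc_windows stands for the (n_windows, 12) array your extractor returns for one song:

    import numpy as np

    def song_vector(mfcc_windows):
        # collapse (n_windows, 12) into one fixed-length vector per song;
        # mean and standard deviation per coefficient keep some dynamics info
        mfcc_windows = np.asarray(mfcc_windows)
        return np.concatenate([mfcc_windows.mean(axis=0),
                               mfcc_windows.std(axis=0)])  # shape (24,)

These fixed-length vectors, one per song, are what you feed to k-NN.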
1
5
1
0
I'm developing a little tool which is able to classify musical genres. To do this, I would like to use a k-NN algorithm (or another one, but this one seems good enough) and I'm using python-yaafe for the feature extraction. My problem is that when I extract a feature from my song (example: MFCC), as my songs are sampled at 44100 Hz, I retrieve a lot of 12-value arrays (one per sample window), and I really don't know how to deal with that. Is there an approach to get just one representative value per feature and per song?
Processing musical genres using K-nn algorithm, how to deal with extracted feature?
0
0.379949
1
0
0
449
26,735,790
2014-11-04T13:00:00.000
0
1
0
0
0
python,pycharm,pytest
0
27,827,622
0
2
0
true
0
0
Answer to own question: I installed PyCharm again (for other reasons) and now it uses utrunner.py. It is much faster if I use Run 'Unittest test_foo', since now it does not collect all tests before running the test. Problem solved.
1
0
0
0
If I do Run Unittest ... test_foo in PyCharm, it takes quite a long time to run the test, since all tests get collected first. PyCharm uses py.test -k to run the test. Since we have more than 1000 tests, collecting them takes some time (about 1.2 seconds). Often the test itself needs less time to execute! Since I use this very often, I want to speed it up. Any idea how to get this done?
py.test -k: collecting tests takes too much time
0
1.2
1
0
0
1,778
26,743,435
2014-11-04T19:30:00.000
0
0
0
0
0
python,django
0
26,743,769
0
2
0
false
1
0
By others, do you mean someone on your local network or someone on the internet? On a local network it's very easy: instead of starting the local development server with python manage.py runserver, you can do python manage.py runserver 10.1.0.123:8000 (assuming 10.1.0.123 is your system's IP); then people on your local network can access http://10.1.0.123:8000 to see your site. If you want to show it to someone on the internet, then either host it on something like Heroku, or, as another cheap and quick method, configure your router to forward the specific port to your machine and give that person your public IP. This only applies if you have a router, as in a home dev setup. You can go to Google and just type "what is my ip" to get your public IP.
1
0
0
0
A bit of a beginner question. I've just started learning Django and can pretty much create basic stuff. Now when I want to access my website on my computer I just type in the local URL and I can access the site, other links, etc. If I want to show this to someone else, how would I do it? They wouldn't be able to just type in the local URL, so what would they need to do to access it? Also, if someone asks me to create an API for them, what exactly does that mean? I'm a beginner with web technologies so any help would be appreciated! Thanks.
Django site-external access
1
0
1
0
0
290
26,749,349
2014-11-05T03:43:00.000
2
0
1
0
0
python,linked-list
0
26,749,431
0
1
0
true
0
0
It sounds like you have some sort of hash to get a shortlist of possibilities. So: you hash your key to a small-ish number, e.g. 0-255 (as an example, it might hash to 63). You can then go directly to your data at index 63. Because you might have more than one item that hashes to 63, your entry for 63 will contain a list of (key, value) pairs, which you would have to search one by one; effectively, you've reduced your search area by 255/256ths of the full list. Optionally, when the collisions for a particular key exceed a threshold, you could repeat the process, so you get mydict[63][92], again reducing the problem size by the same factor. You could repeat this indefinitely.
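A toy version of that idea in Python, with ordinary lists standing in for the linked lists of your assignment:

    class BucketDict(object):
        def __init__(self, n_buckets=256):
            self.buckets = [[] for _ in range(n_buckets)]

        def _bucket(self, key):
            return self.buckets[hash(key) % len(self.buckets)]

        def __setitem__(self, key, value):
            bucket = self._bucket(key)
            for i, (k, _) in enumerate(bucket):
                if k == key:                  # key already present: overwrite
                    bucket[i] = (key, value)
                    return
            bucket.append((key, value))       # collision: just append the pair

        def __getitem__(self, key):
            for k, v in self._bucket(key):    # linear scan of the shortlist
                if k == key:
                    return v
            raise KeyError(key)

For the assignment you would replace each bucket list with your linked-list class; the lookup logic (walk the bucket, compare location[0], return location[1]) stays the same.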
1
0
0
0
My teacher wants us to recreate the dict class in Python using tuples and linkedlists (for collisions). One of the methods is used to return a value given a key. I know how to do this in a tuple ( find the key at location[0] and return location[1]) but I have no idea how I would do this in the case of a collision. Any suggestions? If more info is needed please let me know
Using tuples in a linked list in python
0
1.2
1
0
0
1,152
26,783,510
2014-11-06T15:44:00.000
0
0
0
0
0
python,google-app-engine,google-cloud-datastore
0
26,784,116
0
1
0
true
1
0
When you create your entity, do this: MyModel(id=emailAddress).put() Then use get_by_id: user = MyModel.get_by_id(emailAddress)
1
0
0
0
I define my NDB model in Python. I want to use the email address of a user as the key; how do I do that? The user passes in the email address through an HTML form. I have everything set up and working. I just don't know how to specify that the email address string is the key.
use email address as datastore key using python app engine
0
1.2
1
0
0
98
26,812,282
2014-11-08T00:25:00.000
2
0
1
0
0
python
1
26,812,299
0
4
0
false
0
0
os.path.expanduser will help you
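A two-line illustration of what that looks like in practice:

    import os

    path = os.path.expanduser("~/.file")   # "~" expands to e.g. /home/you
    with open(path) as f:
        data = f.read()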
1
0
0
0
I am trying to read a file in another directory. My current script is in path/to/dir. However, I want to read a file in ~/. I'm not sure how to do this. I tried f = open("~/.file") but I am getting an error IOError: [Errno 2] No such file or directory: '~/.file'
Reading a file in home directory from another directory
0
0.099668
1
0
0
134
26,831,593
2014-11-09T18:20:00.000
0
0
0
0
1
python,numpy,wxpython
0
26,832,062
0
1
0
true
0
1
You can use PyQt or PySide, but I would recommend pyqtgraph, which is excellent for this sort of thing. You can build your UI in PySide and use pyqtgraph to manage image output.
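A minimal sketch of showing a 16-bit array with pyqtgraph, assuming pyqtgraph plus a Qt binding are installed; the random data is just a stand-in for your image:

    import numpy as np
    import pyqtgraph as pg

    img = np.random.randint(0, 2 ** 16, size=(512, 512)).astype(np.uint16)
    app = pg.mkQApp()
    view = pg.image(img)   # ImageView: handles 16-bit data, levels, histogram
    app.exec_()

Because img stays a plain numpy array, modifying pixels is just array indexing; call view.setImage(img) again after editing to refresh the display.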
1
0
0
0
Guys, I am trying to find a Python GUI library that can show and process 16-bit greyscale images easily. I need to modify pixels. I have tried wxPython; it can show the images, but when I tried to convert a numpy array with single-channel 16-bit data to a string and load it into wxImage, it complained about an invalid buffer size. What's more, I tried to decode the first element of the data string in a wxImage instance that loaded the same image directly; its value wasn't equal to pixel (0,0) in the numpy array. Could someone tell me how wxPython formats its data string, or suggest a better GUI library that can fix this? I know wxImage normally formats its pixel data as RGB, but I just need grey images. And I need to create a sophisticated UI, so I think OpenCV can't meet my need.
Python GUI library to show 16bit greyscale image
0
1.2
1
0
0
377
26,837,554
2014-11-10T05:49:00.000
0
0
1
0
0
python
0
26,847,891
0
3
0
false
0
0
The modular and extensible solution is to put YamlParser in its own source file, and simply put the import yaml statement at the beginning. Any code which tries to import this code will fail if the required module yaml is missing.
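If you would rather keep everything in one file, a guarded import achieves the same effect; the class only comes into existence when yaml is importable:

    try:
        import yaml
    except ImportError:
        yaml = None

    if yaml is not None:
        class YamlParser(object):
            def parse(self, text):
                return yaml.safe_load(text)

Code elsewhere can then test whether YamlParser exists (e.g. with hasattr on the module) before using it.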
1
1
0
0
I have classes in a Python project that depend on an external packages. I would like these classes to be created only if their dependencies are available. For example, how can I have a class YamlParser which only exists if yaml can be imported?
create a class only if a package is available on the system
0
0
1
0
0
78
26,848,683
2014-11-10T16:44:00.000
0
0
1
0
0
python,sockets,socketserver
0
26,849,719
0
1
0
true
0
0
SocketServer will work for you. You create one SocketServer per port you want to listen on. Your choice is whether you have one listener that handles the client/server connection plus the per-file connections (you'd need some sort of header to tell the difference) or two listeners that separate the client/server connection from the per-file connections (you'd still need a header so that you know which file is coming in). Alternately, you could choose something like ZeroMQ, which provides a message transport for you.
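A bare-bones sketch of the threaded per-file listener (Python 2 module names, since that's what SocketServer implies; the port and the one-line name header are assumptions):

    import SocketServer

    class FileHandler(SocketServer.StreamRequestHandler):
        def handle(self):
            name = self.rfile.readline().strip()   # tiny header: the file name
            data = self.rfile.read()               # rest of the stream is the body
            print('received %d bytes for %s' % (len(data), name))

    server = SocketServer.ThreadingTCPServer(('', 9000), FileHandler)
    server.serve_forever()   # ThreadingMixIn gives each connection its own thread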
1
0
0
0
I've come to the realization where I need to change my design for a file synchronization program I am writing. Currently, my program goes as follows: 1) client connects to server (and is verified) 2) if the client is verified, create a thread and begin a loop using the socket the client connected with 3) if a file on the client or server changes, send the change through that socket (using select for asynchronous communication) My code sucks because I am torn between using one socket for file transfer or using a socket for each file transfer. Either case (in my opinion) will work, but for the first case I would have to create some sort of protocol to determine what bytes go where (some sort of header), and for the second case, I would have to create new sockets on a new thread (that do not need to be verified again), so that files can be sent on each thread without worrying about asynchronous transfer. I would prefer to do the second option, so I'm investigating using SocketServer. Would this kind of problem be solved with SocketServer.ThreadingTCPServer and SocketServer.ThreadingMixIn? I'm having trouble thinking about it because I would assume SocketServer.ThreadingMixIn works for newly connected clients, unless I somehow have an "outer" socket server which servers "inner" socket servers?
Can I use the Python SocketServer class for this implementation?
0
1.2
1
0
1
158
26,855,274
2014-11-11T00:01:00.000
0
0
1
0
1
python,utf-8
0
26,855,540
0
4
0
false
0
0
I got the answer: my console needed to be restarted. I use Spyder (from Python(x,y)) for development and this error occurred there, so beware. UPDATE: The Spyder console seems to suck, because to get it to work I had to use string.encode('latin1') and (now here's the catch) OPEN A NEW CONSOLE! If I try to reuse my already open console, special characters just won't work.
2
0
0
0
I'm working with Python version 2.7 and I need to know how to print UTF-8 characters. Can anyone help me? -> I already tried putting # -*- coding: iso-8859-1 -*- on top; -> using encode, like print "nome do seu chápa".encode('iso-8859-1'), also doesn't work; and even -> using print u"Nâo" doesn't work.
python 2.7 - how to print with utf-8 characters?
0
0
1
0
0
4,338
26,855,274
2014-11-11T00:01:00.000
0
0
1
0
1
python,utf-8
0
26,855,341
0
4
0
false
0
0
A more complete response. Strings have two types in Python 2, str and unicode. When using str, you are using bytes so you can write them directly to files like stdout. When using unicode, it has to be serialized or encoded to bytes before writing to files. So, what happens here? print "nome do seu chápa".encode('iso-8859-1') You have bytes but you try to encode them so Python 2 first decodes them behind your back and then encodes using the requested standard. This may work, if lucky, or produce gibberish. Now, when doing the following: print u"Nâo".encode('utf-8') You are telling Python 2 that you start with Unicode so then it will encode it without the problematic decode. Python 3 solved this nastiness.
2
0
0
0
I'm working with Python version 2.7 and I need to know how to print UTF-8 characters. Can anyone help me? -> I already tried putting # -*- coding: iso-8859-1 -*- on top; -> using encode, like print "nome do seu chápa".encode('iso-8859-1'), also doesn't work; and even -> using print u"Nâo" doesn't work.
python 2.7 - how to print with utf-8 characters?
0
0
1
0
0
4,338
26,881,673
2014-11-12T07:42:00.000
2
0
1
0
0
ipython,xshell
0
26,924,407
0
1
0
true
0
0
According to the response from XSHELL's technical support team, it seems Xshell does not support an interactive shell currently. My question to them: "How can I enter the interactive console of IPython in Xshell? In cmd on Windows, when I type "ipython", it brings me to the interactive console automatically. However, in Xshell, I've tried several commands like "ipython" and "ipython console", and none of them brought me to the interactive console of IPython." Their reply: "Xshell does not support interactive shell at this time. We are working on it."
1
1
0
0
I just began to use Xshell on Windows 7. It looks good, but how can I enter the interactive console of IPython in Xshell? In cmd on Windows, when I type "ipython", it brings me to the interactive console automatically. However, in Xshell, I've tried several commands like "ipython" and "ipython console", and none of them brought me to the interactive console of IPython. BTW, I'm using Xshell 5 (home/school edition) on Windows 7.
how to start iPython in xshell
0
1.2
1
0
0
262
26,889,459
2014-11-12T14:33:00.000
1
0
1
0
0
python,garbage-collection
0
26,889,994
0
1
0
true
0
0
It looks like gc.get_objects is what you want. Be careful using DEBUG_LEAK as it implies DEBUG_SAVEALL. DEBUG_SAVEALL causes all unreferenced objects to be saved in gc.garbage rather than freed. This means the number of objects tracked by the garbage collector can only increase. Additionally, gc.get_objects does not return all currently live objects, as some types are not tracked by the garbage collector (atomic types are not tracked). For instance, [i for i in range(1000)] will only increase the number of tracked objects by one, as the integers are not tracked. Whereas [[] for i in range(1000)] will increase the number of tracked objects by 1001.
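A small illustration of using it to watch for growth; leaky_step() is a placeholder name for one iteration of whatever work you suspect:

    import gc

    def tracked_count():
        return len(gc.get_objects())

    baseline = tracked_count()
    for i in range(10):
        leaky_step()                          # hypothetical unit of suspect work
        print(i, tracked_count() - baseline)  # steady growth hints at a leak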
1
1
0
0
I'm currently debugging a memory leak in a Python program. Since none of my objects have a __del__ method, I'm assuming the issue is some sort of global variable that continues to accumulate references to reachable objects. Also, when I run using gc.debug(gc.DEBUG_LEAK), I see a lot of gc: collectable messages, but no gc: uncollectable messages. To confirm this suspicion, I'd like to somehow get a count of the total number of reachable objects in my program, so I can confirm that it is steadily increasing. Is there any way to get this? I was looking at gc.get_count but this seems to give me the number of objects which were actually collected (separated by generation) rather than the number of live objects which are still reachable.
Get count of reachable live objects
0
1.2
1
0
0
492
26,915,644
2014-11-13T18:12:00.000
1
0
0
0
0
python,protocol-buffers
0
26,941,801
0
1
0
true
0
0
I wouldn't recommend trying to monkey-patch the encoding functions. You will almost certainly break something. What you can do is write an independent function which encodes a protobuf in an arbitrary way via reflection. For an example of this, see the text_format.py module in Protobufs itself. This module encodes and decodes messages in an alternative human-readable format entirely based on the public descriptor and reflection interfaces. You can follow the same pattern to write your own encoder/decoder for any format.
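To give the flavour of that pattern, here is a tiny reflection-based encoder; the name=value output format is purely illustrative, but ListFields and field.name are part of the public Python protobuf reflection API:

    def encode_custom(message):
        parts = []
        for field, value in message.ListFields():   # only fields that are set
            parts.append('%s=%r' % (field.name, value))
        return '\n'.join(parts)

A real encoder would recurse into nested message fields and emit bytes rather than repr() text, but the structure (walk the descriptors, emit your format) is the same as in text_format.py.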
1
0
0
0
I'm writing a python app that already has the protobuf messages defined. However, we need to use a custom wire format (I believe that's the correct term). How do you (in python) override the base encoding functions? I've looked in encoder.py and that's a maze of nested functors. So what do I need to monkey patch (or whatever) to specify my own encodings? Thanks.
how to use custom encoding/decoding with Google Protobuf
0
1.2
1
0
0
819
26,923,372
2014-11-14T05:15:00.000
0
0
0
0
1
python,data-mining,orange
0
28,112,745
0
1
0
false
0
0
In my case this error was due to an old version of numpy being installed prior to installing Orange on a Windows machine. The Orange installer did not upgrade numpy, even though some widgets require a newer version (the Discretize widget being one of them, and it then fails to load silently^). Uninstalling numpy and rerunning the Orange installer (which then installed a current version of numpy) fixed the issue. ^ Silently with respect to the interface; errors were shown in canvas.log, which can be found in C:\Users\[your_username]\AppData\Local\Orange Canvas
1
0
0
0
In ORANGE - I found that the Discretize widget is missing in my installation - where do I go to load it?
Missing widget - how do I load one (Discretize)
0
0
1
0
0
228
26,939,090
2014-11-14T21:22:00.000
0
0
0
1
0
python,windows,shell,unix
0
26,939,339
0
2
0
false
0
0
You would have to install a Unix Bash for Windows, but I'm not sure it works correctly... A better solution is to install or virtualize some Linux distribution.
1
0
0
0
I tried using some Unix command using the subprocess module on my Python interpreter installed on a windows 7 OS. However, it errors out saying command not found. It does recognize Windows commands though. Can I somehow use Unix commands in here too? Thanks, Vishal
How to run Unix commands in python installed on a windows machine?
0
0
1
0
0
1,284
26,948,469
2014-11-15T17:16:00.000
0
0
1
0
0
python,django,pycharm
0
27,120,852
0
1
0
false
1
0
(Using PyCharm 4.0) Go to Tools --> Deployment --> and either select an existing server connection or create a new one. Under the Mappings tab, you can associate a local path with a remote path. I keep my Django project folder mapped to the one on the server. Once you hit OK, go back to Tools --> Deployment, and you should see a bunch of options available to you that were greyed out before, such as: Upload to [server name], Download from [server name], Sync with Deployed to [server name], and Automatic Upload. I find Automatic Upload handy when I want the changes in newly saved Django files reflected on the server within seconds of hitting save.
1
1
0
0
I have an existing Django project on an AWS instance. I have copied my Python files to my local machine. PyCharm is configured to use the Python interpreter on the remote machine. The Django file references are, naturally, not resolved by PyCharm, since the Django files are not on the local machine. Is the usual procedure to copy all the Django files from the remote server to my local machine? Or is there another way to get PyCharm to know where to look to resolve the Django references?
With PyCharm, how do I reference django files when using remote AWS interpreter?
1
0
1
0
0
215
26,965,450
2014-11-17T04:17:00.000
2
0
1
1
0
python,macos,pip,easy-install
0
26,965,467
0
2
0
true
0
0
There's an easy way around it: use pip2, pip2.7, or pip-2.7 for Python 2, and pip3, pip3.4, or pip-3.4 for Python 3. Both versions ship with easy_install, but Python 2 does not contain pip by default; you have to install it yourself.
1
3
0
0
I am a non-programmer who started to learn Python. My Mac OS X Yosemite shipped with Python 2.7.6. I installed Python 3.4.2 too. If I use pip or easy_install in the terminal to install a package, how do I know which Python I installed the package in? It seems Python 3.4.2 shipped with pip and easy_install, but I think Python 2.7.6 may also have some version of pip or easy_install. I know my system can have both versions of Python, but can it have multiple versions of pip or easy_install?
Which version of Python did pip or easy_install refer to by default?
0
1.2
1
0
0
163
27,001,985
2014-11-18T19:04:00.000
0
0
1
0
0
python,python-3.x,import,module,anaconda
0
27,034,348
0
2
0
false
0
0
As a side note, If you want to keep this functionality and move to a more script-like environment I would suggest using something like Spyder IDE. It comes with an editor linked with the IPython console that supports all the same magics as the IPython notebook.
2
2
0
0
I'm interested in creating physics simulations with Python so I decided to download Anaconda. Inside of an IPython notebook I can use the pylab module to plot functions, for example, with ease. However, if I try to import pylab in a script outside of IPython, it won't work; Python claims that the pylab module doesn't exist. So how can I use Anaconda's modules outside of IPython?
Using Anaconda modules outside of IPython
0
0
1
0
0
238
27,001,985
2014-11-18T19:04:00.000
1
0
1
0
0
python,python-3.x,import,module,anaconda
0
27,002,473
0
2
0
false
0
0
I bet it will work if you use Anaconda's Python distribution. Try running ./anaconda/bin/python and importing it from that Python session.
2
2
0
0
I'm interested in creating physics simulations with Python so I decided to download Anaconda. Inside of an IPython notebook I can use the pylab module to plot functions, for example, with ease. However, if I try to import pylab in a script outside of IPython, it won't work; Python claims that the pylab module doesn't exist. So how can I use Anaconda's modules outside of IPython?
Using Anaconda modules outside of IPython
0
0.099668
1
0
0
238
27,014,098
2014-11-19T10:10:00.000
0
0
1
0
1
python,macos,button,crash,python-idle
0
45,559,740
0
2
0
false
0
0
I got a similar problem. Try the British PC keyboard layout; it works for me.
1
0
0
0
I am running OS X 10.9.5, and IDLE w/ Python 3.4.1. When I press the buttons for (¨/^) or (´/`), IDLE crashes and the program closes. This causes me to lose changes to files, as well as time. My fellow students using Mac experience the same problem. Anyone know how I can fix this?
IDLE crashes for certain buttons on Mac
0
0
1
0
0
3,590
27,021,617
2014-11-19T16:14:00.000
2
0
1
1
0
python,bash,fastq
0
27,021,713
0
2
0
false
0
0
You can try find . -name "*.fastq" | xargs your_bash_script.sh, which uses find to get all the files and applies your script to them.
1
0
0
0
I have a directory with 94 subdirectories, each containing one or two files *.fastq. I need to apply the same python command to each of these files and produce a new file qc_*.fastq. I know how to apply a bash script individually to each file, but I'm wondering if there is a way to write a bash script to apply the command to all the files at once
applying the same command to multiple files in multiple subdirectories
1
0.197375
1
0
0
115
27,031,912
2014-11-20T04:43:00.000
0
0
1
0
0
python,indentation
0
29,987,945
0
3
0
false
0
0
If you are using Eclipse IDE, there are two formatting options available that can be used to accomplish this, accessed from the Source Menu. a) "Source > Convert space-tabs to tabs" or b) "Source > Convert tabs to space-tabs" I was able to format code that had 2 spaces instead of a tab using the first option. You simply specify the number of spaces to be converted into a tab.
1
8
0
0
I am a beginner at Python and have made the mistake of mixing spaces and tabs for indentations. I see people use reindent.py, but I have no idea how to use it. please explain in the simplest way possible without trying to use too fancy words and dumb it down as best as possible as I am a beginner. Thanks.
HOWTO: Use reindent.py for dummies
1
0
1
0
0
17,316
27,034,666
2014-11-20T08:11:00.000
0
0
1
1
0
python,eclipse
0
27,035,599
1
3
0
false
1
0
Had this same problem a few days ago. You might have downloaded the wrong version of PyDev for your Python version (2.7.5 or something is my Python version, but I had downloaded PyDev for version 3.x.x). 1) Uninstall your current version of PyDev. 2) Install the correct version by using "Install New Software", then uncheck "Show only the latest versions of available software" (or whatever it is called). Then select the version that matches your Python version, and install :)
1
3
0
0
I have Java version 7 and had installed PyDev version 3.9 from the Eclipse Marketplace, but it's not showing up in New Project or in the Window perspective menu in Eclipse. Can someone please tell me what I need to do?
PyDev not appearing in Eclipse after install
0
0
1
0
0
7,516
27,053,070
2014-11-21T02:12:00.000
0
0
1
0
0
java,python
0
40,499,350
0
2
0
false
1
0
Use the add_library function: add_library("sound")
2
1
0
0
I'm using Processing in Python mode and I want to use the Processing sound library, but I don't know how to import it into my program in Python syntax. In Java it's: import processing.sound.*; Thanks
How to use java libraries in python processing
0
0
1
0
0
390
27,053,070
2014-11-21T02:12:00.000
0
0
1
0
0
java,python
0
28,146,057
0
2
0
false
1
0
You can use add_library(processing.sound). I used it with the G4P library.
2
1
0
0
I'm using Processing in Python mode and I want to use the Processing sound library, but I don't know how to import it into my program in Python syntax. In Java it's: import processing.sound.*; Thanks
How to use java libraries in python processing
0
0
1
0
0
390
27,066,366
2014-11-21T16:50:00.000
6
0
0
0
0
python,django,server
0
27,074,081
0
10
0
false
1
0
Well, it seems it's a bug that Django hasn't provided a command to stop the development server. I thought it had one before.
5
52
0
0
I use a Cloud server to test my django small project, I type in manage.py runserver and then I log out my cloud server, I can visit my site normally, but when I reload my cloud server, I don't know how to stop the development server, I had to kill the process to stop it, is there anyway to stop the development?
django development server, how to stop it when it run in background?
0
1
1
0
0
122,154
27,066,366
2014-11-21T16:50:00.000
3
0
0
0
0
python,django,server
0
27,066,460
0
10
0
false
1
0
As far as I know, Ctrl+C or killing the process are the only ways to do that on a remote machine. If you use the Gunicorn server or something similar, you will be able to do that using Supervisor.
5
52
0
0
I use a Cloud server to test my django small project, I type in manage.py runserver and then I log out my cloud server, I can visit my site normally, but when I reload my cloud server, I don't know how to stop the development server, I had to kill the process to stop it, is there anyway to stop the development?
django development server, how to stop it when it run in background?
0
0.059928
1
0
0
122,154
27,066,366
2014-11-21T16:50:00.000
1
0
0
0
0
python,django,server
0
50,886,343
0
10
0
false
1
0
From Task Manager you can end the Python tasks that are running. Then run python manage.py runserver from your project directory and it will work.
5
52
0
0
I use a Cloud server to test my django small project, I type in manage.py runserver and then I log out my cloud server, I can visit my site normally, but when I reload my cloud server, I don't know how to stop the development server, I had to kill the process to stop it, is there anyway to stop the development?
django development server, how to stop it when it run in background?
0
0.019997
1
0
0
122,154
27,066,366
2014-11-21T16:50:00.000
-3
0
0
0
0
python,django,server
0
53,480,387
0
10
0
false
1
0
You can quit the server by hitting Ctrl+Break.
5
52
0
0
I use a Cloud server to test my django small project, I type in manage.py runserver and then I log out my cloud server, I can visit my site normally, but when I reload my cloud server, I don't know how to stop the development server, I had to kill the process to stop it, is there anyway to stop the development?
django development server, how to stop it when it run in background?
0
-0.059928
1
0
0
122,154
27,066,366
2014-11-21T16:50:00.000
4
0
0
0
0
python,django,server
0
66,500,994
0
10
0
false
1
0
Ctrl+C should work. If it doesn't, Ctrl+\ (which sends SIGQUIT) will force-kill the process.
5
52
0
0
I use a Cloud server to test my django small project, I type in manage.py runserver and then I log out my cloud server, I can visit my site normally, but when I reload my cloud server, I don't know how to stop the development server, I had to kill the process to stop it, is there anyway to stop the development?
django development server, how to stop it when it run in background?
0
0.07983
1
0
0
122,154
27,081,314
2014-11-22T19:06:00.000
0
0
0
0
0
python,snmp,conceptual,mib,pysnmp
0
27,083,747
0
1
0
false
0
0
The Manager needs to know the variables to query for something specific. The variables can be identified by OIDs or by MIB object names. MIBs give the Manager information such as: human-friendly symbolic names associated with the OIDs; the types of values associated with particular OIDs; hints on variable access permissions implemented by the Agent; the structure and types of SNMP table indices; and references to other MIB objects (e.g. Notifications). If a MIB is available, the Manager is able to perform any SNMP operation knowing either the symbolic name or the OID of the Agent's variable it is interested in; all required details are gathered from the MIB. If a MIB is not available, the Manager still has to figure out more or less of those additional details (some are listed above), so they can be hardcoded into the Manager. For example, a GET operation can be performed with just an OID; however, without a MIB the Manager may have trouble making the response value look human-friendly. Another example is a SET operation, which requires the Manager to properly encode the value: its type can be looked up dynamically in the MIB, or hardcoded into the Manager for specific OIDs. More complex scenarios involve dynamically building OIDs (for addressing SNMP table entries) using the indices structure formally defined by the MIB. The purpose of the GETNEXT/GETBULK queries is to let the Manager be unaware of the exact set of OIDs provided by the Agent, so the Manager can iterate over the Agent's variables starting from a well-known OID (or even its prefix). One of the uses of this feature is SNMP table retrieval. MIBs are written in a subset of the ASN.1 language; unlike general ASN.1, MIBs are very specific to the SNMP domain. To use MIBs with pysnmp you need to pass ASN.1 MIBs to the build-pysnmp-mib shell script (from the pysnmp distribution), which invokes smidump and other tools to convert the ASN.1 MIBs into a collection of Python classes representing pysnmp-backed MIB objects.
1
0
0
0
I am fairly new to the SNMP protocol and was only introduced to it recently in my computer networking course. I understand how the manager sends Gets, Sets, GetNext, GetBulk and all that, and that it will catch Traps and such. One thing I don't entirely understand is the MIB. From what I gather, the MIB lives on an agent, and the Manager will query for the MIB tree. That is fine, although the Manager needs the OID to be able to query properly. One question I have is whether these are hardcoded or not: are the OIDs hardcoded in the manager? Other than that, I'm not sure how to build the MIB file; apparently there is some special file type that defines the MIB structure, and I don't really get how to use pysnmp to build that. I feel like I would run that on the agent side of things upon startup. Can somebody help clear up these conceptual issues for me?
Trouble grasping MIBs with PySNMP
0
0
1
0
0
329
27,081,784
2014-11-22T19:51:00.000
0
0
1
0
0
python,dictionary,statistics
0
27,081,959
0
4
0
false
0
0
Assuming you use SciPy to calculate the z-score and not manually:

    from scipy import stats

    d = {'keys': values, ...}
    dict_values = d.values()
    z = stats.zscore(dict_values)

This will return a NumPy array with your z-scores.
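If you also want the scores keyed the same way as the input, a small extension (the sample data here is made up):

    from scipy import stats

    d = {'a': 1.0, 'b': 2.0, 'c': 4.0}
    keys = list(d)                          # fix one ordering of the keys
    z = stats.zscore([d[k] for k in keys])
    z_by_key = dict(zip(keys, z))           # same keys, z-scored values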
1
1
0
0
I have a dictionary for which I want to convert all values to z-scores. Now I do know how to compute the z-score of an array, but have no idea how to do this for a dictionary. Does anybody have some tips? Thanks!
Python: Calculate all values of dictionary to z-scores
0
0
1
0
0
2,411
27,091,319
2014-11-23T16:30:00.000
0
0
0
0
0
ipython,classification,decision-tree,nearest-neighbor,cross-validation
0
27,095,449
0
1
0
false
0
0
I assume here that you mean the value of k that returns the lowest error in your wine quality model. I find that a good k can depend on your data. Sparse data might prefer a lower k, whereas larger datasets might work well with a larger k. In most of my work, a k between 5 and 10 has been quite good for problems with a large number of cases. Trial and error can at times be the best tool here, but it shouldn't take too long to see a trend in the modelling error. Hope this helps!
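If you want the 5-fold cross-validation itself automated, a hedged scikit-learn sketch (the synthetic X and y merely stand in for your wine-quality features and labels; in newer scikit-learn the import is sklearn.model_selection):

    import numpy as np
    from sklearn.grid_search import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X = np.random.rand(200, 11)             # placeholder feature matrix
    y = np.random.randint(0, 2, 200)        # placeholder labels

    knn = GridSearchCV(KNeighborsClassifier(),
                       {'n_neighbors': list(range(1, 31))}, cv=5)
    knn.fit(X, y)
    print(knn.best_params_)                 # k chosen by 5-fold CV

    tree = GridSearchCV(DecisionTreeClassifier(),
                        {'max_depth': list(range(1, 16))}, cv=5)
    tree.fit(X, y)
    print(tree.best_params_)                # tree depth chosen by 5-fold CV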
1
2
1
0
I am working on a UCI data set about wine quality. I have applied multiple classifiers and k-nearest neighbor is one of them. I was wondering if there is a way to find the exact value of k for nearest neighbor using 5-fold cross validation. And if yes, how do I apply that? And how can I get the depth of a decision tree using 5-fold CV? Thanks!
Using cross-validation to find the right value of k for the k-nearest-neighbor classifier
1
0
1
0
0
185
27,115,526
2014-11-24T22:41:00.000
0
0
1
1
1
python,linux,ipython,redhat
0
27,115,695
0
4
0
false
0
0
Your $PATH is fine, as you can run python without specifying the full path, aka /usr/bin/python. You get 2.6.6 in the IPython directory because it has a python executable in it named (wild guess) python. 2.7.5 is installed system-wide. To call 2.7.5 from the IPython dir, use the full path /usr/bin/python, or whatever which python points to. Try out Python virtualenv if you need two or more versions of Python on your system. Otherwise, having different versions lying around is not a good idea.
2
1
0
0
I have installed (or so I think) python 2.7.5. When I type "Python --version" I get python2.7.5 I've narrowed this down to: When I run "python" in a terminal in my /Home/UsrName/ directory it is version 2.7.5 However when I run "python" in a terminal in /Home/UserName/Downloads/Ipython directory I get 2.6.6 I went into the Ipython folder to run the Ipython Setup file. I think I need to add python27 to a system path so that when I am inside the /Home/UserName/Downloads/Ipython directory and run the install file Ipython knows I am using a required version of python. I am not sure how to add python27 to the system on redhat linux 6.5 (Also I am not even sure that this will fix it).
how to get python 2.7 into the system path on Redhat 6.5 Linux
0
0
1
0
0
18,214
27,115,526
2014-11-24T22:41:00.000
0
0
1
1
1
python,linux,ipython,redhat
0
27,116,633
0
4
0
false
0
0
I think I know what is happening - abarnert pointed out that the cwd (".") may be in your path which is why you get the local python when you're running in that directory. Because the cwd is not normally setup in the global bashrc file (/etc/bashrc) it's probably in your local ~/.bashrc or ~/.bash_profile. So edit those files and look for something like PATH=$PATH:. and remove that line. Then open a new window (or logout and log back in) to refresh the path setting and you should be OK.
2
1
0
0
I have installed (or so I think) python 2.7.5. When I type "Python --version" I get python2.7.5 I've narrowed this down to: When I run "python" in a terminal in my /Home/UsrName/ directory it is version 2.7.5 However when I run "python" in a terminal in /Home/UserName/Downloads/Ipython directory I get 2.6.6 I went into the Ipython folder to run the Ipython Setup file. I think I need to add python27 to a system path so that when I am inside the /Home/UserName/Downloads/Ipython directory and run the install file Ipython knows I am using a required version of python. I am not sure how to add python27 to the system on redhat linux 6.5 (Also I am not even sure that this will fix it).
how to get python 2.7 into the system path on Redhat 6.5 Linux
0
0
1
0
0
18,214
27,117,461
2014-11-25T01:47:00.000
2
0
1
1
0
python,file-io,cmd,notepad++
0
27,117,505
0
2
0
false
0
0
In the properties of the shortcut that you use to start Notepad++, you can change its working directory, to whichever directory you're more accustomed to starting from in Python. You can also begin your python program with the appropriate os.chdir() command.
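The os.chdir() idiom mentioned above, which makes bare file names resolve relative to the script itself no matter where the editor started:

    import os

    # jump to the directory containing this script, so open("file.txt")
    # behaves the same as when the script is launched from cmd in that folder
    os.chdir(os.path.dirname(os.path.abspath(__file__)))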
1
1
0
0
Not a major issue but just an annoyance I've come upon while doing class work. I have my Notepad++ set up to run Python code straight from Notepad++ but I've noticed when trying to access files I have to use the full path to the file even given the source text file is in the same folder as the Python program being run. However, when running my Python program through cmd I can just type in the specific file name sans the entire path. Does anyone have a short answer as to why this might be or maybe how to reconfigure Notepad++? Thanks in advance.
Python program needs full path in Notepad++
0
0.197375
1
0
0
510
27,135,428
2014-11-25T19:49:00.000
0
0
0
0
0
python,django,admin,manytomanyfield
0
27,135,554
0
4
0
false
1
0
The Django admin select menus are populated with the unicode value of each model instance. Whatever your __unicode__ method returns is what will appear in the select menu.
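A hedged sketch on a hypothetical Article model (swap in your real field names):

    from django.db import models

    class Article(models.Model):
        title = models.CharField(max_length=200)
        published = models.DateField()

        def __unicode__(self):               # what admin select menus display
            return u'%s (%s)' % (self.title, self.published)

On Python 3 / current Django the method to define is __str__ instead of __unicode__.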
1
2
0
0
I have "Articles" and "Modules" apps. Inside "Modules" app there is model which have to display articles and they're linked by ManyToManyField in "Modules". My question is how to modify text value in select field in Django admin? As default it displays names of articles, but i want also some information from Article model here. Is there a simple way to do that?
How to modify Django admin ManyToManyField text?
0
0
1
0
0
579
27,145,014
2014-11-26T09:05:00.000
0
0
1
0
0
javascript,python,processing
0
49,843,328
0
1
0
false
1
0
At the moment there is no straightforward way to do this, but some workarounds can be found. For example, since Processing's Python mode uses the Jython language (Python on the JVM), you could do the following: compile the Jython code to Java bytecode; decompile the Java bytecode to real Java code; then use processing.js to make the sketch run in a webpage. Of course, there is a chance that the generated Java code will not be 100% Processing code and thus will not be converted to JavaScript by the processing.js library.
1
4
0
0
The title says it all. I spent a lot of time designing a sketch in Processing using the Python language. Now, I would like to put the sketch on a webpage. Of course, I could just translate the sketch from python language to javascript and use a javascript library for processing. However, this would be a very lengthy process. As such, do you know if there is a way to integrate a python sketch in the website? If yes, how to do that? Thank you in advance!
Is there a way to integrate a processing sketch written in Python in a webpage?
1
0
1
0
0
144
27,170,958
2014-11-27T12:34:00.000
0
0
1
0
0
python,python-sphinx
0
59,568,068
0
3
0
false
0
0
I use :py:obj: instead. :py:attr: won't work for me when the property is in another page.
1
16
0
0
How can I reference a method, decorated with @property? For simple methods, :py:meth: is working fine, but not for properties: it does not create a link to them.
Sphinx documentation: how to reference a Python property?
0
0
1
0
0
6,984
27,177,721
2014-11-27T19:37:00.000
1
0
0
0
0
python-3.x,tf-idf,lda,topic-modeling,gensim
0
27,378,733
0
1
0
false
0
0
id2word must map each id (integer) to term (string). In other words, it must support id2word[123] == 'koala'. A plain Python dict is the easiest option.
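A minimal end-to-end sketch with made-up data; terms is your list of row labels and matrix your terms x documents tf-idf array:

    import numpy as np
    import gensim

    terms = ['koala', 'tree', 'eucalyptus']          # one entry per matrix row
    matrix = np.random.rand(3, 4)                    # stand-in tf-idf matrix

    id2word = dict(enumerate(terms))                 # id (int) -> term (str)
    corpus = gensim.matutils.Dense2Corpus(matrix)    # columns are documents
    lda = gensim.models.LdaModel(corpus, id2word=id2word, num_topics=2)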
1
3
1
0
I have a tf-idf matrix already, with rows for terms and columns for documents. Now I want to train a LDA model with the given terms-documents matrix. The first step seems to be using gensim.matutils.Dense2Corpus to convert the matrix into the corpus format. But how to construct the id2word parameter? I have the list of the terms (#terms==#rows) but I don't know the format of the dictionary so I cannot construct the dictionary from functions like gensim.corpora.Dictionary.load_from_text. Any suggestions? Thank you.
Training a LDA model with gensim from some external tf-idf matrix and term list
0
0.197375
1
0
0
505
27,192,467
2014-11-28T16:11:00.000
3
0
0
0
0
python,random,spatial,coordinate
0
27,192,613
0
1
0
true
0
1
There's a lot that's unspecified in your question, such as what distribution you want to use. For the sake of this answer, I'll assume a uniform distribution. The straightforward way to handle an arbitrary-volume uniform distribution is to choose three uniformly random numbers as coordinates in the range of the bounding rectilinear solid enclosing your volume, then check whether the chosen coordinate lies within the volume. If the coordinate is not within the volume, discard it and generate a new one. If this is not sufficient, due to its non-constant performance or whatever other reason, you'll need to constrain your problem (say, to only tetrahedra) and do a bunch of calculus to compute the necessary random distributions and model the dependencies between the axes. For example, you could start with the x axis and integrate the area of the intersection between the volume and the plane where x = t. This gives you a function p(x) which, when normalized, is the probability density function along the X axis. (If you want a nonuniform distribution, you need to put that in the integrated function too.) Then you need to do another set of integrals to determine p(y|x0), the probability distribution function on the Y axis given the chosen x coordinate, and finally p(z|x0,y0), the probability distribution function on the Z axis. Once you have all this, you use whatever random number algorithm you have to draw from these distributions: first choose x0 from p(x), then use that to choose y0 from p(y|x0), then use those to choose z0 from p(z|x0,y0), and you'll have your result (x0, y0, z0). There are various algorithms to determine whether a point is outside a volume, but a simple one could be: for each polygon face, compute its characteristic plane, then check that the random point lies in the "inside" half-space of that plane. To compute the planes: use the cross product to compute the plane normals; one vertex of the face and the plane normal are sufficient to define the plane; remember the right-hand rule and choose the points so that the plane normals consistently point into or out of the polyhedron. To check a half-space (the set of all points on one side of the plane): compute the vector from the plane vertex to the random point, then the dot product between the plane normal and this vector. If you defined the plane normals to point out of the polyhedron, then all dot products must be negative; if you defined them to point into the polyhedron, then all dot products must be positive. Note that you only have to recompute the characteristic planes when the volume moves, not for each random point. There are probably much better algorithms out there, and their discussion is outside the scope of this question and answer. This algorithm is what I could come up with without research, and is probably as good as a bubble sort.
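For the tetrahedron case specifically, a compact rejection sampler; instead of face planes it uses barycentric coordinates (a point is inside iff all four are non-negative), which is equivalent and needs one 3x3 solve per candidate:

    import numpy as np

    def random_point_in_tetrahedron(verts, rng=np.random):
        verts = np.asarray(verts, dtype=float)          # shape (4, 3)
        lo, hi = verts.min(axis=0), verts.max(axis=0)   # bounding box
        T = (verts[1:] - verts[0]).T                    # 3x3 edge matrix
        while True:
            p = lo + rng.rand(3) * (hi - lo)            # candidate in the box
            b = np.linalg.solve(T, p - verts[0])        # barycentric coords 1..3
            if b.min() >= 0 and b.sum() <= 1:           # coord 0 is 1 - sum(b)
                return p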
1
0
1
0
I would like to generate a uniformly random coordinate that is inside a convex bounding box defined by its (at least) 4 vertices (for the case of a tetrahedron). Can someone suggest an algorithm that I can use? Thanks! If a point is generated in a bounding box, how do you detect whether or not it is outside the geometry but inside the box?
Generate a random point in space (x, y, z) with a boundary
0
1.2
1
0
0
1,463
27,192,905
2014-11-28T16:39:00.000
1
0
1
0
0
ipython,ipython-notebook
0
27,229,741
0
3
0
true
0
0
I didn't find a way to get the previous cell's content from within a cell. But I found another solution: creating a custom notebook magic function to capture the cell content and work with that.
1
5
0
0
Is it possible, in an IPython Notebook cell, to get the previous (above) cell's content? I can see the previous output with the %capture magic function, but I can't find how to get the previous cell's content.
IPython Notebook previous cell content
1
1.2
1
0
0
3,415
27,193,835
2014-11-28T17:56:00.000
0
1
0
0
0
qpython
0
27,197,253
0
2
0
false
0
1
The comment from Yulia V got me thinking: maybe I just needed to append the location of "Scripts" to sys.path. And yep, that worked fine!! Thanks Yulia!
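For the record, that trick is just this (the exact scripts path varies by QPython version, so treat it as an assumption):

    import sys
    sys.path.append('/sdcard/com.hipipal.qpyplus/scripts')  # wherever bob.py lives
    import bob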
2
0
0
0
With QPython on Kindle fire .. I use QEdit to write & save a .py file .. say bob.py .. But when I switch to Console, I can't IMPORT from bob .. Can someone tell me how to do this? John (new to QPython)
How to import from saved QPython file?
0
0
1
0
0
1,136
27,193,835
2014-11-28T17:56:00.000
0
1
0
0
0
qpython
0
27,209,867
0
2
0
false
0
1
I think you can save the module file into the same directory where your script is located, or into /sdcard/com.hipipal.qpyplus/lib/python2.7/site-packages/.
2
0
0
0
With QPython on Kindle fire .. I use QEdit to write & save a .py file .. say bob.py .. But when I switch to Console, I can't IMPORT from bob .. Can someone tell me how to do this? John (new to QPython)
How to import from saved QPython file?
0
0
1
0
0
1,136
27,194,932
2014-11-28T19:32:00.000
2
1
1
0
0
python
0
30,282,342
0
2
0
false
0
0
You run your .py files just like you would when running the Python command line on Windows, e.g. python myfile.py. Open an SSH terminal from your Python dev box, type it in at the command line, and you're all set.
1
2
0
0
I just started with learning Python 3.4.x. I really want to keep learning and developing on all devices. That's why I'm using Codeanywhere. But the problem is I don't know how to execute a .py file in Codeanywhere. Is there a method to do it? Thanks
How do I run python in Codeanywhere?
0
0.197375
1
0
0
9,516
27,198,697
2014-11-29T03:58:00.000
1
0
1
0
0
python,linux,sockets,video,udp
0
27,199,731
0
1
0
true
0
0
If you are sending video (with audio) to a peer on the network, you would do better to use RTP (Real-time Transport Protocol), which works on top of UDP. RTP provides timestamps and profiles, which help you synchronize the audio and video sent through two ports.
1
0
0
0
I am doing a project on media (more like video conferencing). The problem is, although I am able to send text/string data from one peer to another, I am still not sure about video files. Using GStreamer, I am able to capture the video stream from my webcam, and after doing the encoding (H.264) I am able to write the video stream into an actual mp4 container directly using a file sink. Now my problem is that I am not sure how to read the video files, as they contain both audio and video streams, and convert them into a transport stream to transmit in packets (I am able to send a very small jpeg file, though). I am using the socket module and implementing UDP.
python - how to have video file(data) into list?
0
1.2
1
0
1
73
27,204,134
2014-11-29T15:58:00.000
0
1
0
0
0
python,serialization,raspberry-pi
0
64,522,252
0
3
0
false
0
0
It's the cable. Check the USB cable, after all of that yanking.
1
1
0
0
I'm making a program where the Pi gets serial input from a USB-serial board (from the SparkFun RFID starter kit). How can I make this work? Error: Traceback (most recent call last): File "main", line 22, in ser = s.Serial('ttyUSB0', 9600, timeout=10000) File "/usr/lib/python2.7/dist-packages/serial/serialutil.py", line 260, in init self.open() File "/usr/lib/python2.7/dist-packages/serial/serialposix.py", line 276, in open raise SerialException("could not open port %s: %s" % (self._port, msg)) serial.serialutil.SerialException: could not open port ttyUSB0: [Errno 2] No such file or directory: 'ttyUSB0' The RFID board is the ttyUSB0 device.
ttyUSB0 not found on Raspberry Pi
0
0
1
0
0
9,230
27,207,643
2014-11-29T22:02:00.000
1
0
0
0
0
python,performance,tkinter,syntax-highlighting,lexical-analysis
0
27,210,959
0
1
1
false
0
1
One possible answer is to do something like Idle does. As a user hits each key, its custom incremental parser tags identifiers that are a keyword, builtin, or def/class name*. It also tags delimited char sequences that are a string or comment. It does what can be done quickly enough. For example, if one types printer not in a string or comment, Idle checks whether the word is a keyword or builtin name after each key. After t is hit, print is tagged. After e (or any other identifier char) is entered, printe is untagged. I believe some of the code is in idlelib/Hyperparser.py and some in ColorDelegator.py. You are free to copy and adapt the code, but please do not use it directly, as the API may change. I presume the parser does the minimum needed according to the current state (after def/class, in an identifier, comment, string, etc.). Idle also has an re-based function to retag the entire file. I 'think' this is separate from the incremental colorizer, but I have not read all the relevant code. If one edits a long enough file, such as idlelib/EditorWindow.py (about 3000 lines), and changes the font size, Idle retags the file (I am not sure why). There is a noticeable delay between the file turning all black and its being recolorized. You definitely would not want that delay with each keystroke. *Class/function names with non-ascii chars are not yet properly recognized in 3.x, but should be; the patch is stuck on deciding the faster accurate method. Recognizing an ascii-only (2.x) identifier is trivial by comparison. PS: Am I correct in guessing that you are interested in tagging something other than Python code?
1
0
0
0
I'm trying to implement syntax highlighting for text in a Text widget. I use an external library to lexically analyse the text and give me the tokenised text. After that I go over all the words in the text and apply tags to their positions in the text widget so that I can style each word. My concern now is how to deal with changes. Every time the user presses a key, do I tokenise the entire text again and add style tags to the text widget for the entire text? This is proving to be quite slow. I then transitioned to only doing the highlighting for the line the insertion cursor is on, to make it faster, but this gives faulty results and the highlighting is no longer perfect. What would be an ideal compromise between fast and perfect? What is the best way to do this?
Efficiently applying text widget tags in tkinter text widgets
0
0.197375
1
0
0
228
27,208,926
2014-11-30T01:01:00.000
1
0
0
0
0
c#,.net,dll,ironpython
0
27,233,826
0
1
0
true
1
1
You can try replicating what "works for me". Create a solution containing: a Python project (IronPython) and a C# project. Add a reference to the desired Oracle library (Oracle.DataAccess.dll) to the C# project using the standard VS mechanism. The C# project should also contain a post-build step to copy the resulting dll and pdb into the place where the Python script can find it; in my case, the root of the Python project. Your Python project is selected as the Startup Project. I use Ctrl-F5 and F5 to start it. In both cases things work as expected. In debug mode I am able to set and hit breakpoints in Python and in the referenced C# module. I can see the Oracle library being loaded (Output window of the debugger). However: the stack traces are C#-only, and Visual Studio 2013 Update 4 together with PTVS 2.1 crashes on occasion when debugging.
1
0
0
0
I want to debug separate pieces of my application in Visual Studio 2012. I have a C# executable which works with Oracle.DataAccess.dll; it works fine. Within it, the IronPython runtime is invoked and works fine too. Within these IronPython modules, an object from the main C# application is invoked and works fine with the Oracle dll. If an IronPython script is invoked standalone, it works fine and uses the C# object fine as well; however, in this case the C# object doesn't see the Oracle dll. To debug IronPython scripts I have to create a separate Python solution, so I cannot configure my C# solution and don't have control of the C# references. The GAC has the right Oracle dll, but how do I tell the C# dll to use it? Vice versa, if I'm in the C# solution, where I can manage the references, then I cannot add py files and debug them. How can I configure VS to be able to run/debug my application with dual entry points, C# or IronPython, separately?
DLL loading (C#/IronPython/C#) in VS2012
0
1.2
1
0
0
390
27,217,051
2014-11-30T19:08:00.000
0
0
0
0
0
python,opengl,webgl,ipython,vispy
0
27,259,240
0
1
0
true
0
0
This looks like a good use-case for Vispy indeed. You'd need to use a PointVisual for the nodes and a LineVisual for the edges. Then you can update the edges in real time as the simulation is executed. The animation would also work in the IPython notebook with WebGL. Note that other graphics toolkits might also work for you (although you'd not necessarily have GPU acceleration through OpenGL) if you specify static positions for the nodes. I think you can fix the nodes' positions with d3.js or networkx instead of relying on an automatic layout algorithm.
1
0
1
0
So I have a particular task I need help with, but I was not sure how to do it. I have a model for the formation of ties between a fixed set of network nodes. So I want to set up a window or visualization that shows the set of all nodes on some sort of 2-dimensional or 3-dimensional grid. Then for each timestep, I want to update the visualization window with the latest set of ties between nodes. So I would start with a set of nodes positioned in space, and then with each timestep the visualization will gradually add the new edges. The challenge here is that I know in something like networkx, redrawing the network at each timestep won't work. Many of the common network display algorithms randomly place nodes so as to maximize the distance between them and better show the edges. So if I were to redraw the network at each timestep, the nodes would end up in different locations each time, and it would be hard to identify the pattern of network growth. That is why I want a set of static nodes, so I can see how the edges get added at each timestep. I am looking to visualize about 100 nodes at a time. So I will start with a small number of nodes, like 20 or so, and gradually build up to 100 nodes. After the model is validated, I would build up to 1000 or 2000 nodes. Of course it is hard to visualize a 1000- or 2000-node network; that is why I just want to make sure I can visualize the network when I have just 100 nodes in the simulation. I was not sure if I could do this in WebGL or something, or if there is a good way to do this in Python. I can use Vispy for communication between Python and WebGL if needed.
network animation with static nodes in python or even webgl
1
1.2
1
0
0
308
27,226,551
2014-12-01T10:39:00.000
1
1
0
1
0
python,perl,ubuntu,automation,perl-module
0
27,242,918
0
1
0
false
0
0
The main thing to understand is that each tab has a different instance of the terminal running, and more importantly a different instance of the shell (just thought I would mention it, as it didn't seem like you were clear about that from your choice of words). So "passing control" in such a scenario would most probably entail inter-process communication (IPC). That opens up a range of possibilities. You could, for example, have a Python/Perl script running in the target shell (tab) listening on a Unix socket for commands in the form of text, which the script can then execute. In Python, you have the modules subprocess (call, Popen) and os (exec*) for this. If you have to transfer control back to the calling process, then I would suggest using subprocess, as you would be able to send back return codes too. Switching between tabs is a different action and has no consequences for the calling/called processes. And you have already mentioned how you intend to do that.
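A hedged sketch of the listener half in Python; the socket path is a made-up rendezvous point, and the sender in the other tab just connects to it and writes a command string:

    import os
    import socket
    import subprocess

    SOCK = '/tmp/tab1.sock'                 # hypothetical rendezvous path

    if os.path.exists(SOCK):
        os.remove(SOCK)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCK)
    server.listen(1)

    while True:
        conn, _ = server.accept()
        cmd = conn.recv(4096).decode()
        rc = subprocess.call(cmd, shell=True)   # run it in this tab's shell
        conn.sendall(str(rc).encode())          # report the return code back
        conn.close()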
1
0
0
0
I am trying to automate a scenario in which I have a terminal window open with multiple tabs in it. I am able to migrate between the tabs, but my problem is how to pass control to another terminal tab while my Perl script runs in a different tab. Example: I have a terminal open with Tab1, Tab2, Tab3, Tab4 in the same terminal; I run the Perl script in Tab3 and I would want to pass some commands on to Tab1. Could you please tell me how I can do this? I use the GUI tool X11::GUITest to switch between tabs via keyboard shortcuts; any alternative suggestion is welcome. My ultimate aim is to pass control on to a different tab.
How do I pass control on to a different terminal tab using Perl?
0
0.197375
1
0
0
134
27,255,560
2014-12-02T17:36:00.000
1
0
0
0
0
python,machine-learning,language-features,feature-selection
0
27,256,151
0
1
0
true
0
0
You either need to under-sample the bigger class (take a small random sample to match the size of the smaller class), over-sample the smaller class (bootstrap sample), or use an algorithm that supports unbalanced data - and for that you'll need to read the documentation. You need to turn your words into a word vector. Columns are all the unique words in your corpus. Rows are the documents. Cell values are one of: whether the word appears in the document, the number of times it appears, the relative frequency of its appearance, or its TFIDF score. You can then have these columns along with your other non-word columns. Now you probably have more columns than rows, meaning you'll get a singularity with matrix-based algorithms, in which case you need something like SVM or Naive Bayes.
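To make the two points concrete — the class imbalance and the hybrid numeric-plus-text input — here is a hedged scikit-learn sketch; the toy data, the meaning of the numeric columns, and the choice of class_weight='balanced' are illustrative assumptions, not prescriptions from the answer:

# Sketch: tf-idf text features stacked next to numeric metadata,
# with class_weight compensating for the like/dislike imbalance.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

titles = ['intro to cryptography', 'cooking pasta', 'applied cryptography']
numeric = np.array([[18, 23, 1], [5, 2, 0], [20, 30, 1]])  # made-up metadata
labels = ['Like', 'Dislike', 'Like']

tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(titles)          # weighted terms, not just 0/1
X = hstack([csr_matrix(numeric), X_text])     # hybrid numeric + word columns

clf = LogisticRegression(class_weight='balanced')  # reweights the rare class
clf.fit(X, labels)
print(clf.predict(X))

Using tf-idf instead of a binary presence dictionary also addresses the questioner's worry about losing term weights.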
1
2
1
0
I have been trying to build a prediction model using a user's data. The model's input is documents' metadata (date published, title, etc.) and each document's label is that user's preference (like/dislike). I would like to ask some questions I have come across, hoping for some answers: There are way more liked documents than disliked ones. I read somewhere that if somebody trains a model using far more inputs of one label than the other, this hurts performance (the model tends to classify everything into the label/outcome that has the majority of inputs). Is it possible for the input to an ML algorithm, e.g. logistic regression, to be hybrid in terms of numbers and words, and how could that be done? Something like: input = [18,23,1,0,'cryptography'] with label = ['Like']. Also, can we use a vector (that represents a word, using tf-idf etc.) as an input feature (e.g. a 50-dimensional vector)? To construct a prediction model using textual data, is the only way to do so by deriving a dictionary out of every word mentioned in our documents and then constructing a binary input that dictates whether a term is mentioned or not? With such a version, though, we lose the weight of the term in the collection, right? Can we use something like a word2vec vector as a single input in a supervised learning model? Thank you for your time.
Training a Machine Learning predictor
1
1.2
1
0
0
232
27,259,478
2014-12-02T21:38:00.000
4
0
0
0
0
python,excel,openpyxl
1
27,280,801
0
1
0
true
0
0
In openpyxl cells are stored individually in a dictionary. This makes aggregate actions like deleting or adding columns or rows difficult, as the code has to process lots of individual cells. However, even moving to a tabular or matrix implementation is tricky, as the coordinates of each cell are stored on the cell itself, meaning that you have to process all cells to the right of and below an inserted or deleted cell. This is why we have not yet added any convenience methods for this: they could be really, really slow and we don't want the responsibility for that. We hope to move towards a matrix implementation in a future version, but there's still the problem of cell coordinates to deal with.
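Given that the library offered no delete method at the time, manually shifting cells up is the usual workaround. A hedged sketch follows — the file name and target cell are made up, and note that newer openpyxl releases do ship delete_rows/delete_cols, which should be preferred where available:

# Manual "delete cell and shift up" for openpyxl versions without
# delete_rows/delete_cols; copies each value one row upward.
from openpyxl import load_workbook

wb = load_workbook('example.xlsx')   # assumed file name
ws = wb.active

col, row = 2, 3   # delete cell B3, shifting the cells below it up
for r in range(row, ws.max_row):
    ws.cell(row=r, column=col).value = ws.cell(row=r + 1, column=col).value
ws.cell(row=ws.max_row, column=col).value = None  # clear the trailing cell

wb.save('example.xlsx')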
1
1
0
0
I'm trying to delete cells from an Excel spreadsheet using openpyxl. It seems like a pretty basic command, but I've looked around and can't find out how to do it. I can set their values to None, but they still exist as empty cells. worksheet.garbage_collect() throws an error saying that it's deprecated. I'm using the most recent version of openpyxl. Is there any way of just deleting an empty cell (as one would do in Excel), or do I have to manually shift all the cells up? Thanks.
Delete cells in Excel using Python 2.7 and openpyxl
0
1.2
1
1
0
2,658