Column types (observed ranges/lengths): Q_Id int64 (337 to 49.3M); CreationDate string (length 23); Users Score int64 (-42 to 1.15k); Other int64 (0 to 1); Python Basics and Environment int64 (0 to 1); System Administration and DevOps int64 (0 to 1); Tags string (length 6 to 105); A_Id int64 (518 to 72.5M); AnswerCount int64 (1 to 64); is_accepted bool (2 classes); Web Development int64 (0 to 1); GUI and Desktop Applications int64 (0 to 1); Answer string (length 6 to 11.6k); Available Count int64 (1 to 31); Q_Score int64 (0 to 6.79k); Data Science and Machine Learning int64 (0 to 1); Question string (length 15 to 29k); Title string (length 11 to 150); Score float64 (-1 to 1.2); Database and SQL int64 (0 to 1); Networking and APIs int64 (0 to 1); ViewCount int64 (8 to 6.81M)

Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
37,571,165 | 2016-06-01T14:11:00.000 | 0 | 0 | 0 | 0 | python,neural-network,tensorflow,data-science | 37,574,806 | 4 | false | 0 | 0 | To make your network emphasize certain elements of the input vector, you have to give it less information about the unimportant elements.
Try encoding the 285 less important numbers into one number (or any vector size you like) with a multilayer neural network, then use that encoding together with the other 4 numbers as the input to a second network.
Example:
v1=[1,2,3,..........285]
v2=[286,287,288,289]
v_out = Neural_network(input_vector=v1, neurons=[100, 1])  # 100 hidden units with one output.
v_final = Neural_network(input_vector=[v_out, v2], neurons=[100, 285])  # combine the encoding with the 4 critical numbers. | 1 | 5 | 1 | I looked around online but couldn't find anything, but I may well have missed a piece of literature on this. I am running a basic neural net on a 289-component vector to produce a 285-component vector. In my input, the last 4 pieces of data are critical for changing the rest of the input into the resultant 285 for the output. That is to say, the input is 285 + 4, such that the 4 morph the rest of the input into the output.
But when running a neural network on this, I am not sure how to reflect this. Would I need to use convolution on the rest of the input? I want my system to emphasize the 4 data points that critically affect the other 285. I am still new to all of this, so a few pointers would be great!
Again, if there is something already written on this, then that would be awesome too. | How can I make my neural network emphasize that some data is more important than the rest? | 0 | 0 | 0 | 1,982 |
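To make the two-stage idea in the answer above concrete, here is a minimal sketch (not from the original post) using the Keras functional API; the package choice, layer sizes, and activations are assumptions picked for illustration:

```python
# Hedged sketch: compress the 285 bulk inputs to one value, then merge
# that summary with the 4 critical inputs before predicting 285 outputs.
from keras.layers import Input, Dense, concatenate
from keras.models import Model

bulk = Input(shape=(285,))      # the 285 less important inputs
critical = Input(shape=(4,))    # the 4 critical inputs

summary = Dense(100, activation='relu')(bulk)
summary = Dense(1, activation='linear')(summary)   # single-value encoding

merged = concatenate([summary, critical])
hidden = Dense(100, activation='relu')(merged)
output = Dense(285, activation='linear')(hidden)

model = Model(inputs=[bulk, critical], outputs=output)
model.compile(optimizer='adam', loss='mse')
```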
37,575,270 | 2016-06-01T17:32:00.000 | 0 | 0 | 0 | 0 | python,scipy,kolmogorov-smirnov | 37,575,927 | 2 | false | 0 | 0 | The args argument must be a tuple, but it can hold a single value. You can do your test using ks_statistic, pvalue = scipy.stats.kstest(x, 't', (10,)), where 10 is the degrees of freedom. | 1 | 2 | 1 | I have data regarding metallicity in stars that I want to compare with a Student's t distribution. To do this I am running a Kolmogorov-Smirnov test using scipy.stats.kstest in Python:
KSstudentst = scipy.stats.kstest(data,"t",args=(a,b))
But I am unable to find what the arguments are supposed to be. I know that Student's t requires a degrees-of-freedom (df) parameter, but what is the other parameter? Also, which one of the two is the df parameter?
In the documentation for scipy.stats.t.cdf the inputs are the position at which the value is to be calculated and df, but in the KS test it makes no sense to provide a position. | What Arguments to use while doing a KS test in python with student's t distribution? | 0 | 0 | 0 | 3,626
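A short, hedged example of the call described in the answer above (the toy data and the df value of 10 are illustrative):

```python
# KS test of a sample against Student's t with 10 degrees of freedom;
# the single element of the args tuple is df.
import numpy as np
from scipy import stats

data = stats.t.rvs(10, size=500)            # toy sample for illustration
statistic, pvalue = stats.kstest(data, 't', args=(10,))
print(statistic, pvalue)
```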
37,577,819 | 2016-06-01T20:05:00.000 | 2 | 0 | 0 | 1 | python,subprocess | 37,577,952 | 1 | false | 0 | 0 | I think you should use communicate. The message warns you about performance issues with the default behaviour of the method. In fact, there's a buffer-size parameter to the Popen constructor that can be tuned to greatly improve performance for large data sizes.
I hope it will help :) | 1 | 1 | 0 | I am trying to run a subprocess in Python 3 and constantly read the output.
In the documentation for subprocess in Python 3 I see the following:
Popen.wait(timeout=None)
Wait for child process to terminate. Set and return returncode attribute.
Warning This will deadlock when using stdout=PIPE and/or stderr=PIPE
and the child process generates enough output to a pipe such that it
blocks waiting for the OS pipe buffer to accept more data. Use
communicate() to avoid that.
Which makes me think I should use communicate as the amount of data from stdout is quite large. However, reading the documentation again shows this:
Popen.communicate(input=None, timeout=None)...
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached.
Note The data read is buffered in memory, so do not use this method if the data size is large or
unlimited.
So again, it seems like there are problems with reading standard out from subprocesses this way. Can someone please tell me the best / safest way to run a subprocess and read all of its (potentially large amount of) stdout? | Should I use Popen's wait or communicate to read stdout in subprocess in Python 3? | 0.379949 | 0 | 0 | 1,281
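For completeness, a hedged middle-ground sketch (not from the answer above): stream stdout line by line so neither wait()'s deadlock nor communicate()'s memory warning applies. The command is illustrative.

```python
# Drain the child's pipe as it writes, without buffering everything.
import subprocess

proc = subprocess.Popen(['python', '-c', 'print("x" * 100)'],
                        stdout=subprocess.PIPE)
total = 0
for line in proc.stdout:      # reads incrementally as output arrives
    total += len(line)
proc.stdout.close()
proc.wait()                   # safe now: the pipe is already drained
print(total, proc.returncode)
```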
37,578,259 | 2016-06-01T20:32:00.000 | 0 | 0 | 0 | 0 | python,github,github3.py | 37,578,664 | 1 | true | 1 | 0 | github3.pull_request(owner, repo, number).as_dict()['head']['repo']['clone_url'] | 1 | 0 | 0 | I'm trying to get the clone URL of a pull request. For example in Ruby using the Octokit library, I can fetch it from the head and base like so, where pr is a PullRequest object: pr.head.repo.clone_url or pr.base.repo.clone_url.
How can I achieve the same thing using github3.py? | clone url of a pull request | 1.2 | 0 | 1 | 107 |
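A hedged sketch expanding the one-liner above (written against github3.py 1.x; older releases exposed to_json() instead of as_dict(), and owner/repo/number are placeholders):

```python
# Fetch a pull request anonymously and read both clone URLs.
import github3

pr = github3.pull_request('owner', 'repo', 42)
data = pr.as_dict()
head_clone = data['head']['repo']['clone_url']
base_clone = data['base']['repo']['clone_url']
print(head_clone, base_clone)
```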
37,578,763 | 2016-06-01T21:05:00.000 | 0 | 0 | 0 | 0 | python,mysql,orm,sqlalchemy,pyqt | 37,579,094 | 1 | true | 0 | 0 | QtSql is the SQL framework that comes with the Qt library. It provides basic (and classic) classes to access a database, execute queries, and fetch the results. Qt can be recompiled to support various DBMSs such as MySQL, Postgres, etc.
Sql Connector
I assume you are referring to MySQL Connector?
If so, it's a set of C++ classes to natively access a MySQL database.
The classes are almost the same as those in QtSql, but you don't have to carry Qt's wrapper layer. On the other hand, you cannot access any database other than MySQL.
Sql Alchemy
It's hard to briefly explain the complexity of an ORM.
Object-Relational Mapping. Wikipedia says:
"A technique for converting data between incompatible type systems in
object-oriented programming languages"
It's a good definition. It's basically a technique to map tables and query results onto object-oriented data structures.
For example, an ORM engine hides the process of explicitly mapping the fields of a table to an OO class.
In addition, it works the same whatever the database you're accessing (as long as the ORM knows the DBMS dialect).
For such a purpose, Python language and philosophy perfectly fits an ORM.
But an ORM such as SQLAlchemy is anything but an object-oriented database!
It has some limitations though.
If you need to make complex queries (and believe me, it often happens in specific contexts), it becomes a bit tricky to use it properly, and you might experience performance penalties.
If you just need to access a single table with hundreds of records, it won't be worth it, as the initialization process is a bit laborious.
Z. | 1 | 0 | 0 | So I have a database that stores a lot of information about many different objects; to simplify it, just imagine a database that stores information about the weights of 100 dogs and 100 cats over a period of a few years. I made a GUI, and I want one of the tabs to allow the user to enter a newly taken weight or change the past weight of a pet if it was wrong.
I created the GUI using Qt Designer. Before, I communicated with the database using SqlAlchemy. However, PyQt4 also offers QtSql, which I think is another ORM? Could someone explain to me how the ORM exactly works, and what the difference is between SqlAlchemy, QtSql, and even Sql Connector? | Which ORM to use for Python and MySql? | 1.2 | 1 | 0 | 2,414 |
37,580,165 | 2016-06-01T23:03:00.000 | 3 | 0 | 0 | 0 | python,json,node.js,mongodb,immutability | 38,043,469 | 2 | false | 1 | 0 | IMHO there is no known method to prevent updates inside Mongo.
While you can control your app's behavior, someone will still be able to make this update outside the app. Mongo doesn't have triggers, which in the SQL world can act as data guards and prevent field changes.
Since you're not using an ODM, all you can rely on is the CQRS pattern, which will let you control app behavior and prevent such updates. | 1 | 2 | 0 | Use case: I'm writing a backend using MongoDB (and Flask). At the moment this is not using any ORM like Mongoose/Mongothon. I'd like to store the _id of the user who created each document in the document itself, and I'd like it to be impossible to modify that field after creation. The backend currently allows arbitrary updates using (essentially) collection.update_one({"_id": oid}, {"$set": request.json})
I could filter out the _creator_id field from request.json (something like del request.json["_creator_id"]) but I'm concerned that doesn't cover all possible ways in which the syntax could be modified to cause the field to be updated (hmm, dot notation?). Ideally I'd like a way to make fields write-once in MongoDB itself, but failing that, some bulletproof way to prevent updates of a field in code. | How to make a field immutable after creation in MongoDB? | 0.291313 | 1 | 0 | 3,632 |
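One hedged way to implement the filtering idea from the question, covering dot-notation keys too (my sketch, with illustrative field names):

```python
# Strip protected fields from an update document before applying it;
# splitting on '.' also catches dotted paths like '_creator_id.sub'.
PROTECTED = {'_creator_id'}

def safe_update(collection, oid, update_doc):
    cleaned = {
        key: value for key, value in update_doc.items()
        if key.split('.', 1)[0] not in PROTECTED
    }
    # collection is a pymongo Collection; only cleaned keys get written
    collection.update_one({'_id': oid}, {'$set': cleaned})
```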
37,580,272 | 2016-06-01T23:14:00.000 | 1 | 0 | 0 | 0 | python,numpy | 37,580,756 | 4 | false | 0 | 0 | Maybe it is not the easiest, but a compact way is
from numpy import array
array([i for i in bin(5)[2:]]) == '1' | 1 | 2 | 1 | What's the easiest way to produce a numpy Boolean array representation of an integer? For example, map 6 to np.array([False, True, True], dtype=np.bool). | numpy Boolean array representation of an integer | 0.049958 | 0 | 0 | 653 |
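A hedged alternative that works on the integer's bits directly rather than its string form (note the question's example, [False, True, True] for 6, is least-significant-bit first, which this reproduces; the width of 3 is illustrative):

```python
# Shift n right by 0..width-1 and mask the low bit of each result.
import numpy as np

n, width = 6, 3
bits = ((n >> np.arange(width)) & 1).astype(bool)
print(bits)   # [False  True  True]
```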
37,581,880 | 2016-06-02T02:56:00.000 | 0 | 0 | 1 | 0 | python,graph-tool | 37,604,141 | 2 | false | 0 | 0 | You are probably using the system's Python, whereas graph-tool was installed for MacPorts' Python. You should call the interpreter corresponding to the MacPorts version, usually /usr/local/bin/python. | 1 | 1 | 0 | I spent an hour and a half installing the graph-tool package, and the installation declared that it was successful. But when I try to import it, it says "no module by name graph_tool...". I guess I am missing the path or link to this module. How do I link or import it?
Also, when I run the command "pip freeze", it does not show the graph_tool package as installed. Please help me resolve these problems. Thanks. | Unable to locate the installed graph-tool package in Python | 0 | 0 | 0 | 1,661
37,586,188 | 2016-06-02T08:11:00.000 | 0 | 0 | 0 | 0 | python,openerp,database-migration,odoo-8,talend | 37,652,714 | 1 | false | 1 | 0 | Here is my code
from openerp.osv import osv, fields
from openerp import SUPERUSER_ID
from openerp import netsvc

class generation(osv.osv):
    _name = "generation.models"
    _columns = {
        'visible': fields.boolean("Visible"),
    }
    _defaults = {
        'visible': False,
    }

    def get_stockmovedone(self, cr, uid, ids, context=None):
        # Ids of all stock moves already in state 'done', oldest first.
        cr.execute("SELECT id FROM stock_move WHERE state = 'done' "
                   "ORDER BY date ASC")
        return [row[0] for row in cr.fetchall()]

    def get_id_wkf_workitem(self, cr, uid, ids, context=None):
        # Workflow work items attached to the instances of those moves.
        cr.execute("SELECT DISTINCT(wkf_workitem.id) FROM wkf_workitem "
                   "WHERE inst_id IN ("
                   "    SELECT DISTINCT(wkf_instance.id) FROM stock_move "
                   "    INNER JOIN wkf_instance "
                   "        ON stock_move.id = wkf_instance.res_id "
                   "    WHERE stock_move.state = 'done' "
                   "      AND wkf_instance.res_type = 'stock.move')")
        return [row[0] for row in cr.fetchall()]

    def get_id_wkf_inst(self, cr, uid, ids, context=None):
        # Workflow instances of the done stock moves.
        cr.execute("SELECT DISTINCT(wkf_instance.id) FROM stock_move "
                   "INNER JOIN wkf_instance "
                   "    ON stock_move.id = wkf_instance.res_id "
                   "WHERE stock_move.state = 'done' "
                   "  AND wkf_instance.res_type = 'stock.move'")
        return [row[0] for row in cr.fetchall()]

    def update_leisyah(self, cr, uid, ids, context=None):
        liste1 = self.get_id_wkf_inst(cr, uid, ids, context)
        liste2 = self.get_id_wkf_workitem(cr, uid, ids, context)
        liste_stockmove = self.get_stockmovedone(cr, uid, ids, context)
        # Reset the moves and their workflow records, then replay the
        # 'action_done' signal so the quants get (re)created.
        for t in liste_stockmove:
            cr.execute("UPDATE stock_move SET state = 'assigned' "
                       "WHERE id = %s", (t,))
        for i in liste1:
            cr.execute("UPDATE wkf_instance SET state = 'active' "
                       "WHERE id = %s", (i,))
        for j in liste2:
            cr.execute("UPDATE wkf_workitem SET act_id = 62 "
                       "WHERE id = %s", (j,))
        for r in liste_stockmove:
            netsvc.LocalService("workflow").trg_validate(
                SUPERUSER_ID, 'stock.move', r, 'action_done', cr)
        gener_obj = self.pool.get('generation.models')
        gener_obj.write(cr, uid, [ids[0]], {'visible': True},
                        context=context) | 1 | 0 | 0 | How to fill stock.quant and stock_quants_move_rel from stock_move when migrating OpenERP 7 to Odoo 8?
I'm trying to reprocess stock moves in state 'done' to fill the stock quants, but I have a lot of data in stock_moves. | Migrate Stock_quants using stock move openerp7 odoo 8 | 0 | 0 | 0 | 156
37,592,297 | 2016-06-02T12:46:00.000 | 4 | 0 | 0 | 0 | python,django | 37,592,383 | 1 | false | 1 | 0 | Instead of "guessing" the table name using the default convention you should use Model.objects.model._meta.db_table to get the real name.
A model can override the default table-name convention, and that would break your code's reusability... | 1 | 0 | 0 | I need to execute some custom raw SQL in Django (1.9). Since tables in Django are prefixed with the app name, I need to retrieve the app name. I want to use the same code in different apps later on, so I would like to get the app name in a soft-coded way, just given the file the code resides in. What would be the best way to do this? | Django get app name from file | 0.664037 | 0 | 0 | 463
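A hedged sketch of reading both pieces of metadata from a model (MyModel and myapp are placeholders):

```python
# _meta carries the real table name and app label, honoring any
# db_table override on the model.
from myapp.models import MyModel   # hypothetical app and model

table_name = MyModel._meta.db_table   # e.g. 'myapp_mymodel'
app_label = MyModel._meta.app_label   # e.g. 'myapp'
```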
37,592,580 | 2016-06-02T12:58:00.000 | 1 | 0 | 0 | 0 | python,c++,qt,qt4,pyqt4 | 37,598,213 | 1 | true | 0 | 1 | Is there a better way to make the hierarchy of widgets, my current one seems excessive (Scene > Widget > Layout > Widgets)
When I use the QGraphics... classes, I generally subclass QGraphicsView. The view will create its own scene and have convenience methods for creating all the child items as well. From an API standpoint, only the view widget is really exposed.
What's the best way to make different sized widgets the same size (so they would all appear the same size in the grid and not overlap)?
It depends what types of widgets they are. If they are images/pixmaps, you can just scale them to a certain size. If they are actual widgets with controls, you probably don't want to scale them and should just set their actual height/width using setGeometry. Otherwise, all QGraphicsItem's support scaling.
My main QGraphicsWidget (on which the grid is displayed) doesn't take up the whole width of the view, what is the best way to achieve this?
On the view, you can get the .viewport() size, then just set the size of your graphics widget to the size of the viewport. You'll have to override the resizeEvent on the QGraphicsView to also resize your QGraphicsWidgets whenever the view resizes.
How would I go about making my QGraphicsGrid "responsive"?
For this, you'd probably be better off not using a grid and computing the placement of each item in your graphics scene manually. It depends somewhat on how you want your items placed. Are some widgets supposed to be below or next to others? Do you just want a tightly packed grid? Can an item span two grid lines? Does everything have to fit on a single "page", or is vertical scrolling allowed. You'll have to answer these questions first before you get any good answers. But generally, you know the size of the viewport. If you have a list of graphics items, you can just iterate through them and set their position based off the items you've already placed. Again, override the resizeEvent on the graphics view so that you can re-compute item positions whenever the view resizes. | 1 | 0 | 0 | I am trying to make a grid on a QGraphicsView and QGraphicsScene with custom QGraphicsWidgets, but I am not sure how the best way to do this would be. I'm working with PyQt4, but this is a general Qt question.
My current implementation contains the following. One QGraphicsScene (with view) and one QGraphicsWidget that contains the QGraphicsGridLayout onto which I insert my custom widgets.
The problem I'm having is that each of these custom widgets has a different size; they overlap, and I'm not sure how to change the size of each widget separately. Also, the grid needs to be "responsive", i.e. if the widget is at a certain width there should be 3 columns, but if the widget is smaller, only 2 columns should be displayed.
I've read this may be solvable by implementing the size hints, but I haven't really found any good documentation on that topic.
So my questions are:
Is there a better way to make the hierarchy of widgets, my current one seems excessive (Scene > Widget > Layout > Widgets)
What's the best way to make different sized widgets the same size (so they would all appear the same size in the grid and not overlap)?
My main QGraphicsWidget (on which the grid is displayed) doesn't take up the whole width of the view, what is the best way to achieve this?
How would I go about making my QGraphicsGrid "responsive"? | Adjusting widget size in QGraphicsGridLayout | 1.2 | 0 | 0 | 464 |
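A hedged PyQt4 sketch (not from the answer above) of a view that owns its scene and keeps a top-level QGraphicsWidget sized to the viewport; class and attribute names are illustrative:

```python
from PyQt4 import QtGui

class GridView(QtGui.QGraphicsView):
    def __init__(self, parent=None):
        super(GridView, self).__init__(parent)
        self.setScene(QtGui.QGraphicsScene(self))
        self.container = QtGui.QGraphicsWidget()
        self.scene().addItem(self.container)

    def resizeEvent(self, event):
        super(GridView, self).resizeEvent(event)
        size = self.viewport().size()
        # Match the container to the visible viewport, then re-run
        # whatever layout logic positions the child widgets.
        self.container.resize(size.width(), size.height())
```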
37,592,608 | 2016-06-02T13:00:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,scikit-learn,rapidminer | 37,593,003 | 2 | true | 0 | 0 | Practically, I would say no - just train your model in sklearn from the beginning if that's where you want it.
Your RapidMiner model is some kind of object. The two formats you are exporting as are just storage methods. Sklearn models are a different kind of object. You can't directly save one and load it into the other. A similar example would be to ask if you can take an airplane engine and load it into a train.
To do what you're asking, you'll need to take the underlying data that your classifier saved, find the format, and then figure out a way to get it into the same format as a sklearn classifier. This depends on what type of classifier you have. For example, if you're using a Bayesian model, you could somehow capture the prior probabilities and then use those, but this isn't trivial. | 1 | 1 | 1 | I have trained a classifier model using RapidMiner after trying a lot of algorithms and evaluating it on my dataset.
I also exported the model from RapidMiner as XML and pkl files, but I can't read them in my Python program (scikit-learn).
Is there any way to import RapidMiner classifier/model in a python program and use it to predict or classify new data in my end application? | Can I export RapidMiner model to integrate with python? | 1.2 | 0 | 0 | 2,438 |
37,593,092 | 2016-06-02T13:20:00.000 | 1 | 0 | 1 | 1 | python,debian,anaconda | 37,594,156 | 2 | true | 0 | 0 | My question is: will programs that depend on the python command and
expect python2 work correctly?
Those programs should use full path of the python binary. Something like /usr/bin/python, and so $PATH is irrelevant. As long as you don't change /usr/bin/python, nothing will break.
If you remove the stuff that Anaconda has added, it's likely that Anaconda will not work properly. | 1 | 0 | 0 | I have installed Anaconda3 just now, and I noticed that now, when I run python command from terminal, Python 3.5.1 |Anaconda 4.0.0 (64-bit)| is starting. Anaconda installer had added path to anaconda dir in $PATH and there is symlink from python to python3.5
My question is: will programs that depend on the python command and expect python2 work correctly, or should I remove the python symlink from the anaconda dir? | Is it safe to set python bin in $PATH to another python version? | 1.2 | 0 | 0 | 55
37,593,483 | 2016-06-02T13:36:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 37,594,200 | 2 | false | 0 | 0 | Run pip --version and check the Python version, then start IDLE and check its Python version, and see whether they match.
If they don't, use an explicit pipX.Y matching IDLE, or start IDLE from the Python matching plain pip. | 2 | 0 | 0 | New to Python: I installed Python 3.5 32-bit on Windows 10 and used pip to install a module:
C:\Users\Lopez\Anaconda3\Scripts>pip install ystockquote
Requirement already satisfied (use --upgrade to upgrade): ystockquote in c:\users\lopez\anaconda3\lib\site-packages
You are using pip version 7.1.2, however version 8.1.2 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
When I go to IDLE and try the following, I get a "no module named" error:
import ystockquote
Traceback (most recent call last):
File "", line 1, in
import ystockquote
ImportError: No module named 'ystockquote'
Not sure what I am missing; I guess it's path-related, but I'd appreciate any feedback.
thanks, Juan | python 3.5 32 bit windows import module fails but pip install worked | 0 | 0 | 0 | 745 |
37,593,483 | 2016-06-02T13:36:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 37,594,282 | 2 | true | 0 | 0 | Thanks for your help, guys.
I believe the issues were with adding paths to my environment variables, not the version of pip. I managed to get this working using the Windows pip install once I installed Python 3 in a custom directory, C:\Python35, rather than the default long Windows-suggested path.
Instructions
In the Control Panel, search for Environment; click Edit the System Environment Variables. Then click the Environment Variables button.
In the User Variables section, we will need to either edit an existing PATH variable or create one. If you are creating one, make PATH the variable name and add the following directories to the variable values section as shown, separated by a semicolon. If you’re editing an existing PATH, the values are presented on separate lines in the edit dialog. Click New and add one directory per line.
C:\Python35-32;C:\Python35-32\Lib\site-packages\;C:\Python35-32\Scripts\ | 2 | 0 | 0 | new to python installed python 3.5 32-bit on windows 10, used pip to install a module
C:\Users\Lopez\Anaconda3\Scripts>pip install ystockquote
Requirement already satisfied (use --upgrade to upgrade): ystockquote in c:\users\lopez\anaconda3\lib\site-packages
You are using pip version 7.1.2, however version 8.1.2 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
When I go to IDLE and try the following, I get a "no module named" error:
import ystockquote
Traceback (most recent call last):
File "", line 1, in
import ystockquote
ImportError: No module named 'ystockquote'
Not sure what I am missing; I guess it's path-related, but I'd appreciate any feedback.
thanks, Juan | python 3.5 32 bit windows import module fails but pip install worked | 1.2 | 0 | 0 | 745 |
37,594,983 | 2016-06-02T14:40:00.000 | 2 | 0 | 0 | 0 | python,openerp | 37,606,530 | 1 | true | 1 | 0 | Yes, it's a very interesting point, and no one can simply predict which module's method is called first, because Odoo manages a hierarchical structure for dependencies.
The calling pattern comes into the picture only when the method is
called on the object (manually from code). If the write method is called from the UI
(meaning a Sales Order is edited from the UI), then it will call each write method
written for that model, no matter which module it is in, and the
sequence is LAST WRITTEN, CALLED FIRST (but only when the method is
called from the UI).
So in your case Custom Module 1 and Custom Module 2 will be on same level and both have the same parent Sale Order.
Sales Order => Custom Module 1 (Write method override)
Sales Order => Custom Module 2 (Write method override)
So when the write method is called manually from code, it
gives priority to the local module first, and then it calls the super
method.
In that case, suppose write is called from Module 1; it is possible that the write method of Module 2 will be ignored, because Module 1 and Module 2 are both on the same level (super will call the write method of the parent class). We have faced this issue many times in development: when methods are overridden in multiple modules that sit at the same level, the method of the other module may not get called.
So when you need every module's method to be called, the modules must form a hierarchy and not sit at the same level.
This is the main reason why the method sometimes will not be called for parallel modules.
Two things come into the picture here:
1). depends: the parent module (which decides the module hierarchy)
2). _inherit: where the methods and behaviors of the object are defined.
Module 1 and Module 2 do not appear in each other's depends, so by
hierarchy it is not guaranteed that the method from both modules will be called,
no matter whether they override the same method of the same model. | 1 | 3 | 0 | It is to my understanding that Odoo does not extend its models the way Python extends its classes (_inherit = 'model'), and that seems pretty reasonable. My question though is this:
If I have customModule1 that extends sale.order and overrides the write method adding some functionality and I later install customModule2 which in turn extends sale.order model and overrides the write method adding some functionality as well I understand that all the versions of the write method will be called but at which order?
Is write of customModule1 going to be called first when the client writes on the sale.order model? Or write of customModule2 ? | Odoo overriden function call order | 1.2 | 0 | 0 | 1,781 |
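A minimal sketch (mine, not the answer's) of the dependent-module layout the answer recommends, using the Odoo 8 API; class and module names are illustrative. As long as module B depends on module A, Odoo's method resolution calls B's write, then A's, then the base implementation via super():

```python
from openerp import models, api

class SaleOrderA(models.Model):        # in custom module A
    _inherit = 'sale.order'

    @api.multi
    def write(self, vals):
        # module A's extra behavior goes here
        return super(SaleOrderA, self).write(vals)

class SaleOrderB(models.Model):        # in custom module B, depends on A
    _inherit = 'sale.order'

    @api.multi
    def write(self, vals):
        # module B's extra behavior runs first
        return super(SaleOrderB, self).write(vals)
```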
37,599,213 | 2016-06-02T18:08:00.000 | 1 | 0 | 1 | 0 | python,arrays,python-3.x,static | 63,222,522 | 4 | false | 0 | 0 | Its not python but at one point in your future you will see this in other languages as well. Another common way this is solved that doesn't involve using a vector or a linked list is with dynamic arrays.
Essentially, you create an array with a fixed size. If the user calls append and there is no more room in the array, you create a new array that is 2x larger than the old one, copy all the elements over, and append the new element.
The 2x growth factor is important because it keeps the insert time amortized constant (that is more advanced algorithm analysis, though). | 2 | 4 | 0 | I am learning how to program in Python and am also learning theory as part of a computer science course. In programming, I know that I can add additional values to an array just by using the .append function; however, in my theory classes we are told that arrays can be neither increased nor decreased in size.
How does this work in python? | Static Arrays in Python | 0.049958 | 0 | 0 | 10,824 |
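A hedged pure-Python sketch of the doubling strategy described above (for illustration only; CPython's list implements over-allocation in C, with a different growth factor):

```python
class DynamicArray(object):
    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._items = [None] * self._capacity

    def append(self, value):
        if self._size == self._capacity:
            # Out of room: allocate a 2x larger array and copy over.
            self._capacity *= 2
            bigger = [None] * self._capacity
            bigger[:self._size] = self._items[:self._size]
            self._items = bigger
        self._items[self._size] = value
        self._size += 1
```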
37,599,213 | 2016-06-02T18:08:00.000 | 0 | 0 | 1 | 0 | python,arrays,python-3.x,static | 72,295,042 | 4 | false | 0 | 0 | In the theory class, you learned about static arrays. We see these types of arrays in C usually. But in python, we have dynamic arrays which are extensible. Search for Linked List in google and you will gain further knowledge | 2 | 4 | 0 | I am learning how to program in python and am also learning theory as part of a computer science course. In programming i know that i can add additional variables to an array just by using the .append function, however in my theory classes we are told that arrays can neither be increase nor decreased in size.
How does this work in python? | Static Arrays in Python | 0 | 0 | 0 | 10,824 |
37,599,867 | 2016-06-02T18:45:00.000 | 1 | 0 | 1 | 0 | python,eclipse,pydev,package-explorer | 37,603,520 | 1 | true | 0 | 0 | Sounds like an issue with the indexer, or PYTHONPATH. Try Project -> Properties -> PyDev -PYTHONPATH, "add source folder" | 1 | 0 | 0 | Usually this is cause by the init.py missing in the package. But I do have all my init.py into place. In the PyDev package explorer, packages are displayed as yellow rectangles, just like regular folders, rather than with the package icon. Why is this? I've tried clean all, build all, doesn't help. | PyDev in Eclipse - packages displayed as regular folders | 1.2 | 0 | 0 | 317 |
37,600,197 | 2016-06-02T19:06:00.000 | 1 | 1 | 1 | 0 | python,python-3.x,text-to-speech,google-text-to-speech | 57,925,186 | 5 | false | 0 | 0 | May be possible to pass the gTTS output through Audacity and apply a change to a male-sounding voice? gTTS I have just got going has a very good female voice, but the engine fails to read well on long sentences or unexpected words. Still, it's the best I found for free so far, and actually is better than all the other free ones, and a good deal of the pay ones. I just had to work out the py scripts and how to use python and later learned Anaconda is a miracle cure to what ails you. Got my systems terminal and the pip to install gTTS properly, which I could not do prior to Anaconda. Scripts made by people for 3.+ now run without errors trying run them in default py v2.7. The terminal is now env 3.6.8 but also all the old py scripts still run fine. | 2 | 10 | 0 | I have been using the gTTS module for python 3.4 to make mp3 files of spoken text. It has been working, but all of the speech is in a certain adult female voice. Is there a way to customize the voice that gTTS reads the text in? | Custom Python gTTS voice | 0.039979 | 0 | 0 | 35,163 |
37,600,197 | 2016-06-02T19:06:00.000 | 2 | 1 | 1 | 0 | python,python-3.x,text-to-speech,google-text-to-speech | 64,368,114 | 5 | false | 0 | 0 | If you call "gtts-cli --all" from a command prompt, you can see that gTTS actually supports a lot of voices. However, you can only change the accents, and not the gender. | 2 | 10 | 0 | I have been using the gTTS module for python 3.4 to make mp3 files of spoken text. It has been working, but all of the speech is in a certain adult female voice. Is there a way to customize the voice that gTTS reads the text in? | Custom Python gTTS voice | 0.07983 | 0 | 0 | 35,163 |
37,600,960 | 2016-06-02T19:54:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,anaconda,conda | 37,614,696 | 1 | false | 0 | 0 | Finally, I figured out the answer. It is all about the PATH variable. It was pointing to os python rather than anaconda python. Thanks all for your time. | 1 | 0 | 1 | pip list inside conda env:
pip list
matplotlib (1.4.0)
nose (1.3.7)
numpy (1.9.1)
pandas (0.15.2)
pip (8.1.2)
pyparsing (2.0.1)
python-dateutil (2.4.1)
pytz (2016.4)
scikit-learn (0.15.2)
scipy (0.14.0)
setuptools (21.2.1)
six (1.10.0)
wheel (0.29.0)
which python:
/Users/xxx/anaconda/envs/pythonenvname/bin/python
(pythonenvname)pc-xx-xx:oo xxx$ which pip
/Users/xxx/anaconda/envs/pythonenvname/bin/pip
python
Python 3.4.4 |Anaconda custom (x86_64)| (default, Jan 9 2016, 17:30:09)
[GCC 4.2.1 (Apple Inc. build 5577)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
import pandas as pd error:
sh: sysctl: command not found | anaconda env couldn't import any of the packages | 0 | 0 | 0 | 204 |
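A tiny, hedged diagnostic for cases like this (run it both in the terminal and in the failing context to see which interpreter each one actually uses):

```python
import sys

print(sys.executable)   # path of the interpreter that is running
print(sys.path[:3])     # first few module search locations
```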
37,602,576 | 2016-06-02T21:39:00.000 | 0 | 0 | 1 | 1 | windows,python-2.7,pip | 37,618,999 | 2 | true | 0 | 0 | So I found my issue. Was running cmd prompt from P4 directly which was my issue. When running cmd prompt outside of P4 I was able to run pip with no issues. | 2 | 0 | 0 | I have tried setting my System variable Path=C:\Python27\Scripts\
I also have set User Variable Path=C:\Python27\
I can see it return correctly when I echo %PATH%
What else is needed for pip to work correctly? | pip is not recoginzed as an internal or external command | 1.2 | 0 | 0 | 32 |
37,602,576 | 2016-06-02T21:39:00.000 | 0 | 0 | 1 | 1 | windows,python-2.7,pip | 37,607,502 | 2 | false | 0 | 0 | You need to install pip too.
For convinience you can use some package manager to get started and going. Google for python package managers (anacond etc) | 2 | 0 | 0 | I have tried setting my System variable Path=C:\Python27\Scripts\
I also have set User Variable Path=C:\Python27\
I can see it return correctly when I echo %PATH%
What else is needed for pip to work correctly? | pip is not recoginzed as an internal or external command | 0 | 0 | 0 | 32 |
37,602,604 | 2016-06-02T21:41:00.000 | 3 | 0 | 0 | 1 | python,google-app-engine,google-cloud-datastore | 37,609,672 | 1 | false | 1 | 0 | Strictly speaking, Google Cloud Datastore is distributed multi-dimensional sorted map. As you mentioned it is based on Google BigTable, however, it is only a foundation.
From high level point of view Datastore actually consists of three layers.
BigTable
This is a necessary base for Datastore. Maps row key, column key and timestamp (three-dimensional mapping) to an array of bytes. Data is stored in lexicographic order by row key.
High scalability and availability
Strong consistency for single row
Eventual consistency for multi-row level
Megastore
This layer adds transactions on top of the BigTable.
Datastore
A layer above Megastore. It enables running queries as index scans on BigTable. Here an index is not used for performance improvement but is required for queries to return results.
Furthermore, it optionally adds strong consistency for multi-row level via ancestor queries. Such queries force the respective indexes to update before executing actual scan. | 1 | 1 | 0 | From my understanding BigTable is a Column Oriented NoSQL database. Although Google Cloud Datastore is built on top of Google’s BigTable infrastructure I have yet to see documentation that expressively says that Datastore itself is a Column Oriented database. The fact that names reserved by the Python API are enforced in the API, but not in the Datastore itself makes me question the extent Datastore mirrors the internal workings of BigTable. For example, validation features in the ndb.Model class are enforced in the application code but not the datastore. An entity saved using the ndb.Model class can be retrieved someplace else in the app that doesn't use the Model class, modified, properties added, and then saved to datastore without raising an error until loaded into a new instance of the Model class. With that said, is it safe to say Google Cloud Datastore is a Column Oriented NoSQL database? If not, then what is it? | Is Google Cloud Datastore a Column Oriented NoSQL database? | 0.53705 | 1 | 0 | 429 |
37,603,610 | 2016-06-02T23:18:00.000 | 4 | 0 | 1 | 0 | python,multithreading | 37,603,825 | 2 | false | 0 | 0 | Premature Optimization
This is a classic example of premature optimization. Without knowing how much time your threads spend blocking, presumably waiting for other writes to happen, it's unclear what you have to gain from creating the added complexity of managing thousands of locks.
The Global Interpreter Lock
Threading itself can be a premature optimization. Is your task easily threadable? Can many threads safely work in parallel? Tasks that require a large amount of shared state (i.e. many and frequent locks) are typically poor candidates for high thread counts. In python, you're likely to see even less benefit because of the GIL. Are your threads doing a lot of IO, or calling out to external applications, or using python modules written in C that properly releases the GIL? If not, threading might not actually give you any benefit. You can sidestep the GIL by using the multiprocessing module, but there's an overhead to passing locks and writes across process boundaries, and ironically, it might make your application much slower
Queues
Another option is to use a write queue. If threads don't actually need to share state, but they all need to write to the same object (i.e. very little reading from that object), you can simply add the writes to a queue and have a single thread process the writes, with no need for any locks. | 1 | 7 | 0 | I have a single-threaded python3 program that I'm trying to convert to use many threads. I have a tree-like data structure that gets read from and written to. Potentially many threads will want to read and write at the same time.
One obvious way about this is to have a single lock for the entire data structure: no one can read while a write is happening, no more than one write can happen at a time, and no write can happen when there are pending reads.
However, I'd like to make the locking more fine-grained for greater performance. It's a full 16-ary tree, and when fully populated has about 5 to 6 million leafs (mostly well-balanced in practice, but no guarantee). If I wanted the finest-grained locking, I could lock the parents of the leafs. That would mean over 100 thousand locks.
I must admit, I haven't tried this yet. But I thought I'd ask first: are there any hardware limitations or performance reasons that should stop me from creating so many lock objects? That is, should I consider just locking down to, say, depth 2 from the root (256 locks)?
Thanks for any insights.
EDIT:
More details:
I don't know how many cores yet as we're still experimenting as to just how much computing power we'll need, but I'd hazard a guess that just a handful of cores will be used.
I'm aiming for around 50,000 threads. There's async I/O, and one thread per socket. During a bootstrapping phase of the code, as many threads as possible will be running simultaneously (as limited by hardware), but that's a one-time cost. The one we're more interested in is once things are up and running. At that point, I'd hazard a guess that only several thousand per second are running. I need to measure the response time, but I'd guess it's around 10ms per wake period. That's a few 10s of threads active at a time (on average).
Now that I write that out, maybe that's the answer to my question. If I only need a few 10s of threads reading or writing at a time, then I don't really need that fine-grained locking on the tree. | are there any limitations on the number of locks a python program can create? | 0.379949 | 0 | 0 | 643 |
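A hedged sketch of the single-writer queue pattern from the answer above (a dict stands in for the tree; no locks are needed because only one thread mutates the shared structure):

```python
import queue
import threading

tree = {}                      # stand-in for the real tree
writes = queue.Queue()

def writer():
    while True:
        item = writes.get()
        if item is None:       # sentinel to shut down
            break
        key, value = item
        tree[key] = value      # only this thread ever mutates the tree
        writes.task_done()

t = threading.Thread(target=writer)
t.start()
writes.put(('some/leaf', 42))  # any thread can enqueue safely
writes.join()
writes.put(None)
t.join()
```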
37,603,634 | 2016-06-02T23:20:00.000 | 0 | 0 | 0 | 0 | python,oauth,slack-api,slack | 37,614,761 | 2 | false | 0 | 0 | You do indeed need to
Store each team token. Please remember to encrypt it.
When a team installs your app, create a new RTM connection. When your app/server restarts, loop across all your teams and open an RTM connection for each of them.
each connection will receive events from that team, and that team only. You will not receive all notifications on the same connection
(maybe you are coming from a Facebook Messenger bot background, where all notifications arrive at the same webhook? That's not the case with Slack.) | 1 | 0 | 0 | I am making a Slack bot. I have been using the python slackclient library to develop the bot. It's working great with one team. I am using the Flask web framework.
As many people add the app to slack via "Add to Slack" button, I get their bot_access_token.
Now
How should I run the code with so many Slack tokens? Should I store them in a list and then traverse it with a for loop over all tokens? That doesn't seem good, as I may not be able to handle the simultaneous messages or events I receive. Or is it a good way?
Is there any other way if it's not? | How to handle many users in slack app for in Python? How to use the multiple tokens? | 0 | 0 | 1 | 904
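A hedged sketch of the one-connection-per-team pattern from the answer above (method names are from the old python-slackclient 1.x; tokens and team ids are placeholders):

```python
import threading
import time
from slackclient import SlackClient

team_tokens = {'T123': 'xoxb-aaa', 'T456': 'xoxb-bbb'}  # from your DB

def run_team(token):
    sc = SlackClient(token)
    if sc.rtm_connect():
        while True:
            for event in sc.rtm_read():   # events for this team only
                print(event)              # replace with real handling
            time.sleep(0.5)

for token in team_tokens.values():
    threading.Thread(target=run_team, args=(token,)).start()
```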
37,607,312 | 2016-06-03T06:26:00.000 | 1 | 0 | 0 | 0 | python-3.x,tkinter | 37,613,354 | 1 | false | 0 | 1 | You can't do what you want. You can simulate it partly by setting up bindings that can grow and shrink the size of fonts as a window is resized, and you can double or halve the size of images. You can also have widgets like the canvas, text widget, and frames grown and shrink to fit. However, widgets in general won't scale. For example, checkboxes, radiobuttons, scrollbars, sliders will all stay the same size. | 1 | 0 | 0 | Well I need to scale (kinda like changing the screen resolution on your PC) the TkInter window up and down and I have checked like 200 answers and they are all for Python 2.0 so please do any of you guys have any help on this? | How can I scale the contents of a tkinter window up and down? | 0.197375 | 0 | 0 | 53 |
37,607,390 | 2016-06-03T06:29:00.000 | 9 | 0 | 1 | 0 | python-2.7,scrapy | 37,614,289 | 3 | false | 0 | 0 | The main difference is that runspider does not need a project. That is, you can write a spider in a myspider.py file and call scrapy runspider myspider.py.
The crawl command requires a project in order to find the project's settings, load available spiders from the SPIDER_MODULES setting, and look up the spider by name.
If you need a quick spider for a short task, then runspider requires less boilerplate. | 1 | 15 | 0 | Can someone explain the difference between runspider and crawl commands? What are the contexts in which they should be used? | Python Scrapy: What is the difference between "runspider" and "crawl" commands? | 1 | 0 | 0 | 8,787
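A hedged, self-contained spider file of the kind runspider expects (URL and selectors are illustrative); save it as myspider.py and run scrapy runspider myspider.py with no project at all:

```python
import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # Yield one item per quote on the page.
        for quote in response.css('div.quote span.text::text'):
            yield {'text': quote.extract()}
```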
37,611,742 | 2016-06-03T10:17:00.000 | 0 | 0 | 0 | 0 | python,django,nginx,django-rest-framework | 37,630,789 | 1 | false | 1 | 0 | Ok I tried the access the same code from different network and it worked.
Probably it was firewall issue of that particular wifi network. | 1 | 2 | 0 | I am using django rest framework.
PATCH on the API endpoint (users/user_id) works on the local Django server on my machine, but on the nginx development server it shows
{"detail":"Method \"METHOD_OTHER\" not allowed."}
Do we need to change some settings in nginx? | Django-Nginx Patch request :405 Method \"METHOD_OTHER\" not allowed | 0 | 0 | 0 | 642 |
37,612,434 | 2016-06-03T10:50:00.000 | 34 | 0 | 0 | 0 | python,performance,parallel-processing,seaborn | 48,292,419 | 3 | false | 0 | 0 | Rather than parallelizing, you could downsample your DataFrame to say, 1000 rows to get a quick peek, if the speed bottleneck is indeed occurring there. 1000 points is enough to get a general idea of what's going on, usually.
i.e. sns.pairplot(df.sample(1000)). | 1 | 25 | 1 | I have a dataframe with 250,000 rows and 140 columns, and I'm trying to construct a pair plot of the variables.
I know the number of subplots is huge, as is the time it takes to do the plots. (I've been waiting for more than an hour on an i5 at 3.4 GHz with 32 GB RAM.)
Remembering that scikit-learn allows constructing random forests in parallel, I was checking whether this was also possible with seaborn.
However, I didn't find anything. The source code seems to call the matplotlib plot function for every single image.
Couldn't this be parallelised? If yes, what is a good way to start from here? | What are ways to speed up seaborns pairplot | 1 | 0 | 0 | 15,683 |
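A hedged sketch of the sampling idea from the answer above, additionally restricting the columns for a quick look before committing to the full 140-column plot (synthetic data stands in for the real DataFrame):

```python
import numpy as np
import pandas as pd
import seaborn as sns

df = pd.DataFrame(np.random.randn(250000, 5), columns=list('abcde'))
sns.pairplot(df.sample(1000), vars=['a', 'b', 'c'])
```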
37,616,460 | 2016-06-03T14:02:00.000 | 0 | 0 | 0 | 0 | python,http,header,response | 48,498,013 | 7 | false | 0 | 0 | Try to use req.headers and that's all. You will get the response headers ;) | 1 | 21 | 0 | Today I actually needed to retrieve data from the http-header response. But since I've never done it before and also there is not much you can find on Google about this. I decided to ask my question here.
So actual question: How does one print the http-header response data in python? I'm working in Python3.5 with the requests module and have yet to find a way to do this. | How to print out http-response header in Python | 0 | 0 | 1 | 68,238 |
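A hedged sketch of the answer's suggestion: the response object's headers attribute is a case-insensitive dict of the HTTP response headers (URL is illustrative):

```python
import requests

resp = requests.get('https://httpbin.org/get')
for name, value in resp.headers.items():
    print('{}: {}'.format(name, value))
print(resp.headers.get('content-type'))   # case-insensitive lookup
```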
37,618,977 | 2016-06-03T16:06:00.000 | 3 | 0 | 0 | 0 | python,apache-spark,pyspark,apache-spark-sql,apache-spark-mllib | 60,295,986 | 4 | false | 0 | 0 | df.stat.corr("column1","column2") | 1 | 13 | 1 | I want to use pyspark.mllib.stat.Statistics.corr function to compute correlation between two columns of pyspark.sql.dataframe.DataFrame object. corr function expects to take an rdd of Vectors objects. How do I translate a column of df['some_name'] to rdd of Vectors.dense object? | PySpark computing correlation | 0.148885 | 0 | 0 | 31,767 |
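A hedged sketch of both routes (df is assumed to be an existing DataFrame; column names are placeholders). Statistics.corr wants an RDD of dense vectors, which the map below produces:

```python
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.stat import Statistics

single = df.stat.corr('col1', 'col2')          # one pair of columns

vectors = df.select('col1', 'col2').rdd \
            .map(lambda row: Vectors.dense(list(row)))
matrix = Statistics.corr(vectors, method='pearson')  # full matrix
```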
37,623,661 | 2016-06-03T21:24:00.000 | 0 | 0 | 0 | 0 | python,qt,user-interface | 37,623,811 | 1 | true | 0 | 1 | Maybe try PYQT 3. Version 4 has some bugs in it | 1 | 0 | 0 | I tried a lot of technics but nothing work, I want to convert my .ui file to a .py file, using the pyuic4 in cmd, but the result is :
from PyQt4 import QtCore
ImportError: DLL load failed: %1 [...] is not a valid Win32 application
I'm using a 64-bit system with Python 2.7 and "PyQt4-4.11.4-gpl-Py2.7-Qt4.8.7-x64.exe" from Riverbank. I think it's because I installed a 64-bit version, but I'm not sure.
If someone has an idea, that would be awesome! :) | Python Error When creating a .py file from a .ui file (from Qt designer) | 1.2 | 0 | 0 | 127
37,628,706 | 2016-06-04T09:38:00.000 | 3 | 0 | 1 | 0 | python,python-3.x,pythonanywhere | 37,628,726 | 1 | true | 0 | 0 | The help menu opens up in your default pager (usually less). Press q to quit it. | 1 | 2 | 0 | I am using PythonAnywhere to run 3.4, and when I use help(math) I get a menu. How do I exit the menu? | How do I exit the help menu? | 1.2 | 0 | 0 | 3,126 |
37,629,312 | 2016-06-04T10:52:00.000 | 0 | 1 | 0 | 1 | python,events,asynchronous,twisted,gevent | 71,033,066 | 1 | false | 0 | 0 | Short answer: Twisted is a network framework. Gevent tries to act as a library without requiring from the programmer to change the way he programs. That's their focus.. and not so much how that is achieved under the hood.
Long answer:
All asyncio libraries (Gevent, Asyncio, etc.) work pretty much the same:
Have a main loop running endlessly on a single thread.
When an event occurs, it's captured by the main loop.
The main loop decides based on different rules (scheduling) if it should continue checking for events or switch temporarily and give control to any subscriber functions to the event.
greenlet is a different library. It's very simple in that it just changes the order in which Python code is run and lets you jump back and forth between functions. Gevent uses it under the hood to implement its async features.
asyncio which comes with Python3 is like gevent. The big difference is the interface again. It requires the programmer to mark functions with async and allow him to explicitly wait for a subscribed function in the main loop with await.
Gevent is like asyncio. But instead of the keywords it patches existing code where appropriate. It uses greenlet under the hood to switch between main loop and subscribed functions and make it all work seamlessly.
Twisted as mentioned feels more like a framework than a library. It requires the programmer to follow very specific ways to achieve concurrency. Again though it has a main loop under the hood called reactor like everything else.
Back to your initial question: You can in theory replace the reactor with any loop (including gevent). But that would defeat the purpose. Probably Twisted's team decided to use their own version of a main loop for optimisation reasons. All these libraries use different scheduling in their main loops to meet their needs. | 1 | 2 | 0 | I'm trying to figure out how Gevent works with respect to other asynchronous frameworks in python, like Twisted.
The key difference between Gevent and Twisted is that Gevent uses greenlets and monkey patching the standard library for an implicit behavior and a synchronous programming model whereas Twisted requires specific libraries and callbacks for an explicit behavior. The event loop in Gevent is libev/libevent, which is written in C, and the event loop in Twisted is the reactor, which is written in python.
Is there anything special about libev/libevent that allows for this implicit behavior? Why not use an event loop written in Python? Conversely, why isn't Twisted using libev/libevent? Is there any particular reason? Maybe it was simply a design choice and could have gone either way...
Theoretically, can Gevent's libev be replaced with another event loop, written in python, like Twisted's reactor? And can Twisted's reactor be replaced with libev? | Gevent's libev, and Twisted's reactor | 0 | 0 | 0 | 318 |
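A hedged Python 2 illustration (not from the answers above) of gevent's implicit style: after monkey-patching, plain blocking code becomes cooperative at I/O points, with no callbacks in sight:

```python
from gevent import monkey
monkey.patch_all()

import gevent
import urllib2   # now cooperative under the hood

def fetch(url):
    return url, len(urllib2.urlopen(url).read())

jobs = [gevent.spawn(fetch, u)
        for u in ('http://example.com', 'http://example.org')]
gevent.joinall(jobs)
print([job.value for job in jobs])
```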
37,630,059 | 2016-06-04T12:08:00.000 | 0 | 1 | 0 | 0 | python,abaqus | 37,686,072 | 1 | false | 0 | 0 | Thank you for reply. I tried your advice, I deleted the elements that I wanted to be removed from *.inp file. But when I imported the the *.inp, ABAQUS did not accept the file and gave error.
What I understood from your answer is that if I cannot change the *.inp file manually, it is not possible to make the change with Python.
Excuse me if I did not explain clearly; I think it is better to ask my question this way:
I have an inp file containing a crash box simulation with a dynamic explicit analysis applied to it.
I want this tube to have some void elements before the analysis, so I need to manipulate the *.inp file. (This FE model will be used for topology optimization in MATLAB.) | 1 | 0 | 0 | I am looking for a way to delete elements from an Abaqus inp. The analysis type is dynamic explicit and the elements are S4R.
I should note that the elements to be deleted are updated in a MATLAB optimization cycle.
Is there any way other than using the VUMAT subroutine? (Even Python scripting is preferred.)
Any idea will be appreciated. | changing inp by deleting element | 0 | 0 | 0 | 430
37,632,393 | 2016-06-04T16:13:00.000 | 1 | 0 | 0 | 0 | python,tensorflow,jupyter-notebook,tensorboard | 37,682,991 | 1 | false | 0 | 0 | The jupyter stuff seems fine. In general, if you don't close TensorBoard properly, you'll find out as soon as you try to turn on TensorBoard again and it fails because port 6006 is taken. If that isn't happening, then your method is fine.
As regards the logdir, passing in the top level logdir is generally best because that way you will get support for comparing multiple "runs" of the same code in TensorBoard. However, for this to work, it's important that each "run" be in its own subdirectory, e.g.:
logs/run1/..tfevents..
logs/run2/..tfevents..
tensorboard --logdir=logs | 1 | 1 | 1 | what is the proper way to close tensorboard with jupyter notebook?
I'm coding tensorflow on my jupyter notebook. To launch, I'm doing: 1.
!tensorboard --logdir = logs/
open a new browser tab and type in localhost:6006
to close, I just do:
close the tensorflow tab on my browser
on jupyter notebook, I click on interrupt kernel
Just wondering if this is the proper way....
BTW, in my code I set my log file as './log/log1'.
when starting tensorboard, should I use --logdir = ./log or --logdir = ./log/log1?
thank you very much. | how to close tensorboard server with jupyter notebook | 0.197375 | 0 | 0 | 2,652 |
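A small, hedged sketch of the per-run subdirectory convention from the answer above (TensorFlow 1.x API name; in the 2016-era releases the class was tf.train.SummaryWriter):

```python
# Give each run its own subdirectory under the top-level log dir so
# TensorBoard (started with --logdir=logs) can compare runs.
import time
import tensorflow as tf

logdir = 'logs/run-{}'.format(int(time.time()))
writer = tf.summary.FileWriter(logdir)
```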
37,632,614 | 2016-06-04T16:36:00.000 | 0 | 0 | 0 | 0 | python,matlab,vtk | 37,661,211 | 1 | true | 0 | 1 | I kind of got it to work. Here's what I did:
matlab session running the GUI converts to shared session through matlab.engine.shareEngine('shared_matlab_session') and keeps track of the matlab GUI slider position in matlab_slider_pos variable
the python script connects to the session through MatEng=matlab.engine.connect_matlab('shared_matlab_session') before initializing VTK objects
the VTK render window adds an observer for TimerEvent that triggers SliderSync python callback function every 0.1 seconds
the SliderSync function checks whether the current VTK slider value matches the result of calling MatEng.eval('matlab_slider_pos') and, if it doesn't, updates the VTK slider value along with the rest of the VTK pipeline. Of course, at this point the VTK slider is no longer needed and the function can directly update whatever needs updating.
I really hope there is a more elegant way to view a 3D volume in matlab than the abomination I created (it works pretty smooth though). Comments are still welcome! | 1 | 0 | 0 | I have a Matlab function that takes a 3D binary object as input, saves it as temporary file, then loads a python script through cmd (I made this before Matlab's python integration). The python script loads and reorganizes the 3D data and displays them through VTK.
The python script also creates some VTK controls that I would like to control through Matlab GUI sliders rather than VTK sliders.
Is there a way to open a realtime data flow between VTK and Matlab, either through Matlab's new python integration or python's Matlab engine API (or any other way)? I haven't found any way to control VTK actors other than VTK-created controls right in the VTK interactor window so far.
EDIT: I also ran into an odd issue when trying to figure this out. When I run the VTK visualization with system(['pythonw ' folder '\vizualize.pyw" &']); (and adding the main() so that the script executes itself) everything runs smooth, but when I do it with py.vizualize.main() and attempt to close the vizualization window, it doesn't close but it does return control to matlab command line. On the second attempt, it kills the matlab window instead and then after a while itself. The scripts are identical.
EDIT 2: Adding renderWin.Finalize() right after renderInteractor.Start() fixed this issue for some reason.
Thanks for any answers or ideas! | controlling VTK through Matlab | 1.2 | 0 | 0 | 204 |
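A hedged sketch of the polling loop described above (MATLAB Engine API for Python plus a VTK repeating timer; the session and variable names mirror the answer, the rest is illustrative):

```python
import matlab.engine
import vtk

eng = matlab.engine.connect_matlab('shared_matlab_session')

def sync_slider(obj, event):
    pos = eng.eval('matlab_slider_pos')   # read the MATLAB GUI slider
    print(pos)                            # replace with the VTK update

interactor = vtk.vtkRenderWindowInteractor()
interactor.Initialize()
interactor.AddObserver('TimerEvent', sync_slider)
interactor.CreateRepeatingTimer(100)      # poll every 100 ms
interactor.Start()
```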
37,636,509 | 2016-06-05T01:10:00.000 | 0 | 0 | 0 | 1 | python,pyserial,wine | 37,960,529 | 1 | true | 0 | 0 | First of all, and this is untested, try creating a symlink from .wine/dosdevices/COM1 to /dev/ttyS0. It should simply allow you to open the com port the Windows way.
If, however, you are determined to know whether you are running on Wine, the "official" way is to check whether the registry has the key "HKEY_LOCAL_MACHINE\Software\Wine".
Either way, if opening COM1 doesn't work on Wine, it is a bug and should be filed with the Wine bugzilla. | 1 | 0 | 0 | I can check for Linux/Windows/cygwin/etc. with sys.platform, but on WINE it just reports 'win32'.
I am attempting to write a multi-platform application that uses pyserial, and I am using WINE to test setup of a Windows environment. On Windows serial ports are named COMxx, but on Linux they are /dev/ttyxxx. However, on WINE the serial ports have Linux names. I need to detect if it is running on WINE separate from Windows so I can handle this properly. | Determine if a python program is running on WINE | 1.2 | 0 | 0 | 365 |
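A hedged sketch of the registry probe from the answer above (Python 3's winreg module; on Python 2 the module is _winreg):

```python
import winreg

def running_under_wine():
    # Wine populates this key; real Windows normally does not have it.
    try:
        winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r'Software\Wine')
        return True
    except OSError:
        return False
```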
37,642,462 | 2016-06-05T13:27:00.000 | 0 | 1 | 1 | 0 | python,python-2.7 | 37,642,876 | 1 | true | 0 | 0 | I found the solution for the problem, it had permission issues, normal user didn't had the permission to execute the command, hence i added it by running the command
sudo chmod 775 python2.7
and the same inside for its subfolders as well:
sudo chmod 775 *
And now it's working fine; I can import everything I install with pip or sudo pip. | 1 | 1 | 0 | I am using Ubuntu 14.04.
I have installed a module pymavlink using sudo pip install pymavlink
Now when I run code like
python code.py
it says there is no module named pymavlink, but when I run it as
sudo python code.py
it works fine. I don't understand what the problem is without sudo.
Also, I have Python 2.7 and Python 3 installed, as they both came with Ubuntu.
Can someone please let me know the fix for this? | python module import error without sudo | 1.2 | 0 | 0 | 1,996
37,642,502 | 2016-06-05T13:31:00.000 | 1 | 0 | 1 | 0 | python,ipython,jupyter | 37,645,847 | 1 | false | 0 | 0 | Ended up solving it with:
pip install -U ipython
ipython3 kernelspec install-self | 1 | 1 | 0 | I had anaconda2 installed and manually added the python3 kernel so I could choose between python2 and python3. The problem was that I added my system's python3 binary, not anaconda's, so I was missing all the libraries that anaconda brings. Specifically, I couldn't import 'from scipy.misc import imread'.
So I deleted anaconda2, installed anaconda3, but my jupyter notebook still uses my system's old python3 kernel. When I run sys.version inside the jupyter notebook I get python 3.4, but when I run it inside ipython in console I get python 3.5, with all the modules I need good to go.
So how do I tell jupyter notebook specifically what bin to use as a kernel? | Jupyter Notebook uses my system's python3.4 instead of anaconda's python3.5 as a kernel | 0.197375 | 0 | 0 | 519 |
37,643,172 | 2016-06-05T14:44:00.000 | 0 | 0 | 1 | 0 | java,python,c,jython,cython | 37,643,350 | 4 | false | 0 | 1 | Jython is a implementation of Python language on the Java Virtual Machine, so Jython is Python but is not CPython. Cython is an extension to CPython and has not much in common with Python except some similarities in Syntax. | 3 | 3 | 0 | So I spent a lot of time trying to fugure out what Cython and Jython are and I'm still (more) confused as anyone who just started computer programming. I heard that Cython is an extension, but..is also and indepent language??
What I think I've understood is that:
Cython/Jython is just Python and you can use C or C++/Java libraries respectively with just a little bit of Cython/Jython syntax.
It's meant to speed up performance as well as improve code readability when a task would be more efficient to write in C or C++/Java, and this is done by using statically typed variables.
Or is Cython/Jython just some sort of extension that is used sometimes but not independently? What I mean is, you'd still write everything in Python and then, for the tasks you'd rather use C or C++/Java for, you'd use Cython/Jython instead? (I know I still need Python installed and similar things as it's dependent.)
Because if it really is better, other than the barrier that it's still being developed since it's rather new, wouldn't the need for C or C++/Java completely go away as these are as easy as Python but as powerful as C or C++/Java? | Is Cython/Jython an independent language? | 0 | 0 | 0 | 3,486 |
37,643,172 | 2016-06-05T14:44:00.000 | 2 | 0 | 1 | 0 | java,python,c,jython,cython | 37,643,398 | 4 | false | 0 | 1 | OK. Jython is an implementation of Python that compiles Python source code into Java bytecode (object code, as most people call it). So you basically write your programs using Python syntax, but what the compiler produces from the source code is Java bytecode.
Cython, on the other hand, is an implementation whereby modules written in (a superset of) the Python language are translated into the C language. So here, when you use a module, it still looks like a standard Python module, but for efficiency's sake, under the hood it is C code that is executed | 3 | 3 | 0 | So I spent a lot of time trying to figure out what Cython and Jython are, and I'm still as confused as anyone who just started computer programming. I heard that Cython is an extension, but... is it also an independent language?
What I think I've understood is that:
Cython/Jython is just Python and you can use C or C++/Java libraries respectively with just a little bit of Cython/Jython syntax.
It's meant to speed up performance as well as improve code readability when a task would be more efficient to write in C or C++/Java; this is done by using statically typed variables.
Or is Cython/Jython just some sort of extension that is used sometimes but not independently? What I mean is, you'd still write everything in Python and then, for the tasks you'd rather use C or C++/Java for, you'd use Cython/Jython instead? (I know I still need Python installed and similar things as it's dependent.)
Because if it really is better, other than the barrier that it's still being developed since it's rather new, wouldn't the need for C or C++/Java completely go away as these are as easy as Python but as powerful as C or C++/Java? | Is Cython/Jython an independent language? | 0.099668 | 0 | 0 | 3,486 |
37,643,172 | 2016-06-05T14:44:00.000 | 1 | 0 | 1 | 0 | java,python,c,jython,cython | 37,643,279 | 4 | false | 0 | 1 | CPython is comparable to Jython. They're implementations of the Python language. CPython is the de-facto standard and was written in C. Jython is written in Java and runs on the JVM. It also allows accessing the Java ecosystem to a great extent. There are also other implementations of the language, like PyPy or Pyston.
Cython is totally different. It allows us to write extensions for Python in C or in Pyrex/Cython, a Python-like language. Cython speeds up execution for the parts written with it. | 3 | 3 | 0 | So I spent a lot of time trying to figure out what Cython and Jython are, and I'm still as confused as anyone who just started computer programming. I heard that Cython is an extension, but... is it also an independent language?
What I think I've understood is that:
Cython/Jython is just Python and you can use C or C++/Java libraries respectively with just a little bit of Cython/Jython syntax.
It's meant to speed up performance as well as improve code readability when a task would be more efficient to write in C or C++/Java; this is done by using statically typed variables.
Or is Cython/Jython just some sort of extension that is used sometimes but not independently? What I mean is, you'd still write everything in Python and then, for the tasks you'd rather use C or C++/Java for, you'd use Cython/Jython instead? (I know I still need Python installed and similar things as it's dependent.)
Because if it really is better, other than the barrier that it's still being developed since it's rather new, wouldn't the need for C or C++/Java completely go away as these are as easy as Python but as powerful as C or C++/Java? | Is Cython/Jython an independent language? | 0.049958 | 0 | 0 | 3,486 |
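To make the Cython side concrete, a minimal sketch: the source is essentially Python, with cdef static types added so the loop compiles to plain C arithmetic (save as fib.pyx and build with cythonize):
def fib(int n):
    # cdef declarations give the loop C-level integers
    cdef int i
    cdef long a = 0, b = 1
    for i in range(n):
        a, b = b, a + b
    return a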
37,646,652 | 2016-06-05T20:53:00.000 | 2 | 0 | 0 | 1 | python,google-oauth | 37,647,153 | 1 | true | 0 | 0 | You can create a server-side script in which you use Google OAuth to upload videos to A's account.
Then you can create a client-side app which allows your clients B and C to upload their videos to the server; on completion, the server can then upload them to A's account.
Alternatively, to avoid uploading twice, if you trust the clients and would like them to be able to upload directly, you can pass them an OAuth access token to A's account. | 1 | 0 | 0 | My client asked me to build a tool that would let him and his partners upload videos to YouTube, to his channel, automatically.
For example, let's say that my client is A and he has some business partners. A wants to be able to upload videos to his channel, which is easy to do, but the problem here is letting the other partners B and C upload their videos to his channel (person A's channel).
In this case I would need "A" to authorize my app so he can upload videos to his own channel, but how can I handle that for the other users? How can they use person A's access token to upload videos to his channel?
What have I done so far?
I got the YouTube upload Python sample from the Google API docs and played with it a bit. I tried subprocess.Popen(cmd) where cmd is the following command: python upload.py --file "video name" --title "title of the vid".
This will lead the user to authorize my app once, which is only fine for person "A". The others won't be able to do that, since they need to upload the video to A's account. | How to handle google api oauth in this app? | 1.2 | 0 | 1 | 45 |
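A rough sketch of the server-side idea with the Google API Python client; the token file, title, and video path are placeholders, and it assumes A's OAuth credentials were obtained once and saved:
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

creds = Credentials.from_authorized_user_file("a_token.json")  # A's saved token
youtube = build("youtube", "v3", credentials=creds)
request = youtube.videos().insert(
    part="snippet,status",
    body={"snippet": {"title": "Partner upload"},
          "status": {"privacyStatus": "private"}},
    media_body=MediaFileUpload("video.mp4", resumable=True),
)
print(request.execute())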
37,650,206 | 2016-06-06T05:38:00.000 | 4 | 0 | 1 | 0 | python,string,algorithm,dynamic-programming,interleave | 37,650,921 | 1 | true | 0 | 0 | Here are some ideas to get you started.
The tag wiki for dynamic programming describes it as:
Dynamic programming is an algorithmic technique for efficiently solving problems with a recursive structure containing many overlapping subproblems
So first, try and think of a way to solve the problem recursively. Ask: "How can I bite off a small piece of this problem and process it, in such a way that what I have left over is another example of the problem"?
In this case, the "small piece" would be a single character, and the left over would be the remaining characters in the string. Think of the problem as "what's the shortest interleaving of the characters of these two strings, starting at position X of string A and position Y of string B"? When you call it initially, X and Y are 0.
Three possible answers to that question are:
If X is not at the end of A, take the character A[X] off string A, then solve the problem recursively for X+1,Y (find the shortest interleaving of the two strings starting at X+1,Y)
As above but taking a character off string B instead of A and solving recursively for X,Y+1
In the case that the characters of A[X] and B[Y] are identical, take the character off both and find the solution for X+1,Y+1
If X and Y have reached the end of A and B, return an empty string.
Return the shortest string of the above 3, added to the character from either A or B (or both) that you took off.
When the function returns from the top level you should have your answer.
That's the "recursive" part. The "overlapping subproblems" is about how you reuse stuff you have already calculated. In this case you can make a 2 dimensional array of strings that represents the problem solved for all the possible values of X and Y, and when entering the function, check whether you already have the answer and just return it if you do. If not then work it out as above and before returning from the function, save the value you are going to return in the array at location [X][Y]. | 1 | 2 | 0 | I'm having a problem with a question on dynamic programming.
Given two strings A and B find the shortest interleaved string of the two.
For example for A = "APPLE", B = "ABSOLUTE"
The shortest answer will be "ABPPSOLUTE"
Instead, my function returns "APPABSOLUTE"
My idea to solve this problem was to interleave A[0] and B[0] continually len(A)+len(B) times
But that didn't work. | Find the shortest interleaved string of A and B with Dynamic Programming | 1.2 | 0 | 0 | 398 |
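A direct Python translation of the recursion from the answer, with lru_cache (Python 3.2+) standing in for the [X][Y] array; ties between equally short strings are broken arbitrarily, so any 10-character interleaving of the example is a valid output:
from functools import lru_cache

def shortest_interleaving(a, b):
    @lru_cache(maxsize=None)
    def solve(x, y):
        if x == len(a):              # A exhausted: append the rest of B
            return b[y:]
        if y == len(b):              # B exhausted: append the rest of A
            return a[x:]
        if a[x] == b[y]:             # identical characters: take one off both
            return a[x] + solve(x + 1, y + 1)
        via_a = a[x] + solve(x + 1, y)
        via_b = b[y] + solve(x, y + 1)
        return via_a if len(via_a) <= len(via_b) else via_b
    return solve(0, 0)

print(shortest_interleaving("APPLE", "ABSOLUTE"))  # e.g. APPBSOLUTE (length 10)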
37,657,280 | 2016-06-06T12:18:00.000 | 16 | 0 | 1 | 1 | python,multithreading,docker | 37,692,379 | 2 | false | 0 | 0 | A container as such has nothing to do with the computation you need to perform. The question you are posting is whether I should have multiple processes doing my processing or multiple threads spawned by the same process doing the processing ?
A container is just a platform for running your application in the environment you want. Period. It means, you would be running a process inside a container to run your business logic. Multiple containers simply means multiple processes and as it is advised, you should go for multiple threads rather than multiple processes as spawning a new process (in your case, as container) would eat up more resources and would also require more memory etc. So it is better to have just one container which will spawn multiple threads to do the job for you.
However, it also depends upon the configuration of the underlying machine on which the container is started. If it makes sense to spawn multiple containers with multiple threads because of the multicore capabilities of the underlying hardware, you should do that as well. | 1 | 32 | 0 | I need to spawn N threads inside a docker container. I am going to receive a list of elements, then divide it in chunks and each thread will process each chunk.
So I am using a docker container with one process and N threads. Is it good practice in docker? I think so, because we have, e.g, apacha webserver that handle connections spawining threads.
Or it will be better to spawn N container each one for each chunk? If it is, what is the correct way to do this? | Multiple threads inside docker container | 1 | 0 | 0 | 42,455 |
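A small sketch of the one-container approach: one process splits the list into chunks and hands them to a thread pool (the per-chunk work is a stand-in):
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    return sum(chunk)  # stand-in for the real per-chunk business logic

def run(elements, n_threads=4):
    size = max(1, (len(elements) + n_threads - 1) // n_threads)
    chunks = [elements[i:i + size] for i in range(0, len(elements), size)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(process_chunk, chunks))

print(run(list(range(100))))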
37,660,288 | 2016-06-06T14:40:00.000 | 0 | 0 | 1 | 0 | python | 37,660,480 | 4 | false | 0 | 0 | You would read the file and obtain a list of lines (i.e. list of strings)
then you could use a list comprehension, like this one:
[ l1 + ' ' + l2 for l1,l2 in zip(lines[::2], lines[1::2]) ]
Note that this pairs the lines two at a time, so it needs an even number of lines; if len(lines)%2==1, handle lines[-1] (the last line) by itself. | 1 | 0 | 0 | I have a Chinese txt file with thousands of sentence lines, as follows:
line 1
line 2
line 3
line 4
…………
I want to combine every two adjoining lines into one line; it should be transformed as:
line 1 + space + line 2
line 3 + space + line 4
line 5 + space + line 6
…………
How can I use Python to finish the combination? | How to combine every two adjoining lines in Chinese txt file into one line with Python | 0 | 0 | 0 | 57 |
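Putting the comprehension into a complete script; the file names are placeholders, utf-8 matters for the Chinese text, and an odd trailing line is kept as-is:
import io

with io.open("input.txt", encoding="utf-8") as f:
    lines = [line.rstrip("\n") for line in f]

pairs = [l1 + " " + l2 for l1, l2 in zip(lines[::2], lines[1::2])]
if len(lines) % 2 == 1:
    pairs.append(lines[-1])  # keep an unpaired last line

with io.open("output.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(pairs))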
37,662,390 | 2016-06-06T16:24:00.000 | 1 | 0 | 0 | 0 | python-social-auth,django-socialauth | 37,662,391 | 1 | true | 1 | 0 | Include a URL to your website that is the absolute-URL version of this relative URL:
/complete/facebook/
How do you find this out?
Use the Chrome dev tools, enable "Preserve log", and try to log in to your app.
This question / answer is for django-social-auth but likely applies to python-social-auth too. | 1 | 0 | 0 | I was getting this facebook login error:
URL Blocked
This redirect failed because the redirect URI is not
whitelisted in the app’s Client OAuth Settings. Make sure Client and
Web OAuth Login are on and add all your app domains as Valid OAuth
Redirect URIs.
Facebook login requires whitelisting of the callback URL.
What is the callback URL for django-social-auth or python-social-auth? | python-social-auth and facebook login: what is the whitelist redirect url to include in fb configuration? | 1.2 | 0 | 1 | 654 |
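So if the site ran at, say, https://example.com (a placeholder domain), the value to add under Valid OAuth Redirect URIs would be:
https://example.com/complete/facebook/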
37,665,304 | 2016-06-06T19:22:00.000 | 1 | 0 | 1 | 0 | python | 37,665,610 | 2 | false | 0 | 0 | For any ABC you can tell all virtual subclasses via the attribute '_abc_registry'.
No you can't. You can only find explicitly registered virtual subclasses that way. Anything handled by __subclasshook__ won't show up in your check.
To do what you're trying to do, you'd have to go through every ABC ever defined in your Python session and call isinstance. While this is technically possible in CPython by traversing the type hierarchy with the __subclasses__ method, it's probably a bad idea. | 1 | 1 | 0 | For any class you can tell the (non-virtual) superclasses via the attribute __mro__ and the (non-virtual) subclasses by calling __subclasses__.
For any ABC you can tell all virtual subclasses via the attribute _abc_registry.
Is there a way to tell all virtual superclasses of a class, i.e. all classes for which it is registered as virtual subclass? | Python: Howto enumerate the virtual superclasses of a class? | 0.099668 | 0 | 0 | 123 |
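For completeness, a rough CPython-only sketch of the brute-force walk the answer warns against: visit every class reachable from object and isinstance-test the ABCs among them:
import abc

def virtual_superclasses(obj):
    seen, found, stack = set(), [], [object]
    while stack:
        cls = stack.pop()
        for sub in cls.__subclasses__():
            if sub in seen:
                continue
            seen.add(sub)
            stack.append(sub)
            if isinstance(sub, abc.ABCMeta) and sub not in type(obj).__mro__:
                try:
                    if isinstance(obj, sub):
                        found.append(sub)
                except Exception:
                    pass  # a misbehaving __subclasshook__
    return found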
37,667,631 | 2016-06-06T22:10:00.000 | 0 | 0 | 1 | 0 | python,user-interface,tkinter,transparent | 37,801,513 | 1 | false | 0 | 1 | You could use some basic Photoshop tools like the magic wand tool to remove the background, but keep in mind that some PNG images have a faint background. This is either in the form of a watermark, or the image background was rendered with a lower opacity than the rest of the image. Your GUI may also place a layer above the images by default. Does it appear on each image separately when loaded into the GUI? | 1 | 0 | 0 | I'm trying to make a GUI in tkinter that uses one image as an overlay on top of another, but when I place the image over the lower one, the transparent area of the image appears as grey.
I've searched for solutions to this, but all the results that I've found have pointed towards using PIL.
Is it possible for me to use transparent, or partially transparent images in the python tkinter module without using PIL? | How can I make the background of an image appear transparent without PIL? | 0 | 0 | 0 | 234 |
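One caveat to the premise: if the Python build ships Tk 8.6 or newer, PhotoImage reads PNG alpha natively, so a canvas overlay needs no PIL at all; a sketch with placeholder file names:
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300)
canvas.pack()
background = tk.PhotoImage(file="background.png")
overlay = tk.PhotoImage(file="overlay.png")  # PNG with an alpha channel
canvas.create_image(0, 0, image=background, anchor="nw")
canvas.create_image(50, 50, image=overlay, anchor="nw")
root.mainloop()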
37,668,706 | 2016-06-07T00:21:00.000 | 0 | 0 | 1 | 0 | python,atom-editor | 37,685,872 | 1 | true | 0 | 0 | You have to set the config options autoIndentOnPaste and normalizeIndentOnPaste both to false, and then reload the configuration. | 1 | 0 | 0 | I'm using Atom at work for editing Python code, and I'm running against a painful interaction between muscle memory and a labor-saving feature.
Near as I can tell, Atom will, when you paste a snippet of code, redo the indentation so that it's consistent with the indentation of the line it was pasted into, preserving relative indents.
If I didn't have any baggage from using editors without this feature, I'm pretty sure it'd be great, but as it is, I can't break my habit of selecting back to the preceding newline, and pasting that, which tends to do crazy things when pasting to or from the first line of a block.
I've tried to turn off Auto Indent on Paste, but it's not on anywhere I can find, and I'm not even sure it's the same feature; it's just what I hear about from people complaining about Atom going crazy when they paste Python.
So, where do I look to disable this? I'm willing to work up from no extensions back to what I've got installed, so assume a vanilla install.
I guess the workflow I'm looking for is "paste, manual re-indent", because at least that way I know what I'm getting and my response is always the same. As it stands, I don't have to think about it until it converts simple line rearrangements into syntactic garbage, which is worse than just adjusting things every time.
EDIT: In response to Milo Price, I have just now tried setting both autoIndentOnPaste and normalizeIndentOnPaste to false. The behavior is unchanged.
FURTHER EDIT: I had to reload the configuration for it to take. It's working now. | How to REALLY disable re-indent on paste | 1.2 | 0 | 0 | 221 |
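Those two options live in Atom's config.cson (Edit > Config...); toggling them there looks roughly like this:
"*":
  editor:
    autoIndentOnPaste: false
    normalizeIndentOnPaste: false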
37,668,737 | 2016-06-07T00:24:00.000 | 0 | 0 | 1 | 0 | python,string,unicode | 37,718,811 | 2 | true | 0 | 0 | I solved it this way:
Read the input byte by byte and append each byte (variable c) to a buffer variable:
aJSON+=encode(c)
Then aJSON.decode('unicode-escape') gives the expected result.
Thanks for the interest anyway. | 1 | 0 | 0 | I searched a bit about this, but most people want to convert the original string (테스트) to Unicode escapes (\uD14C\uC2A4\uD2B8).
What I want is to convert a Unicode-escaped string (such as \uD14C\uC2A4\uD2B8) to the real string (테스트). I have a JSON file in which all the Korean strings are in \uXXXX form, and I have to parse them into the original strings. How can I do it in Python?
To sum up,
the way to convert a Unicode-escaped string to the original string in Python, such as:
\uD14C\uC2A4\uD2B8 -> 테스트 | How to print original string from Unicode string(such as \uD14C\uC2A4\uD2B8) in python | 1.2 | 0 | 0 | 126 |
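Since the escapes come from a JSON file, json.loads already decodes them; the second line shows the unicode-escape route from the answer (a Python 2.7 str method):
import json

print(json.loads(r'"\uD14C\uC2A4\uD2B8"'))             # 테스트
print(r'\uD14C\uC2A4\uD2B8'.decode('unicode-escape'))  # 테스트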
37,670,895 | 2016-06-07T05:06:00.000 | 0 | 1 | 0 | 0 | python,cron,scrapy,crontab | 37,671,502 | 1 | true | 1 | 0 | Problem solved. Rather than running the crawl as root, use crontab -u user -e to create a crontab for user, and run as user. | 1 | 0 | 0 | The line in my crontab is 0 * * * * cd /home/scrapy/foo/ && scrapy crawl foo >> /var/log/foo.log
It failed to run the crawl, as there was no log in my log file.
I tested using 0 * * * * cd /home/scrapy/foo/ && pwd >> /var/log/foo.log, it echoed '/home/scrapy/foo' in log.
I also tried PATH=/usr/local/bin and PATH=/usr/bin, but no success.
I'm able to run it manually by typing cd /home/scrapy/foo/ && scrapy crawl foo in command line.
Any thoughts? Thanks. | cron couldn't run Scrapy | 1.2 | 0 | 0 | 108 |
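A sketch of the resulting user crontab (opened with crontab -u scrapy -e), with an explicit PATH so cron's minimal environment can find scrapy, and stderr captured too:
PATH=/usr/local/bin:/usr/bin:/bin
0 * * * * cd /home/scrapy/foo/ && scrapy crawl foo >> /var/log/foo.log 2>&1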
37,674,728 | 2016-06-07T08:52:00.000 | -1 | 0 | 1 | 0 | python | 37,675,428 | 3 | false | 0 | 0 | why is everything in Python, an object?
Python (unlike other languages) is a truly Object Orient language (aka OOP)
when everything is an object, it becomes easier to search, manipulate or access things. (But everything comes at the cost of speed)
what prompted this shift of approach, to treat everything including, even functions, as objects?
"Necessity is the mother of invention" | 2 | 2 | 0 | Why is everything in Python, an object? According to what I read, everything including functions is an object. It's not the same in other languages. So what prompted this shift of approach, to treat everything including, even functions, as objects. | Approach behind having everything as an object in Python | -0.066568 | 0 | 0 | 106 |
37,674,728 | 2016-06-07T08:52:00.000 | -1 | 0 | 1 | 0 | python | 37,675,534 | 3 | false | 0 | 0 | In my opinion, the 'Everything is object' is great in Python. In this language, you don't react to what are the objects you have to handle, but how they can interact. A function is just an object that you can __call__, a list is just an object that you can __iter__. But why should we divide data in non overlapping groups. An object can behave like a function when we call it, but also like an array when we access it.
This means that you don't think your "function" like, "i want an array of integers and i return the sum of it" but more "i will try to iterate over the thing that someone gave me and try to add them together, if something goes wrong, i will tell it to the caller by raiseing error and he will hate to modify his behavior".
The most interesting exemple is __add__. When you try something like Object1 + Object2, Python will ask (nicely ^^) to Object1 to try to add himself with object2 (Object1.__add__(Object2)). There is 2 scenarios here: either Oject1 knows how to add himself to Object2 and everything is fine, either he raises a NotImplemented error and Python will ask to Object2 to radd himself to Object1. Just with this mechanism, you can teach to your object to add themselves with any other object, you can manage commutativity,... | 2 | 2 | 0 | Why is everything in Python, an object? According to what I read, everything including functions is an object. It's not the same in other languages. So what prompted this shift of approach, to treat everything including, even functions, as objects. | Approach behind having everything as an object in Python | -0.066568 | 0 | 0 | 106 |
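A tiny illustration of that __add__/__radd__ dance (the Meters class is made up):
class Meters(object):
    def __init__(self, n):
        self.n = n
    def __add__(self, other):
        if isinstance(other, Meters):
            return Meters(self.n + other.n)
        return NotImplemented          # hand over to other.__radd__
    def __radd__(self, other):
        if other == 0:                 # lets sum() work, since sum starts at 0
            return Meters(self.n)
        return NotImplemented

print(sum([Meters(2), Meters(3)]).n)   # 5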
37,675,203 | 2016-06-07T09:16:00.000 | 1 | 0 | 1 | 0 | python,parsing,pdf,pdfminer,pdf-parsing | 37,680,867 | 1 | true | 0 | 0 | There are 2 viable approaches to extract that field data:
Search for some predefined keyword, like Experience, to get its location. Then search for the next section's keyword (Hobbies), determine the coordinates of the text between these 2 sections, and extract the text from that region.
If the PDFs are generated by the same generator, then you may just find the coordinates of the Experience section once and extract text from the same location every time.
(easiest) Just convert the whole page into text and then parse the generated text using substring search or regular expressions. This is the simplest way, as all the work regarding the PDF format is left to the specialized tool. | 1 | 1 | 0 | I want to parse a PDF file in Python. I have seen examples with PDFMiner, but they did not cover my requirement.
For example, if I parse a resume, it contains various fields like Summary, Experience, and Hobbies.
I am interested in extracting only Experience; this field may come first, second, or anywhere else, so I need to identify where the Experience field is located and extract its data.
How can I do this? | Extracting Data from PDF with particular heading in python | 1.2 | 0 | 0 | 2,655 |
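A rough sketch of the third approach using pdf2txt.py, the command-line tool bundled with pdfminer; the section names and file name are assumptions:
import re
import subprocess

text = subprocess.check_output(["pdf2txt.py", "resume.pdf"]).decode("utf-8")
match = re.search(r"Experience(.*?)(?:Hobbies|Summary|$)", text, re.S | re.I)
if match:
    print(match.group(1).strip())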
37,683,003 | 2016-06-07T15:05:00.000 | 0 | 0 | 1 | 0 | python,python-wheel | 37,704,826 | 1 | false | 0 | 0 | What ended up working for me is to write the full name of the file, instead of using '*' after the beginning of the file name. 'pip install' command works fine with the full name of the wheel (.whl) file - in this instance. | 1 | 1 | 0 | I am trying to install the 'TA_Lib-0.4.9-cp27-*.whl' file with powershell (windows). I receive the message 'file ... looks like a filename, but the file does not exist'.
I run 'pip install C:\Programs\TA_Lib-0.4.9-cp27-*.whl' from C:\Programs> where the whl file is located.
I use Python 2.7, but I also tried the file 'TA_Lib-0.4.9-cp34-*.whl' with the same result.
I looked online and at SO, for similar cases, but so far everything I tried keeps giving me the same red error message 'TA_Lib-0.4.9-cp27-*.whl is not a valid wheel filename'.
EDIT:
the full message I receive in powershell is the following:
'Requirement 'C:\Programs\TA_Lib-0.4.9-cp27-.whl' looks like a filename, but the file does not exist
TA_Lib-0.4.9-cp27-.whl is not a valid wheel filename.'
Thank you for your help and suggestions. | python wheel file install TA_lib*.whl | 0 | 0 | 0 | 1,026 |
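A likely root cause: PowerShell does not expand * for external commands such as pip, so pip literally receives the asterisk. Besides typing the full name, the glob can be resolved first (a sketch):
$wheel = Get-ChildItem C:\Programs\TA_Lib-0.4.9-cp27-*.whl | Select-Object -First 1
pip install $wheel.FullName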
37,686,139 | 2016-06-07T17:54:00.000 | 0 | 0 | 1 | 0 | python,tensorflow | 37,693,149 | 3 | false | 0 | 0 | Last tflearn update had a compatibility issue with old TensorFlow versions (like mrry said, caused by 'variance_scaling_initializer()' that was only compatible with TensorFlow 0.9).
That error had already been fix, so you can just update TFLearn and it should works fine with any TensorFlow version over 0.7. | 1 | 4 | 1 | I've tried installing tflearn through pip as follows
pip install tflearn
and now when I open python, the following happens:
>>> import tflearn
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "//anaconda/lib/python2.7/site-packages/tflearn/__init__.py", line 22, in <module>
from . import activations
File "//anaconda/lib/python2.7/site-packages/tflearn/activations.py", line 7, in <module>
from . import initializations
File "//anaconda/lib/python2.7/site-packages/tflearn/initializations.py", line 5, in <module>
from tensorflow.contrib.layers.python.layers.initializers import \
ImportError: cannot import name variance_scaling_initializer
Any ideas? I'm using an anaconda installation of python. | TFLearn pip installation bug | 0 | 0 | 0 | 3,984 |
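So per the answer, the fix is simply to pull the newer TFLearn:
pip install --upgrade tflearn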
37,687,463 | 2016-06-07T19:10:00.000 | 9 | 0 | 1 | 0 | python,multithreading,qt,pyqt,pyqt5 | 37,688,544 | 2 | true | 0 | 0 | What you can do is design an object to do all these tasks (inherit QObject for slots / signals). Let's say each task is defined as a separate function; let's designate these functions as slots.
Then (a general order of events):
instantiate a QThread object.
instantiate your class.
Move your object into the thread using YourClass->moveToThread(pThread).
Now define a signal for each slot and connect these signals to the relevant slots in your object.
Finally run the thread using pThread->start()
Now you can emit a signal to do a particular task in the thread. You do not need to sub-class QThread; just use a normal class derived from QObject (so that you can use slots/signals).
You can either use one class in one thread to do many operations (note: they will be queued). Or make many classes in many threads (to run "parallel").
I don't know python well enough to attempt an example here so I won't :o
Note: The reason to sub-class QThread would be if you wanted to extend the functionality of the QThread class - i.e. add more/specific thread-related functions. QThread is a class that controls a thread, and is not meant to be used to run arbitrary/generic tasks... even though you can abuse it to do so if you wish :) | 1 | 9 | 0 | I'm creating a simple GUI application using PyQt5 where I request some data from an API which is then used to populate various controls of the UI.
The examples I was following about worker threads in PyQt all seem to sub-class QThread and then do their business logic in the overridden run() method. This works fine but I want to execute different API calls at different times using a worker.
So my question is: do I need to create a specific worker thread for every operation I wish to do or is there a way of having a single thread class that I can use to carry out different operations at different times and therefore avoid the overhead of creating different thread sub-classes? | Single worker thread for all tasks or multiple specific workers? | 1.2 | 0 | 0 | 3,118 |
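Since the answer stops short of Python code, here is a minimal PyQt5 sketch of that pattern; the "API call" is a stand-in:
import sys
from PyQt5.QtCore import (QCoreApplication, QObject, QThread, QTimer,
                          pyqtSignal, pyqtSlot)

class ApiWorker(QObject):
    finished = pyqtSignal(str)

    @pyqtSlot(str)
    def fetch(self, endpoint):                       # one slot per kind of task
        self.finished.emit("data from " + endpoint)  # the real API call goes here

class Requester(QObject):
    ask = pyqtSignal(str)

app = QCoreApplication(sys.argv)
thread = QThread()
worker = ApiWorker()
worker.moveToThread(thread)
requester = Requester()
requester.ask.connect(worker.fetch)                  # queued across threads
worker.finished.connect(lambda s: (print(s), app.quit()))
thread.start()
QTimer.singleShot(0, lambda: requester.ask.emit("/controls"))
app.exec_()
thread.quit()
thread.wait()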
37,688,668 | 2016-06-07T20:24:00.000 | 3 | 0 | 1 | 0 | python,algorithm,sorting,data-structures,set | 37,688,762 | 1 | true | 0 | 0 | You can get O(N) time complexity with frozenset(Counter(my_string).items()). You may want to time whether this actually wins in practice, though, because the constant factor of this code might be high enough to outweigh the extra logarithmic factor of ''.join(sorted(my_string)). | 1 | 1 | 0 | I want to use something like "characters where order does not matter" as the key to build a dictionary in Python.
Like "abc" and "cba" can give me the same hash index, and "aab" and "ab" give me different hash indices.
I found one way: use tuple(sorted(my_string)) to make a sequence of characters hashable, but it requires O(N log N) time complexity.
I tried to use Counter, but it is not hashable. Frozenset is hashable, but it does not allow duplicates.
Is there a better way (O(N) time complexity) to replace tuple(sorted(my_string))?
If there are mistakes above, please correct me. Thank you! | What can I replace tuple(sorted(my_string)) with in Python? | 1.2 | 0 | 0 | 75 |
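Spelled out, the two keys side by side:
from collections import Counter

def key_counter(s):                        # O(N)
    return frozenset(Counter(s).items())

def key_sorted(s):                         # O(N log N)
    return tuple(sorted(s))

assert key_counter("abc") == key_counter("cba")
assert key_counter("aab") != key_counter("ab")

index = {}
for word in ("abc", "cba", "aab"):
    index.setdefault(key_counter(word), []).append(word)
print(list(index.values()))                # [['abc', 'cba'], ['aab']]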
37,691,903 | 2016-06-08T01:29:00.000 | 0 | 0 | 0 | 0 | python | 37,710,144 | 2 | false | 0 | 0 | Thank you, man, for sharing the code. I'll try to implement it.
Actually, I am trying to create an aquarium controller that'd eventually take care of a lot of tasks for me. Right now, I am using a Raspberry Pi, along with a few temperature sensors, to measure my tank temperature. Whenever the temperature goes beyond, say, 28 degrees, I plan to trigger a few actions. Right now it's just an email notification.
The code I am using simply compares the temperature and nothing much. | 1 | 0 | 0 | I am trying to write Python code in which I want to send a notification to a user when a specific condition is met. The event can occur every second, and I do not want to send notifications every second.
How do I program this (pseudocode is fine) so that my notifications are sent at most every 30 minutes, even if the event continues to occur more frequently? | Python - Ensuring that event notifications are not sent before a pre specified time | 0 | 0 | 0 | 55 |
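One simple pattern: remember when the last notification went out and skip sends inside the window; the sensor read and email send below are stand-ins:
import time

NOTIFY_INTERVAL = 30 * 60  # seconds

def read_temperature():
    return 29.0            # stand-in for the real sensor read

def send_email(msg):
    print("EMAIL:", msg)   # stand-in for the real notification

last_sent = 0.0
while True:
    if read_temperature() > 28:
        now = time.time()
        if now - last_sent >= NOTIFY_INTERVAL:
            send_email("Tank temperature above 28 C")
            last_sent = now
    time.sleep(1)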
37,693,154 | 2016-06-08T04:17:00.000 | 0 | 0 | 1 | 1 | python,virtualenv | 37,693,496 | 1 | false | 0 | 0 | This is pretty simple. Just go to the environment folder.
Try: Scripts/activate
This will activate the environment.
Try: Scripts/deactivate
This will deactivate the current environment. | 1 | 0 | 0 | I am trying to use virtualenv inside a folder using the command virtualenv . and getting the error -bash: virtualenv: command not found. However, I installed virtualenv using pip (pip install virtualenv) and also upgraded it earlier with sudo pip install virtualenv.
How do I use virtualenv properly? I'm following a tutorial and they seem to do the same and get away with it. I'm a Java developer with beginner knowledge of Python, working to improve it. | How to use virtualenv inside a folder? | 0 | 0 | 0 | 94 |
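A common cause of "command not found" here is that pip put the virtualenv script somewhere outside PATH; running the package as a module sidesteps that (a sketch):
python -m virtualenv .
source bin/activate      # POSIX layout; use Scripts\activate on Windows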
37,693,771 | 2016-06-08T05:16:00.000 | 3 | 0 | 1 | 0 | python,macos,python-2.7,python-3.x | 50,246,018 | 3 | false | 0 | 0 | If you have Python 2.x already installed, then you can use brew upgrade python to upgrade to Python 3.x | 2 | 14 | 0 | How do I upgrade from Python 2.7 to 3.5 on Mac OS X? I downloaded the Python 3.5 .dmg file and installed it. What changes should I make to PYTHONPATH and PATH?
Is it possible to use both without any issues, using virtualenv? | How to upgrade to python 3.5 from 2.7 in Mac OSX | 0.197375 | 0 | 0 | 19,399 |
37,693,771 | 2016-06-08T05:16:00.000 | 0 | 0 | 1 | 0 | python,macos,python-2.7,python-3.x | 58,355,022 | 3 | false | 0 | 0 | I had this issue as well. I navigated to the PATH location for both Python 2.x and Python 3.7, updated with Homebrew, and hopefully the pip installer will work now. | 2 | 14 | 0 | How do I upgrade from Python 2.7 to 3.5 on Mac OS X? I downloaded the Python 3.5 .dmg file and installed it. What changes should I make to PYTHONPATH and PATH?
Is it possible to use both without any issues, using virtualenv? | How to upgrade to python 3.5 from 2.7 in Mac OSX | 0 | 0 | 0 | 19,399 |
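Both versions can indeed coexist; virtualenv's -p flag picks the interpreter per environment (the 3.5 path below assumes the python.org .dmg installer):
virtualenv -p /usr/bin/python2.7 ~/envs/py27
virtualenv -p /Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5 ~/envs/py35
source ~/envs/py35/bin/activate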
37,694,804 | 2016-06-08T06:29:00.000 | 0 | 0 | 0 | 0 | python | 55,979,236 | 1 | false | 0 | 0 | Answer
If you are getting the error while using Python 3 on Mac OS X, try these commands in Terminal: pip3 install Send2Trash or pip3 install send2trash
I am on Mac OS X using Python 3, and this worked for me.
Explanation
I have read that most Mac OS X users have both Python 2 and Python 3 installed. Using pip3 (instead of pip) installs the module for Python 3 (instead of Python 2). (Disclaimer: I don't know if this explanation is correct.) | 1 | 1 | 0 | I am a beginner in Python programming and I am working on a project in which I want to send files to the recycle bin using Python. I heard of this "add-on" called Send2Trash, which is what I wanted, but I don't really know how to install it. I looked on the Python website, other websites, and the author's page, and the instructions about python setup.py install and "Distutils" didn't make any sense to me. Can someone give clear instructions on installing this type of "add-on"? I also apologize for asking something like this, since I'm still a beginner in Python, but it would really be a big help if someone could solve this problem. | How to install an "add-on" like Send2Trash to be used in Python? | 0 | 0 | 0 | 1,242 |
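Once installed, usage is a single import and call (the file name is a placeholder):
from send2trash import send2trash

send2trash("old_report.txt")  # moved to the Trash, not permanently deleted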
37,695,376 | 2016-06-08T07:00:00.000 | 1 | 0 | 1 | 0 | python,opencv,video | 65,464,031 | 2 | false | 0 | 0 | You can simply measure a certain position in the video in milliseconds using
time_milli = cap.get(cv2.CAP_PROP_POS_MSEC)
and then divide time_milli by 1000 to get the time in seconds. | 1 | 3 | 1 | Let's say I have made a program to detect a green ball in a video. Whenever a green ball is detected, I want to print out the elapsed time in the video at that moment. Is it possible? | Python and OpenCV - getting the duration time of a video at certain points | 0.099668 | 0 | 0 | 5,963 |
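A sketch wiring that into a detection loop; the green-ball detector here is a crude stand-in:
import cv2

def detect_green_ball(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 70, 70), (80, 255, 255))
    return cv2.countNonZero(mask) > 500   # "enough" green pixels

cap = cv2.VideoCapture("input.mp4")       # placeholder path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if detect_green_ball(frame):
        print("green ball at %.2f s" % (cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0))
cap.release()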
37,697,438 | 2016-06-08T08:41:00.000 | 1 | 0 | 0 | 0 | python,reference,pycharm | 37,698,048 | 1 | false | 0 | 0 | Go to File >> Settings >> Project >> Project Interpreter.
Select the settings (gear) icon there and click on More.
Here, click the last option (Interpreter Paths).
Add your modules' directory location here.
Restart PyCharm. | 1 | 1 | 0 | I'm a new PyCharm user, and in my project I need to turn a Python file into a directory (package). This file contains a lot of functions, so I would like to reorder my project by creating a new directory that contains new Python files holding the functions mentioned previously. So my question is:
Is there a PyCharm feature that automatically updates all references (for example, imports) in the project?
I have searched for something like this but I haven't found a good solution.
Thanks in advance
Regards | Pycharm update reference automatically | 0.197375 | 0 | 0 | 219 |
37,702,032 | 2016-06-08T12:06:00.000 | 1 | 0 | 0 | 0 | python,django,django-models,django-migrations | 37,811,571 | 1 | true | 1 | 0 | Based on my investigation and the comments provided, it looks like there is no solution for the moment. | 1 | 1 | 0 | We're using a CustomFlatPage model derived from Django's FlatPage model in our application. It works fine, but FlatPage changed in Django 1.9, which triggers a migration for our CustomFlatPage. But we'd like to have clean migrations, that is, a state where makemigrations doesn't create any migrations in either 1.8 or 1.9.
Is it possible to write a migration which would be compatible with Django 1.8 and 1.9 without any change to the CustomFlatPage model itself? | Migration for model inherited from Django | 1.2 | 0 | 0 | 322 |
37,702,630 | 2016-06-08T12:33:00.000 | 0 | 0 | 0 | 0 | python,image,16-bit,lab-color-space | 46,964,844 | 1 | false | 0 | 0 | The skimage version of rgb2lab uses floating-point input where the color range is 0-1. You can normalize your 16-bit image to this range and use the rgb2lab routine. | 1 | 1 | 1 | I have a 16-bit image in ProPhoto RGB color space. For equalization I want to convert it to Lab color space and then equalize the L channel without losing precision. I have used skimage.color.rgb2lab, but this converts the image to float64.
Help me!! | Convert a 16-bit image from rgb to Lab without losing precision | 0 | 0 | 0 | 455 |
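A sketch of the normalization; skimage treats floats as the range [0, 1], so divide by the uint16 maximum (the random image is a stand-in):
import numpy as np
from skimage import color

img16 = np.random.randint(0, 65536, (4, 4, 3)).astype(np.uint16)
lab = color.rgb2lab(img16.astype(np.float64) / 65535.0)
print(lab.dtype, lab.shape)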
37,702,938 | 2016-06-08T12:47:00.000 | 1 | 0 | 1 | 0 | python,lxml,setup.py | 37,703,300 | 1 | false | 0 | 0 | No. For one thing, the python-dev package is specific to Debian-like distributions; there is no guarantee that other distributions will have a package with the same name that fulfills the desired role. For another, the user installing your Python package may have permission to install Python modules (e.g., in a virtualenv or user-specific directory) but not permission to install system packages. | 1 | 1 | 0 | Is there a way to tell Python in the setup.py file that "python-dev" (which cannot be installed with pip because it is an OS package) is necessary and therefore should be installed?
How to install it automatically? | list python-dev as install_requires in setup.py | 0.197375 | 0 | 0 | 89 |
37,703,061 | 2016-06-08T12:52:00.000 | 1 | 1 | 1 | 0 | python,math,multiplication | 37,703,548 | 1 | true | 0 | 0 | Take base-10 logarithms, sum them up, and take the fractional part of the sum.
Think about the scientific notation of large numbers. | 1 | 0 | 0 | I am multiplying many large numbers and finally taking the modulo of the product. To optimise this I am applying MOD at each step. But I also want the 1st digit of the final answer. Is there any way to know it even after using MOD?
Or is there any other efficient way to do the huge multiplication many times, get the final answer, and extract the 1st digit from it?
The order of the elements is 10^9 and the number of multiplications is about 10^5 | 1st digit before taking modulo(10**9 + 7) | 1.2 | 0 | 0 | 239 |
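A sketch combining both running values: the fractional part of the log sum is the mantissa's log, and the integer part of its antilog is the leading digit (the sample factors are made up):
import math

MOD = 10**9 + 7
nums = [123456789, 987654321, 555555555]

mod_product, log_sum = 1, 0.0
for n in nums:
    mod_product = (mod_product * n) % MOD
    log_sum += math.log10(n)

first_digit = int(10 ** (log_sum - math.floor(log_sum)))
print(first_digit, mod_product)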
37,704,832 | 2016-06-08T14:05:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,queue | 62,942,500 | 4 | false | 0 | 0 | The only problem is that a deque doesn't wait for new messages to arrive on .pop() the way Queue.get() does. Push items back onto the front with appendleft(). | 1 | 2 | 0 | Is it possible to put items on the top of a Queue instead of the bottom?
In some cases I need to re-populate the Queue, maintaining the original order, after I've taken items off. | How to put item on top of the Queue in python? | 0.049958 | 0 | 0 | 7,575 |
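With collections.deque this is direct: popleft takes from the front and appendleft puts an item back on top, preserving the original order:
from collections import deque

q = deque([1, 2, 3, 4])
item = q.popleft()   # take from the top
q.appendleft(item)   # put it back on top
print(list(q))       # [1, 2, 3, 4]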
37,705,427 | 2016-06-08T14:30:00.000 | 0 | 0 | 0 | 1 | python,eclipse,pydev | 37,706,507 | 1 | true | 0 | 0 | I have absolutely no idea why, but updating the Maven plugins solved the problem. | 1 | 2 | 0 | My Eclipse stalled, so I shut it down (normally; I didn't send any kill signal or anything. The editor was glitching but the menu was still working, so I simply quit it from the menu).
When I reopened Eclipse, however, I got the error:
Plug-in org.python.pydev was unable to load class org.python.pydev.editor.PyEdit.
I am using Eclipse Kepler Release 2 Build id: 20140224-0627
with Java 8 and
PyDev 4.5.4.20160129223
I have tried rebuilding the workspace, cleaning the workspace, restarting it, but nothing works. I have now updated PyDev to PyDev 5 and it still gives me the same error.
Additionally, the Package Explorer can't load either and gives the error:
Plug-in org.eclipse.jdt.ui was unable to load class org.eclipse.jdt.internal.ui.packageview.PackageExplorerPart.
Any ideas?
The exact traceback is:
org.eclipse.core.runtime.CoreException: Plug-in org.python.pydev was unable to load class org.python.pydev.editor.PyEdit.
at org.eclipse.core.internal.registry.osgi.RegistryStrategyOSGI.throwException(RegistryStrategyOSGI.java:194)
at org.eclipse.core.internal.registry.osgi.RegistryStrategyOSGI.createExecutableExtension(RegistryStrategyOSGI.java:178)
at org.eclipse.core.internal.registry.ExtensionRegistry.createExecutableExtension(ExtensionRegistry.java:905)
at org.eclipse.core.internal.registry.ConfigurationElement.createExecutableExtension(ConfigurationElement.java:243)
at org.eclipse.core.internal.registry.ConfigurationElementHandle.createExecutableExtension(ConfigurationElementHandle.java:55)
at org.eclipse.ui.internal.WorkbenchPlugin.createExtension(WorkbenchPlugin.java:274)
at org.eclipse.ui.internal.registry.EditorDescriptor.createEditor(EditorDescriptor.java:235)
at org.eclipse.ui.internal.EditorReference.createPart(EditorReference.java:318)
at org.eclipse.ui.internal.e4.compatibility.CompatibilityPart.createPart(CompatibilityPart.java:266)
at org.eclipse.ui.internal.e4.compatibility.CompatibilityEditor.createPart(CompatibilityEditor.java:61)
at org.eclipse.ui.internal.e4.compatibility.CompatibilityPart.create(CompatibilityPart.java:304)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.eclipse.e4.core.internal.di.MethodRequestor.execute(MethodRequestor.java:56)
at org.eclipse.e4.core.internal.di.InjectorImpl.processAnnotated(InjectorImpl.java:877)
at org.eclipse.e4.core.internal.di.InjectorImpl.processAnnotated(InjectorImpl.java:857)
at org.eclipse.e4.core.internal.di.InjectorImpl.inject(InjectorImpl.java:119)
at org.eclipse.e4.core.internal.di.InjectorImpl.internalMake(InjectorImpl.java:333)
at org.eclipse.e4.core.internal.di.InjectorImpl.make(InjectorImpl.java:254)
at org.eclipse.e4.core.contexts.ContextInjectionFactory.make(ContextInjectionFactory.java:162)
at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.createFromBundle(ReflectionContributionFactory.java:102)
at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.doCreate(ReflectionContributionFactory.java:71)
at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.create(ReflectionContributionFactory.java:53)
at org.eclipse.e4.ui.workbench.renderers.swt.ContributedPartRenderer.createWidget(ContributedPartRenderer.java:129)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createWidget(PartRenderingEngine.java:949)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:633)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.StackRenderer.showTab(StackRenderer.java:1147)
at org.eclipse.e4.ui.workbench.renderers.swt.LazyStackRenderer.postProcess(LazyStackRenderer.java:96)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:649)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:62)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:645)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$6.run(PartRenderingEngine.java:526)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:511)
at org.eclipse.e4.ui.workbench.renderers.swt.ElementReferenceRenderer.createWidget(ElementReferenceRenderer.java:61)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createWidget(PartRenderingEngine.java:949)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:633)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:62)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:645)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:62)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:645)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:62)
at org.eclipse.e4.ui.workbench.renderers.swt.PerspectiveRenderer.processContents(PerspectiveRenderer.java:59)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:645)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.PerspectiveStackRenderer.showTab(PerspectiveStackRenderer.java:103)
at org.eclipse.e4.ui.workbench.renderers.swt.LazyStackRenderer.postProcess(LazyStackRenderer.java:96)
at org.eclipse.e4.ui.workbench.renderers.swt.PerspectiveStackRenderer.postProcess(PerspectiveStackRenderer.java:77)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:649)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:62)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:645)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:62)
at org.eclipse.e4.ui.workbench.renderers.swt.WBWRenderer.processContents(WBWRenderer.java:581)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:645)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$9.run(PartRenderingEngine.java:1042)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:997)
at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:140)
at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:611)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:567)
at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:150)
at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:124)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:354)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:181)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:636)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:591)
at org.eclipse.equinox.launcher.Main.run(Main.java:1450)
at org.eclipse.equinox.launcher.Main.main(Main.java:1426)
Caused by: java.lang.NoClassDefFoundError: org/eclipse/ui/editors/text/TextEditor
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.defineClass(DefaultClassLoader.java:188)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClassHoldingLock(ClasspathManager.java:638)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClass(ClasspathManager.java:613)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findClassImpl(ClasspathManager.java:574)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClassImpl(ClasspathManager.java:492)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:465)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216)
at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:395)
at org.eclipse.osgi.internal.loader.SingleSourcePackage.loadClass(SingleSourcePackage.java:35)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:461)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.defineClass(DefaultClassLoader.java:188)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClassHoldingLock(ClasspathManager.java:638)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClass(ClasspathManager.java:613)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findClassImpl(ClasspathManager.java:574)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClassImpl(ClasspathManager.java:492)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:465)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216)
at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:395)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:464)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.defineClass(DefaultClassLoader.java:188)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClassHoldingLock(ClasspathManager.java:638)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClass(ClasspathManager.java:613)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findClassImpl(ClasspathManager.java:574)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClassImpl(ClasspathManager.java:492)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:465)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216)
at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:395)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:464)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.eclipse.osgi.internal.loader.BundleLoader.loadClass(BundleLoader.java:340)
at org.eclipse.osgi.framework.internal.core.BundleHost.loadClass(BundleHost.java:229)
at org.eclipse.osgi.framework.internal.core.AbstractBundle.loadClass(AbstractBundle.java:1212)
at org.eclipse.core.internal.registry.osgi.RegistryStrategyOSGI.createExecutableExtension(RegistryStrategyOSGI.java:174)
... 120 more
Caused by: org.eclipse.core.runtime.internal.adaptor.EclipseLazyStarter$TerminatingClassNotFoundException: An error occurred while automatically activating bundle org.eclipse.ui.editors (216).
at org.eclipse.core.runtime.internal.adaptor.EclipseLazyStarter.postFindLocalClass(EclipseLazyStarter.java:124)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:469)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216)
at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:395)
at org.eclipse.osgi.internal.loader.SingleSourcePackage.loadClass(SingleSourcePackage.java:35)
at org.eclipse.osgi.internal.loader.MultiSourcePackage.loadClass(MultiSourcePackage.java:31)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:452)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at com.laboki.eclipse.plugin.smartsave.main.EditorContext.(EditorContext.java:81)
at com.laboki.eclipse.plugin.smartsave.task.AsyncTask$1.runTask(AsyncTask.java:17)
at com.laboki.eclipse.plugin.smartsave.task.TaskJob.run(TaskJob.java:28)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:53)
Caused by: org.osgi.framework.BundleException: Exception in org.eclipse.ui.internal.editors.text.EditorsPlugin.start() of bundle org.eclipse.ui.editors.
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:734)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683)
at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381)
at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:300)
at org.eclipse.osgi.framework.util.SecureAction.start(SecureAction.java:478)
at org.eclipse.osgi.internal.loader.BundleLoader.setLazyTrigger(BundleLoader.java:263)
at org.eclipse.core.runtime.internal.adaptor.EclipseLazyStarter.postFindLocalClass(EclipseLazyStarter.java:109)
... 14 more
Caused by: org.eclipse.swt.SWTException: Invalid thread access
at org.eclipse.swt.SWT.error(SWT.java:4397)
at org.eclipse.swt.SWT.error(SWT.java:4312)
at org.eclipse.swt.SWT.error(SWT.java:4283)
at org.eclipse.swt.widgets.Display.error(Display.java:1204)
at org.eclipse.swt.widgets.Display.checkDevice(Display.java:759)
at org.eclipse.swt.widgets.Display.disposeExec(Display.java:1181)
at org.eclipse.jface.resource.ColorRegistry.hookDisplayDispose(ColorRegistry.java:268)
at org.eclipse.jface.resource.ColorRegistry.(ColorRegistry.java:123)
at org.eclipse.jface.resource.ColorRegistry.(ColorRegistry.java:106)
at org.eclipse.ui.internal.themes.WorkbenchThemeManager.(WorkbenchThemeManager.java:98)
at org.eclipse.ui.internal.themes.WorkbenchThemeManager.getInstance(WorkbenchThemeManager.java:58)
at org.eclipse.ui.internal.Workbench.getThemeManager(Workbench.java:3232)
at org.eclipse.ui.internal.editors.text.EditorsPlugin.start(EditorsPlugin.java:214)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702)
... 20 more | Plug-in org.python.pydev was unable to load class org.python.pydev.editor.PyEdit | 1.2 | 0 | 0 | 6,842 |
37,705,914 | 2016-06-08T14:49:00.000 | 0 | 0 | 0 | 0 | python,django,django-authentication,django-users,django-custom-user | 37,771,830 | 1 | true | 1 | 0 | SOLVED!
The problem was not extending AbstractUser; the problem was how I saved the user:
CustomUser(username='myusername', password='mypassword') stores the password as plain text, and the function authenticate() does not work with that.
You need to save the user using UserCreationForm, or a subclass of it; the important part is its save() method, which calls set_password() to store a hashed password, so authenticate() works. | 1 | 0 | 0 | I want to create my custom user extending AbstractUser, but when I try to authenticate my custom user, authenticate() returns None.
When I create a CustomUser() it is stored in the database, but the password is not hashed. Can I use the authenticate function of the default backend, or must I create a custom backend for my custom user?
I added:
AUTH_USER_MODEL = 'mysite.customuser'
I think that in extending AbstractUser my class is missing some method, or something else is wrong. | Custom User Model extending AbstractUser, authenticate returns None | 1.2 | 0 | 0 | 411
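A minimal sketch of the fix described in the answer above; the model and form names are illustrative assumptions, but set_password(), UserCreationForm, and authenticate() are standard Django APIs:

from django.contrib.auth import authenticate
from django.contrib.auth.forms import UserCreationForm
from myapp.models import CustomUser  # hypothetical app/model names

# Wrong: stores the raw string, so authenticate() returns None.
broken = CustomUser(username='myusername', password='mypassword')
broken.save()

# Right: set_password() hashes the password before it is stored.
user = CustomUser(username='other_user')
user.set_password('mypassword')
user.save()
assert authenticate(username='other_user', password='mypassword') is not None

# Alternatively, a form based on UserCreationForm hashes the password in save():
class CustomUserCreationForm(UserCreationForm):
    class Meta(UserCreationForm.Meta):
        model = CustomUser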
37,706,457 | 2016-06-08T15:11:00.000 | 0 | 1 | 1 | 0 | python,time,raspberry-pi,clock | 37,706,564 | 3 | false | 0 | 0 | Check whether it adjusts the clock from an external source such as NTP. | 1 | 0 | 0 | Issue:
When calling the function time.time() I notice it jumping about 30 seconds after reboot. By jumping I mean it changes its return value by about 40 seconds instantly.
Setup:
I am running my script on a Raspberry Pi 3B, immediately after reboot. The issue does not occur when run later.
Question:
Why does that occur? I suspect the Raspberry Pi of changing its system clock at some point after reboot via WiFi. Could that be the issue? I do not think posting code is helpful, as it really is a question about the time.time() function. | time.time() values jumping | 0 | 0 | 0 | 360
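If an NTP step of the wall clock is the cause, durations are better measured with a clock the system cannot adjust. A minimal sketch (Python 3.3+, where time.monotonic() exists; do_work() is a hypothetical workload):

import time

start = time.monotonic()            # unaffected by NTP/system clock steps
do_work()                           # hypothetical workload
elapsed = time.monotonic() - start

wall = time.time()                  # may jump when NTP corrects the clock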
37,707,616 | 2016-06-08T16:07:00.000 | 1 | 0 | 1 | 0 | python,find | 37,708,306 | 2 | false | 0 | 0 | The objective of the find method is to return the index value, which (for all practical purposes) programmers want as positive. In this case, any negative value means that the function could not find the particular element.
The reason True or False is not used instead is simply to avoid countless TypeErrors | 2 | 1 | 0 | I would expect a return of 0. Is -1 simply the equivalent of false? For a moment I thought it was because 0 is a position (index?) in the string, but so is -1. While I know it is enough to simply memorize that this is how the find operation works, I was wondering if there was a deeper explanation or if this is something common that I will continue to encounter as I study. | Why does the find function return -1 in python when searching a string and failing to find a match? | 0.099668 | 0 | 0 | 4,891
37,707,616 | 2016-06-08T16:07:00.000 | 0 | 0 | 1 | 0 | python,find | 37,708,679 | 2 | false | 0 | 0 | @jonrsharpe I believe your comment answered my question best. While -1 is an index, the find function always returns a non-negative index when successful and -1 otherwise. | 2 | 1 | 0 | I would expect a return of 0. Is -1 simply the equivalent of false? For a moment I thought it was because 0 is a position (index?) in the string, but so is -1. While I know it is enough to simply memorize that this is how the find operation works, I was wondering if there was a deeper explanation or if this is something common that I will continue to encounter as I study. | Why does the find function return -1 in python when searching a string and failing to find a match? | 0 | 0 | 0 | 4,891 |
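For illustration, standard str.find() behaviour in an interpreter:

>>> "hello".find("e")
1
>>> "hello".find("z")      # not found: -1, which is never returned for a match
-1
>>> "hello".index("z")     # index() raises instead of returning -1
Traceback (most recent call last):
  ...
ValueError: substring not found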
37,708,002 | 2016-06-08T16:26:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,pip | 37,708,130 | 3 | false | 0 | 0 | You can download the package's tar archive from PyPI and extract it locally.
You can copy the package directory out of the tar and place it at the root of your application.
The other approach is to copy the package directory to /usr/lib/python2.7/site-packages/ or /usr/local/lib/python2.7/site-packages/ on CentOS, and
/usr/lib/python2.7/dist-packages/ on Ubuntu. | 2 | 0 | 0 | A Python 2.7 script needs to be deployed on several systems that do not have an internet connection for pip install. This script depends on several libraries installed using pip install.
How can these packages that are normally installed using pip be packaged along with the Python script and be made to run on another system? | Deploy Python Script without using pip | 0 | 0 | 0 | 137 |
37,708,002 | 2016-06-08T16:26:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,pip | 37,708,223 | 3 | false | 0 | 0 | I guess you download the packagename.whl file (for Linux, use its .tar) of the required package on one machine and copy that file over, then install it on the several machines using pip install packagename.whl (.tar for Linux) in a terminal.
Make sure the file is in the folder this command is run from. | 2 | 0 | 0 | A Python 2.7 script needs to be deployed on several systems that do not have an internet connection for pip install. This script depends on several libraries installed using pip install.
How can these packages that are normally installed using pip be packaged along with the Python script and be made to run on another system? | Deploy Python Script without using pip | 0 | 0 | 0 | 137
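For reference, a sketch of the usual offline workflow, assuming a reasonably recent pip (pip download was added in pip 8; older releases used pip install --download):

# on a machine with internet access:
pip download -d ./packages -r requirements.txt

# copy ./packages to each offline machine, then:
pip install --no-index --find-links=./packages -r requirements.txt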
37,711,850 | 2016-06-08T20:01:00.000 | 0 | 0 | 0 | 0 | python,ajax,django | 37,712,058 | 1 | false | 1 | 0 | Unless you set up some kind of socket (for instance using Django Channels) you'll have to resend the entire request data on each request. This doesn't seem to be much of a problem for your use case, though. | 1 | 0 | 0 | I'm using Django templates with HTML/JS to show the results of simulations in Python/Anaconda. The simulation depends on the settings of different parameters. After the initial data is loaded (from files) and visualized by the first call of the page, the parameters can be chosen in text fields/dropdowns in the template. An AJAX request sends the parameters to the view and retrieves an array with the results.
Do I need to send all the initial data with the requests every time, or is it possible to store it in, for example, an attribute of an object in the view? Are examples available? | Django object in view? | 0 | 0 | 0 | 37
37,712,275 | 2016-06-08T20:26:00.000 | 0 | 0 | 0 | 1 | python,audio,amazon-ec2,pjsip,pyaudio | 37,734,848 | 1 | true | 1 | 0 | Alright, this isn't the most reliable solution but it does seem to work.
To start with, you must verify that you have PulseAudio installed and working.
Use whatever package manager you need:
apt-get/yum/zypper pulseaudio pulseaudio-devel alsa-lib alsa-devel alsa-plugins-pulseaudio
pulseaudio --start
pacmd load-module module-null-sink sink_name=MySink
pacmd update-sink-proplist MySink device.description=MySink
This will allow you to pass audio around in your VM so that it can be sent out using PJSIP.
If you don't have your own loopback written in Python, you can use:
pacmd load-module module-loopback sink=MySink
to pass audio back out. If you do have your own loopback written, you cannot use both.
There seems to be a lot of questions related to audio loop backs and piping audio around. But I haven't found any that seem to encompass all of the ways I am having issues.
I have tried figuring out how to set one up using pacmd but havent had and luck. I have Dummy Output and Monitor of Dummy Output as defaults but that didnt work.
When I try to open the stream i still get a no default output device error.
What I am trying to find is a way to have a virtual sound card (i guess) that I can have for channels on the sip call and stream the wav file into.
Any advice or direction would be very helpful.
Thanks in advance | pyAudio and PJSIP in a Virtual Machine | 1.2 | 0 | 0 | 689 |
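In case you do write your own loopback in Python instead of loading module-loopback, a minimal PyAudio sketch (device selection, e.g. targeting MySink's monitor, is omitted for brevity):

import pyaudio

CHUNK, RATE = 1024, 44100
pa = pyaudio.PyAudio()
inp = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
              input=True, frames_per_buffer=CHUNK)
out = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
              output=True, frames_per_buffer=CHUNK)
while True:                        # copy captured audio straight back out
    out.write(inp.read(CHUNK))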
37,712,561 | 2016-06-08T20:43:00.000 | 6 | 0 | 0 | 0 | python,cython,ctypes,python-extensions | 37,712,741 | 1 | true | 0 | 1 | It helps to know what you need to do here.
If you're not using ctypes for function calls, it's unlikely that it will save you anything to just have ctypes types involved. If you already have some DLL lying around with a "solve it for me" function, then sure, ctypes it is.
Cython creates extension modules, so anything you can do with Cython could also be done with an extension module, it just depends on how comfortable you are writing extensions by hand. Cython is more limited than writing extension by hand, and harder to "see" performance in (the rules for optimizing Cython are basically the opposite of optimizing CPython code, and if you forget to cdef the right things, you gain nothing), but Cython is generally simpler too.
Writing a separate non-extension DLL is only worthwhile if you have non-Python uses for it; otherwise, a Python extension is basically just the DLL case, but better integrated.
Basically, by definition, with infinite time and skill, a CPython extension will beat any other option on performance since it can do everything the others do, and more. It's just more work, and easy to make mistakes (because you're writing C, which is error prone). | 1 | 5 | 0 | Right now I have an image processing algorithm that is roughly 100 lines or so in Python. It takes about 500ms using numpy, PIL and scipy. I am looking to get it faster, and as the actual algorithm seems pretty optimized thus far, I'm wondering if using a different approach such as Cython would improve the times. I believe that I have several different things I could do:
Use Cython to expose relevant parts of the C library to Python.
Use Ctypes to just write everything in C but still have it pure Python (not leaning towards this at all)
Create an extension module in C/C++ and then import it and call the functions. I'm not sure if I would be able to use numpy this way though.
Create a DLL and have Python load it. This doesn't get to use numpy or those modules, but would still be very efficient.
I'm just looking for speed here, not worried about difficulty of the implementation. Is there any one option that is better in this case, are they all the same, or is it even worth doing? | Differences between Cython, extending C/C++ with Python.h, etc | 1.2 | 0 | 0 | 929 |
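A minimal Cython sketch of the cdef point made in the answer (the file and function names are illustrative):

# fast_sum.pyx: build with cythonize("fast_sum.pyx")
cpdef double fast_sum(double[:] data):       # typed memoryview over a numpy array
    cdef Py_ssize_t i
    cdef double total = 0.0
    for i in range(data.shape[0]):           # with the cdefs above, this loop compiles to plain C
        total += data[i]
    return total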
37,718,216 | 2016-06-09T06:19:00.000 | 0 | 0 | 0 | 1 | c#,python,sockets,twisted,beginreceive | 37,736,839 | 1 | false | 0 | 1 | Sorry the question was badly asked. I did find the solution though.
int netmsg_size = BitConverter.ToInt32(state.buffer, 0);
int msg_size = IPAddress.NetworkToHostOrder(netmsg_size);
This converts the network-byte-order integer back into a host-order integer. | 1 | 0 | 0 | I'm using length-based message framing with the Python Twisted framework and a C# client running BeginReceive async reads, and I'm having trouble grabbing the value of the length of the message.
This is the twisted python code
self.transport.write(pack(self.structFormat, len(string)) + string)
And this is the C# code:
int bytesRead = client.EndReceive(ar);
if (bytesRead > 0)
{
int msg_size = BitConverter.ToInt32(state.buffer, 0);
The problem is that the len(string) value is not correct when I read it via BitConverter on the C# side.
The value should be 15, but it's coming across as 251658240.
Any insight would be much appreciated. | Python Twisted framework transport.write to a C# socket BeginReceive reading length-based message framing value | 0 | 0 | 0 | 72
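For reference, 251658240 is 0x0F000000, i.e. 15 (0x0000000F) with its bytes reversed, so this is a byte-order mismatch, which is exactly what NetworkToHostOrder corrects. A small illustrative Python check:

import struct
data = struct.pack('!I', 15)       # network (big-endian) order: b'\x00\x00\x00\x0f'
struct.unpack('<I', data)[0]       # read little-endian, as BitConverter does on x86 -> 251658240
struct.unpack('!I', data)[0]       # read with the matching byte order -> 15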
37,720,228 | 2016-06-09T08:07:00.000 | 5 | 0 | 1 | 0 | python-3.x,floating-point,rounding | 37,720,389 | 1 | false | 0 | 0 | Because 1.4 999 999 999 999 999 when parsed is exactly 1.5, the difference between them is too small to represent at that magnitude.
But 1.4 99 999 999 999 999 is low enough to parse to "less than 1.5": it becomes 1.4999999999999988897769753748434595763683319091796875, which is clearly less than 1.5. | 1 | 3 | 0 | round(1.4 999 999 999 999 999) (without the spaces) gets rounded to 2
but
round(1.4 99 999 999 999 999) (without the spaces) gets rounded to 1.
I suppose this has to do with imprecise floating-point representations, but I fail to understand how it comes about that the first representation is interpreted as closer to 2 than to 1. | Rounding in Python | 0.761594 | 0 | 0 | 312
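A quick way to see this in a Python 3 interpreter (the literals are the question's numbers without the spaces):

>>> 1.4999999999999999 == 1.5      # parses to exactly 1.5
True
>>> 1.499999999999999 < 1.5        # one digit fewer: parses to a value just below 1.5
True
>>> round(1.4999999999999999)      # effectively round(1.5)
2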
37,720,588 | 2016-06-09T08:26:00.000 | 0 | 0 | 1 | 0 | python,performance,data-structures,graph | 37,721,273 | 1 | false | 0 | 0 | The key point here is the data format of the graphs already generated by your algorithm. Does it construct a new graph by adding vertices and edges? Is it rewritable? Does it use a given format (matrix, adjacency list, vertex and node sets, etc.)?
If you have the choice, however, because your subgraphs have a "low" cardinality and because space is not an issue, I would store subgraphs as arrays of bitmasks (the bitmask part is optional, but it is hashable and makes a compact set). A subgraph representation would then be
L a list of node references in your global graph G. It can also be a bitmask to be used as a hash
A an array of bitmask (matrix) where A[i][j] is the truth value of the edge L[i] -> L[j]
This takes advantage of Python integers being arbitrarily large yet compactly stored. The space complexity is O(n*n), but you get efficient traversal and can easily hash your structure. | 1 | 1 | 1 | What is the efficient way of keeping and comparing generated sub-graphs from a given input graph G in Python?
Some details:
The input graph G is a directed, simple graph with the number of vertices varying from n=100 to 10000. For the number of edges, it can be assumed that at most 10% of the complete graph's edges are present (usually fewer), which in that case gives a maximum of n*(n-1)/10.
There is an algorithm that can generate hundreds/thousands of sub-graphs from the input graph G, and for each sub-graph some (time-consuming) computations are made.
Each pair "subgraph, computation results" must be stored for later use (a dynamic programming approach - if a given sub-graph was already processed, we want to re-use its results).
Because of point (2.) it would be really nice to store sub-graph/results pairs in a kind of dictionary where the sub-graph is the key. How can this be done efficiently? Any ideas for efficiently calculating a sub-graph hash value, perhaps?
Let's assume that memory is not a problem and I can find a machine with enough memory to keep the data - so let's focus only on speed.
Of course, if there are already nice-to-use data structures that might be helpful for this problem (like sparse matrices from scipy), they are very welcome.
I would just like to know your opinions about it and maybe get some hints regarding an approach to this problem.
I know that there are nice graph/network libraries for Python like NetworkX, igraph and graph-tool which have very efficient algorithms to process a provided graph. But there seems to be no (or I could not find an) efficient way to fulfill points (2.) and (3.). | Efficient representing sub-graphs (data structure) in Python | 0 | 0 | 0 | 437
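A minimal sketch of the bitmask idea from the answer (nodes of G are assumed to be numbered 0..n-1, and expensive_computation is a hypothetical placeholder):

def subgraph_key(nodes, edges):
    node_mask = 0
    for v in nodes:                        # set bit v for every vertex in the subgraph
        node_mask |= 1 << v
    idx = {v: i for i, v in enumerate(sorted(nodes))}
    rows = [0] * len(nodes)
    for u, v in edges:                     # bit j of rows[i] set  <=>  edge L[i] -> L[j]
        rows[idx[u]] |= 1 << idx[v]
    return (node_mask, tuple(rows))        # hashable, so usable as a dict key

cache = {}
key = subgraph_key({0, 2, 5}, [(0, 2), (2, 5)])
if key not in cache:
    cache[key] = expensive_computation(key)   # hypothetical function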
37,721,263 | 2016-06-09T08:56:00.000 | 0 | 0 | 1 | 0 | python,ipython,spyder | 61,645,817 | 2 | false | 0 | 0 | In Spyder 4, select the lines and then press Tab or Ctrl+] to indent, and Shift+Tab or Ctrl+[ to un-indent. | 2 | 16 | 0 | Is there any shortcut key in Spyder python IDE to indent the code block?
For example, like Ctrl+[ in MATLAB, I want to indent the code block all at once. | how to indent the code block in Python IDE: Spyder? | 0 | 0 | 0 | 69,522
37,721,263 | 2016-06-09T08:56:00.000 | 40 | 0 | 1 | 0 | python,ipython,spyder | 37,721,501 | 2 | true | 0 | 0 | Select your code and press Tab for indent and Shift+Tab to un-indent.
or go to Edit -> Indent/Unindent
The Edit menu also contains some other tools for editing your code. | 2 | 16 | 0 | Is there any shortcut key in Spyder python IDE to indent the code block?
For example, like Ctrl+[ in MATLAB, I want to indent the code block all at once. | how to indent the code block in Python IDE: Spyder? | 1.2 | 0 | 0 | 69,522
37,721,461 | 2016-06-09T09:06:00.000 | 0 | 0 | 1 | 0 | visual-studio,python-2.7,python-3.x,ptvs | 37,955,825 | 4 | false | 0 | 0 | I'm also having similar issues. First, my installation path:
Visual Studio 2015 Pro with Update 1
Installed PTVS using the VS2015 installation setup later on
Everything worked fine
The issues started:
Installed a DEV version of PTVS from their GitHub page
My pyproj stopped loading, saying a migration was needed
Noticed that after the new PTVS installation, I had installed VS2015 Update 2
Not being able to reload my project after trying to debug the issue, I decided to:
Uninstall PTVS and
Reinstall PTVS through VS2015 setup
Now the issue was different: while trying to load my previous pyproj, or even when creating new Python projects using multiple templates, I was getting this error:
"There is a missing project subtype. Subtype: '{1b580a1a-fdb3-4b32-83e1-6407eb2722e6}' is unsupported by this installation."
Not finding anything about this, I:
Uninstalled Visual Studio 2015 (having Update 2)
Reinstalled Visual Studio 2015 with Update 1 (without selecting PTVS, which requires VS 2015 Update 2 to be installed as well; I suspected it had something to do with the problem)
Installed the latest stable PTVS version from their GitHub
Now Visual Studio is crashing while trying to load the previously mentioned pyproj, with the same error as the OP:
SetSite failed for package [Python Tools Package][Expected 1 export(s) with contract name "Microsoft.PythonTools.Interpreter.IInterpreterOptionsService" but found 0 after applying applicable constraints.]
Still trying to fix it at the moment.
Maybe these steps will help in debugging the issue.
Update / Fixed
After installing VS 2015 with Update 1 and PTVS 2.2 for VS 2015, I was still having issues opening the pyproj, causing VS to just crash (unfortunately nothing in ActivityLog.xml).
I've tried repairing Visual Studio through its setup; still the same issue.
Finally, I decided to re-update Visual Studio 2015 to Update 2, which also updated PTVS to the March release, all through the VS setup utility.
And now my pyproj opens correctly. Probably some version mismatch during the initial steps where I installed a DEV version of PTVS. I'm not sure which step actually corrected my issue, but one did.
Hope this will somehow help other people with similar issues. | 3 | 1 | 0 | I have installed Win10, Visual Studio 2015, Python 2.7, Python 3.5 and PTVS 2.2.3.
Unfortunately, PTVS does not work at all. I cannot load any Python projects that were loading previously in Visual Studio. It worked before I installed Python 3.5. I tried to uninstall Python 2.7 and got an error saying that the uninstall didn't succeed. After several tries, the problem appears to be around pip, which is somehow blocking both the install and uninstall of Python 2.7.
When trying to open Python Tools from the Tools menu, nothing happens. No window opens and no error message is displayed. The Python Environments window does not open even with the shortcut.
In Tools > Options > Python Tools, the only text shown is: "An error occurred loading this property page".
When I try to load/reload the Python project, the message is: "error : Expected 1 export(s) with contract name "Microsoft.PythonTools.Interpreter.IInterpreterOptionsService" but found 0 after applying applicable constraints." This was already posted 11 days ago, but no one has answered.
To solve this, I would like to know how to make the Python Environments window appear in Visual Studio.
Thanks for any help. | Visual Studio Python Environments window does not display | 0 | 0 | 0 | 4,337 |
37,721,461 | 2016-06-09T09:06:00.000 | 0 | 0 | 1 | 0 | visual-studio,python-2.7,python-3.x,ptvs | 38,182,070 | 4 | false | 0 | 0 | Thanks for your posts.
My problem was fixed after I installed VS 2015 Update 3, which included a new release of PTVS (June 2.2.40623). | 3 | 1 | 0 | I have installed Win10, Visual Studio 2015, Python 2.7, Python 3.5 and PTVS 2.2.3.
Unfortunately, PTVS does not work at all. I cannot load any Python projects that were loading previously in Visual Studio. It worked before I installed Python 3.5. I tried to uninstall Python 2.7 and got an error saying that the uninstall didn't succeed. After several tries, the problem appears to be around pip, which is somehow blocking both the install and uninstall of Python 2.7.
When trying to open Python Tools from the Tools menu, nothing happens. No window opens and no error message is displayed. The Python Environments window does not open even with the shortcut.
In Tools > Options > Python Tools, the only text shown is: "An error occurred loading this property page".
When I try to load/reload the Python project, the message is: "error : Expected 1 export(s) with contract name "Microsoft.PythonTools.Interpreter.IInterpreterOptionsService" but found 0 after applying applicable constraints." This was already posted 11 days ago, but no one has answered.
To solve this, I would like to know how to make the Python Environments window appear in Visual Studio.
Thanks for any help. | Visual Studio Python Environments window does not display | 0 | 0 | 0 | 4,337 |
37,721,461 | 2016-06-09T09:06:00.000 | 0 | 0 | 1 | 0 | visual-studio,python-2.7,python-3.x,ptvs | 37,890,917 | 4 | false | 0 | 0 | You'll need to open the ActivityLog.xml (%APPDATA%\Microsoft\VisualStudio\14.0\ActivityLog.xml) and see if there are any exceptions there related to PTVS.
It sounds like you have a pretty messed up configuration at this point. You could try uninstalling PTVS and re-installing it, but my guess is your messed up Python installs are somehow throwing PTVS off and causing it to crash somewhere. | 3 | 1 | 0 | I have installed Win10, Visual Studio 2015, Python 2.7, Python 3.5 and PTVS 2.2.3.
Unfortunately, PTVS does not work at all. I cannot load any Python projects that were loading previously in Visual Studio. It worked before I installed Python 3.5. I tried to uninstall Python 2.7 and got an error saying that the uninstall didn't succeed. After several tries, the problem appears to be around pip, which is somehow blocking both the install and uninstall of Python 2.7.
When trying to open Python Tools from the Tools menu, nothing happens. No window opens and no error message is displayed. The Python Environments window does not open even with the shortcut.
In Tools > Options > Python Tools, the only text shown is: "An error occurred loading this property page".
When I try to load/reload the Python project, the message is: "error : Expected 1 export(s) with contract name "Microsoft.PythonTools.Interpreter.IInterpreterOptionsService" but found 0 after applying applicable constraints." This was already posted 11 days ago, but no one has answered.
To solve this, I would like to know how to make the Python Environments window appear in Visual Studio.
Thanks for any help. | Visual Studio Python Environments window does not display | 0 | 0 | 0 | 4,337 |
37,723,041 | 2016-06-09T10:12:00.000 | 1 | 0 | 0 | 1 | python | 37,723,343 | 1 | true | 0 | 0 | Short answer: No.
Slightly longer answer: It would perhaps not be impossible to write a Windows Samba driver which supports this, but you seem to be asking for an existing solution. | 1 | 0 | 0 | I have a python script which runs on a Windows machine. On this machine I have mounted a Samba filesystem (on a Linux host).
When I now try to change file permissions on the filesystem with os.chmod(S_IXUSR), it doesn't set the executable permission; from what my research suggests, Windows is hard-coded to do nothing here.
Do I have any chance of changing Unix file permissions from a Windows host using Python? | Set executable permission from Windows host on Linux filesystem | 1.2 | 0 | 0 | 48
37,724,219 | 2016-06-09T11:06:00.000 | 0 | 0 | 1 | 0 | python,qgis | 40,808,113 | 1 | false | 0 | 0 | It's straightforward to do:
select your features in the "from" layer
copy them with Ctrl+C (or the toolbar icon)
enable editing mode in the "to" layer
paste them with Ctrl+V (or the toolbar icon) | 1 | 0 | 0 | I need to join two polygon files into a single one, without creating a new file.
In ArcGIS this is called the Append tool; however, it does not exist in QGIS. | Copy and paste features or append shapefile - qGis python | 0 | 0 | 0 | 1,097
37,724,590 | 2016-06-09T11:23:00.000 | 2 | 1 | 0 | 0 | python,security,obfuscation,api-key | 37,724,974 | 3 | false | 0 | 0 | I don't know the architecture of your code, but maybe you can use the following architecture:
A -> B -> C
A is your client code; A submits requests to B.
B is proxy code on your private server, with the api_key already hard-coded in B; B forwards A's requests, together with the api_key, to C.
C is the external service
Then the client code never holds the API key; the key lives on your private server. This is something like the proxy design pattern.
Of course, if you do not want something so complex, you can just use py2exe to package your Python code into an exe; that is another option, FYI. | 1 | 3 | 0 | I am writing a Python program which uses an API Key to access data from an external service. The program makes a call to this service with the key hard coded in my Python script. Is there a method of somewhat protecting this key (not something that is irreversible, but a method preventing people copying the key straight out of the script)? Should the program request the key from my server? Perhaps a C library? | Protecting an API key in Python | 0.132549 | 0 | 0 | 4,586
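A minimal sketch of the A -> B -> C proxy, assuming Flask and requests on the private server; the endpoint, service URL, and parameter name are illustrative:

# B: runs on the private server; clients call /proxy and never see API_KEY
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)
API_KEY = "secret"                        # kept server-side only
SERVICE = "https://api.example.com/data"  # hypothetical external service C

@app.route("/proxy")
def proxy():
    params = request.args.to_dict()        # pass the client's query through
    params["api_key"] = API_KEY            # attach the key on the server
    return jsonify(requests.get(SERVICE, params=params).json())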
37,724,590 | 2016-06-09T11:23:00.000 | 2 | 1 | 0 | 0 | python,security,obfuscation,api-key | 37,724,867 | 3 | true | 0 | 0 | If you want to protect yourself from "hackers", it is impossible, since if the Python script has access to your API, then this same script can be modified to do nasty things with the access it possesses. You will have to find another solution there.
If you want to protect yourself from "shoulder surfers" (people who look at your monitor while they pass by), then base64.b64encode("key") and base64.b64decode("a2V5") should be enough. | 2 | 3 | 0 | I am writing a Python program which uses an API Key to access data from an external service. The program makes a call to this service with the key hard coded in my Python script. Is there a method of somewhat protecting this key (not something that is irreversible, but a method preventing people copying the key straight out of the script)? Should the program request the key from my server? Perhaps a C library? | Protecting an API key in Python | 1.2 | 0 | 0 | 4,586
37,724,590 | 2016-06-09T11:23:00.000 | 2 | 1 | 0 | 0 | python,security,obfuscation,api-key | 37,724,762 | 3 | false | 0 | 0 | Should the program request the key from my server?
Even then, a highly motivated (or skilled, or both...) user will be able to get the key by using sniffing tools such as Wireshark (if you aren't using HTTPS), or even by modifying your script, simply adding a print somewhere. | 3 | 3 | 0 | I am writing a Python program which uses an API Key to access data from an external service. The program makes a call to this service with the key hard coded in my Python script. Is there a method of somewhat protecting this key (not something that is irreversible, but a method preventing people copying the key straight out of the script)? Should the program request the key from my server? Perhaps a C library? | Protecting an API key in Python | 0.132549 | 0 | 0 | 4,586
37,727,899 | 2016-06-09T13:51:00.000 | 0 | 0 | 0 | 0 | python,csv,hive,apache-pig | 37,729,976 | 1 | false | 0 | 0 | It really depends on what you want to achieve and what hardware you use.
If you need to process this file fast and you actually have a real Hadoop cluster (bigger than 1 or 2 nodes), then probably the best way would be to write a Pig script or even a simple Hadoop MapReduce job to process this file. With this approach you would get the output file on HDFS, so it will be easily accessible via Hive.
On the other hand, if you have a single computer or some "toy" Hadoop cluster, processing that file using Hadoop will take much longer than simply executing a Python script on it, because Hadoop processing has quite a big overhead for data serialization and for sending data over the network. Of course, in that case you will have to deal on your own with the fact that the input and output files may not fit into your RAM. | 1 | 1 | 1 | I am a newbie in big data, and I have an assignment in which I was given a CSV file where a date field is one of the fields. The file size is only 10GB, but I need to create a much larger file, 2TB in size, for big data practice purposes, by duplicating the file's content while increasing the date so the duplicated records differ from the originals. The new 2TB file should then be accessible via Hive. I need help on the best way to implement this. Is it best to use Pig in Hadoop, or Python? | creating new CSV by duplicating and modifying existing records multiple times from the source CSV | 0 | 0 | 0 | 59
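A minimal streaming sketch of the Python route (Python 3; the date column name and format are assumptions); it re-reads the source per copy, so memory use stays constant:

import csv
from datetime import datetime, timedelta

DATE_COL, FMT = "date", "%Y-%m-%d"              # assumed column name and format
with open("big.csv", "w", newline="") as dst:
    writer = None
    for copy in range(200):                     # ~200 x 10 GB approaches 2 TB
        with open("input.csv", newline="") as src:
            reader = csv.DictReader(src)
            if writer is None:
                writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
                writer.writeheader()
            for row in reader:                  # streams row by row
                d = datetime.strptime(row[DATE_COL], FMT) + timedelta(days=copy + 1)
                writer.writerow(dict(row, **{DATE_COL: d.strftime(FMT)}))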
37,728,236 | 2016-06-09T14:04:00.000 | 1 | 0 | 1 | 0 | python,file,numpy | 37,728,802 | 1 | true | 0 | 0 | Try closing out Python, close any instances of Python in your task manager, reopen and try again. Sometimes there are phantom instances... Otherwise, use pandas' to_csv after first converting to a DataFrame, i.e. import pandas as pd; dataset = pd.DataFrame(my_data); dataset.to_csv(filename). This is all from memory, not in front of a PC. | 1 | 0 | 0 | When I try to write to a text file in Python, using file.write and numpy.savetxt, Python stops writing in the middle of the text.
I tried closing the file (file.close) and reopening it during run time; it didn't help. I'm pretty clueless. | Python - Writing to a file stops in the middle | 1.2 | 0 | 0 | 388
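A minimal sketch of the pandas route the answer suggests, plus a with block, which guarantees the file is flushed and closed even on errors (my_data, text, and the filenames are illustrative):

import pandas as pd

pd.DataFrame(my_data).to_csv("out.csv", index=False)   # pandas handles flushing/closing

# plain-file alternative:
with open("out.txt", "w") as f:
    f.write(text)                                      # closed (and flushed) on block exit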
37,729,406 | 2016-06-09T14:53:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 37,730,326 | 5 | false | 0 | 0 | res=['DEFAULT SECURITY', 'YES', 'ACCT INQ', '3', '', '00', 'STOP/HOLD ADD', '5', '', '00', 'TOWER INQ', 'T', '', '00', 'ACCT FIELD MNT', '2', '', '00', 'COMB STMT MAINT', 'C', '', '00', 'MONETARY IM80', 'W', '', '00', 'MONETARY-IM201', 'D', '', '00', 'OCF INQ', 'G', '', '00', 'ACCESS ALL FUNC', 'NO', 'RATE INQ', 'K', '', '00', 'NAME/ADDR CHG', '4', '', '00', 'MEMO POST', 'Z', '', '00', 'FLOOR LIMITS', '0']
res=[x for x in res if x not in ('00','')]
print res
['DEFAULT SECURITY', 'YES', 'ACCT INQ', '3', 'STOP/HOLD ADD', '5', 'TOWER INQ', 'T', 'ACCT FIELD MNT', '2', 'COMB STMT MAINT', 'C', 'MONETARY IM80', 'W', 'MONETARY-IM201', 'D', 'OCF INQ', 'G', 'ACCESS ALL FUNC', 'NO', 'RATE INQ', 'K', 'NAME/ADDR CHG', '4', 'MEMO POST', 'Z', 'FLOOR LIMITS', '0'] | 1 | 0 | 0 | Here is the list
['DEFAULT SECURITY', 'YES', 'ACCT INQ', '3', '', '00', 'STOP/HOLD ADD', '5', '', '00', 'TOWER INQ', 'T', '', '00', 'ACCT FIELD MNT', '2', '', '00', 'COMB STMT MAINT', 'C', '', '00', 'MONETARY IM80', 'W', '', '00', 'MONETARY-IM201', 'D', '', '00', 'OCF INQ', 'G', '', '00', 'ACCESS ALL FUNC', 'NO', 'RATE INQ', 'K', '', '00', 'NAME/ADDR CHG', '4', '', '00', 'MEMO POST', 'Z', '', '00', 'FLOOR LIMITS', '0']
I would like to remove '' and '00' from the list
result should be like this
['DEFAULT SECURITY', 'YES', 'ACCT INQ', '3', 'STOP/HOLD ADD', '5', 'TOWER INQ', 'T', 'ACCT FIELD MNT', '2', 'COMB STMT MAINT', 'C', 'MONETARY IM80', 'W', 'MONETARY-IM201', 'D', 'OCF INQ', 'G', 'ACCESS ALL FUNC', 'NO', 'RATE INQ', 'K', 'NAME/ADDR CHG', '4', 'MEMO POST', 'Z', 'FLOOR LIMITS', '0']
I tried this
apa= [aa for aa in apa if aa != "''" or aa != "00"]
I am getting the same result. | remove specific values from the python list | 0 | 0 | 0 | 135
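For reference, the attempt in the question fails for two reasons: "''" is a two-character string made of quote characters, not an empty string, and aa != "''" or aa != "00" is always True, since every value differs from at least one of the two. A corrected version, equivalent to the answer's not in test:

apa = [aa for aa in apa if aa != '' and aa != '00']   # note: and, plus a real empty string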