| Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
36,376,113 | 2016-04-02T16:54:00.000 | 0 | 0 | 0 | 0 | dronekit-python,dronekit | 36,377,649 | 1 | false | 0 | 0 | Make sure that your parameter SR0_POSITION is higher than 0. | 1 | 0 | 0 | I'm using python-droneKit along with dronekit-sitl / rover-2.50, and when I try to acquire the vehicle.location.local_frame NED coordinates (after the vehicle has been armed) I only get None values.
I'd appreciate if you could help me in this matter, thanks. | local_frame attribute returns none after vehicle is armed (ardurover-sitl) | 0 | 0 | 0 | 90 |
36,376,317 | 2016-04-02T17:09:00.000 | 0 | 1 | 1 | 0 | python,python-2.7,module,importerror,python-module | 36,377,248 | 1 | false | 0 | 0 | It means that pywin32 (the package that provides pywintypes) is not installed properly; try reinstalling it. | 1 | 0 | 0 | After installing the module "win32" and then importing pythoncom, I got the error listed above. Any idea why this is happening?
I got this message after install : close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
The installation directory: **C:\Python27\Lib\site-packages** | "ImportError: No module named pywintypes" | 0 | 0 | 0 | 3,211 |
36,377,506 | 2016-04-02T18:57:00.000 | -1 | 0 | 0 | 1 | python,windows,devcon | 43,858,051 | 1 | false | 0 | 0 | I know this is very late. But your problem can be solved by the subprocess module. Let me know if you are unable to find what you need, and I will then post the code. Thanks | 1 | 2 | 0 | I am looking for solution, I want to disable/enable the particular device in windows system using devcon.exe in python script.
I am able to disable/enable it using devcon.exe from the Windows cmd.exe separately, but I am looking to do this from a Python script so I can verify 10 iterations.
I need to automate one test case which has to verify disabling/enabling a particular device in Windows using devcon.exe for 10 continuous iterations and record a log. | How to disable/enable device using devcon.exe in python for some iteration | -0.197375 | 0 | 0 | 992
36,380,183 | 2016-04-03T00:16:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn,pca | 61,442,480 | 2 | false | 0 | 0 | This previous answer is mostly correct except about the loadings. components_ is in fact the loadings, as the question asker originally stated. The result of the fit_transform function will give you the principal components (the transformed/reduced matrix). | 2 | 8 | 1 | Sklearn PCA is pca.components_ the loadings? I am pretty sure it is, but I am trying to follow along a research paper and I am getting different results from their loadings. I can't find it within the sklearn documentation. | Sklearn PCA is pca.components_ the loadings? | 0 | 0 | 0 | 9,712 |
36,380,183 | 2016-04-03T00:16:00.000 | 13 | 0 | 0 | 0 | python,scikit-learn,pca | 36,386,315 | 2 | true | 0 | 0 | pca.components_ is the orthogonal basis of the space your projecting the data into. It has shape (n_components, n_features). If you want to keep the only the first 3 components (for instance to do a 3D scatter plot) of a datasets with 100 samples and 50 dimensions (also named features), pca.components_ will have shape (3, 50).
I think what you call the "loadings" is the result of the projection for each sample into the vector space spanned by the components. Those can be obtained by calling pca.transform(X_train) after calling pca.fit(X_train). The result will have shape (n_samples, n_components), that is (100, 3) for our previous example. | 2 | 8 | 1 | Sklearn PCA is pca.components_ the loadings? I am pretty sure it is, but I am trying to follow along a research paper and I am getting different results from their loadings. I can't find it within the sklearn documentation. | Sklearn PCA is pca.components_ the loadings? | 1.2 | 0 | 0 | 9,712 |
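A quick sketch (synthetic data; scikit-learn assumed available) confirms the shapes described in the accepted answer above:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))   # 100 samples, 50 features

pca = PCA(n_components=3)
scores = pca.fit_transform(X)    # projections of each sample onto the components

print(pca.components_.shape)     # (3, 50): the orthogonal basis vectors
print(scores.shape)              # (100, 3): one 3-D "loading" per sample
```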
36,380,202 | 2016-04-03T00:18:00.000 | 0 | 0 | 0 | 0 | python,google-drive-api | 36,380,299 | 2 | false | 0 | 0 | Hash the file names and store the last file name in a database;
Rename the files using a timestamp | 1 | 0 | 0 | So I have a script which uploads a file to a specific folder. I want to get the URL of the most recently uploaded item in that folder. How would I accomplish this in as simple a manner as possible?
For example, say I have a folder called "Photos", and I want to retrieve the latest item that was uploaded to that folder and display it somewhere. How can I get that URL? You may assume "Photos" is a shared folder. | Get the URL of the last item added to a folder | 0 | 0 | 1 | 191 |
36,380,696 | 2016-04-03T01:36:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,neural-network | 36,381,235 | 1 | true | 0 | 0 | Assuming the NN is trained using mini-batches, it is possible to simulate (instead of generate) an evenly distributed training data by making sure each mini-batch is evenly distributed.
For example, assuming a 3-class classification problem and a minibatch size=30, construct each mini-batch by randomly selecting 10 samples per class (with repetition, if necessary). | 1 | 1 | 1 | I need to train my network on a data that has a normal distribution, I've noticed that my neural net has a very high tendency to only predict the most occurring class label in a csv file I exported (comparing its prediction with the actual label).
What are some suggestions (except cleaning the data to produce an evenly distributed training data), that would help my neural net to not go and only predict the most occurring label?
UPDATE: Just wanted to mention that, indeed the suggestions made in the comment sections worked. I, however, found out that adding an extra layer to my NN, mitigated the problem. | How to cancel the huge negative effect of my training data distribution on subsequent neural network classification function? | 1.2 | 0 | 0 | 82 |
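The per-class mini-batch construction suggested in the accepted answer can be sketched like this (names and data are illustrative):

```python
import random

def balanced_minibatch(samples_by_class, per_class=10, seed=None):
    """Build a mini-batch with an equal number of samples per class,
    sampling with replacement so rare classes are not a problem."""
    rng = random.Random(seed)
    batch = []
    for cls, samples in samples_by_class.items():
        batch.extend((cls, rng.choice(samples)) for _ in range(per_class))
    rng.shuffle(batch)
    return batch

# heavily imbalanced toy data: 100 / 5 / 50 samples per class
data = {0: list(range(100)), 1: list(range(5)), 2: list(range(50))}
batch = balanced_minibatch(data, per_class=10, seed=1)
print(len(batch))  # 30 — exactly 10 samples from each of the 3 classes
```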
36,381,523 | 2016-04-03T04:10:00.000 | 0 | 0 | 1 | 0 | python,string,list,python-3.x,swap | 36,381,647 | 3 | false | 0 | 0 | I think this should go to the comment section, but I can't comment because of lack of reputation, so...
You'll probably want to stick with list index swapping, rather than using .pop() and .append(). .pop() can remove elements from arbitrary index, but only one at once, and .append() can only add to the end of the list. So they're quite limited, and it would complicate your code to use them in this kind of problems.
So, well, better stick with swapping with index. | 1 | 0 | 0 | Okay, I'm really new to Python and have no idea how to do this:
I need to take a string, say 'ABAB__AB', convert it to a list, and then take the leading index of the pair I want to move and swap that pair with the __. I think the output should look something like this:
move_chars('ABAB__AB', 0)
'__ABABAB'
and another example:
move_chars('__ABABAB', 3)
'BAA__BAB'
Honestly have no idea how to do it. | Swapping pairs of characters in a string | 0 | 0 | 0 | 7,193 |
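One possible sketch of the move_chars function from the question, assuming the gap is always the two-character '__' and the swapped pairs never overlap:

```python
def move_chars(s, i):
    """Swap the pair of characters starting at index i with the '__' pair."""
    chars = list(s)
    j = s.index('__')  # leading index of the gap
    chars[j], chars[j + 1], chars[i], chars[i + 1] = \
        chars[i], chars[i + 1], chars[j], chars[j + 1]
    return ''.join(chars)

print(move_chars('ABAB__AB', 0))  # __ABABAB
print(move_chars('__ABABAB', 3))  # BAA__BAB
```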
36,382,361 | 2016-04-03T06:38:00.000 | 1 | 0 | 1 | 0 | python,macos,list,indexing | 36,382,642 | 1 | true | 0 | 0 | Could you provide an example of the code that ran successfully on their machine, but not yours? The behavior of list.index shouldn't be os dependent. It should throw a ValueError if the item is not in the list (at least in Python 3.4). | 1 | 0 | 0 | I am studying someone else's code that was written in Python on a Mac. I am using a PC. They use list.index in cases where exceptions will be generated (i.e: the items are missing), but they don't have exception handling. Therefore, I am trying to work out why their code ran successfully.
Does list.index generate exceptions on a Mac when items are missing, or is its behaviour different? Thanks.
I had errors upstream, my fault! Thanks guys. | Python list.index behaviour on OSX? | 1.2 | 0 | 0 | 78 |
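For reference, list.index raises ValueError on every platform when the item is missing — the behaviour is not OS-dependent:

```python
items = ["a", "b", "c"]
print(items.index("b"))   # 1

try:
    items.index("z")      # not in the list
except ValueError as exc:
    print(exc)            # 'z' is not in list
```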
36,385,070 | 2016-04-03T11:54:00.000 | 1 | 1 | 0 | 0 | python,pdf | 36,385,200 | 1 | false | 0 | 0 | I would suggest studying many PDFs and writing down each PDF's label text sizes. Then you can average the top 5 highest fonts and the top 5 lowest fonts. Now you can build a range between them and check whether a piece of text falls in that size range.
This method will not always work, but it will cover the majority of PDFs.
(The more PDFs you study, the better.) | 1 | 0 | 0 | I'm trying to extract the text from a PDF by subjects.
In order to do so I'm trying to identify the labels/headlines in the PDF.
So far I have converted the PDF into an XML file in order to get at the text data more easily, and then used the font/size of each line to decide whether it is a label or not.
The main problem with this approach is that each PDF can have its own layout, and what works for one PDF will not necessarily work for another.
I would be glad if someone has an idea how to overcome this problem so that it becomes possible to extract the labels (text by subjects) without depending on the specific PDF (most of the PDFs I work with are articles/books).
different ways to extract text by subjects also welcome.
(As the tag indicates, I'm trying to do this in Python)
Edit:
At the moment I'm doing two things:
check the font of each line
check each line's text size
I concluded that regular text will have the most lines with its font (there are more than 10x as many lines with this font as with all other fonts), and that if you look at the median text size, it will be the size of the regular text.
From the first I can remove all regular text, and from the second I can take all texts that are bigger, so all the labels will be in this list.
The problem now is to extract only the labels from this list, since usually there is text that is bigger than the regular text yet isn't a label.
I tried to use the number of times each font appears in the text to identify the label fonts, but without much success. For each PDF the amount can vary.
I'm looking for ideas how to solve this problem, or if someone know a tools that can do it more easily. | Extracting PDF text by subjects | 0.197375 | 0 | 0 | 106 |
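The median-size/most-common-font heuristic described in the question's edit can be sketched on synthetic (font, size, text) tuples; the XML parsing is assumed to have happened already:

```python
import statistics
from collections import Counter

# (font, size, text) tuples, e.g. as extracted from the converted XML
lines = [
    ("Times", 10.0, "body text ..."),
    ("Times", 10.0, "more body text ..."),
    ("Times", 10.0, "even more body ..."),
    ("Times-Bold", 14.0, "1. Introduction"),
    ("Times", 10.0, "body ..."),
    ("Times-Bold", 12.0, "a large inline quote"),
]

body_font = Counter(f for f, _, _ in lines).most_common(1)[0][0]
body_size = statistics.median(s for _, s, _ in lines)

# candidate labels: bigger than the regular text and not in the body font
candidates = [t for f, s, t in lines if s > body_size and f != body_font]
print(candidates)  # includes "a large inline quote", the false positive the question describes
```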
36,386,528 | 2016-04-03T14:16:00.000 | 1 | 0 | 0 | 1 | google-app-engine,google-app-engine-python | 36,388,402 | 1 | false | 1 | 0 | App Engine > Dashboard
This view shows how much you are charged so far during the current billing day, and how many hours you still have until the reset of the day. This is equivalent to what the old console was showing, except there is no "total" line under all charges.
App Engine > Quotas
This view shows how much of each daily quota have been used.
App Engine > Quotas > View Usage History
This view gives you a summary of costs for each of the past 90 days. Clicking on a day gives you a detailed break-down of all charges for that day. | 1 | 0 | 0 | In the old (non-Ajax) Google Appengine's Developer Console Dashboard - showed estimated cost for the last 'n' hours. This was useful to quickly tell how the App engine is doing vis-a-vis the daily budget.
This field seems to be missing in the new Appengine Developer Console. I have tried to search various tabs on the Console and looked for documentation, but without success.
Looking for any pointers as to how do I get to this information in the new Console and any help/pointers are highly appreciated ! | Estimated Cost field is missing in Appengine's new Developer Console | 0.197375 | 0 | 0 | 18 |
36,388,952 | 2016-04-03T17:51:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 36,389,003 | 1 | false | 0 | 0 | To my knowledge, there's no "cheap trick" you'll have to have all your class elements and compare their variable values with what you have . (Couldn't comment, sry) At least from what I understand you're trying to achieve, the question isn't very well constructed. | 1 | 0 | 0 | Can anyone tell me, whether there is a way to find which class a member variable belongs to, using the variable.
I am trying to create a decorator that will allow only member method and variables of certain classes be used as method parameter.
like
@acceptmemberofclass(class1,class2)
def method(memberfunc, membervar):
#do something
I have figured out how to do this with methods (using inspect.getmro(meth.im_class)) but I am unable to find a way for variables | Python how to find which class owns a variable | 0 | 0 | 0 | 29 |
36,390,762 | 2016-04-03T20:27:00.000 | 0 | 1 | 0 | 0 | python-2.7,accelerometer,raspberry-pi2 | 36,732,250 | 1 | true | 0 | 0 | After many many hours of research and fiddling I found out that nearly all electronic sensors have a bias. (bias is an offset)
Now even the accelerometers apparently have serious offsets. So what I ended up doing is building a small test stand that was leveled by four screws. By running an active while loop and outputting live data from the accelerometer, I was able to nearly zero the "other two" axes that were not pointing along gravity, all the while gathering data. Once level, I ran the program for several minutes, averaged the results and thus found the bias. I did this 4/5 times per axis and found that the remaining error was due to noise and not recoverable. I obviously had to do this for all 3 axes.
Additionally I found that after I zeroed the bias the readings were still too high. I am not sure if what I did here was correct, but it seemed the logical way to progress. Once all 3 axes were calibrated, they gave me different gravity readings respectively. All I did then was to add a correction factor to get the gravity reading I expect to get at my altitude. These correction factors were very small (e.g. 0.966.....) but in my opinion still significant.
Hope this helps anyone who was lost as I was | 1 | 0 | 0 | I am coding a MPU-6050 accelerometer/gyro to provide me with the accelerations and rotational velocities. Now the code works as far as to provide me with all accelerations and angular velocities.
But the results I am getting are marginally weird. If I point the accelerometer so that the positive z-axis points up, I get marginally lower than expected readings for my altitude (it should be around 9.7 but I get around 8.9). Now if I turn the accelerometer so that the positive z-axis points down, I get larger than expected readings (I get over 10.1). The same goes for all the other axes if I point them along gravity.
The low readings didn't alarm me at first because I thought the accelerometer wasn't placed perfectly straight. But the higher than expected readings are definitely alarming.
This means that the accelerometer's neutral point seems somehow wrong (it under-reads on one side and over-reads on the other). Do I need to calibrate the accelerometer? This seems nearly impossible since one will never get the accelerometer perfectly straight.
Please advise. Do you want to see my code? | MPU-6050 impossible readings Raspberry | 1.2 | 0 | 0 | 197 |
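The bias-then-scale calibration described in the accepted answer can be sketched with made-up static captures (all numbers here are synthetic, not real MPU-6050 data):

```python
import statistics

LOCAL_G = 9.81  # expected gravity at the test site (assumption)

def calibrate(level_readings, gravity_readings):
    """Return (bias, scale) for one axis from two static captures:
    one with the axis level (should read 0) and one along gravity."""
    bias = statistics.mean(level_readings)
    raw_g = statistics.mean(gravity_readings) - bias
    scale = LOCAL_G / raw_g
    return bias, scale

def corrected(raw, bias, scale):
    return (raw - bias) * scale

# synthetic noisy captures
level = [0.21, 0.19, 0.20, 0.22, 0.18]    # axis level: pure bias + noise
along_g = [9.11, 9.09, 9.10, 9.12, 9.08]  # axis along gravity: under-reads

bias, scale = calibrate(level, along_g)
print(round(corrected(9.10, bias, scale), 2))  # 9.81
```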
36,391,651 | 2016-04-03T21:50:00.000 | 4 | 1 | 0 | 1 | python,linux,alsa | 36,391,683 | 2 | true | 0 | 0 | The pid attribute of the subprocess.Popen object contains its PID, but if you need to terminate the subprocess then you should just use the terminate() method.
You should consider using pyao or pygst/gst-python instead though, if you need finer control over audio. | 1 | 2 | 0 | I'm working with Iot on the Intel Galileo with a Yocto image, I have it that a python script will execute 'aplay audio.wav', But I want it to also get the PID of that aplay proccess in case the program will have to stop it. Sorry for being very short and brief. | Start a process with python and get the PID (Linux) | 1.2 | 0 | 0 | 451 |
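A minimal sketch of the accepted answer (sleep stands in for aplay audio.wav so it runs anywhere):

```python
import subprocess

proc = subprocess.Popen(["sleep", "60"])  # replace with ["aplay", "audio.wav"]
print(proc.pid)         # the PID, in case you want to record it

# later, when playback must stop:
proc.terminate()        # sends SIGTERM
proc.wait()             # reap the child; avoids leaving a zombie process
print(proc.returncode)  # negative signal number on POSIX: -15
```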
36,392,021 | 2016-04-03T22:29:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 36,392,636 | 1 | true | 0 | 0 | You should either...
Have all caching to happen at runtime: This beats your purpose. However, it is the only way not to touch the filesystem at all.
Dedicate a special folder like __memo__ or something: This will allow you to have caching across several executions. However, you'll "pollute" the application's file structure.
Reasons you should never mess up with __pycache__:
__pycache__ is CPython-specific. Although CPython is most likely the implementation you're using right now, it's not portable, safe, or good practice to assume that everyone else uses it.
__pycache__ is intended as a transparent optimization, that is, no one should care if it exists at all or not. This is a design decision from the CPython folks, and so you should not circumvent it.
Due to the above, the aforementioned directory may be disabled if the user wants to. For instance, in CPython, if you do python3 -B myapp.py, no __pycache__ will be created if it doesn't exist, and otherwise will be ignored.
Users often delete __pycache__ because of the above two points for several reasons. I do, at least.
Messing things inside __pycache__ because "it doesn't pollute the application's file structure" is an illusion. The directory can perfectly be considered as "pollution". If the Python interpreter already pollutes stuff, why can't you with __memo__ anyway? | 1 | 2 | 0 | I am writing a memoizing decorator for Python 3. The plan is to pickle the cache in a temporary file to save time across multiple executions.
The cache will be deleted if the file containing the decorated function has been changed, just like the .pyc file in __pycache__. So then I started thinking about uncluttering the user's project directory by placing the pickled cache in __pycache__. Are there side effects of storing files in this directory? Is it misleading or confusing? | Should I store temporary files in the __pycache__ folder? | 1.2 | 0 | 0 | 1,370 |
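A minimal sketch of the memoizing decorator discussed in the question, caching in the system temp directory instead of __pycache__ (the file-naming scheme is illustrative, and the source-change invalidation is left out):

```python
import functools, os, pickle, tempfile

def memoize_to_disk(func):
    """Persist a per-function cache of (args -> result) as a pickle file.
    Arguments must be hashable and picklable."""
    path = os.path.join(tempfile.gettempdir(),
                        f"memo_{func.__module__}_{func.__qualname__}.pickle")
    try:
        with open(path, "rb") as f:
            cache = pickle.load(f)          # warm start from a previous run
    except (OSError, pickle.PickleError):
        cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
            with open(path, "wb") as f:     # re-pickle after every miss
                pickle.dump(cache, f)
        return cache[args]
    return wrapper

@memoize_to_disk
def slow_square(x):
    return x * x

print(slow_square(4))  # 16, now cached across executions
```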
36,394,150 | 2016-04-04T03:31:00.000 | 1 | 1 | 1 | 0 | python,c++,module | 36,400,365 | 1 | true | 0 | 0 | The concept in c++ is more complicated than it is with python,
from what I remember of python, a module will work without having to take care of the architecture the module was developed.
In C++ (as in C) you have the build process (compile, link) which is important to know when developping with these languages.
In C/C++ you have libraries and header files. To make it simple the header shows the interface of the library (which contains the real compiled code).
The thing here is that since libraries are compiled, you will need a different version depending on the architecture and the compiler you are using. A MinGW-built library won't be compatible with the MSVC compiler.
Namespaces can be thought as modules but not in the same way as we call python modules. In C++ namespaces just allow you to "concat" a prefix to what there is in the namespace to avoid names collision (rough example here, the real mechanism behind isn't just concat) and to order the code logically. You cannot just include a namespace as you import a module in python.
I advise you to look at a tutorial on how the C/C++ build process works; that will explain in detail what headers are, what libraries are and how to use them ;) | 1 | 0 | 0 | Coming from programming in python, I am familiar with modules. What is the equivalent in c++? | Python has modules, what does c++ have? | 1.2 | 0 | 0 | 534
36,394,194 | 2016-04-04T03:38:00.000 | 1 | 0 | 0 | 0 | python,pandas | 36,395,078 | 2 | false | 0 | 0 | Use df.iloc[1] to select the second row of the dataframe (it uses zero based indexing). To select the second column, use df.iloc[:, 1] (the : is slice notation to select all rows). | 1 | 2 | 1 | While looking at indexing in pandas, I had some questions which should be simple enough. If df is a sufficiently long DataFrame, then df[1:2] gives the second row, however, df[1] gives an error and df[[1]] gives the second column. Why is that? | Pandas indexing confusion | 0.099668 | 0 | 0 | 432 |
36,408,297 | 2016-04-04T16:43:00.000 | 0 | 0 | 0 | 1 | python-3.x,file-io,path | 36,409,314 | 2 | false | 0 | 0 | Actually, unless you are using the new pathlib, the thing returned in both cases is just a str.
Also, NT accepts / as a path delimiter, and to POSIX \ is just another character.
So -- no, you can't tell, at least not without trying to use the path; and that will only tell you if something is wrong, not if something can work. | 2 | 1 | 0 | Macs return a Posixpath when the user enters a path. Windows returns a WindowsPath object when the user does the same thing. Is there a way for me to check whether the input is valid depending on the machine? | How to check whether someone enters a Path directory on Windows or Mac? | 0 | 0 | 0 | 32 |
36,408,297 | 2016-04-04T16:43:00.000 | 1 | 0 | 0 | 1 | python-3.x,file-io,path | 36,408,579 | 2 | true | 0 | 0 | os.path.sep gives you the path separator for the platform, \\ for windows and / for unix.
But the thing is if you need this to implement a if else, then don't do that way. The os.path functions are aware of platform specific behavior and they will take care of it. | 2 | 1 | 0 | Macs return a Posixpath when the user enters a path. Windows returns a WindowsPath object when the user does the same thing. Is there a way for me to check whether the input is valid depending on the machine? | How to check whether someone enters a Path directory on Windows or Mac? | 1.2 | 0 | 0 | 32 |
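A small illustration of the platform-aware os.path helpers mentioned in the answer above:

```python
import os

# os.path is platform-aware: os.sep is "\\" on Windows and "/" on POSIX
p = os.path.join("photos", "2016", "april.gif")
print(os.sep in p)          # True — joined with the platform's separator
print(os.path.split(p)[1])  # april.gif, on either platform
```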
36,410,379 | 2016-04-04T18:41:00.000 | 1 | 0 | 1 | 0 | python,gif | 36,410,736 | 3 | false | 0 | 0 | Try:
image = Image.open(Point(x,y), "filename.gif")
Looks like you may be missing open in Image.open | 2 | 0 | 0 | I need to display a .gif file in my source code. The only thing I've been able to find instruction wise is to use:
image = Image(Point(x,y), "filename.gif")
I keep getting an error that says:
couldn't open "filename.gif": no such file or directory
I'm not sure what to do?
I think maybe I have the files I'm trying to access in a place that python can't access.
I'm using graphics, and I did import it, and this is what the graphics module reference says to do also, so I don't think I'm putting it in wrong. | Image use in Python | 0.066568 | 0 | 0 | 406 |
36,410,379 | 2016-04-04T18:41:00.000 | 0 | 0 | 1 | 0 | python,gif | 36,411,522 | 3 | false | 0 | 0 | Make sure its in the same folder as your python script. If that doesn't work, then set up a path like user Adam Smith did. | 2 | 0 | 0 | I need to display a .gif file in my source code. The only thing I've been able to find instruction wise is to use:
image = Image(Point(x,y), "filename.gif")
I keep getting an error that says:
couldn't open "filename.gif": no such file or directory
I'm not sure what to do?
I think maybe I have the files I'm trying to access in a place that python can't access.
I'm using graphics, and I did import it, and this is what the graphics module reference says to do also, so I don't think I'm putting it in wrong. | Image use in Python | 0 | 0 | 0 | 406 |
36,414,696 | 2016-04-04T23:37:00.000 | 3 | 0 | 0 | 0 | python,python-3.x,select | 42,612,778 | 1 | true | 0 | 0 | The event value is a bitmap.
If you get POLLIN (value: 1), you have something to read.
If you get POLLHUP (value: 16), your input has ended.
So when you get POLLIN(1) | POLLHUP(16) = 17, that means that your input has ended and that you still have something to read from the buffer.
After you read everything from the buffer, you get only POLLHUP every time you call poll():
In that case, it is no use to keep the file descriptor in the poll list;
it is better to unregister this file descriptor immediately.
My understanding is that the condition when a POLLIN event occurs and recv() returns no data indicates that the peer has disconnected. My testing seems to verify this.
But why do I not get POLLHUP? Does this have different semantics? | Python: select.POLLHUP | 1.2 | 0 | 1 | 607 |
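The bitmap arithmetic from the answer can be illustrated directly (the POLLIN/POLLHUP values shown are Linux's):

```python
import select

# poll() events are bitmaps; on Linux POLLIN == 1 and POLLHUP == 16
events = select.POLLIN | select.POLLHUP  # 17: EOF reached, data still buffered

if events & select.POLLIN:
    print("drain the buffer with recv()")
if events & select.POLLHUP:
    print("peer hung up; unregister the fd once drained")
```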
36,415,572 | 2016-04-04T19:00:00.000 | 0 | 0 | 0 | 0 | python,orange | 36,516,264 | 1 | false | 0 | 0 | I don't understand what "exported with joblib" refers to, but you can save trained Orange models by pickling them, or with Save Classifier widget if you are using the GUI. | 1 | 0 | 1 | I'm evaluating orange as a potential solution to helping new entrants into data science to get started. I would like to have them save out model objects created from different algorithms as pkl files similar to how it is done in scikit-learn with joblib or pickle. | Can the model object for a learner be exported with joblib? | 0 | 0 | 0 | 61 |
36,419,442 | 2016-04-05T07:09:00.000 | 0 | 1 | 0 | 0 | python,amazon-web-services,boto,aws-sdk,aws-lambda | 55,502,906 | 11 | false | 0 | 0 | Most people end up in this error because of giving the wrong Role ARN in CloudFormation while creating the Lambda Function.
Make sure the role is completed first by using "DependsOn" and use the intrinsic function """{ "Fn::GetAtt" : [ "your-role-logical-name", "Arn" ] }""" | 6 | 73 | 0 | I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command.
aws lambda create-function
--region us-west-2
--function-name HelloPython
--zip-file fileb://hello_python.zip
--role arn:aws:iam::my-acc-account-id:role/default
--handler hello_python.my_handler
--runtime python2.7
--timeout 15
--memory-size 512 | The role defined for the function cannot be assumed by Lambda | 0 | 0 | 0 | 78,602 |
36,419,442 | 2016-04-05T07:09:00.000 | 2 | 1 | 0 | 0 | python,amazon-web-services,boto,aws-sdk,aws-lambda | 44,550,586 | 11 | false | 0 | 0 | For me, the issue was that I had set the wrong default region environment key. | 6 | 73 | 0 | I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command.
aws lambda create-function
--region us-west-2
--function-name HelloPython
--zip-file fileb://hello_python.zip
--role arn:aws:iam::my-acc-account-id:role/default
--handler hello_python.my_handler
--runtime python2.7
--timeout 15
--memory-size 512 | The role defined for the function cannot be assumed by Lambda | 0.036348 | 0 | 0 | 78,602 |
36,419,442 | 2016-04-05T07:09:00.000 | 45 | 1 | 0 | 0 | python,amazon-web-services,boto,aws-sdk,aws-lambda | 37,438,525 | 11 | false | 0 | 0 | I'm also encountering this error. Have not got a definitive answer (yet) but figured I'd pass along a couple of hints that may help you and/or anyone else hitting this problem.
A) If you build the Role ARN by putting together your account ID and role name, I think the account ID needs to be without any dashes
B) If you just created the role, and possibly added policies to it, there seems to be a (small) window of time in which the role will trigger this error. Sleeping 5 or 6 seconds between the last operation on the role and the create-function call allowed me to bypass the issue (but of course, the timing may be variable so this is at best a work-around). | 6 | 73 | 0 | I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command.
aws lambda create-function
--region us-west-2
--function-name HelloPython
--zip-file fileb://hello_python.zip
--role arn:aws:iam::my-acc-account-id:role/default
--handler hello_python.my_handler
--runtime python2.7
--timeout 15
--memory-size 512 | The role defined for the function cannot be assumed by Lambda | 1 | 0 | 0 | 78,602 |
36,419,442 | 2016-04-05T07:09:00.000 | 5 | 1 | 0 | 0 | python,amazon-web-services,boto,aws-sdk,aws-lambda | 62,650,143 | 11 | false | 0 | 0 | I got this problem while testing lambda function.
What worked for me was formatting JSON. | 6 | 73 | 0 | I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command.
aws lambda create-function
--region us-west-2
--function-name HelloPython
--zip-file fileb://hello_python.zip
--role arn:aws:iam::my-acc-account-id:role/default
--handler hello_python.my_handler
--runtime python2.7
--timeout 15
--memory-size 512 | The role defined for the function cannot be assumed by Lambda | 0.090659 | 0 | 0 | 78,602 |
36,419,442 | 2016-04-05T07:09:00.000 | 0 | 1 | 0 | 0 | python,amazon-web-services,boto,aws-sdk,aws-lambda | 65,418,455 | 11 | false | 0 | 0 | It could be that the Lambda is missing an execution role. Or this role has been deleted.
In console you can see the status at Lambda > Functions > YourFunction > Permissions. Even an IAM empty role with no policies is enough to make it work. | 6 | 73 | 0 | I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command.
aws lambda create-function
--region us-west-2
--function-name HelloPython
--zip-file fileb://hello_python.zip
--role arn:aws:iam::my-acc-account-id:role/default
--handler hello_python.my_handler
--runtime python2.7
--timeout 15
--memory-size 512 | The role defined for the function cannot be assumed by Lambda | 0 | 0 | 0 | 78,602 |
36,419,442 | 2016-04-05T07:09:00.000 | 2 | 1 | 0 | 0 | python,amazon-web-services,boto,aws-sdk,aws-lambda | 65,979,727 | 11 | false | 0 | 0 | I had this error simply because I had a typo in the role ARN. I really wish the error was more explicit and said something along the lines of "this role doesn't exist", but alas. | 6 | 73 | 0 | I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command.
aws lambda create-function
--region us-west-2
--function-name HelloPython
--zip-file fileb://hello_python.zip
--role arn:aws:iam::my-acc-account-id:role/default
--handler hello_python.my_handler
--runtime python2.7
--timeout 15
--memory-size 512 | The role defined for the function cannot be assumed by Lambda | 0.036348 | 0 | 0 | 78,602 |
36,423,257 | 2016-04-05T10:06:00.000 | 1 | 0 | 0 | 1 | python,nginx,dynamic,routing,tornado | 36,427,016 | 1 | true | 0 | 0 | When updating your python code you can start a new set of python processes on different ports (say 8001-8004 for the previous set and 8011-8014 for the new set) then modify you nginx config to redirect to 8011-8014 instead 8001-8004 and run service nginx reload (or equivalent for your OS).
This way, nginx will redirect new requests to the new processes without dropping any request and finishing pending ones on the previous processes. When you know that all pending requests to the old set of python processes are finished (that might be non trivial) you can stop them. | 1 | 1 | 0 | I have 4 python tornado threads running on different ports on different machines. I used nginx to route and load balance. The code is the same for all of them. It is a asynchronous code. I also have a local file, lets say function.py on each machine that gets called by the python thread, does some computation and returns the answer.
My requirement is that I may need to periodically update the function.py file. However, I do not want the server to be stopped to reload the function since I don't want to drop any incoming request. I am open to changing nginx to something else if required. Any suggestions will be appreciated. Thanks!
Edit:
Could there be a way to modify/configure the nginx in such a way that it will redirect to certain servers(say port 8011-8014) only when they are up ? In that case I can modify the main python threads and then gracefully shut down port 8011-8014. But is this type of configuration feasible ? | dynamic routing , tornado deployment in production | 1.2 | 0 | 0 | 412 |
36,427,384 | 2016-04-05T13:11:00.000 | 1 | 0 | 0 | 0 | python,flask | 36,445,638 | 2 | true | 1 | 0 | Ok, Solution is :
app.run( threaded=True, ...)
Now it is possible to process several requests at the same time, for example one for video streaming, another for video parameter tuning and so on. | 1 | 3 | 0 | I need two http servers in the same python application (with two different ports, 8081 and 8082):
One for a video stream coming from webcam and sent to WebBrowser;
Second one for command (quality, filter, etc.)
I can't define two Flask objects, because 'app.run' is blocking.
Is it possible, or do I need to use a Flask and a BaseHTTPServer ?
Best regards. | Several (two) Flask objects in same application | 1.2 | 0 | 0 | 1,414 |
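The accepted fix is app.run(threaded=True); for the original goal of two HTTP servers in one process, here is a standard-library sketch of the same idea (not Flask; the handlers and payloads are made up, and port 0 just asks the OS for a free port where the question would use 8081 and 8082):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def make_handler(payload):
    """Build a request handler class that always answers with `payload`."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = payload.encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep the demo quiet
            pass
    return Handler

def start_server(port, payload):
    # port 0 asks the OS for a free port; the question would use 8081/8082
    server = ThreadingHTTPServer(("127.0.0.1", port), make_handler(payload))
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

stream_srv = start_server(0, "video stream")   # e.g. port 8081 in the question
command_srv = start_server(0, "commands")      # e.g. port 8082 in the question

def fetch(server):
    port = server.server_address[1]
    with urllib.request.urlopen("http://127.0.0.1:%d/" % port) as resp:
        return resp.read().decode()
```

Both servers run in the same process, each on its own port, and the main thread stays free.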
36,427,596 | 2016-04-05T13:20:00.000 | 0 | 0 | 0 | 0 | python,gtk,pygtk,gtktreeview | 36,670,919 | 1 | false | 0 | 1 | The closest I've been able to get is a bit of a kludge. I use Perl, not Python, so I'll just describe my technique.
Connect to the treeview's scroll-event signal and watch for direction=left/right (or smooth with get_scroll_deltas() returning non-zero for the X axis). Be sure to return FALSE for vertical scrolling so that remains unaffected. Then show/hide the columns after the fixed one(s) and return TRUE.
For example: when scrolling right, hide column 2, then 3, etc. When scrolling left, show the highest number column that's hidden.
There are a couple drawbacks:
It's not as smooth as what you're looking for (columns can't be partially hidden).
Doesn't work with the scrollbar, only when tilting the mouse wheel. Not all mice have tiltable wheels. | 1 | 2 | 0 | I am using a pygtk application and I have added a Treeview inside a ScrolledWindow. Now I want to freeze the first column (fix the column position), so that when scrolling the Treeview horizontally the column position is fixed and still visible (as it's done in excel for the row column).
So how can I freeze the Treeview column in pygtk? | Freeze the first column in Treeview pygtk | 0 | 0 | 0 | 544 |
36,431,659 | 2016-04-05T16:09:00.000 | 0 | 0 | 0 | 0 | python,datetime,numpy,pandas,filtering | 36,431,809 | 5 | false | 0 | 0 | Sort your array
Count contiguous occurrences by going through it once, and filter for frequency >= 20
The running time is O(n log n) whereas your list comprehension was probably O(n**2)... that makes quite a difference on 2 million entries.
Depending on how your data is structured, you might be able to sort only the axis and data you need from the numpy array that holds it. | 2 | 1 | 1 | I have an array of over 2 million records, each record has a 10 minutes resolution timestamp in datetime.datetime format, as well as several other values in other columns.
I only want to retain the records which have timestamps that occur 20 or more times in the array. What's the fastest way to do this? I've got plenty of RAM, so I'm looking for processing speed.
I've tried [].count() in a list comprehension but started to lose the will to live waiting for it to finish. I've also tried numpy.bincount() but tragically it doesn't like datetime.datetime
Any suggestions would be much appreciated.
Thanks! | filter numpy array of datetimes by frequency of occurance | 0 | 0 | 0 | 964 |
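The sort-and-count answer above can be sketched with the standard library alone: itertools.groupby over sorted data is the same O(n log n) idea. The sample data and the lowered threshold of 3 are illustrative; the question's threshold would be 20.

```python
from datetime import datetime, timedelta
from itertools import groupby

def keep_frequent(timestamps, min_count=20):
    """Sort, count contiguous runs, and keep only runs of >= min_count."""
    kept = []
    for ts, run in groupby(sorted(timestamps)):
        items = list(run)
        if len(items) >= min_count:
            kept.extend(items)
    return kept

# tiny illustration with min_count=3 instead of the question's 20
base = datetime(2016, 4, 5, 16, 0)
stamps = [base] * 4 + [base + timedelta(minutes=10)] * 2
frequent = keep_frequent(stamps, min_count=3)
```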
36,431,659 | 2016-04-05T16:09:00.000 | 0 | 0 | 0 | 0 | python,datetime,numpy,pandas,filtering | 36,454,679 | 5 | false | 0 | 0 | Thanks for all of your suggestions.
I ended up doing something completely different with dictionaries in the end and found it much faster for the processing that I required.
I created a dictionary with a unique set of timestamps as the keys and empty lists as the values and then looped once through the unordered list (or array) and populated the value lists with the values that I wanted to count.
Thanks again! | 2 | 1 | 1 | I have an array of over 2 million records, each record has a 10 minutes resolution timestamp in datetime.datetime format, as well as several other values in other columns.
I only want to retain the records which have timestamps that occur 20 or more times in the array. What's the fastest way to do this? I've got plenty of RAM, so I'm looking for processing speed.
I've tried [].count() in a list comprehension but started to lose the will to live waiting for it to finish. I've also tried numpy.bincount() but tragically it doesn't like datetime.datetime
Any suggestions would be much appreciated.
Thanks! | filter numpy array of datetimes by frequency of occurance | 0 | 0 | 0 | 964 |
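A minimal sketch of the dictionary-of-lists approach described in this answer, using collections.defaultdict; the (timestamp, value) record layout and the lowered threshold of 3 are assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# assumed record layout: (timestamp, value) pairs
base = datetime(2016, 4, 5, 16, 0)
records = [(base, i) for i in range(4)] + [(base + timedelta(minutes=10), 9)]

by_stamp = defaultdict(list)        # unique timestamps -> list of values
for ts, value in records:
    by_stamp[ts].append(value)

# keep timestamps that occur often enough (3 here instead of the question's 20)
frequent = {ts: vals for ts, vals in by_stamp.items() if len(vals) >= 3}
```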
36,434,276 | 2016-04-05T18:31:00.000 | 0 | 0 | 0 | 0 | python,django,object,static,django-views | 68,335,159 | 1 | false | 1 | 0 | Try to make a fixture and before running server use manage.py loaddata | 1 | 3 | 0 | I am working on Django project. I need to create an object only once when the server initially starts. I want to use the methods associated with this particular object everytime a user accesses a particular page, that is I want the attributes and methods of this object to be accessible in views without having to instantiate the object again and again.
How exactly do I do it? | Create object in Django once when the server starts and use it throughout | 0 | 0 | 0 | 953 |
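The fixture/loaddata suggestion covers data. For the single shared object with methods that the question asks about, one common plain-Python pattern (a sketch only, not Django-specific; the class and its method are hypothetical) is a lazily built module-level instance, since a module is imported only once per process:

```python
from functools import lru_cache

class ExpensiveHelper:
    """Hypothetical object that is costly to build."""
    def __init__(self):
        self.model = "loaded once"      # stand-in for heavy setup work

    def predict(self, x):
        return (self.model, x)

@lru_cache(maxsize=None)
def get_helper():
    # the first call builds the instance; every later call returns the same one
    return ExpensiveHelper()
```

Any view can then call get_helper().predict(...) and all requests in the process share one instance.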
36,438,428 | 2016-04-05T22:47:00.000 | 1 | 1 | 0 | 0 | python,graph-theory,networkx | 36,454,065 | 1 | true | 0 | 0 | The way the problem is posed doesn't make full sense. It is indeed a graph search problem to maximise a sum of numbers (subject to other constraints) and it possibly can be solved via brute force as the number of nodes that will end up being traversed is not necessarily going to climb to the hundreds (for a single trip).
Each path is probably a few nodes long because of the 30 min constraint at each stop. With 8 hours in a day and negligible distances between the bushes that would amount to a maximum of 16 stops. Since the edge costs are not negligible, it means that each trip should have <<16 stops.
What we are after is the maximum sum of 5 days harvest (max of five numbers). Each day's harvest is the sum of collected eggs over a "successful" path.
A successful path is defined as the one satisfying all constraints which are:
The path begins and ends on the same node. It is therefore a cycle EXCEPT for Tuesday. Tuesday's harvest is a path.
The cycle of a given day contains the nodes specified in Mrs Bunny's
list for that day.
The sum of travel times is less than 8 hrs including the 30min harvesting time.
Therefore, you can use a modified Depth First Search (DFS) algorithm. DFS, on its own can produce an exhaustive list of paths for the network. But, this DFS will not have to traverse all of them because of the constraints.
In addition to the nodes visited so far, this DFS keeps track of the "travel time" and "eggs" collected so far and at each "hop" it checks that all constraints are satisfied. If they are not, then it backtracks or abandons the traversed path. This backtracking action "self-limits" the enumerated paths.
If the reasoning is so far inline with the problem (?), here is why it doesn't seem to make full sense. If we were to repeat the weekly harvest process for M times to determine the best visiting daily strategy then we would be left with the problem of determining a sufficiently large M to have covered the majority of paths. Instead we could run the DFS once and determine the route of maximum harvest ONCE, which would then lead to the trivial solution of 4*CycleDailyHarvest + TuePathHarvest. The other option would be to relax the 8hr constraint and say that Mr Bunny can harvest UP TO 8hr a day and not 8hr exactly.
In other words, if all parameters are static, then there is no reason to run this process multiple times. For example, if each bush was to give "up to k eggs" following a specific distribution, maybe we could discover an average daily / weekly visiting strategy with the largest yield. (Or my perception of the problem so far is wrong, in which case, please clarify).
Tuesday's task is easier, it is as if looking for "the path between source and target whose time sum is approximately 8hrs and sum of collected eggs is max". This is another sign of why the problem doesn't make full sense. If everything is static (graph structure, eggs/bush, daily harvest interval) then there is only one such path and no need to examine alternatives.
Hope this helps.
EDIT (following question update):
The update doesn't radically change the core of the previous response which is "Use a modified DFS (for the potential of exhaustively enumerating all paths / cycles) and encode the constraints as conditions on metrics (travel time, eggs harvested) that are updated on each hop". It only modifies the way the constraints are represented. The most significant alteration is the "visit each bush once per week". This would mean that the memory of DFS (the set of visited nodes) is not reset at the end of a cycle or the end of a day but at the end of a week. Or in other words, the DFS now can start with a pre-populated visited set. This is significant because it will reduce the number of "viable" path lengths even more. In fact, depending on the structure of the graph and eggs/bush the problem might even end up being unsolvable (i.e. zero paths / cycles satisfying the conditions).
EDIT2:
There are a few "problems" with that approach which I would like to list here with what I think are valid points not yet seen by your viewpoint but not in an argumentative way:
"I don't need to just find one path (that will be relatively quick), I need to find an approximation of the best path." and "I want the "longest" path with the most eggs." are a little bit contradicting statements but on average they point to just one path. The reason I am saying this is because it shows that either the problem is too difficult or not completely understood (?)
A heuristic will only help in creating a landscape. We still have to traverse the landscape (e.g. steepest descent / ascent) and there will be plenty of opportunity for oscillations as the algorithm might get trapped between two "too-low", "too-high" alternatives or discovery of local-minima / maxima without an obvious way of moving out of them.
A*s main objective is still to return ONE path and it will have to be modified to find alternatives.
When operating over a graph, it is impossible to "encourage" the traversal to move towards a specific target because the "traversing agent" doesn't know where the target is and how to get there in the sense of a linear combination of weights (e.g. "If you get too far, lower some Xp which will force the agent to start turning left heading back towards where it came from". When Mr Bunny is at his burrow he has all K alternatives, after the first possible choice he has K-M1 (M1
The MDFS will help in tracking the different ways these sums are allowed to be created according to the choices specified by the graph. (After all, this is a graph-search problem).
Having said this, there are possibly alternative, sub-optimal (in terms of computational complexity) solutions that could be adopted here. The obvious (but dummy one) is, again, to establish two competing processes that impose self-control. One is trying to get Mr Bunny AWAY from his burrow and one is trying to get Mr Bunny BACK to his burrow. Both processes are based on the above MDFS and are tracking the cost of MOVEAWAY+GOBACK and the path they produce is the union of the nodes. It might look a bit like A* but this one is reset at every traversal. It operates like this:
AWAY STEP:
Start an MDFS outwards from Mr Bunny's burrow and keep track of distance / egg sum, move to the lowestCost/highestReward target node.
GO BACK STEP:
Now, pre-populate the visited set of the GO BACK MDFS and try to get back home via a route NOT TAKEN SO FAR. Keep track of cost / reward.
Once you reach home again, you have a possible collection path. Repeat the above while the generated paths are within the time specification.
This will result in a palette of paths which you can mix and match over a week (4 repetitions + TuesdayPath) for the lowestCost / highestReward options.
It's not optimal because you might get repeating paths (the AWAY of one trip being the BACK of another) and because this quickly eliminates visited nodes it might still run out of solutions quickly. | 1 | 0 | 0 | I'm working on a graph search problem that can be distilled to the following simpler example:
Updated to clarify based on response below
The Easter Bunny is hopping around the forest collecting eggs. He knows how many eggs to expect from every bush, but every bush has a unique number of eggs. It takes the Easter Bunny 30 minutes to collected from any given bush. The easter bunny searches for eggs 5 days a week, up to 8 hours per day. He typically starts and ends in his burrow, but on Tuesday he plans to end his day at his friend Peter Rabbit's burrow. Mrs. Bunny gave him a list of a few specific bushes to visit on specific days/times - these are intermediate stops that must be hit, but do not list all stops (maybe 1-2 per day). Help the Easter Bunny design a route that gives him the most eggs at the end of the week.
Given Parameters: undirected graph (g), distances between nodes are travel times, 8 hours of time per day, 5 working days, list of (node,time,day) tuples (r) , list of (startNode, endNode, day) tuples (s)
Question: Design a route that maximizes the value collected over the 5 days without going over the allotted time in any given day.
Constraints: visit every node in r on the prescribed time/day. for each day in s, start and end at the corresponding nodes, whose collection value is 0. Nodes cannot be visited more than once per week.
Approach: Since there won't be very many stops, given the time at each stop and the travel times (maybe 10-12 on a large day) my first thought was to brute force all routes that start/stop at the correct points, and just run this 5 times, removing all visited nodes. From there, separately compute the collected value of each allowable route. However, this doesn't account for the fact that my "best" route on day one may ruin a route that would be best on day 5, given required stops on that day.
To solve that problem I considered running one long search by concatenating all the days and just starting from t = 0 (beginning of week) to t = 40 (end of week), with the start/end points for each day as intermediate stops. This gets too long to brute force.
I'm struggling a little with how to approach the problem - it's not a TSP problem - I'm only going to visit a fraction of all nodes (maybe 50 of 200). It's also not a dijkstra's pathing problem, the shortest path typically would be to go nowhere. I need to maximize the total collected value in the allotted time making the required intermediate stops. Any thoughts on how to proceed would be greatly appreciated! Right now I've been approaching this using networkx in python.
Edit following response
In response to your edit - I'm looking for an approach to solve the problem - I can figure out the code later, I'm leaning towards A* over MDFS, because I don't need to just find one path (that will be relatively quick), I need to find an approximation of the best path. I'm struggling to create a heuristic that captures the time constraint (stay under time required to be at next stop) but also max eggs. I don't really want the shortest path, I want the "longest" path with the most eggs. In evaluating where to go next, I can easily do eggs/min and move to the bush with the best rate, but I need to figure out how to encourage it to slowly move towards the target. There will always be a solution - I could hop to the first bush, sit there all day and then go to the solution (there placement/time between is such that it is always solvable) | Graph search - find most productive route | 1.2 | 0 | 1 | 662 |
36,440,097 | 2016-04-06T01:53:00.000 | 0 | 0 | 1 | 0 | python,save,memoization,lru | 36,441,468 | 2 | false | 0 | 0 | Nope, I think the only way to do this is to get information for the .cache_info() function and to write into file.
It would return a CacheInfo object that contains the information you need. | 2 | 2 | 0 | I'm using the decorator @lru_cache(maxsize=None) from functools, and I want to save the memoized values to a file in order to avoid re-computing them each time I run the code.
Is there an elegant way of doing this different from printing args and values to a file and then reading them? | How to export lru_cache in python? | 0 | 0 | 0 | 735 |
36,440,097 | 2016-04-06T01:53:00.000 | 0 | 0 | 1 | 0 | python,save,memoization,lru | 40,699,839 | 2 | true | 0 | 0 | @Carlos Pinzón, posted as an answer as you requested: functools.lru_cache() is designed to work with arbitrary positional and keyword arguments, and possibly a maximum cache size. If you don't need those features, it isn't too difficult to roll your own cache (aka memoize) decorator. The cache is just a dictionary, so you can provide a function to save it to disk as a pickle (or json if you want to be able to look at it) and reload it later. The lru_cache source code is also available; modify it to suit your needs. | 2 | 2 | 0 | I'm using the decorator @lru_cache(maxsize=None) from functools, and I want to save the memoized values to a file in order to avoid re-computing them each time I run the code.
Is there an elegant way of doing this different from printing args and values to a file and then reading them? | How to export lru_cache in python? | 1.2 | 0 | 0 | 735 |
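A minimal sketch of the roll-your-own route from the accepted answer: a dictionary-backed memoize decorator whose cache can be pickled to disk and reloaded later (positional, hashable arguments only; the helper names are assumptions):

```python
import pickle

def memoize(func):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    wrapper.cache = cache          # expose the dict so it can be saved
    return wrapper

def save_cache(func, path):
    with open(path, "wb") as f:
        pickle.dump(func.cache, f)

def load_cache(func, path):
    with open(path, "rb") as f:
        func.cache.update(pickle.load(f))

@memoize
def slow_square(n):
    return n * n                   # stand-in for an expensive computation
```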
36,440,682 | 2016-04-06T02:58:00.000 | 1 | 0 | 1 | 1 | python,ipython,jupyter | 36,457,255 | 4 | false | 0 | 0 | After trying a bunch of solution, I found the quickest solution: using conda instead of pip. Or just use anaconda, which provides jupyter, too. | 4 | 1 | 0 | I want to have jupyter notebook installed. But on my MacBook Pro (OS X El Capitan) and my web server (Debian 7), I get the same error: jupyter notebook is not a jupyter command.
I just follow the official installation instruction. And no error occurs during installation.
I searched for solutions but none of them works. What should I do now? | Jupyter: "notebook" is not a jupyter command | 0.049958 | 0 | 0 | 2,769 |
36,440,682 | 2016-04-06T02:58:00.000 | 0 | 0 | 1 | 1 | python,ipython,jupyter | 41,560,474 | 4 | false | 0 | 0 | I met with the same problem when using fish; when I switched to bash, everything worked well! | 4 | 1 | 0 | I want to have jupyter notebook installed. But on my MacBook Pro (OS X El Capitan) and my web server (Debian 7), I get the same error: jupyter notebook is not a jupyter command.
I just follow the official installation instruction. And no error occurs during installation.
I searched for solutions but none of them works. What should I do now? | Jupyter: "notebook" is not a jupyter command | 0 | 0 | 0 | 2,769 |
36,440,682 | 2016-04-06T02:58:00.000 | 0 | 0 | 1 | 1 | python,ipython,jupyter | 41,647,660 | 4 | false | 0 | 0 | In my case, it was a matter of agreeing to the Xcode License Agreement with: sudo xcodebuild -license, before running sudo pip install notebook. | 4 | 1 | 0 | I want to have jupyter notebook installed. But on my MacBook Pro (OS X El Capitan) and my web server (Debian 7), I get the same error: jupyter notebook is not a jupyter command.
I just follow the official installation instruction. And no error occurs during installation.
I searched for solutions but none of them works. What should I do now? | Jupyter: "notebook" is not a jupyter command | 0 | 0 | 0 | 2,769 |
36,440,682 | 2016-04-06T02:58:00.000 | 0 | 0 | 1 | 1 | python,ipython,jupyter | 48,656,548 | 4 | false | 0 | 0 | What worked for me was to use the following command in MacOS High Sierra 10.13.
$HOME/anaconda3/bin/activate | 4 | 1 | 0 | I want to have jupyter notebook installed. But on my MacBook Pro (OS X El Capitan) and my web server (Debian 7), I get the same error: jupyter notebook is not a jupyter command.
I just follow the official installation instruction. And no error occurs during installation.
I searched for solutions but none of them works. What should I do now? | Jupyter: "notebook" is not a jupyter command | 0 | 0 | 0 | 2,769 |
36,444,956 | 2016-04-06T08:00:00.000 | 0 | 0 | 1 | 0 | python,zapier | 72,431,133 | 2 | false | 0 | 0 | We can import only requests, so we need to find another way to run any python code which include the other libraries.
Maybe, we can build python to exe file and run it directly by zapier service. | 2 | 1 | 0 | Is there a way to import a library in Zapier Python Code?
Specifically, I want the 'datasift' library.
When I try, I just get "ImportError: No module named datasift" | Can I import a library in zapier code | 0 | 0 | 0 | 1,137 |
36,444,956 | 2016-04-06T08:00:00.000 | 4 | 0 | 1 | 0 | python,zapier | 36,510,476 | 2 | true | 0 | 0 | In the zapier documentation, it specifically says that it only allows for the requests import and standard python imports | 2 | 1 | 0 | Is there a way to import a library in Zapier Python Code?
Specifically, I want the 'datasift' library.
When I try, I just get "ImportError: No module named datasift" | Can I import a library in zapier code | 1.2 | 0 | 0 | 1,137 |
36,445,162 | 2016-04-06T08:11:00.000 | 0 | 0 | 0 | 0 | python,ajax,selenium | 36,446,094 | 4 | false | 1 | 0 | You can implement so-called smart wait.
Indicate the most frequently updating web element on the page that is useful to you
Get data from it using JavaScript, since the DOM model will not be updated without a page refresh, e.g.:
driver.execute_script('return document.getElementById("demo").innerHTML')
Wait for a certain time, get it again and compare with the previous result. If it changed, refresh the page, fetch data, etc. | 1 | 1 | 0 | I have already written a script that opens Firefox with a URL, scrapes data and closes. The page belongs to a gaming site where the page refreshes contents via Ajax.
Now, one way is to fetch those AJAX requests and get the data, or refresh the page after a certain period of time within the open browser.
For the latter case, what should I do? Should I call the method after a certain time period or what? | Python Selenium: How to get fresh data from the page get refreshed periodically? | 0 | 0 | 1 | 1,158 |
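The waiting scheme from the answer above can be factored into a small generic helper; here fetch and refresh are plain callables standing in for the Selenium calls (e.g. a driver.execute_script lookup and driver.refresh()), so this is only a sketch of the polling logic:

```python
import time

def poll_until_changed(fetch, refresh, interval=0.01, max_checks=50):
    """Poll fetch() until its value changes, then call refresh() and
    return the new value; give up (return None) after max_checks polls."""
    last = fetch()
    for _ in range(max_checks):
        time.sleep(interval)
        current = fetch()
        if current != last:
            refresh()
            return current
    return None

# stand-in for a page element whose text updates after a few polls
values = iter(["old", "old", "old", "new"])
seen = poll_until_changed(lambda: next(values), refresh=lambda: None)
```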
36,445,162 | 2016-04-06T08:11:00.000 | 0 | 0 | 0 | 0 | python,ajax,selenium | 36,449,601 | 4 | false | 1 | 0 | Make sure to call findElement() again after waiting, because otherwise you might not get a fresh instance. Or use page factory, which will get a fresh copy of the WebElement for you every time the instance is accessed. | 2 | 1 | 0 | I have already written a script that opens Firefox with a URL, scrapes data and closes. The page belongs to a gaming site where the page refreshes contents via Ajax.
Now one way is to fetch those AJAX requests and get data or refresh page after certain period of time within open browser.
For the later case, how should I do? Should I call the method after certain time period or what? | Python Selenium: How to get fresh data from the page get refreshed periodically? | 0 | 0 | 1 | 1,158 |
36,445,861 | 2016-04-06T08:42:00.000 | 4 | 0 | 0 | 1 | python,root,python-2.x | 36,445,985 | 2 | true | 0 | 0 | Try to use os.seteuid(some_user_id) before os.system("some bash command"). | 2 | 2 | 0 | I have a python 2 script that is run as root. I want to use os.system("some bash command") without root privileges, how do I go about this? | using os.system() to run a command without root | 1.2 | 0 | 0 | 1,614 |
36,445,861 | 2016-04-06T08:42:00.000 | -1 | 0 | 0 | 1 | python,root,python-2.x | 47,768,790 | 2 | false | 0 | 0 | I have tested on my PC: if you run the python script like 'sudo test.py', the question is resolved. | 2 | 2 | 0 | I have a python 2 script that is run as root. I want to use os.system("some bash command") without root privileges, how do I go about this? | using os.system() to run a command without root | -0.099668 | 0 | 0 | 1,614 |
36,446,960 | 2016-04-06T09:28:00.000 | -1 | 0 | 1 | 0 | python,dictionary,max,min | 66,118,397 | 5 | false | 0 | 0 | For a rapid look at your dataframe's count, mean, max, min, standard deviation and quartiles you can use df.describe() | 1 | 1 | 0 | Let's say I have a dict of lists like this:
mydict={10:[],20:[],30:[],40:[],50:[1],60:[],70:[1],80:[7, 2, 7, 2, 2, 7, 2],90:[5, 2, 2, 6, 2, 3, 1, 2, 1, 2],...}
I want to compute: min, max, median, 1st and 3rd quartiles for each list in the dict. I tried min and max first, like this:
mins_mydict={k:min(v) for k,v in mydict.items()}
maxes_mydict={k:max(v) for k,v in mydict.items()}
but I get this error: ValueError: min() arg is an empty sequence. Same for max. Is it because some of my lists are empty?
How can I create an exception that checks if len(list)=0? | Python: how to compute min, max, median, 1st and 3rd quartiles from a dict of lists? | -0.039979 | 0 | 0 | 1,504 |
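A sketch of the emptiness guard the question asks about, with median and quartiles from the stdlib statistics module (quantiles needs Python 3.8+ and at least two data points, so single-element lists are skipped for the quartiles):

```python
from statistics import median, quantiles

mydict = {10: [], 50: [1], 70: [1], 80: [7, 2, 7, 2, 2, 7, 2],
          90: [5, 2, 2, 6, 2, 3, 1, 2, 1, 2]}

# the `if v` guard skips empty lists, so min()/max() never see one
mins = {k: min(v) for k, v in mydict.items() if v}
maxes = {k: max(v) for k, v in mydict.items() if v}
medians = {k: median(v) for k, v in mydict.items() if v}
# quantiles(n=4) returns the three quartile cut points, needs >= 2 data points
quarts = {k: quantiles(v, n=4) for k, v in mydict.items() if len(v) > 1}
```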
36,450,840 | 2016-04-06T12:14:00.000 | 0 | 0 | 0 | 0 | python,image-processing,3d,medical | 36,453,666 | 4 | false | 0 | 0 | Try using MeVisLab to do this easily. Also, you could try FSL for more robust results. | 1 | 3 | 0 | I have searched and found a lot of 2D image registration images in Python, but those will not serve my need. I have several MRI and CT images all taken of the same patient over time and was wondering if anybody had sample Python code for performing a 3D rigid image registration for these medical images. | 3D multimodal medical image registration in python | 0 | 0 | 0 | 4,816 |
36,452,520 | 2016-04-06T13:24:00.000 | 2 | 1 | 0 | 0 | java,python | 36,452,775 | 2 | true | 1 | 0 | If you meant whether there is something in python to handle the ZIP format, there is: the zipfile module. Python comes with all batteries included. | 1 | 0 | 0 | I am working with byte arrays in Java 1.7. I am using java.util.zip's Inflater and Deflater classes to compress the data. I have to interface with data generated by Python code.
Does Python have the capability to compress data that can be uncompressed by Java's Inflater class, and the capability to decompress data that has been compressed by Java's Deflater class? | Does Python have a zip class that is compatible with Java's java.util.zip Inflater and Deflater classes? | 1.2 | 0 | 0 | 594 |
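One detail worth a concrete example: java.util.zip's Deflater and Inflater default to the zlib-wrapped DEFLATE format, and Python's zlib module produces and consumes that same format, so a round trip looks like this (the payload is illustrative):

```python
import zlib

# zlib.compress output can be fed to a default java.util.zip.Inflater,
# and bytes from a default java.util.zip.Deflater decompress the same way
payload = b"data shared between Java and Python " * 10
compressed = zlib.compress(payload)
restored = zlib.decompress(compressed)
```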
36,452,973 | 2016-04-06T13:42:00.000 | 1 | 0 | 1 | 0 | python,anaconda,pythonxy | 36,453,989 | 1 | true | 0 | 0 | I solved the problem.
Anaconda didn't clean the registry, so all modules were still registered. I did a search for "anaconda" in the registry and deleted all occurrences. Now the Python(x,y) installation went well. | 1 | 2 | 0 | I'm trying to reinstall a Python(x,y) distribution on my computer. Before that, I had an Anaconda2 distribution and a WinPython distribution which I have uninstalled correctly (and removed system paths).
I'm doing the custom installation of Python(x,y) to specify the Python path directory. The problem is that after the installation, if I check in the Python directory, I don't have all the packages that are supposed to be installed with PythonXY: they are all located in a C:\Anaconda2 directory. Therefore, nothing can start (Spyder or others).
Why is it doing that ? Are all the packages not supposed to be in the C:\Python27_XY directory that I specified during the installation? | Trouble with Python(x,y) installation that creates a C:\Anaconda2 directory | 1.2 | 0 | 0 | 356 |
36,456,584 | 2016-04-06T16:10:00.000 | 1 | 0 | 0 | 0 | python,pygame | 36,456,629 | 1 | false | 0 | 1 | Use python3.5 -mpip install pygame. | 1 | 0 | 0 | I am running El Capitan, when I type python --version the terminal prints Python 2.7.10, I have successfully downloaded pygame for Python 2.7.10 but I want to develop in python 3.5.1, I know I can do this by entering python3 in the terminal, but how do I properly set up pygame for this version of python? | How to download pygame for non default version of python | 0.197375 | 0 | 0 | 44 |
36,457,076 | 2016-04-06T16:32:00.000 | 2 | 0 | 0 | 0 | python,sqlalchemy,zodb | 36,479,782 | 1 | true | 0 | 0 | ZODB does not use SQLAlchemy, and there is no relational model. There are no tables to join, period. The ZODB stores an object tree, there is no schema. It's just Python objects in more Python objects.
Any references to ZODB and SQLAlchemy are all for applications built on top of the ZODB, where transactions for external relational databases accessed through SQLAlchemy are tied in with the ZODB transaction manager to ensure that transactions cover both the ZODB and data in other databases. This essentially means that when you commit the ZODB transaction, SQLAlchemy is told about this too. | 1 | 0 | 0 | I have been trying to find examples in ZODB documentation about doing a join of 2 or more tables. I know that this is an Object Database, but I am trying to create objects that represent tables.
And I see that ZODB makes use SQLAlchemy.
So I was wondering if I can treat things in ZODB in a relational like sense.
I hope someone can let me know if my train of thought in using this ZODB is OK, or if one has to think in very different way. | ZODB database: table joins | 1.2 | 1 | 0 | 156 |
36,460,118 | 2016-04-06T19:02:00.000 | 1 | 0 | 0 | 0 | python-3.x,python-idle | 36,462,661 | 1 | true | 0 | 0 | I presume you mean syntax coloring, and that your .ape files contain Python code. IDLE currently does not have a way to override file extensions. There is a request somewhere on the Python tracker to be able to turn syntax highlighting on for cases where python code is embedded in another file (.txt, .html, .rst, etc). Such an option would also solve your problem.
On Windows, Notepad++ will apparently apply any language highlighting scheme it knows (about 50) to any file. It is not as nice, though, for running edited code. | 1 | 1 | 0 | I am currently using a custom file extension in the python idle editor (.ape), but when I open it with the standard IDLE editor, it doesn't show the colours, everything is just black on white. So there a way of opening a non-python extension in the IDLE editor, and still have the colours?
Sorry if this isn't the right place to ask, but I have no idea where else to ask.
Thanks in advance for any help. | How can I use the python idle colours with a non-python file extension | 1.2 | 0 | 0 | 69 |
36,460,455 | 2016-04-06T19:19:00.000 | 0 | 0 | 0 | 0 | android-activity,automation,appium,python-appium | 36,461,041 | 1 | false | 1 | 0 | There are two ways to do it what I can think of,
UI prospect :
Capture the screenshot of the webview with 200 response. Let's call it expectedScreen.png
Capture the screenshot of the under test response(be it 200, 400 etc.). Lets call this finalScreen.png
Compare both the images to verify/assert.
API prospect : Since the activity supposed to be displayed will never/rarely change, depending on transitions between different activities in your application as designed, verifying the current activity is a less important check during the test. You can instead verify these using API calls and then (if you get a proper response) look for the presence of elements on screen accordingly. | 1 | 0 | 0 | I verify whether the launched activity is in the browser or in the app by comparing with the current activity.
activity = driverAppium.current_activity
And then I verify if activity matches with browser activity name e.g. org.chromium.browser...
But can I verify the http response on the webpage e.g. 200 or 404?
With the above, the test always passes even though the webpage didn't load or got a null response.
Can I verify with current activity and response both? | Verify appium current activity response | 0 | 0 | 1 | 438 |
36,462,875 | 2016-04-06T21:41:00.000 | 0 | 0 | 0 | 0 | python,django,testing,automated-tests | 36,485,158 | 1 | true | 1 | 0 | Ok, so the key is quite simple, the file is not supposed to start with test.
I named it blub_test.py and then called it with
./manage.py test --pattern="blub_test.py" | 1 | 0 | 0 | I have a test file with tests in it which will not be called with the regular
manage.py test
command, only when I specifically tell django to do so.
So my file lives in the same folder as tests.py and its name is test_blub.py
I tried it with
manage.py test --pattern="test_*.py"
Any idea? | how to use --patterns for tests in django | 1.2 | 0 | 0 | 528 |
36,462,908 | 2016-04-06T21:43:00.000 | 0 | 1 | 0 | 1 | php,python,struct | 36,464,750 | 1 | false | 0 | 0 | It ended up being python gzip, that shiftet all bytes. Destroying the data. | 1 | 0 | 0 | I have created a python program using struct, that saves data in files. The data consists of a header (300 chars) and data (36000 int float pairs). On ubuntu this works and i can unpack the data for my php setup.
I unpack the data in PHP by loading the content into a string and using unpack. I quickly found that one int/float pair consumed the same as 8 chars in the PHP string.
When I then moved this to Windows, the data didn't take as much space, and when I try to unpack it in PHP, it seems to get unaligned from the binary string quickly.
Is there any way to account for the architecture so that PHP's unpack produces the same output as on Ubuntu?
I have tried the alignment options with struct (<, >, !, =).
My Ubuntu dev setup is 64-bit and the server is also 64-bit. I have tried using both 32-bit Python and 64-bit Python on the Windows server. | Python struct on windows | 0 | 0 | 0 | 50 |
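Independent of the gzip culprit the answer identified, the alignment side of the question can be pinned down with the standard library alone: prefixing the format with `<` forces little-endian with no padding, so the packed bytes are identical on Windows and Ubuntu (and have a fixed size for the PHP side to match). A short sketch:

```python
import struct

# Native mode ("@", the default) uses the platform's alignment rules,
# so some formats gain padding bytes; "<" (little-endian, no padding)
# gives the same byte layout everywhere.
print(struct.calcsize("hi"))    # platform-dependent (often 8: 2 + 2 pad + 4)
print(struct.calcsize("<hi"))   # always 6: 2 + 4, no padding

# For the int/float pairs from the question:
packed = struct.pack("<if", 42, 1.5)      # always 8 bytes
print(struct.unpack("<if", packed))       # (42, 1.5)
```

With the `<` prefix, the corresponding PHP unpack format then reads a fixed, architecture-independent layout as well.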
36,462,921 | 2016-04-06T21:44:00.000 | 3 | 0 | 1 | 0 | python,jupyter,jupyter-notebook | 36,562,213 | 5 | false | 0 | 0 | This is not possible at the moment (April 12th 2016), but there is a ticket in the Jupyter github issues that mentions that "soon" we will be able to open several notebooks in the same browser tab; That would allow for the side by side comparisons you are looking for. | 1 | 7 | 0 | Is it possible to add a cell on the side of another cell, splitting the screen vertically?
This seems very useful when comparing two lists of things.
I don't see this option supported out of the box, so I am guessing it would require some extra JS? | Jupyter notebook: split screen vertically (i.e. add cell horizontally) | 0.119427 | 0 | 0 | 13,669 |
36,470,440 | 2016-04-07T08:17:00.000 | 0 | 0 | 0 | 0 | python,arrays,numpy,matrix,indexing | 36,471,599 | 2 | true | 0 | 0 | I solved it by using np.squeeze(x) to remove the singleton dimensions. | 1 | 0 | 1 | I have a numpy array which is (1, 2048, 1, 1). I need to assign the first two dimensions to another numpy array which is (1, 2048), but I am confused on how to index it correctly. Hope you can help! | Indexing a Numpy Array (4 dimensions) | 1.2 | 0 | 0 | 96 |
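A minimal sketch of the squeeze-then-assign approach from the answer:

```python
import numpy as np

x = np.zeros((1, 2048, 1, 1))   # array with singleton dimensions
y = np.squeeze(x)               # drops all size-1 axes -> shape (2048,)
print(y.shape)                  # (2048,)

target = np.empty((1, 2048))    # the (1, 2048) array to assign into
target[0, :] = y
print(target.shape)             # (1, 2048)
```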
36,472,751 | 2016-04-07T09:59:00.000 | 0 | 0 | 0 | 0 | java,android,python-3.x | 36,472,877 | 5 | false | 1 | 0 | Java Without a Doubt.
The native language for Android development is Java, so plan on going with Java. | 2 | 0 | 0 | I am a beginner in Android development.
But I am confused between the two technologies, Python and Java. | what should I use for android app development : Java or Python? | 0 | 0 | 0 | 1,319 |
36,472,751 | 2016-04-07T09:59:00.000 | 0 | 0 | 0 | 0 | java,android,python-3.x | 36,472,923 | 5 | true | 1 | 0 | Java is the main language used in Android apps. You can create an app using Python as well, but I recommend you use Java, as it is more orthodox and you can find tutorials for it too. | 2 | 0 | 0 | I am a beginner in Android development.
But I am confused between the two technologies, Python and Java. | what should I use for android app development : Java or Python? | 1.2 | 0 | 0 | 1,319 |
36,473,334 | 2016-04-07T10:23:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 36,473,657 | 1 | false | 0 | 0 | If you have pip installed, try to run pip install parse as root. | 1 | 0 | 0 | Can somebody let me know how to install python "parse" module for python2.7 version?
Server details :
CloudLinux Server release 5.11
cPanel 54.0 (build 21) | Parse python module installtion for python 2.7 | 0.197375 | 0 | 0 | 3,954 |
36,475,003 | 2016-04-07T11:35:00.000 | 1 | 0 | 0 | 0 | python,qt,pyside | 36,476,989 | 1 | true | 0 | 1 | You're assuming two things:
The app has implemented its styling using stylesheets, and
The app uses an application-wide stylesheet.
At least one of these assumptions is wrong in your case. You might need to extract stylesheets from individual widgets, or the application might be using a custom QStyle and not stylesheets. | 1 | 2 | 0 | Is there a way to "rip" a Qt stylesheet from a Python app?
.styleSheet() returns an empty string. | Get Qt stylesheet from Python app | 1.2 | 0 | 0 | 171 |
36,475,363 | 2016-04-07T11:52:00.000 | 0 | 0 | 1 | 0 | python | 36,477,116 | 2 | false | 0 | 0 | While being treated specially (it's automagically called with the newly created instance at instantiation time), __init__ is by itself just an ordinary Python function, so it implicitly returns None unless you try to explicitly return something else... But then you will find out (at call time) that it's not allowed to return anything other than None, so you can safely assume it won't return anything else ;) | 2 | 0 | 0 | I am confused about the return type and value of __init__() in Python.
Does __init__() return a value, and what is the return type of __init__()? | Return type and value of init | 0 | 0 | 0 | 109 |
36,475,363 | 2016-04-07T11:52:00.000 | 2 | 0 | 1 | 0 | python | 36,475,713 | 2 | true | 0 | 0 | __init__ doesn't and shouldn't return anything other than None. The purpose of __init__, as its name says, is to initialize attributes.
You can explicitly return None in __init__, but not returning anything returns None by default in Python. | 2 | 0 | 0 | I am confused about the return type and value of __init__() in Python.
Does __init__() return a value, and what is the return type of __init__()? | Return type and value of init | 1.2 | 0 | 0 | 109 |
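A tiny demonstration of both points: __init__ implicitly returns None, and returning anything else raises a TypeError at instantiation time.

```python
class Good:
    def __init__(self):
        self.value = 42        # initialize attributes; no return statement

class Bad:
    def __init__(self):
        return 1               # returning non-None is not allowed

g = Good()
print(Good.__init__(g))        # None -- calling it directly shows the implicit return

try:
    Bad()
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```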
36,478,336 | 2016-04-07T13:54:00.000 | 0 | 0 | 1 | 0 | python,sockets,memory-leaks,tkinter | 38,483,492 | 1 | true | 0 | 1 | I discovered the cause by myself, so I write it here and close the question.
The cause of the memory leak was that I was calling tkinter functions from multiple threads.
tkinter is not thread-safe, so I was violating its rules.
I modified my program so that only the main thread uses tkinter functions, and then
the memory leak was gone.
Thank you. | 1 | 0 | 0 | I made a app with Python 3.4 and Tkinter.
My app runs several (3-5) threads, and each thread does the following:
an endless loop of recvfrom() to get messages from a socket (UDP)
an endless loop which displays the message and writes it to a file
I use my app on a Windows Embedded OS which is based on XP,
and I watch memory usage with Task Manager,
because I want to run my app for a long time (maybe more than a month) without a shutdown.
Then, the problem is:
Mem Usage (physical) gradually increases, but sometimes drastically decreases.
So overall it looks okay.
But Virtual Memory Size (VMSize) is increasing over the long term.
For example, when I started the app, VMSize was 26MB.
The next day, it became 29MB,
and the day after, it became 32MB.
It sometimes increases a little, sometimes decreases a little (e.g. 20KB),
but overall it's on an upward trend.
This is not a large volume, but the "increasing" makes me uneasy.
I expected garbage collection could help, so I inserted gc.collect() into my code
so that it would be invoked every 3 hours.
But nothing changed, and print(gc.collect()) always showed 0 (except right after start-up of the app itself).
Is this a memory leak?
I read that defining __del__() together with reference cycles can lead to memory leaks in Python,
but I never define __del__() myself.
Thank you in advance. | Windows Task Manager: Python app memory usage increase | 1.2 | 0 | 0 | 232 |
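The fix described in the answer (only the main thread touches tkinter) is usually implemented with a thread-safe queue: worker threads only ever put() messages, and the Tk main thread drains the queue periodically via root.after(). A minimal sketch of that pattern (the Tk wiring is shown in a comment so the snippet stays runnable without a display; names are illustrative):

```python
import queue
import threading

messages = queue.Queue()

def worker():
    # e.g. the recvfrom() loop; it only touches the queue, never tkinter
    for i in range(3):
        messages.put("datagram %d" % i)

def drain(q):
    """Pop everything currently queued; the Tk side would display these."""
    items = []
    while True:
        try:
            items.append(q.get_nowait())
        except queue.Empty:
            return items

t = threading.Thread(target=worker)
t.start()
t.join()

# In the real app the Tk main thread would poll like this:
#   def poll():
#       for msg in drain(messages):
#           text_widget.insert("end", msg + "\n")
#       root.after(100, poll)
print(drain(messages))   # ['datagram 0', 'datagram 1', 'datagram 2']
```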
36,480,233 | 2016-04-07T15:08:00.000 | 0 | 0 | 0 | 0 | python,math,simulation,kalman-filter | 36,484,420 | 2 | false | 0 | 0 | Steady state KF requires the initial state matches the steady state covariance. Otherwise, the KF could diverge. You can start using the steady state KF when the filter enters the steady state.
The steady state Kalman filter can be used for systems with multiple dimension state. | 2 | 0 | 1 | I am working with discrete Kalman Filter on a system.
x(k+1)=A_k x(k)+B_k u(k)
y(k)=C_k x(k)
I have estimated the state from the available noisy y(k), which is generated from the same system state equations along with a reference trajectory of the state. Then I have tested it with a wrong initial state x0 and a big initial covariance (simulation 1). I have noticed that the KF works very well: after a few steps the gain K quickly converges to a very small value near zero. I think it may be caused by the process noise Q, which I have set small because Q stands for the accuracy of the model.
Now I want to modify it into a steady-state Kalman filter. I used the steady gain from simulation-1 as a constant instead of computing it in every iteration. Then the five equations can be simplified to one equation:
x_hat(k+1) = (I - KC) A x_hat(k) + (I - KC) B u(k) + K y(k+1)
I want to test it with the same initial state and covariance matrix as in simulation-1, but the result is very different from the reference trajectory and even from the result of simulation-1. I have also tested it with the covariance matrix P_inf, which is solved from the discrete Riccati equation, and the corresponding gain
K_inf = P_inf C' (C P_inf C' + R)^(-1)
This doesn't work either.
I am wondering:
How should I apply the steady-state KF, and how should I set the initial state for it?
Is the steady-state KF used for scalar systems?
Should I use it with LQ-controller or some others? | State Estimation of Steady Kalman Filter | 0 | 0 | 0 | 764 |
36,480,233 | 2016-04-07T15:08:00.000 | 1 | 0 | 0 | 0 | python,math,simulation,kalman-filter | 36,666,450 | 2 | true | 0 | 0 | Let me first simplify the discussion to a filter with a fixed transition matrix, A rather then A_k above. When the Kalman filter reaches steady-state in this case, one can extract the gains and make a fixed-gain filter that utilizes the steady-state Kalman gains. That filter is not a Kalman filter, it is a fixed-gain filter. It's start-up performance will generally be worse than the Kalman filter. That is the price one pays for replacing the Kalman gain computation with fixed gains.
A fixed-gain filter has no covariance (P) and no Q or R.
Given A, C and Q, the steady-state gains can be computed directly. Using the discrete Kalman filter model, and setting the a-posteriori covariance matrix equal to the propagated a-posteriori covariance matrix from the prior measurement cycle, one has:
P = (I - KC) (A P A^T + Q)
Solving that equation for K results in the steady-state Kalman gains for fixed A, Q and C.
Where is R? Well it has no role in the steady-state gain computation because the measurement noise has been averaged down in steady-state. Steady-state means the state estimate is as good as it can get with the amount of process noise (Q) we have.
If A is time-varying, much of this discussion does not hold. There is no assurance that a Kalman filter will reach steady-state. There may be no Pinf.
Another implicit assumption in using the steady-state gains from a Kalman filter in a fixed gain filter is that all of the measurements will remain available at the same rates. The Kalman filter is robust to measurement loss, the fixed-gain filter is not. | 2 | 0 | 1 | I am working with discrete Kalman Filter on a system.
x(k+1)=A_k x(k)+B_k u(k)
y(k)=C_k x(k)
I have estimated the state from the available noisy y(k), which is generated from the same system state equations along with a reference trajectory of the state. Then I have tested it with a wrong initial state x0 and a big initial covariance (simulation 1). I have noticed that the KF works very well: after a few steps the gain K quickly converges to a very small value near zero. I think it may be caused by the process noise Q, which I have set small because Q stands for the accuracy of the model.
Now I want to modify it into a steady-state Kalman filter. I used the steady gain from simulation-1 as a constant instead of computing it in every iteration. Then the five equations can be simplified to one equation:
x_hat(k+1) = (I - KC) A x_hat(k) + (I - KC) B u(k) + K y(k+1)
I want to test it with the same initial state and covariance matrix as in simulation-1, but the result is very different from the reference trajectory and even from the result of simulation-1. I have also tested it with the covariance matrix P_inf, which is solved from the discrete Riccati equation, and the corresponding gain
K_inf = P_inf C' (C P_inf C' + R)^(-1)
This doesn't work either.
I am wondering:
How should I apply the steady-state KF, and how should I set the initial state for it?
Is the steady-state KF used for scalar systems?
Should I use it with LQ-controller or some others? | State Estimation of Steady Kalman Filter | 1.2 | 0 | 0 | 764 |
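As a hedged illustration of the steady-state computation discussed in both answers: iterating the discrete Riccati recursion until the covariance stops changing yields P_inf, and from it the constant gain K. Pure NumPy; the one-state A, C, Q, R below are just a toy example.

```python
import numpy as np

def steady_gain(A, C, Q, R, iters=500):
    """Iterate the discrete Riccati recursion to its fixed point and
    return the steady-state Kalman gain K and covariance P."""
    P = np.eye(A.shape[0])
    K = None
    for _ in range(iters):
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # gain
        P = A @ (P - K @ C @ P) @ A.T + Q               # update, then propagate
    return K, P

A = np.array([[1.0]])
C = np.array([[1.0]])
Q = np.array([[0.01]])   # small process noise, as in the question
R = np.array([[1.0]])
K, P = steady_gain(A, C, Q, R)
print(K)   # a small constant gain, usable in the simplified one-equation filter
```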
36,481,830 | 2016-04-07T16:17:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,matplotlib | 36,506,203 | 2 | false | 0 | 0 | On the first part of the question: a colormap in mpl is intended to represent a function from [0, 1] to colors. mpl only has two very simple classes for that purpose: LinearSegmentedColormap and ListedColormap. LinearSegmentedColormap can represent any colormap for which each of the three channels (R, G, B) is a piecewise linear function. ListedColormap is slightly more restricted in that it requires equal-sized pieces in that function (bins). Conceptually, there's no difference between a ListedColormap and a list of colors.
There is no need for any more complicated functions because matplotlib.colors.Normalize objects can be used to first transform the input data in any desired way into numbers from the [0, 1] interval.
Strictly speaking, it would have been enough to only use ListedColormap objects created from just two colors, but then Normalize objects would have to do a lot of work that deals with color concepts, which shouldn't be their concern. | 1 | 3 | 0 | What's the difference between a colormap (as required by contour and many other plotting functions) and a simple list of colors? How do I convert a list of colors (where each color is supposed to represent an equal-sized step) into a colormap? | Difference between cmap and a list of colors | 0 | 0 | 0 | 1,751 |
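Converting a plain list of colors with equal-sized steps into a colormap is exactly what ListedColormap does; no display is needed for this sketch:

```python
from matplotlib.colors import ListedColormap, BoundaryNorm

colors = ["red", "green", "blue"]          # plain list of colors
cmap = ListedColormap(colors)               # now usable wherever a cmap is expected

# the [0, 1] interval is split into 3 equal bins:
print(cmap(0.0))   # (1.0, 0.0, 0.0, 1.0) -> red
print(cmap(0.99))  # (0.0, 0.0, 1.0, 1.0) -> blue

# for non-equal steps, pair it with a BoundaryNorm over your data values:
norm = BoundaryNorm([0, 10, 50, 100], cmap.N)
```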
36,482,154 | 2016-04-07T16:32:00.000 | 1 | 0 | 0 | 0 | python,matplotlib,event-handling,mouseevent | 36,489,509 | 2 | false | 0 | 0 | I feel that this might be more easily resolved by altering the hardware - can you temporarily unplug the mouse, or tape over the track pad to stop people fiddling with it?
I suggest this because your crashing script will always process mouse-clicks in some way, and if you don't know what's causing the crashes then you may be better off just ensuring that there are no clicks. | 1 | 0 | 1 | I've recently built a python script that interacts with an Arduino and a piece of hardware that uses LIDAR to map out a room. Everything works great, but anytime you click on the plot that is generated with maptotlib, the computer freaks out and crashes the script that is running. This is partly because I was given a $300 computer to run this on, so it's not very powerful. However, I feel like even a $300 computer should be able to handle a mouse click.
How can I ignore mouse clicks entirely with matplotlib so that the computer doesn't freak out and crash the script?
If that's not the correct solution, what might be a better solution?
Edit: This is an interactive plotting session (sort of, I just replace the old data with the new data, there is no plot.ion() command called). So, I cannot just save the plot and show it. The Arduino transmits data constantly. | Ignore all mouse clicks on a matplotlib plot | 0.099668 | 0 | 0 | 175 |
36,482,419 | 2016-04-07T16:45:00.000 | -2 | 1 | 0 | 0 | bdd,python-behave | 37,288,247 | 5 | false | 0 | 0 | You can use predefined "@skip" tag for the scenarios or features that you would like to skip and behave will automatically skip testing the scenario or the whole feature. | 1 | 12 | 0 | I'm juggling code branches that were partly done a few months ago, with intertwined dependencies. So the easiest way to move forward is to mark failing tests on a particular branch as pending (the rspec way) or to be skipped, and deal with them after everything has been merged in.
In its final report, behave reports the number of tests that passed, the # failed, the # skipped, and the # untested (which are non-zero when I press Ctrl-C to abort a run). So behave has a concept of skipped tests. How do I access that? | How do I skip a test in the behave python BDD framework? | -0.07983 | 0 | 0 | 15,075 |
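Besides tag filtering on the command line, a common pattern (assuming a reasonably recent behave, where Scenario.skip() exists) is an environment.py hook that skips anything tagged @skip. The FakeScenario stub below is only there so the hook can be exercised outside of behave:

```python
# environment.py (behave hook file) -- skip anything tagged @skip
def before_scenario(context, scenario):
    if "skip" in scenario.tags:
        scenario.skip("Marked @skip until the branches are merged")

# --- stand-in object so the hook can be demonstrated here ---
class FakeScenario:
    def __init__(self, tags):
        self.tags = tags
        self.skip_reason = None
    def skip(self, reason=""):
        self.skip_reason = reason

s = FakeScenario(["skip"])
before_scenario(None, s)
print(s.skip_reason)   # Marked @skip until the branches are merged
```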
36,485,392 | 2016-04-07T19:25:00.000 | 0 | 0 | 1 | 1 | ipython,beaker-notebook | 37,124,128 | 2 | false | 0 | 0 | If you want to change the current working directory, I don't think that's possible.
But if you want to serve files as in make them available to the web server that creates the page, use ~/.beaker/v1/web as described in the "Generating and accessing web content" tutorial. | 1 | 2 | 0 | Trying to experiment with Beaker Notebooks, but I can not figure out how to launch from a specified directory. I've downloaded the .zip file (I'm on Windows 10), and can launch from that directory using the beaker.command batch file, but cannot figure out where to configure or set a separate launch directory for a specific notebook. With Jupyter notebooks, launching from the saved .ipynb file serves from that directory, but I cannot figure out how to do the same for Beaker notebooks.
Does anyone know the correct method to serve a Beaker Notebook from various parent directories?
Thanks. | How to serve Beaker Notebook from different directory | 0 | 0 | 0 | 178 |
36,489,891 | 2016-04-08T01:13:00.000 | 0 | 0 | 1 | 0 | python,api,get,onenote,onenote-api | 36,504,337 | 1 | false | 1 | 0 | Yesterday (2016/04/08) there was an incident with the OneNote API which prevented us from updating the list of pages. This was resolved roughly at 11PM PST and the API should be returning all pages now. | 1 | 2 | 0 | I want to get a list of all the pages in a given section for a given notebook. I have the id for the section I want and use it in a GET call to obtain a dictionary of the section's information. One of the keys in the dictionary is "pagesUrl". A GET call on this returns a list of dictionaries where there's one dictionary for each page in this section.
Up until yesterday, this worked perfectly. However, as of today, pagesUrl only returns pages created within the last minute or so. Anything older isn't seen. Does anyone know why this is happening? | OneNote's PagesUrl Not Including All Pages in a Section | 0 | 0 | 1 | 61 |
36,490,085 | 2016-04-08T01:40:00.000 | 2 | 1 | 0 | 0 | python,python-2.7,twitter,tweepy | 42,499,529 | 3 | false | 0 | 0 | Once you have a tweet, the tweet includes a user, which belongs to the user model. To call the location just do the following
tweet.user.location | 1 | 5 | 0 | I am starting to make a python program to get user locations, I've never worked with the twitter API and I've looked at the documentation but I don't understand much. I'm using tweepy, can anyone tell me how I can do this? I've got the basics down, I found a project on github on how to download a user's tweets and I understand most of it. | How to get twitter user's location with tweepy? | 0.132549 | 0 | 1 | 13,479 |
36,491,776 | 2016-04-08T04:52:00.000 | 1 | 0 | 1 | 0 | python,windows,virtualenv,virtualenvwrapper | 36,491,794 | 2 | false | 0 | 0 | When you do mkvirtualenv name it's creating the virtualenv in the current directory you're in in the shell. To create it in the place you want you either need to specify the path or navigate there and create the virtualenv | 1 | 3 | 0 | I'm extremely new to Python and virtualenv, so I apologize if this is an obvious question. I've got a C drive and a D drive on my pc running windows 10. I have the python and scripts path set to the proper location on the D drive.
In console, i did a pip install virtualenv and pip install virtualenvwrapper-win. After that i navigated to a folder on my D drive where i want my projects. When I ran mkvirtualenv HelloWorld, it seems to have created the virtualenvironment in my C:/users/me folder. Additionally, the virtual env was not activated by default and I was not moved to the correct directory in my console.
How can I ensure that mkvirtualenv creates new virtual environments in the correct folder on my D drive? And what am I doing wrong to not activate virtual env after creation? | Virtualenvwrapper creating project in wrong directory? | 0.099668 | 0 | 0 | 2,709 |
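One thing worth checking (this is virtualenvwrapper-win's documented behavior, though the D:\Envs path below is just an example): mkvirtualenv places environments under the WORKON_HOME environment variable, defaulting to %USERPROFILE%\Envs, which would explain them landing under C:\Users\me regardless of your current directory.

```shell
:: Windows cmd -- point virtualenvwrapper-win at the D: drive
setx WORKON_HOME "D:\Envs"

:: open a NEW console so the variable takes effect, then:
mkvirtualenv HelloWorld
workon HelloWorld
```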
36,494,553 | 2016-04-08T08:05:00.000 | 0 | 1 | 0 | 1 | python,bash,apache,server,pyramid | 36,502,436 | 2 | false | 0 | 0 | Well you changed the owner of the files to root, and then you ran as root, and it worked, so that makes sense. The problem is that root isn't necessarily the user executing the script in your webapp. You need to find which user is trying to execute the script, and then change the files' ownership to that user (depending on how the scripts are invoked, you may need to chmod them as well to make sure they are executable) | 1 | 0 | 0 | I have a web application which is written with python (Pyramid) and in the apache server, inside of the one of the Python we are launching a SH file which is a service to sending SMS.
The problem is that permission is always denied.
We tried to run the SH file by logging in as root, and it works.
We changed the owner of both files (the Python one and the SH one) to 'root', but it does not work!
any ideas?! | Permission denied or Host key problems | 0 | 0 | 0 | 39 |
36,498,754 | 2016-04-08T11:45:00.000 | 2 | 0 | 1 | 0 | python,algorithm,numpy,mesh,voxel | 36,692,391 | 2 | true | 0 | 0 | I solved my problem; I'm writing it up here in case it is useful. The Marching Cubes algorithm is good, but it doesn't work well on binarized arrays. So:
1) Gaussian Filter applied to the 3D array (scipy.filters)
2) Marching Cubes algorithm to mesh it (scikit-image tool)
3) Sum up the areas of triangles (scikit-image tool) | 1 | 1 | 0 | I have a voxels assembly which represents a stone in 3D. It's a binarized numpy 3D array in which 1 is assigned to voxels which make up my stone and 0 elsewhere.
I want to:
create its meshed surface
calculate the surface area on it.
But how? | Python: Mesh a voxels assembly to compute the surface area | 1.2 | 0 | 0 | 2,637 |
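The three steps from the answer, sketched with SciPy and scikit-image (assuming a reasonably recent scikit-image where measure.marching_cubes returns verts/faces/normals/values); the toy cube below stands in for the stone:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

# toy "stone": a solid 10x10x10 cube of 1s inside a 20^3 volume of 0s
voxels = np.zeros((20, 20, 20))
voxels[5:15, 5:15, 5:15] = 1.0

smoothed = gaussian_filter(voxels, sigma=1.0)                 # 1) Gaussian filter
verts, faces, normals, values = measure.marching_cubes(
    smoothed, level=0.5)                                      # 2) mesh the surface
area = measure.mesh_surface_area(verts, faces)                # 3) sum triangle areas
print(area > 0)   # True: a positive surface area, near the cube's nominal 600
```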
36,500,141 | 2016-04-08T12:56:00.000 | 1 | 0 | 1 | 0 | python,dependencies,pip,virtualenv,pythonpath | 36,500,820 | 2 | false | 0 | 0 | For dependency isolation and management I always have one virtualenv per application. This prevents issues with inter-application dependency conflicts and if there are dependency conflicts within an application's dependency any hackery to workaround them is limited to the affected environment.
Also, dependency upgrades can be performed independently per application. | 1 | 9 | 0 | we are trying to install several own written python3 applications sharing some libraries with conflicting versions.
We are currently discussing relying on the order of packages on the PYTHONPATH and/or on Python's virtualenv.
How would you handle this? | Python packages with conflicting dependencies | 0.099668 | 0 | 0 | 14,266 |
36,504,357 | 2016-04-08T16:09:00.000 | 5 | 0 | 1 | 0 | python,c++,naming-conventions | 36,504,455 | 2 | false | 0 | 0 | I think that writing the adjective before the noun is the more common thing. But if you think about it for a second, writing it after the noun is easier when reading your code. The adjective can easily be seen as a attribute of the noun. In a logic way of thinking, this is the better way in my opinion. | 1 | 14 | 0 | I find that I am often a little inconsistent in my naming conventions for variables, and I'm just wondering what people consider to be the best approach. The specific convention I am talking about is when a variable needs to be described by a noun and an adjective, and whether the adjective should come before or after the noun. The question is general across all programming languages, although personally I use C++ and Python.
For example, consider writing a GUI, which has two buttons; one on the right, and one on the left. I now need to create two variables to store them. One option would be to have the adjective before the noun, and call them left_button, and right_button. The other option would be to have the adjective after the noun, and call them button_left, and button_right. With the first case, it makes more sense when reading out loud, because in English you would always place the adjective before the noun. However, with the second case, it helps to structure the data semantically, because button reveals the most information about the variable, and left or right is supplementary information.
So what do you think? Should the adjective come before or after the noun? Or is it entirely subjective? | Should variable names have adjectives before or after the noun? | 0.462117 | 0 | 0 | 1,901 |
36,518,400 | 2016-04-09T14:50:00.000 | 3 | 0 | 0 | 1 | python,asynchronous,concurrency,celery | 36,518,663 | 2 | false | 0 | 0 | Asynchronous IO is a way to use sockets (or more generally file descriptors) without blocking. This term is specific to one process or even one thread. You can even imagine mixing threads with asynchronous calls. It would be completely fine, yet somewhat complicated.
Now I have no idea what asynchronous task queue means. IMHO there's only a task queue, it's a data structure. You can access it in asynchronous or synchronous way. And by "access" I mean push and pop calls. These can use network internally.
So task queue is a data structure. (A)synchronous IO is a way to access it. That's everything there is to it.
The term asynchronous is heavily overused nowadays. The hype is real.
As for your second question:
Message is just a set of data, a sequence of bytes. It can be anything. Usually these are some structured strings, like JSON.
Task == message. The different word is used to notify the purpose of that data: to perform some task. For example you would send a message {"task": "process_image"} and your consumer will fire an appropriate function.
Task queue Q is a just a queue (the data structure).
Producer P is a process/thread/class/function/thing that pushes messages to Q.
Consumer (or worker) C is a process/thread/class/function/thing that pops messages from Q and does some processing on it.
Message broker B is a process that redistributes messages. In this case a producer P sends a message to B (rather then directly to a queue) and then B can (for example) duplicate this message and send to 2 different queues Q1 and Q2 so that 2 different workers C1 and C2 will get that message. Message brokers can also act as protocol translators, can transform messages, aggregate them and do many many things. Generally it's just a blackbox between producers and consumers.
As you can see there are no formal definitions of those things and you have to use a bit of intuition to fully understand them. | 1 | 7 | 0 | As I understand, asynchronous networking frameworks/libraries like twisted, tornado, and asyncio provide asynchronous IO through implementing nonblocking sockets and an event loop. Gevent achieves essentially the same thing through monkey patching the standard library, so explicit asynchronous programming via callbacks and coroutines is not required.
On the other hand, asynchronous task queues, like Celery, manage background tasks and distribute those tasks across multiple threads or machines. I do not fully understand this process but it involves message brokers, messages, and workers.
My questions:
Do asynchronous task queues require asynchronous IO? Are they in any way related? The two concepts seem similar, but the implementations at the application level are different. I would think that the only thing they have in common is the word "asynchronous", so perhaps that is throwing me off.
Can someone elaborate on how task queues work and the relationship between the message broker (why are they required?), the workers, and the messages (what are messages? bytes?).
Oh, and I'm not trying to solve any specific problems, I'm just trying to understand the ideas behind asynchronous task queues and asynchronous IO. | Asynchronous task queues and asynchronous IO | 0.291313 | 0 | 0 | 2,039 |
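The vocabulary from the answer (producer P, consumer/worker C, queue Q, messages as plain JSON strings) in a few lines of stdlib code; note there is no asynchronous IO here at all: the worker simply blocks on the queue. A real broker such as RabbitMQ or Redis replaces the in-process queue so P and C can live in different processes or on different machines.

```python
import json
import queue
import threading

task_queue = queue.Queue()          # Q: just a data structure
results = []

def producer():                     # P: pushes messages (bytes/JSON)
    task_queue.put(json.dumps({"task": "add", "args": [2, 3]}))
    task_queue.put(None)            # sentinel: no more work

def consumer():                     # C / worker: pops messages, does the work
    while True:
        msg = task_queue.get()
        if msg is None:
            break
        task = json.loads(msg)
        if task["task"] == "add":
            results.append(sum(task["args"]))

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(results)  # [5]
```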
36,519,225 | 2016-04-09T16:01:00.000 | 0 | 0 | 0 | 0 | python,speech-recognition,cmusphinx,htk,autoencoder | 36,650,716 | 1 | true | 0 | 0 | I am not aware of any decoder that could help you. Speech recognition software does not work this way.
Usually such a thing requires a custom implementation of dynamic beam search. That is not a huge task, maybe 100 lines of code. It also depends on what your phonetic decoder produces: is it a phonetic lattice (ideally), a 1-best result with scores, or simply a 1-best result without scores?
In case you have a proper lattice you might want to try openfst toolkit where you convert LM and dictionary to FST, then compose with lattice FST and then use fstbestpath to find the best path. Still, instead of all those phonetic conversions you can simply write a dynamic search.
Baidu in their projects also converts speech to letters and then uses a language model to fix the letter sequence. But they say that even without the language model it works equally well. | 1 | 0 | 1 | I've implemented a phoneme classifier using an autoencoder (given an audio file array it returns all the recognized phonemes). I want to extend this project so that word recognition is possible. Does there exist an already trained HMM model (in English) that will recognize a word given a list of phonemes?
Thanks everybody. | Already trained HMM model for word recognition | 1.2 | 0 | 0 | 354 |
36,520,287 | 2016-04-09T17:28:00.000 | 0 | 0 | 1 | 0 | python,linux,input,console | 36,520,388 | 2 | false | 0 | 0 | Have you tried?
myinput = raw_input("Enter your input ->")
I am using Windows; it works fine. I don't have a Linux machine to test on.
Also, why are you pressing arrow keys? Does the program need them?
You can simply make arrow using a dash and greater than keys. | 2 | 0 | 0 | How can I allow the arrows in raw_input()?
Is there a better way?
When I write and I use the left arrow, ^[[D appears.
I am using Linux. | Python raw input with arrows | 0 | 0 | 0 | 1,666 |
36,520,287 | 2016-04-09T17:28:00.000 | 3 | 0 | 1 | 0 | python,linux,input,console | 36,520,529 | 2 | false | 0 | 0 | If you are Windows, the cursor keys work normally to allow editing of your input. On Linux, I find that I need to import readline to get the input editing module.
If you Google for "python readline" you will get many more hits and suggestions on enhanced editing, tab completion, etc. | 2 | 0 | 0 | How can I allow the arrows in raw_input()?
Is there a better way?
When I write and I use the left arrow, ^[[D appears.
I am using Linux. | Python raw input with arrows | 0.291313 | 0 | 0 | 1,666 |
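A minimal sketch of the readline answer for Linux: the import alone is what activates arrow-key editing for raw_input()/input(); the history calls below just make the effect observable non-interactively.

```python
import readline  # importing it is enough to enable line editing for input()

readline.add_history("previous command")
# the item just added is the last entry in the history:
print(readline.get_history_item(readline.get_current_history_length()))
# -> previous command

# after this, input()/raw_input() supports arrow keys instead of printing ^[[D
```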
36,521,976 | 2016-04-09T19:59:00.000 | 0 | 0 | 0 | 0 | python,oauth,oauth2client | 36,541,353 | 1 | false | 1 | 0 | Update the grant_type to implicit | 1 | 0 | 0 | The standard (for me) OAuth flow is:
generate url flow.step1_get_authorize_url() and ask user to allow app access
get the code
get the credentials with flow.step2_exchange(auth_code)
But I am faced with another service, where I just need to initiate a POST request to token_uri with client_id and client_secret passed as form values (application/x-www-form-urlencoded); grant_type is client_credentials and scope is also passed as a form field value.
Does the oauth2client library support it? | How to get OAuth token without user consent? | 0 | 0 | 1 | 106 |
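oauth2client is built around the authorization-code flow; for a plain client_credentials grant it is often simpler to build the token request yourself. A stdlib-only sketch (the endpoint and credentials are illustrative; the request is constructed but not sent):

```python
from urllib.parse import urlencode
from urllib.request import Request

body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "my-client-id",          # illustrative values
    "client_secret": "my-client-secret",
    "scope": "read write",
}).encode("ascii")

req = Request(
    "https://example.com/oauth/token",    # the service's token_uri
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)
# urlopen(req) would return a JSON body containing the access_token
print(b"grant_type=client_credentials" in req.data)  # True
```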
36,524,836 | 2016-04-10T01:50:00.000 | 12 | 0 | 1 | 0 | python,list,append,operators,extend | 36,524,869 | 1 | true | 0 | 0 | In python += on a list is equivalent to extend method on that list. | 1 | 3 | 0 | Python lists have a += operator as well as append and extend methods.
If l is a list, is l += ... equivalent to l.append(...), l.extend(...), both, or neither? | Is a Python list's += operator equivalent to append() or extend()? | 1.2 | 0 | 0 | 5,701 |
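A few lines that make the equivalence (and the difference from append and from plain +) concrete:

```python
l = [1, 2]
alias = l                  # second name for the same list object

l += [3, 4]                # like l.extend([3, 4]): mutates in place
print(alias)               # [1, 2, 3, 4] -- the alias sees the change

l.append([5, 6])           # append adds exactly ONE element (a nested list here)
print(l)                   # [1, 2, 3, 4, [5, 6]]

l2 = [1, 2]
l3 = l2
l2 = l2 + [3, 4]           # plain + builds a NEW list and rebinds the name
print(l3)                  # [1, 2] -- the old object is untouched
```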
36,526,219 | 2016-04-10T05:33:00.000 | 2 | 0 | 0 | 0 | sql-server,web-services,ironpython,spotfire | 36,533,707 | 2 | false | 0 | 0 | Shadowfax is correct that you should review the How to Ask guide.
That said, Spotfire offers this feature in two ways:
Use IronPython scripting attached to an action control to retrieve the data. This is a very rigid solution that offers no caching, and the data must be retrieved and placed in memory each time the document is opened. I'll leave you to the search feature here on SO; I've posted a sample document somewhere.
the ideal solution is to use a separate product called Spotfire Advanced Data Services. this data federation layer can mashup data and perform advanced, custom caching based on your needs. the data is then exposed as an information link in Spotfire Server. you'll need to talk to your TIBCO sales rep about this. | 1 | 0 | 0 | There is a use case in which we would like to add columns from the data of a webservice to our original sql data table.
If anybody has done that then please do comment. | How to use a web service as a datasource in Spotfire | 0.197375 | 1 | 0 | 959
36,529,242 | 2016-04-10T11:41:00.000 | 3 | 0 | 1 | 0 | python,kivy | 36,529,332 | 1 | true | 0 | 1 | It has nothing to do with Kivy; these are basics of the Python language. Instead of overwriting the variable with the = sign, use += to append. | 1 | 0 | 0 | Is there a way to insert text into a TextInput without overwriting the previous text? Textinput().text = "something" deletes the previous text. I also want to add it every time on a new line. | Insert text to TextInput() in Kivy | 1.2 | 0 | 0 | 799
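A stand-in sketch of that append idiom. `FakeTextInput` is a dummy so the snippet runs without Kivy installed; a real `kivy.uix.textinput.TextInput` exposes the same `.text` attribute:

```python
# Build the new text from the old one instead of assigning over it.
class FakeTextInput(object):       # stand-in for kivy's TextInput
    text = ""

ti = FakeTextInput()
ti.text = "first line"
ti.text += "\n" + "second line"    # append on a new line, keeping old text

print(ti.text)
```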
36,531,637 | 2016-04-10T15:26:00.000 | 0 | 0 | 0 | 0 | python,linux,libreoffice,libreoffice-writer | 36,532,645 | 3 | true | 0 | 0 | Macros for OpenOffice/LibreOffice in Python don't have to be executable.
The location is right, though you might want to create a sub-directory (e.g. for CALC or WRITER), and put it in there - since otherwise it will be visible in all other components (where it might not work).
Did you restart LibreOffice after copying? | 2 | 1 | 0 | I'm trying to learn to write macros for LibreOffice in Python. I made a simple macro, put it into ~/.config/libreoffice/4/user/Scripts/python/ and found it in Tools/Macros/Organize Macros/Python.../My Macros. So far everything works as expected; the macro is visible.
But when I click on it, the Run button stays grayed out. I cannot run it. Does anyone know what might be the cause?
Thanks
EDIT: File is executable, I set the rights to 777 just to be sure. Python code is valid.
EDIT2: I reinstalled LibreOffice and it works now. Probably was something wrong with my installation. Script file in ~/.config/libreoffice/4/user/Scripts/python/ set as executable works. And @ngulam examples worked even before reinstall.
Thanks for all the advice you've given me and have a good day. | Can't run Python macro in LibreOffice | 1.2 | 0 | 0 | 1,419 |
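For reference, a minimal sketch of what a macro file in that `Scripts/python/` location can look like. The `XSCRIPTCONTEXT` name is injected by LibreOffice at run time; outside LibreOffice this file merely defines the function:

```python
# Minimal LibreOffice Writer macro sketch (e.g. saved as hello.py under
# ~/.config/libreoffice/4/user/Scripts/python/).
def hello_writer(*args):
    """Replace the text of the current Writer document with a greeting."""
    doc = XSCRIPTCONTEXT.getDocument()   # noqa: F821 -- provided by LibreOffice
    doc.getText().setString("Hello from a Python macro!")

# Only the names listed here appear under Tools > Macros > Python
g_exportedScripts = (hello_writer,)
```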
36,531,637 | 2016-04-10T15:26:00.000 | 0 | 0 | 0 | 0 | python,linux,libreoffice,libreoffice-writer | 39,428,308 | 3 | false | 0 | 0 | Note: for LO 5.2 you need to put your Python scripts here:
/opt/libreoffice5.2/share/Scripts/python
... or, better still, place a link there to a folder that you can edit without sudo, i.e. issue a command like this:
cd /opt/libreoffice5.2/share/Scripts/python
sudo ln -s /home/rich/Sources/Pythons rje_pythons
where /home/rich/Sources/Pythons is your more convenient folder. | 2 | 1 | 0 | I'm trying to learn to write macros for LibreOffice in Python. I made a simple macro, put it into ~/.config/libreoffice/4/user/Scripts/python/ and found it in Tools/Macros/Organize Macros/Python.../My Macros. So far everything works as expected; the macro is visible.
But when I click on it, the Run button stays grayed out. I cannot run it. Does anyone know what might be the cause?
Thanks
EDIT: File is executable, I set the rights to 777 just to be sure. Python code is valid.
EDIT2: I reinstalled LibreOffice and it works now. Probably was something wrong with my installation. Script file in ~/.config/libreoffice/4/user/Scripts/python/ set as executable works. And @ngulam examples worked even before reinstall.
Thanks for all the advice you've given me and have a good day. | Can't run Python macro in LibreOffice | 0 | 0 | 0 | 1,419 |
36,532,089 | 2016-04-10T16:05:00.000 | 2 | 0 | 0 | 0 | python,opencv,rotation | 36,532,299 | 2 | false | 0 | 0 | Yes, it is possible for the initial pixel value not to be found in the transformed image.
To understand why this would happen, remember that pixels are not infinitely small dots, but they are rectangles with horizontal and vertical sides, with small but non-zero width and height.
After a 13 degrees rotation, these rectangles (which have constant color inside) will not have their sides horizontal and vertical anymore.
Therefore an approximation needs to be made in order to represent the rotated image using pixels of constant color, with sides horizontal and vertical. | 2 | 0 | 1 | Is it possible that a pixel value of an image changes after the image is rotated? I rotate an image, e.g. by 13 degrees, pick a random pixel value X before the rotation, then brute-force search the rotated image and cannot find any pixel with the same value as X. So is it possible that pixel values change after rotation? I rotate with the OpenCV library in Python.
Any help would be appreciated. | pixel value change after image rotate | 0.197375 | 0 | 0 | 1,412 |
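The interpolation argument above can be illustrated in pure Python with a made-up 2x2 "image", no OpenCV required. After a rotation, a destination pixel usually lands between source pixels, so its value is blended from the neighbours; `cv2.warpAffine`'s default `INTER_LINEAR` does this kind of bilinear mixing, which can produce values absent from the original image:

```python
# A point sampled between pixels gets a blended value that need not exist
# anywhere in the source image.
src = [[0, 10],
       [40, 90]]

def bilinear(img, x, y):
    """Sample img at fractional coordinates (x, y) with bilinear weights."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
    bot = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy

value = bilinear(src, 0.5, 0.5)   # halfway between all four pixels
print(value)                      # 35.0 -- not present anywhere in src
```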
36,532,089 | 2016-04-10T16:05:00.000 | -1 | 0 | 0 | 0 | python,opencv,rotation | 36,532,213 | 2 | true | 0 | 0 | If you just rotate within the same image plane, the image pixels will remain the same. Simple maths | 2 | 0 | 1 | Is it possible that a pixel value of an image changes after the image is rotated? I rotate an image, e.g. by 13 degrees, pick a random pixel value X before the rotation, then brute-force search the rotated image and cannot find any pixel with the same value as X. So is it possible that pixel values change after rotation? I rotate with the OpenCV library in Python.
Any help would be appreciated. | pixel value change after image rotate | 1.2 | 0 | 0 | 1,412 |
36,536,019 | 2016-04-10T21:58:00.000 | 0 | 0 | 0 | 1 | python,intellij-idea | 36,536,584 | 1 | false | 0 | 0 | You can always put your .idea folder into your .gitignore file (or the equivalent for your VCS) and prevent sharing it between systems and developers.
If you don't keep project-specific settings there, that's the recommended way to do it. | 1 | 0 | 0 | I have an Intellij 15 Python project created using the Python plugin. I'm on OSX and have the Project Python SDK set to /usr/bin/python. I've got another developer on my team who is on Windows and his SDK is at C:\Python27\Python.exe
Is there any way I can share this project via source control without us both stepping over each other's toes each time I update and need to change the SDK? | Share an Intellij Python Project Between Windows and OSX | 0 | 0 | 0 | 22
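For Git specifically, the ignore entries the answer suggests might look like this (a sketch; adjust to your project layout):

```
# .gitignore -- keep IDE metadata out of the shared repository
.idea/
*.iml
```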
36,538,401 | 2016-04-11T01:22:00.000 | 0 | 0 | 1 | 0 | python | 36,538,487 | 1 | false | 0 | 0 | I think the simplest answer is to open another temporary file for writing. Read 1200 characters at a time (read them into an array and directly compare the bytes at the offset you mention). If your condition is true, write to the temp file; otherwise do nothing. Rename the temp file to your file once you are done reading. Something along those lines. | 1 | 0 | 0 | I have a text file. The text file is very large, about 6 million lines, but I need only a relatively small subsection of that file.
The file is divided into chunks of about 1200 characters. I need to see whether characters 921-922 of each chunk correspond to five different numbers. If they don't, I need the entire chunk deleted.
How do I go about doing this? | Removing sections of a text file given a specific value for a character | 0 | 0 | 0 | 26 |
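The temp-file approach described in the answer can be sketched as follows. `CHUNK_SIZE`, the 0-based slice `[920:922]` (the question's "characters 921-922"), and the `ALLOWED` values are assumptions to adapt to the real file layout:

```python
# Stream the file in fixed-size chunks, keeping only chunks whose two
# characters at offsets 920-921 form one of the allowed two-digit codes.
import os
import tempfile

CHUNK_SIZE = 1200
ALLOWED = {"10", "23", "42", "57", "99"}   # placeholder for the five values

def filter_chunks(path, allowed=ALLOWED):
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with open(path) as src, os.fdopen(fd, "w") as dst:
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:
                break
            if chunk[920:922] in allowed:   # keep only matching chunks
                dst.write(chunk)
    os.replace(tmp_path, path)              # swap the result into place
```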
36,538,637 | 2016-04-11T01:54:00.000 | 0 | 0 | 1 | 0 | python | 46,259,141 | 2 | false | 0 | 0 | The difference depends on which module you use.
If you use from socket import socket, AF_INET, SOCK_STREAM:
s = socket() will work; s will be an uninitialized socket object.
s = socket.socket() will not work, because here socket is the constructor and not a module.
s = socket(AF_INET, SOCK_STREAM) will work; s will be an initialized socket object.
If you use import socket:
s = socket() will not work, because socket is a module and not a constructor (not a function, so you can't call it).
s = socket.socket() will work; s will be an uninitialized socket object.
s = socket(AF_INET, SOCK_STREAM) will not work, because socket is a module and not a constructor.
Hope this helps.
Can someone please explain what is the difference (if any) between the below?
s = socket()
s = socket.socket()
s = socket(AF_INET, SOCK_STREAM)
Thanks. | Difference in socket syntax | 0 | 0 | 1 | 58 |
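The two import styles can be exercised side by side; aliasing the module keeps both names available in one script:

```python
# Which spellings work under each import style.
import socket as socket_module
from socket import socket, AF_INET, SOCK_STREAM

# With `from socket import socket, ...` the name `socket` is the class:
s1 = socket()                       # works: default AF_INET / SOCK_STREAM
s3 = socket(AF_INET, SOCK_STREAM)   # works: explicit family and type

# ... but `socket.socket()` fails, since the class has no `socket` attribute:
try:
    socket.socket()
except AttributeError:
    pass

# With `import socket` the name is the module, so `socket.socket()` is the
# callable form:
s2 = socket_module.socket()

for s in (s1, s2, s3):
    s.close()
```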
36,540,220 | 2016-04-11T05:06:00.000 | 0 | 1 | 0 | 0 | android,python,python-appium | 59,561,249 | 1 | false | 1 | 0 | You need to configure multiple UDIDs and reference them in the code. In order to launch on multiple devices you need to create multiple instances of the Appium server, each listening on a different port (e.g. 4000, 4001, etc.). | 1 | 0 | 0 | I am trying Appium using the Python language. I have written a simple login script in Python; it executes perfectly on one Android device/emulator using Appium. But I have no idea how to run it on multiple devices/emulators. I read some forums but did not get any solutions (I am very new to automation with Appium).
Please help me with detailed steps or procedure.
Thank you. | How to run Appium script in multiple Android device/emulators? | 0 | 0 | 0 | 447 |
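A sketch of the per-device setup the answer describes: one Appium server per port and one desired-capabilities dict per UDID. The UDIDs and ports below are made-up examples; with the real Appium-Python-Client each pair would be passed to `webdriver.Remote(server_url, caps)`:

```python
# Build one (server URL, capabilities) pair per device; no network calls.
DEVICES = [
    {"udid": "emulator-5554", "port": 4000},
    {"udid": "emulator-5556", "port": 4001},
]

def build_session_configs(devices):
    configs = []
    for dev in devices:
        caps = {
            "platformName": "Android",
            "deviceName": dev["udid"],
            "udid": dev["udid"],
        }
        server = "http://localhost:%d/wd/hub" % dev["port"]
        configs.append((server, caps))
    return configs

for server, caps in build_session_configs(DEVICES):
    print(server, caps["udid"])
```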
36,540,431 | 2016-04-11T05:27:00.000 | 0 | 1 | 0 | 0 | c#,php,python,sms,sms-gateway | 36,540,940 | 1 | false | 0 | 0 | If you want to show the image content from the URL, all you can do is write a notification application which reads the SMS you are sending (filtering on the third-party number from which you send the SMS) and notifies the user with the image (by reading the content of the SMS and downloading it from the URL).
But then all your users will have to download your app, and it will need read permissions for SMS. | 1 | 0 | 0 | I am using a third party to send and receive SMS, which includes text plus the URL of an image. Is there any way that the latest smartphones can show the picture instead of the link, like downloadable content? | SMS with picture link | 0 | 0 | 1 | 107
36,542,813 | 2016-04-11T07:50:00.000 | -2 | 1 | 0 | 0 | python,twitter,tweepy | 36,543,167 | 1 | false | 0 | 0 | Yes, you can track it. Get the stats (favorite count, retweet_count) of your retweet at the time you retweet it, and save these stats somewhere as a checkpoint. The next time someone retweets it again you will get updated stats for your previous retweet, and you can compare the latest stats with the existing checkpoint. | 3 | 0 | 1 | If I retweet a tweet, is there a way of finding out how many times "my retweet" has been retweeted / favorited? Are there provisions in the Twitter API to retrieve that? There is no key in retweeted_status which gives that information. What am I missing here? | Find the number of retweets of a retweet using Tweepy? | -0.379949 | 0 | 1 | 1,191
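A pure-Python sketch of that checkpoint idea. The stat dicts stand in for the `retweet_count` / `favorite_count` attributes a Tweepy `Status` object reports; no Twitter API calls are made here:

```python
# Save the latest stats to disk and report how much each stat grew since
# the previous checkpoint.
import json
import os

def load_checkpoint(path):
    if os.path.exists(path):
        with open(path) as fh:
            return json.load(fh)
    return {"retweet_count": 0, "favorite_count": 0}

def update_and_diff(latest, path):
    """Return the growth of each stat since the checkpoint, then save."""
    previous = load_checkpoint(path)
    diff = {k: latest[k] - previous.get(k, 0) for k in latest}
    with open(path, "w") as fh:
        json.dump(latest, fh)
    return diff
```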
36,549,300 | 2016-04-11T12:54:00.000 | 2 | 0 | 0 | 0 | python,django,unicode,django-rest-framework,content-type | 36,549,592 | 1 | false | 1 | 0 | You are likely missing the system language settings available within Django. Depending on your stack (Apache or supervisor may strip the default system settings) you will need to define them explicitly.
The reason is that unicode is Python-internal. You need to encode the unicode into an output format; it could be UTF-8 or any ISO charset.
Note that this is different from the header # -*- coding: utf-8 -*-, whose goal is to decode the source file into unicode using the utf-8 charset. It doesn't mean that any output produced by that file will be converted using utf-8. | 1 | 1 | 0 | I am trying to display a unicode value u'\u20b9' from my SQLite database, using the browsable API of django-rest-framework 3.1.3
I don't get the expected value ₹ for currency_symbol, it returns the following, depending on the browser:
Chrome 49.0.2623.110 (64-bit):
Browsable API: "" (Blank String)
JSON: "₹"
Safari 9.1 (10601.5.17.4):
Browsable API: ₹
JSON: "₹"
CURL:
JSON: ₹
How do I get it to consistently display ₹? | Unicode not rendering in Django-Rest-Framework browsable API in Chrome | 0.379949 | 0 | 0 | 256 |
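The garbled strings in the question are classic mojibake: the UTF-8 bytes of ₹ being re-decoded under a Latin-ish charset. A small sketch of the mechanism (it illustrates the encoding point only; this is not DRF code):

```python
# -*- coding: utf-8 -*-
# U+20B9 encodes to three UTF-8 bytes; decoding those bytes with cp1252
# instead of UTF-8 produces mojibake.
rupee = u"\u20b9"                        # the rupee sign
utf8_bytes = rupee.encode("utf-8")
print(utf8_bytes)                        # b'\xe2\x82\xb9'

mojibake = utf8_bytes.decode("cp1252")   # wrong charset on the way out
print(mojibake)                          # one mis-decode step

# Repeating the mistake compounds it into strings like the ones in the
# question above:
double = mojibake.encode("utf-8").decode("cp1252")
print(double)
```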
36,552,029 | 2016-04-11T14:48:00.000 | 3 | 0 | 0 | 0 | python,flask,formatting,pygal | 36,914,709 | 1 | true | 0 | 1 | graph.value_formatter = lambda y: "{:,}".format(y) will get you the commas.
graph.value_formatter = lambda y: "${:,}".format(y) will get you the commas and the dollar sign.
Note that this formatting seems to be valid for Python 2.7 but would not work on 2.6. | 1 | 2 | 0 | I'm using Pygal (with Python / Flask) relatively successfully in regards to loading data, formatting colors, min/max, etc., but can't figure out how to format a number in Pygal using dollar signs and commas.
I'm getting 265763.557372895 and instead want $265,763.
This goes for both the pop-up boxes when hovering over a data point, as well as the y-axes.
I've looked through pygal.org's documentation to no avail. Does anyone know how to properly format those numbers?
UPDATE:
I'm not quite ready to mark this question "answered" as I still can't get the separating commas. However, I did find the following native formatting option in pygal. This eliminates trailing decimals (without using Python's int()) and adds a dollar sign:
graph.value_formatter = lambda y: "$%.0f" % y
Change the 0f to 2f if you prefer two decimals, etc. | Properly formatting numbers in pygal | 1.2 | 0 | 0 | 784 |
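The formatter callables above can be exercised outside pygal (pygal simply calls `value_formatter` on each value, so the lambdas are plain functions). Per the answer's note, the `{:,}` syntax needs Python 2.7 or later:

```python
# The formatter styles from the answer and the question's update, applied
# to the question's example value.
value = 265763.557372895

with_commas = lambda y: "{:,}".format(y)      # commas, full precision
old_style = lambda y: "$%.0f" % y             # dollar sign, no decimals
combined = lambda y: "${:,.0f}".format(y)     # both, but rounds the value
truncated = lambda y: "${:,}".format(int(y))  # both, truncating instead

print(with_commas(value))   # 265,763.557372895
print(old_style(value))     # $265764
print(combined(value))      # $265,764
print(truncated(value))     # $265,763 -- the exact output asked for
```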