Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
44,948,644 | 2017-07-06T12:16:00.000 | 0 | 0 | 0 | 0 | xml,python-2.7,odoo-8 | 44,948,828 | 1 | false | 1 | 0 | Use the filter function. Press "Add custom filter" and select "Groups" in the first line, "is equal to" in the second and your group name in the third.
Then press apply. | 1 | 0 | 0 | I need to display only the users who are in a group.
For example, I have two fields, group and assigned to.
If I select a group, then I should get only the list of users in that group.
Can you provide any solution for this problem, please? | Display users related to specified group | 0 | 0 | 0 | 29 |
44,953,313 | 2017-07-06T15:34:00.000 | 0 | 0 | 1 | 0 | python,linux,pip | 44,953,394 | 3 | false | 0 | 0 | X and Y represents the version of python you are using. | 1 | 3 | 0 | Sorry for the noob question - just trying to understand the flow of how python works.
Does anyone know what the difference between env/bin/python and env/lib/python3.6 is? It will be really helpful to know. (Note the different paths - bin and lib)
Thanks :)
EDIT: I only have one version of python installed in the environment. The one thing to notice here is that env/bin/python has a symbolic link to env/bin/python3.6 (which is a binary file, obviously). But there is a directory in env/lib/python3.6 in which there are directories like site-packages where the installed packages are stored.
So my question is... when is the binary file in /env/bin used and when is the directory accessed? When I say 'python' in the shell, it goes to the bin, but when I say 'import django' in the interpreter, it goes to python3.6 in lib and gets the package. Am I on the right track? | Difference between env/bin/python and env/lib/pythonX.Y (note the lib) | 0 | 0 | 0 | 664 |
44,953,935 | 2017-07-06T16:04:00.000 | 0 | 0 | 1 | 0 | python | 44,954,082 | 2 | false | 0 | 0 | You can use sys.exit() to terminate the running process, but your best option is probably just to re-intialize/read files rather than terminating your code and restarting (i.e., you could call main() again).
Hope this helps. | 1 | 0 | 0 | In my program, I have a config file which gets read on initialization and sets certain runtime variables. While the program is running a user can access and change these values in the config file through a menu I've created. Once these values have been modified I need the program to shut down and restart, so that it can run with the new config values.
I'm programming this in python and I am not sure how to go about doing this. | Restart Python Program after user changes | 0 | 0 | 0 | 416 |
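A minimal sketch of the re-initialise-instead-of-restart suggestion in the answer above; the config file name, the loader and the menu loop are hypothetical stand-ins for the asker's program.

```python
import json
import sys

def load_config(path="config.json"):
    # Hypothetical config loader: re-read the file to pick up user edits.
    with open(path) as fh:
        return json.load(fh)

def main():
    config = load_config()
    while True:
        choice = input("(r)un, (e)dit config, (q)uit: ").strip().lower()
        if choice == "r":
            print("running with", config)
        elif choice == "e":
            # After the user changes the file through the menu, just reload it
            # instead of shutting the whole program down and restarting.
            config = load_config()
        elif choice == "q":
            sys.exit(0)

if __name__ == "__main__":
    main()
```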
44,954,521 | 2017-07-06T16:33:00.000 | 4 | 0 | 0 | 0 | python,django,django-models | 44,954,903 | 1 | true | 1 | 0 | You don't need to update models if you just added new data. Models are related to a database structure only. | 1 | 1 | 0 | I am relatively new to Django.
I have managed to create a basic app and all that without problems and it works fine.
The question probably has been asked before.
Is there a way to update existing Django models already mapped to existing databases when the underlying database is modified?
To be specific, I have mysql database that I use for my Django app as well as some standalone python and R scripts. Now, it is much easier to update the mysql database with, say, daily stock prices, everyday from my existing scripts outside Django models. Ideally, what I would like is to have my Django models that are already mapped to these tables to reflect the updated data.
I know there is $ python manage.py inspectdb for creating models from existing databases. But that is not the objective.
From what I have gathered so far from the docs and online searches, it is imperative to update the backend database through Django models, not outside of them. Is that really the case? As long as the table structure doesn't change, I really don't see why this should not be allowed. A database is meant to serve multiple clients, isn't it? With Django being one of them.
And I can not provide a reproducible example as it is a conceptual question.
If this functionality doesn't exist, imho, it really should.
Thanks,
Kaustubh | Django models update when backend database updated | 1.2 | 1 | 0 | 514 |
44,955,528 | 2017-07-06T17:31:00.000 | 0 | 0 | 1 | 0 | python | 44,977,316 | 3 | true | 0 | 0 | Using coroutines (multithreading) will provide the desired concurrent functionality. Source in the comments of the question and of user2357112's answer. | 1 | 0 | 0 | Suppose I want a program Foo.py which has some arbitrary routines Bar(), Quux(), and Fizz(). Let's say that the usual order of execution from a procedural perspective should be Bar() -> Quux() -> Fizz(). However, Fizz() should conditionally call a function Buzz() depending on some runtime action, and calling Buzz() at any time during Fizz() should return the process back to Quux().
I have a fair understanding of how concurrent processes can be implemented in assembly using system calls depending on the architecture, but what options are available to me in Python, where I can't – and frankly would prefer not to – use lots of jumps and directly move an instruction pointer around? When searching for an answer, I found loops and recursion as a suggestion for going back in a program. I don't think a loop would work without stopping the Fizz() process to wait for the condition check for Buzz(), and I'm not sure how recursion could be implemented in this scenario either. (My Buzz() would be like a "Back" button on a GUI). | What are my options for navigating through subroutines? | 1.2 | 0 | 0 | 55 |
44,955,927 | 2017-07-06T17:57:00.000 | 0 | 0 | 1 | 0 | sql,python-3.x,pycharm | 50,968,882 | 5 | false | 0 | 0 | I had a same problem but its fixed this way.
Copied "rc.exe" and "rcdll.dll" from "C:\Program Files (x86)\Windows Kits\8.1\bin\x86"
Pasted "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin" | 2 | 4 | 0 | I'm working with Pycharm in a project to read SQL DBs ,I'm working in a windows 10 64bits workstation and I'm trying to install the module pymssql, I have already installed VS2015 to get all requirements but now each time that i try to install i got the message:
error: command 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\cl.exe' failed with exit status 2
I saw on message details the error in:
_mssql.c(266): fatal error C1083: Cannot open include file: 'sqlfront.h': No such file or directory
How can I figure this out? Thanks | Install pymssql 2.1.3 in Pycharm | 0 | 1 | 0 | 3,976 |
44,955,927 | 2017-07-06T17:57:00.000 | 0 | 0 | 1 | 0 | sql,python-3.x,pycharm | 69,630,619 | 5 | false | 0 | 0 | In my case helped me rollback to Python 3.8. Same problem I had on 3.10 x64 | 2 | 4 | 0 | I'm working with Pycharm in a project to read SQL DBs ,I'm working in a windows 10 64bits workstation and I'm trying to install the module pymssql, I have already installed VS2015 to get all requirements but now each time that i try to install i got the message:
error: command 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\cl.exe' failed with exit status 2
I saw on message details the error in:
_mssql.c(266): fatal error C1083: Cannot open include file: 'sqlfront.h': No such file or directory
How can I figure this out? Thanks | Install pymssql 2.1.3 in Pycharm | 0 | 1 | 0 | 3,976 |
44,956,371 | 2017-07-06T18:25:00.000 | 10 | 0 | 1 | 0 | python,pip,spyder | 47,725,283 | 12 | false | 0 | 0 | Try the command spyder3
If you check the scripts folder you'll find spyder3.exe | 4 | 28 | 0 | I downloaded spyder using the
pip install spyder
in my Windows 10 32-bit operating system, but I don't see any desktop icons or exe files to start running the IDE. I downloaded Spyder 3, and my Python is 3.6.
I even tried creating a shortcut of spyder.exe from the Python3.6/Scripts folder, but it won't open. | How to start Spyder IDE on Windows | 1 | 0 | 0 | 132,467 |
44,956,371 | 2017-07-06T18:25:00.000 | 3 | 0 | 1 | 0 | python,pip,spyder | 53,437,728 | 12 | false | 0 | 0 | In case if you want the desktop icon
In desktop, create a new shortcut, in Location paste this
%comspec% /k spyder3
then type the name Spyder,
Now you may have Desktop Icon for opening Spyder | 4 | 28 | 0 | I downloaded spyder using the
pip install spyder
in my Windows 10 32-bit operating system, but I don't see any desktop icons or exe files to start running the IDE. I downloaded Spyder 3, and my Python is 3.6.
I even tried creating a shortcut of spyder.exe from the Python3.6/Scripts folder, but it won't open. | How to start Spyder IDE on Windows | 0.049958 | 0 | 0 | 132,467 |
44,956,371 | 2017-07-06T18:25:00.000 | 5 | 0 | 1 | 0 | python,pip,spyder | 44,956,442 | 12 | false | 0 | 0 | Open a command prompt. Enter the command spyder. Does anything appear? If an exception is preventing it from opening, you would be able to see the reason here. If the command is not found, update your environment variables to point to the Python3.6/Scripts folder, and run spyder again (in a new cmd prompt). | 4 | 28 | 0 | I downloaded spyder using the
pip install spyder
in my Windows 10 32-bit operating system, but I don't see any desktop icons or exe files to start running the IDE. I downloaded Spyder 3, and my Python is 3.6.
I even tried creating a shortcut of spyder.exe from the Python3.6/Scripts folder, but it won't open. | How to start Spyder IDE on Windows | 0.083141 | 0 | 0 | 132,467 |
44,956,371 | 2017-07-06T18:25:00.000 | 0 | 0 | 1 | 0 | python,pip,spyder | 47,845,290 | 12 | false | 0 | 0 | Install Ananconda packages and within that launch spyder 3 for first time. Then by second time you just click on spyder under anaconda in all programs. | 4 | 28 | 0 | I downloaded spyder using the
pip install spyder
in my Windows 10 32-bit operating system, but I don't see any desktop icons or exe files to start running the IDE. I downloaded Spyder 3, and my Python is 3.6.
I even tried creating a shortcut of spyder.exe from the Python3.6/Scripts folder, but it won't open. | How to start Spyder IDE on Windows | 0 | 0 | 0 | 132,467 |
44,956,676 | 2017-07-06T18:44:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 44,957,535 | 4 | false | 0 | 0 | This may not be the most efficient solution, but you could also just hard code it e.g. create a variable equivalent to zero, add one to the variable for each word in the line, and append the word to a list when variable = 5. Then reset the variable equal to zero. | 1 | 1 | 0 | What would be a pythonic way to create a list of (to illustrate with an example) the fifth string of every line of a text file, assuming it ressembles something like this:
12, 27.i, 3, 6.7, Hello, 438
In this case, the script would add "Hello" (without quotes) to the list.
In other words (to generalize), with an input "input.txt", how could I get a list in python that takes the nth string (n being a defined number) of every line?
Many thanks in advance! | How to create a list of string in nth position of every line in Python | 0 | 0 | 0 | 202 |
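A small sketch of the counting idea from the answers above; the file name and n are placeholders.

```python
def nth_field_per_line(path, n):
    # Collect the n-th comma-separated value (1-based) from every line.
    result = []
    with open(path) as fh:
        for line in fh:
            fields = [f.strip() for f in line.split(",")]
            if len(fields) >= n:
                result.append(fields[n - 1])
    return result

# e.g. nth_field_per_line("input.txt", 5) -> ["Hello", ...]
```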
44,958,661 | 2017-07-06T20:53:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,initialization | 44,958,695 | 1 | true | 0 | 0 | What I like to do is:
1 - Store all the config settings within a text file (such as a config.json)
2 - Create a class Config which loads the json file(s) and provides useful methods to load and read the configs, perform some concatenations, or simply returns it as a dict
4 - I ensure I do not load the config(s) multiple times (goal: single disk read and config caching) by using a module Globals (which is just an empty Global.py file) in which I store the dict and controls if it exists before deciding to load the config.
3 - import Config, do a config = Config() and I am good to go | 1 | 0 | 0 | I have a few Python scripts that use the same configs and variable initialization. How would I extract this common code out into a single file to be included everywhere? What is the best practice? | Python: How can I extract common setup code into a single file? | 1.2 | 0 | 0 | 95 |
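A minimal sketch of the accepted answer's approach (a JSON config file, a Config class, and a cache so the disk is read only once); the file name and keys are illustrative, and a module-level variable stands in for the answer's separate Globals module.

```python
import json

_cached_config = None  # read the file only once, then reuse the parsed dict

class Config:
    def __init__(self, path="config.json"):
        global _cached_config
        if _cached_config is None:
            with open(path) as fh:
                _cached_config = json.load(fh)
        self._data = _cached_config

    def get(self, key, default=None):
        return self._data.get(key, default)

    def as_dict(self):
        return dict(self._data)

# In each script: from config import Config; settings = Config().as_dict()
```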
44,958,993 | 2017-07-06T21:16:00.000 | 6 | 1 | 0 | 1 | python,docker,pytest,pytest-django | 44,959,229 | 1 | true | 0 | 0 | There is no way to do that. You can use a different pytest configuration using pytest -c but tox.ini and setup.cfg must reside in the top-level directory of your package, next to setup.py. | 1 | 5 | 0 | How can I set an environment variable with the location of the pytest.ini, tox.ini or setup.cfg for running pytest by default?
I created a docker container with a volume pointing to my project directory, so every change I make is also visible inside the docker container. The problem is that I have a pytest.ini file on my project root which won't apply to the docker container.
So I want to set an environment variable inside the docker container to specify where to look for the pytest configuration. Does anyone have any idea how could I do that? | pytest: environment variable to specify pytest.ini location | 1.2 | 0 | 0 | 7,501 |
44,959,636 | 2017-07-06T22:07:00.000 | 2 | 0 | 0 | 0 | python-3.x,keras,lstm | 44,966,279 | 1 | false | 0 | 0 | In the first layer of the model you should define input_shape=(n_timesteps,n_features). So in your case input_shape = (25,10).
Your actual input to the model will have shape (1000,25,10).
You should also use keras.np_utils.to_categorical to convert your labels to one-hot-encoded vectors, so that they will become vectors with length X, where X is your class number. Every element will be equal to zero, except the one corresponding to the corresponding class.
Hope this helps! | 1 | 0 | 1 | My input data has 10 features and it is taken at 25 different timestamps. My output data consists of class labels. So, basically, I am having a many to one classification problem.
I want to implement an LSTM for this problem. Total training data consists of 10000 data points. How should the input and output format (shape) for this LSTM network be? | Many to one LSTM input shape | 0.379949 | 0 | 0 | 248 |
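A minimal Keras sketch of the shapes described in the answer above (1000 samples, 25 timesteps, 10 features); the class count, layer width and import paths are assumptions and may differ between Keras versions.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.utils import to_categorical  # older versions: keras.utils.np_utils

num_classes = 3                       # assumption: three class labels
x = np.random.rand(1000, 25, 10)      # (samples, timesteps, features)
y = to_categorical(np.random.randint(num_classes, size=1000), num_classes)

model = Sequential()
model.add(LSTM(32, input_shape=(25, 10)))           # many-to-one: only the last output is used
model.add(Dense(num_classes, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=64)
```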
44,962,107 | 2017-07-07T03:31:00.000 | 0 | 0 | 0 | 1 | python-2.7,spark-streaming,amazon-kcl | 49,734,017 | 1 | false | 0 | 0 | I encountered the same problem. The kinesis-asl jar had several files missing.
To overcome this problem, I had included the following jars in my spark-submit.
amazon-kinesis-client-1.9.0.jar
aws-java-sdk-1.11.310.jar
jackson-dataformat-cbor-2.6.7.jar
Note: I am using Spark 2.3.0 so the jar versions listed might not be the same as those you should be using for your spark version.
Hope this helps. | 1 | 0 | 0 | I am looking for using KCL on SparkStreaming using pySpark.
Any pointers would be helpful.
I tried a few given by the Spark Kinesis Integration link.
But I get an error for a Java class reference.
It seems Python is using a Java class.
I tried linking
spark-streaming-kinesis-asl-assembly_2.10-2.0.0-preview.jar
while trying to apply the KCL app on Spark,
but I still get the error.
Please let me know if anyone has done it already.
If I search online I get more about Twitter and Kafka.
Not able to get much help with regard to Kinesis.
Spark version used: 1.6.3 | using Kinesis Client library with Spark Steaming PySpark | 0 | 0 | 0 | 374 |
44,962,433 | 2017-07-07T04:11:00.000 | 0 | 0 | 0 | 0 | python,deep-learning | 44,965,968 | 2 | false | 0 | 0 | In my honest opinion people overstate the impact of image preprocessing. The only truly important thing is that the test data is similar in value scale to the training data. There are some theoretical benefits of having a pre normalized dataset, with the usage of batch normalization, but in practice it never made much of a difference (2-4% Accuracy).
If you have a running model and you are trying to get those last few % of accuracy without having to increase the number of parameters, then I would suggest tweaking this to your use-case.
In my opinion there is no single method that works for every use-case, but a good starting point is to use the same preprocessing as ImageNet, because the features will be similar to the ones produced for the imagenet classification. | 2 | 0 | 1 | I want to finetune ResNet50 ImageNet pretrained model, and I have a few question about image preprocessing of finetune.
In ImageNet preprocessing, we need to subtract the mean of pixel ([103.939, 116.779, 123.68]). When I use my dataset to finetune, should I subtract mean of ImageNet or subtract the mean of my data.
I do see many people rescale the data to [0,1], but the pretrained model(ImageNet) use image scale in [0,255]. Why do people do that? Is it reasonable? | Image preprocessing of finetune in ResNet | 0 | 0 | 0 | 2,606 |
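Keras ships the ImageNet preprocessing used for ResNet50, so a common choice when fine-tuning is to reuse it rather than computing your own mean; a small sketch (the image path is a placeholder, and import paths may vary slightly between Keras versions).

```python
import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input
from keras.preprocessing import image

model = ResNet50(weights="imagenet", include_top=False)

img = image.load_img("my_image.jpg", target_size=(224, 224))
x = image.img_to_array(img)        # pixel values still in [0, 255]
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)            # channel reordering + ImageNet mean subtraction
features = model.predict(x)
```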
44,962,433 | 2017-07-07T04:11:00.000 | 0 | 0 | 0 | 0 | python,deep-learning | 44,983,413 | 2 | false | 0 | 0 | I would try both. Subtracting your mean makes sense because generally one tries to get mean 0. Subtracting image net mean makes sense because you want the network as a feature extractor. If you change something that early in the feature extractor it could be that it doesn't work at all.
Just like the mean 0 thing, it is generally seen as a desirable property to have features within a fixed range or with a fixed standard deviation. Again, I can't really tell you what is better but you can easily try it. My guess is that there aren't too big differences.
Most important: Make sure you apply the same preprocessing steps to your training / testing / evaluation data. | 2 | 0 | 1 | I want to finetune ResNet50 ImageNet pretrained model, and I have a few question about image preprocessing of finetune.
In ImageNet preprocessing, we need to subtract the mean of pixel ([103.939, 116.779, 123.68]). When I use my dataset to finetune, should I subtract mean of ImageNet or subtract the mean of my data.
I do see many people rescale the data to [0,1], but the pretrained model(ImageNet) use image scale in [0,255]. Why do people do that? Is it reasonable? | Image preprocessing of finetune in ResNet | 0 | 0 | 0 | 2,606 |
44,967,663 | 2017-07-07T09:45:00.000 | 0 | 0 | 0 | 0 | python,selenium,phantomjs,bdd | 44,968,416 | 3 | false | 1 | 0 | You have 3 options:
implicitly_wait:
Implicit wait means it will wait for a maximum of x seconds when searching for an element, so if your element appears after 4 seconds you will get it as soon as it appears, provided you set the implicit wait to more than 4 seconds.
Use driver.implicitly_wait(x) after creating the instance.
explicitly_wait:
Explicit wait means it will wait for x seconds and search for the element only after that time has passed, so if you set 10 seconds and the element appears after 4 seconds, the element will be located with a 6 second delay.
Use driver.explicitly_wait(x) after creating the instance
time.sleep
You can put your program to sleep and wait for the page to fully load after submitting some actions.
Use time.sleep(x) after submitting a form, clicking a button or loading a page. | 3 | 0 | 0 | I have a web app built by Django, front-end is built by React. I tried to test bdd with behave and selenium. I run a test with Chrome web driver and phantomjs one but the tests only passed using chrome. I captured a screenshot when it runs on phantom and saw that the page is not fully rendered. Please give some suggestions about this issue. Do I need do further configuration to test with phantomjs. Thank you. | BDD - Test pass on chrome but not on Phantomjs | 0 | 0 | 1 | 105 |
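For reference, a sketch of the three options as they look in the Python Selenium bindings; note that explicit waits there go through WebDriverWait rather than a driver method, and the URL and locator are placeholders.

```python
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.PhantomJS()
driver.implicitly_wait(10)                # option 1: implicit wait for every lookup

driver.get("http://example.com/login")    # placeholder URL

# option 2: explicit wait for one specific element
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "content"))
)

time.sleep(2)                             # option 3: fixed sleep after an action
```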
44,967,663 | 2017-07-07T09:45:00.000 | 0 | 0 | 0 | 0 | python,selenium,phantomjs,bdd | 44,968,229 | 3 | false | 1 | 0 | Try adding an explicit wait on a locator (waitForElementToBePresent) that exists on the part of page that is not rendered on Phantomjs. | 3 | 0 | 0 | I have a web app built by Django, front-end is built by React. I tried to test bdd with behave and selenium. I run a test with Chrome web driver and phantomjs one but the tests only passed using chrome. I captured a screenshot when it runs on phantom and saw that the page is not fully rendered. Please give some suggestions about this issue. Do I need do further configuration to test with phantomjs. Thank you. | BDD - Test pass on chrome but not on Phantomjs | 0 | 0 | 1 | 105 |
44,967,663 | 2017-07-07T09:45:00.000 | 1 | 0 | 0 | 0 | python,selenium,phantomjs,bdd | 44,972,125 | 3 | false | 1 | 0 | This is a common problem with PhantomJS (the page not being fully rendered), and often isn't something that can be remedied with explicit/implicit waits. Add a long (5 second) sleep to your code and take another screenshot.
If the page is fully rendered, follow @Alex Lucaci's instructions for adding (ideally) explicit waits.
If the page still isn't fully rendered, PhantomJS just wont work for you in this case. Personally, I would advise against using PhantomJS at all, as it is problematic in a myriad of ways, but also because why would you test on a browser that literally no one uses as their actual browser? | 3 | 0 | 0 | I have a web app built by Django, front-end is built by React. I tried to test bdd with behave and selenium. I run a test with Chrome web driver and phantomjs one but the tests only passed using chrome. I captured a screenshot when it runs on phantom and saw that the page is not fully rendered. Please give some suggestions about this issue. Do I need do further configuration to test with phantomjs. Thank you. | BDD - Test pass on chrome but not on Phantomjs | 0.066568 | 0 | 1 | 105 |
44,967,751 | 2017-07-07T09:49:00.000 | 0 | 0 | 0 | 0 | python,c++,tensorflow | 44,968,146 | 1 | false | 0 | 0 | Why would want to train the model in C++? Tensorflows core libraries are in c++. I think you mean use the trained model in C++? Once you've trained a model and exported it (assuming you have the .pb file) you use the model for predicting .Theres no way to retrain an exported model. | 1 | 0 | 1 | I have a model build in Python using keras and tensorflow. I want to export the model and use it for training in C++. I am using TF1.2 and used the tf.train.export_metagraph to export my graph. I am not exactly sure on how to proceed in using the model in C++ for training. Thanks :) | Tensorflow:Using a trained model in C++ | 0 | 0 | 0 | 192 |
44,974,723 | 2017-07-07T15:35:00.000 | 4 | 0 | 1 | 0 | python,python-3.x,python-import | 44,974,790 | 1 | true | 0 | 0 | import statements are executed as they're encountered in normal execution, so if the conditional prevents that line from being executed, the import doesn't occur, and you'll have avoided unnecessary work.
That said, if the module is going to be imported in some other way (say, unconditionally imported module B depends on A, and you're conditionally importing A), the savings are trivial; after the first import of a module, subsequent imports just get a new reference to the same cached module; the import machinery has to do some complicated stuff to handle import hooks and the like first, but in the common case, it's still fairly cheap (sub-microsecond when importing an already cached module).
The only way this will save you anything is if the module in question would not be imported in any way otherwise, in which case you avoid the work of loading it and the memory used by the loaded module. | 1 | 2 | 0 | Question
If I have import statements nested in an if/else block, am I increasing efficiency? I know some languages do "one passes" over code for import and syntax issues. I'm just not sure how in depth Python goes into this.
My Hypothesis
Because Python is interpreted and not compiled, by nesting the import statements within the else block, those libraries will not be imported until that line is reached, thus saving system resources unless otherwise needed.
Scenario
I have written a script that will be used by both the more computer literate and those who are less so. My department is very comfortable with running scripts from the command line with arguments so I have set it up to take arguments for what it needs and, if it does not find the arguments it was expecting, it will launch a GUI with headings, buttons, and more verbose instructions. However, this means that I am importing libraries that are only being used in the event that the arguments were not provided.
Additional Information
The GUI is very, very basic (A half dozen text fields and possibly fewer buttons) so I am not concerned with just creating and spawning a custom GUI class in which the necessary libraries would be imported. If this gets more complicated, I'll consider it in the future or even push to change to web interface.
My script fully functions as I would expect it to. The question is simply about resource consumption. | When exactly does Python import? | 1.2 | 0 | 0 | 79 |
44,974,936 | 2017-07-07T15:46:00.000 | 2 | 0 | 1 | 0 | python,r | 44,974,986 | 2 | false | 0 | 0 | You cannot change variables names the way you are looking to do, but you might try using a python dictionary to achieve what you're trying to accomplish. | 1 | 4 | 0 | I have a set of table names
1. EOM
2.STMT
3.LOOKUP etc
I want to associate these table names with some variables names such as
1. start_time,
2. end_time,
3. total_time etc.
The way I want to write these variable names is something like
1. start_time_EOM, end_time_EOM, total_time_EOM
2. start_time_STMT, end_time_STMT,total_time_STMT
3. start_time_LOOKUP,end_time_LOOKUP,total_time_LOOKUP
Can this be done in python, and how?
(Note: I am new to Python, and still trying to learn). | Concatenate variable name in python | 0.197375 | 0 | 0 | 8,906 |
44,978,054 | 2017-07-07T19:06:00.000 | 3 | 0 | 1 | 0 | python,module,pycharm | 44,978,123 | 2 | false | 0 | 0 | Have you gone into your settings and set your project interpreter in settings > Project: {project-name}> Project Interpreter to your version of python that has the module you installed? | 2 | 0 | 0 | I have downloaded the py-earth module from GitHub and followed the provided installation procedures as well as done "pip install". Things seem to be installing in the terminal, but I cant find the module when I am in pycharm. I have restarted pycharm, reinstalled, and no matter what I can't find a way to install the module. Please help | Installing Python Module in pycharm From GitHub | 0.291313 | 0 | 0 | 1,357 |
44,978,054 | 2017-07-07T19:06:00.000 | 0 | 0 | 1 | 0 | python,module,pycharm | 50,921,812 | 2 | false | 0 | 0 | From pycharm,
goto settings -> project Interpreter
Click on + button on top right corner and you will get pop-up window of Available packages. Then search for earth python package.
Then click on Install package to install the earth package. | 2 | 0 | 0 | I have downloaded the py-earth module from GitHub and followed the provided installation procedures as well as done "pip install". Things seem to be installing in the terminal, but I cant find the module when I am in pycharm. I have restarted pycharm, reinstalled, and no matter what I can't find a way to install the module. Please help | Installing Python Module in pycharm From GitHub | 0 | 0 | 0 | 1,357 |
44,979,339 | 2017-07-07T20:45:00.000 | 0 | 0 | 0 | 0 | python,linux,user-interface,tkinter,debian | 44,979,879 | 2 | false | 0 | 1 | In order to use tkinter, you must have a graphics system running. For Windows and OSX that simply means you need to be logged in (ie: can't run as a service). For linux and other unix-like systems that means that you must have X running.
Neither tkinter nor any of the other common GUI toolkits will write directly to the screen. | 1 | 0 | 0 | I'm currently at a crossroads. I'm somewhat versed in Python (2.7) and would really like to start getting into GUIs to give my (albeit mini) projects some more depth and versatility.
For the most part, my scripts don't use anything graphical so this is the first time I'm dipping my toes in this water.
That said, I've tried using pygame and tkinter but seem to fail at every turn to get something up and running (although I had some slight success with pygame)
Am I correct to understand that for both I need X started in order to generate any type of interface, and with that, so I need X to get any type of input (touchscreen presses)?
Thanks in advance! | Python GUI without 'video system' | 0 | 0 | 0 | 203 |
44,980,837 | 2017-07-07T23:03:00.000 | 5 | 0 | 1 | 0 | python,pycharm,breakpoints | 44,991,224 | 1 | false | 0 | 0 | You can just create a variable in Python specifically for the breakpoint counting purpose, which you increment every time you you go past the break point line. Then just use that variable in your break point condition (i.e. breakpoint_count == 10000).
Update
If you can't add new code into the real python code you can use the breakpoint condition:
eval("exec('try:\\n x += 1\\nexcept NameError:\\n x = 1') or x == 10000")
What this does is execute a try statement which increments a variable or creates it if it doesn't exist. Then evaluates that along with a statement checking if the variable has been incremented enough times yet with that being your ending condition. Note, the exec is required to run the try, but the eval is needed to "return" the condition to PyCharm. This is absurdly hacky, but it works for your case! | 1 | 2 | 0 | I'm trying to ignore a specific breakpoint in pycharm for the first N times it hits. Since I'm looking to set it to something like 10k, manually doing this is not an option. I found the expanded options for breakpoints, including the condition field, but I'm not sure how I can craft a condition which takes into account how many times the breakpoint has been hit. Thanks. | Ignore pycharm breakpoint for first N hits | 0.761594 | 0 | 0 | 568 |
44,981,576 | 2017-07-08T01:11:00.000 | 0 | 0 | 1 | 0 | python,regex | 44,981,592 | 4 | false | 0 | 0 | You could use the re.findall(regex, string, flags) function in python. That returns non-overlapping matches of the patter in string in a list of strings. You could then grab the second member of the returned list. | 1 | 0 | 0 | The text file I'm searching through looks like a lot of text blocks like this:
MKC,2017-06-23 07:54,-94.5930,39.1230,79.00,73.90,84.41,220.00,4.00,0.00,29.68,1003.90,10.00,M,FEW,M,M,M,9500.00,M,M,M,M,KMKC 230754Z 22004KT 10SM FEW095 26/23 A2968 RMK AO2 SLP039 T02610233
(That's all one line)
I'm looking to grab the 2nd occurrence in the line that matches r',\d\.\d{2},', which in this case would be 0.00
I don't know how to specify that I want the nth occurrence of the pattern.
Extra: I've never seen the first value that matches the same pattern go over 9.99, meaning 10.00 and then it would no longer match the same pattern, but it would be nice if there was a way to take this into account. | How to grab the nth occurence of a float on a line using regex? | 0 | 0 | 0 | 87 |
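A sketch of the re.findall approach from the answer above; a lookahead is used so that back-to-back values such as ,4.00,0.00, both match (a plain trailing comma in the pattern would consume the delimiter and skip the second one). The sample line is shortened from the question.

```python
import re

line = ("MKC,2017-06-23 07:54,-94.5930,39.1230,79.00,73.90,84.41,"
        "220.00,4.00,0.00,29.68,1003.90,10.00,M")

# The (?=,) lookahead leaves the trailing comma unconsumed.
matches = re.findall(r',(\d\.\d{2})(?=,)', line)    # ['4.00', '0.00']
second = matches[1] if len(matches) > 1 else None    # '0.00'

# Allowing one or two digits before the decimal also catches values like 10.00:
# re.findall(r',(\d{1,2}\.\d{2})(?=,)', line)
```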
44,981,793 | 2017-07-08T01:59:00.000 | 0 | 0 | 1 | 1 | python,installation,pip | 47,015,021 | 1 | true | 0 | 0 | I've had the same error, I used the following code to get out of this error:
(I was working on centOS 7)
1) sudo yum install MySQL-devel
2) sudo yum install openssl-devel
3) sudo yum install python-devel
I hope this works for you. | 1 | 0 | 0 | I'm running Ubuntu 17.04 (fresh install) and already installed pip. However, when I try to install anything I get this:
Command "python setup.py egg_info" failed with error code 1 in
/tmp/pip-build-kBfUEp/kivy/
Depending on what I'm installing, I get the same thing but slightly different. For example:
Command "python setup.py egg_info" failed with error code 1 in
/tmp/pip-build-zqj5Ka/pypiwin32/
I've tried everything and I have absolutely no idea how to solve this.
Thanks. | python setup.py egg_info failed with error code 1 | 1.2 | 0 | 0 | 2,087 |
44,982,302 | 2017-07-08T03:50:00.000 | 1 | 0 | 0 | 0 | python-2.7,amazon-web-services,amazon-s3,aws-lambda,aws-sdk | 45,005,925 | 3 | true | 1 | 0 | Three steps I followed
1) Connected to AWS Lambda with boto3 and used the add_permission API
2) Also applied get_policy
3) Connected to S3 with the boto3 resource to configure the BucketNotification API and put LambdaFunctionConfigurations | 1 | 2 | 0 | How do I add a trigger from an S3 bucket to a Lambda function with boto3? Then I want to attach that Lambda function to dynamically created S3 buckets programmatically (boto3) | how to add the trigger s3 bucket to lambda function dynamically(python boto3 API) | 1.2 | 0 | 1 | 3,218 |
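A hedged boto3 sketch of those three steps; the bucket name, function ARN, statement id and event type are placeholders.

```python
import boto3

bucket = "my-new-bucket"                                                   # placeholder
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-func"   # placeholder

lambda_client = boto3.client("lambda")

# 1) allow S3 to invoke the function
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="s3-invoke-" + bucket,
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::" + bucket,
)

# 2) optional sanity check that the policy is attached
print(lambda_client.get_policy(FunctionName=function_arn))

# 3) point the bucket's notification configuration at the function
s3 = boto3.resource("s3")
s3.BucketNotification(bucket).put(
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {"LambdaFunctionArn": function_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    }
)
```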
44,982,378 | 2017-07-08T04:05:00.000 | 0 | 0 | 0 | 0 | python,arrays,numpy | 44,982,417 | 1 | false | 0 | 0 | You may use logarithmic version of your variables (np.log10), so when dealing with something like 1e-200 you will have -200, less memory and more efficiency. | 1 | 0 | 1 | I'm using numpy's polyfit to find a best fit curve for a set of data. However, numpy's polyfit returns an array of float64 and because the calculated coefficients are so large/small (i.e. 1e-200), it's returning an overflow error that's encountered in multiply :
RuntimeWarning: overflow encountered in multiply
scale = NX.sqrt((lhs*lhs).sum(axis=0))
I've tried casting the initial array to be float128, but that does not seem to work. Is there any way around this overflow issue / any way to handle such large coefficients? | Numpy Float128 Polyfit | 0 | 0 | 0 | 281 |
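A small sketch of the log-space idea from the answer above, assuming the fitted values are positive; the toy data is made up.

```python
import numpy as np

x = np.linspace(1, 10, 50)
y = 10.0 ** (-20 * x)            # spans many orders of magnitude, down to ~1e-200

# Fit in log space instead of fighting float64 overflow in polyfit's scaling step.
coeffs = np.polyfit(x, np.log10(y), deg=1)
y_fit = 10.0 ** np.polyval(coeffs, x)
```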
44,983,165 | 2017-07-08T06:16:00.000 | 22 | 0 | 0 | 0 | python,numpy | 44,983,209 | 1 | false | 0 | 0 | I have run into the same issue on HackerRank. A number of their challenges do support NumPy--indeed, a handful require it. Either import numpy or the idiomatic import numpy as np will work just fine on those.
I believe you're simply trying to use numpy where they don't want you to. Because it's not part of the standard library, HackerRank would need to intentionally provide it. Where they do not, you will need to substitute lower-level, non-numpy code as a result. | 1 | 11 | 1 | I want to use numpy module for solving problems on hackerrank. But, when I imported numpy, it gave me the following error.
ImportError: No module named 'numpy'.
I understand that this might be a very trivial question. But, I am a beginner in programming. Any help is highly appreciated. | importing numpy in hackerrank competitions | 1 | 0 | 0 | 19,903 |
44,983,942 | 2017-07-08T07:59:00.000 | 1 | 0 | 0 | 0 | python,django,python-2.7,django-views | 44,984,970 | 2 | false | 1 | 0 | You should inherit django's AbstractBaseUser in your user model, it already has an inbuilt last_login attribute. In fact it is considered a good practice to inherit AbstractBaseUser for creating your user which is provided in django default auth modules. | 1 | 2 | 0 | How can i show the time of last seen(be online)of user in django?
Is there any default function or library to import it to do that?
Or if there is any code on GitHub, please tell me.
**Note:** when a user closes the page or disconnects, the time should update | Last seen a user in django | 0.099668 | 0 | 0 | 2,078 |
44,986,375 | 2017-07-08T12:46:00.000 | 0 | 0 | 0 | 0 | python-3.x,nltk | 44,986,475 | 1 | false | 0 | 0 | My bad, I realized the mistake. I had swapped the position of "pron" with "word" thereby causing this problem. The corrected code is:
p3 = [(pron[0] + '-' + pron[2], word)
for word, pron in entries if pron[0] == 'P' and len(pron) == 3]
" | 1 | 0 | 1 | I am practicing the nltk examples from the "Natural language processing in Python" book. While trying to get the words that start with syllable "p" and of syllable length 3 from cmu dictionary (one of the examples provided in chapter 2), I am not getting any values returned. I am using Python 3. Below is the code:
entries = nltk.corpus.cmudict.entries()
p3 = [(pron[0] + '-' + pron[2], word)
for pron, word in entries if pron[0] == 'P' and len(pron) == 3]
But no value returned:
p3 =
[]
However, I know that the value exist. See below:
[(word, pron[0] + '-' + pron[2]) for word, pron in entries if word == 'perch']
[('perch', 'P-CH')] | NLTK: No value returned when searching the CMU dictionary based on syllable value | 0 | 0 | 0 | 414 |
44,986,563 | 2017-07-08T13:07:00.000 | 0 | 0 | 1 | 1 | python,c++ | 44,987,001 | 1 | false | 0 | 0 | There are two approaches you might take:
Use a pipe (the | character) in your command line to redirect the output that is going to the console to the Python script, and code the Python script to read from stdin.
Redirect the output that is going to the console to a file (with >filename) and then code the Python script to read from that file. | 1 | 0 | 0 | I have a .bat file which outputs data to the console; then I need to read that data with a Python script and run it through "if cycles"
The question is: how do I read that data with a Python script? | Redirect data from console to the python script | 0 | 0 | 0 | 24 |
44,988,422 | 2017-07-08T16:35:00.000 | 2 | 0 | 0 | 0 | python,django,database,cron | 44,988,567 | 2 | false | 1 | 0 | I think you should choose definetely the third alternative, a cron job to update the database regularly seems the best option.
You don't need to use a separate Python function; you can schedule a task with Celery, which can be easily integrated with Django using django-celery | 1 | 0 | 0 | I'm learning Django and to practice I'm currently developing a clone page of YTS, it's a movie torrents repository*.
As of right now, I scrapped all the movies in the website and have them on a single db table called Movie with all the basic information of each movie (I'm planning on adding one more for Genre).
Every few days YTS will post new movies and I want my clone-web to automatically add them to the database. I'm currently stuck on deciding how to do this:
I was planning on comparing the movie id of the last movie in my db against the last movie in the YTS db each time the user enters the website, but that'd mean make a request to YTS every time my page loads, it'd also mean some very slow code should be executed inside my index() views method.
Another strategy would be to query the last time my db was updated (new entries were introduced) and if it's let's say bigger than a day then request new movies to YTS. Problem with this is I don't seem to find any method to query the time of last db updates. Does it even exist such method?
I could also set a cron job to update the information but I'm having problems to make changes from a separated Python function (I import django.db and such but the interpreter refuses to execute django db instructions).
So, all in all, what's the best strategy to update my database from a third party service/website without bothering the user with loading times? How do you set such updates in non-intrusive way to the user? How do you generally do it?
* I know a torrents website borders the illegal and I'm not intended, in any way, to make my project available to the public | Updating my Django website's database from a third party service, strategies? | 0.197375 | 0 | 0 | 114 |
44,990,820 | 2017-07-08T21:15:00.000 | 1 | 0 | 0 | 0 | python,django,postgresql,django-migrations,django-database | 47,176,858 | 1 | false | 1 | 0 | If your dump has create table statements and contains all django tables, you can restore it directly onto an empty database. Django will know the status of the migrations as they are stored in a table in the DB.
So the steps would be:
Drop and recreate DB.
If you now run python manage.py showmigrations all migrations will appear unapplied
Restore DB from dump
If you now run python manage.py showmigrations now, the corresponding migrations will appear applied. If your django project has new migrations that weren't applied when the dump was created they will appear unapplied.
And that's it! Now you can apply the new migrations if there are any and keep working on the Django project. | 1 | 3 | 0 | What is the procedure to restore a Django project using an already restored database from a PostgreSQL pg_dump. All django source code also exist. Will Django migration safe? | How to restore Django project with pg_dump file? | 0.197375 | 0 | 0 | 1,502 |
44,991,009 | 2017-07-08T21:44:00.000 | 1 | 0 | 0 | 0 | javascript,python,google-chrome,selenium | 70,308,976 | 2 | false | 1 | 0 | This thread is a few years old, but in case anyone else finds themselves here trying to solve a similar problem:
I also tried using driver.execute_script('console.clear()') to clear the console log between my login process and the page I wanted to check to no avail.
It turns out that calling driver.get_log('browser') returns the browser log and also clears it.
After navigating through pages for which you want to ignore the console logs, you can clear them with something like
_ = driver.get_log('browser') | 1 | 2 | 0 | I have a large application and I am using Headless Chrome, Selenium and Python to test each module. I want to go through each module and get all the JS console errors produced while inside that specific module.
However, since each module is inside a different test case and each case executes in a separate session, the script first has to login on every test. The login process itself produces a number of errors that show up in the console. When testing each module I don't want the unrelated login errors to appear in the log.
Basically, clear anything that is in the logs right now -> go to the module and do something -> get logs that have been added to the console.
Is this not possible? I tried doing driver.execute_script("console.clear()") but the messages in the console were not removed and the login-related messages were still showing after doing something and printing the logs. | Clear Chrome browser logs in Selenium/Python | 0.099668 | 0 | 1 | 3,566 |
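A short sketch of the workaround described in that answer: reading the browser log also clears it, so discard it after login and read it again after exercising a module; the URLs are placeholders.

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)

driver.get("http://example.com/login")      # placeholder login flow
# ... perform login here ...

_ = driver.get_log("browser")               # discard login-related console entries

driver.get("http://example.com/module-a")   # placeholder module under test
# ... interact with the module ...

module_errors = driver.get_log("browser")   # only entries produced since the discard
for entry in module_errors:
    print(entry["level"], entry["message"])
```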
44,992,763 | 2017-07-09T03:43:00.000 | 2 | 0 | 0 | 0 | python,html,wget | 44,992,870 | 2 | false | 1 | 0 | use this wget facebook.com --domains website.org --no-parent --page-requisites --html-extension --convert-links if you wanna download all the entire website add --recursive after the web pages | 2 | 1 | 0 | I want to clone a single webpage with all the images and no links in the html. I can achieve this with wget -E -H -k -K -p {url} however this pulls down the webpage with a full structure and you have to navigate to the html file to display the contents. This makes it inconsistent in where the html file to display the webpage would be.
I can also do this wget --no-check-certificate -O index.html -c -k {url} however this keeps the links to images and doesn't make the webpage truly local as it has to go out to the web to display the page properly.
Is there any way to clone a single webpage and spit out an index.html with the images linked locally?
PS: I am using wget through a python script that makes changes to webpages so having an index.html is neccesary for me. I am interested in other methods if there are better ones.
EDIT:
So it seems I haven't explain myself well but a bit background info on this project is I am working on a proof of concept for school on an automated phishing script which is supposed to clone a webpage, modify a few action tags and be placed on a local web server so that a user can navigate to it and the page will display correctly. Previously using the -O worked fine for my but since I am now incorporating DNS spoofing into my project the webpage cant have any links pointing externally as they will just end up getting rerouted to my internal webserver and the webpage will look broken. That is why I need to have just the information necessary for the single webpage to be displayed correctly but also have it predictable so that I am able to be sure that when i navigate to the directory I cloned the website from the webpage will be displayed (with proper links to images,css etc..) | Clone a single webpage (with images) and save to index.html | 0.197375 | 0 | 0 | 2,154 |
44,992,763 | 2017-07-09T03:43:00.000 | 0 | 0 | 0 | 0 | python,html,wget | 44,992,881 | 2 | true | 1 | 0 | wget is a bash command. There's no point in invoking it through Python when you can directly achieve this task in Python. Basically what you're trying to make is a web scraper. Use requests and BeautifulSoup modules to achieve this. Research a bit about them and start writing a script. If you hit any errors, feel free to post a new question about it on SO. | 2 | 1 | 0 | I want to clone a single webpage with all the images and no links in the html. I can achieve this with wget -E -H -k -K -p {url} however this pulls down the webpage with a full structure and you have to navigate to the html file to display the contents. This makes it inconsistent in where the html file to display the webpage would be.
I can also do this wget --no-check-certificate -O index.html -c -k {url} however this keeps the links to images and doesn't make the webpage truly local as it has to go out to the web to display the page properly.
Is there any way to clone a single webpage and spit out an index.html with the images linked locally?
PS: I am using wget through a python script that makes changes to webpages so having an index.html is neccesary for me. I am interested in other methods if there are better ones.
EDIT:
So it seems I haven't explain myself well but a bit background info on this project is I am working on a proof of concept for school on an automated phishing script which is supposed to clone a webpage, modify a few action tags and be placed on a local web server so that a user can navigate to it and the page will display correctly. Previously using the -O worked fine for my but since I am now incorporating DNS spoofing into my project the webpage cant have any links pointing externally as they will just end up getting rerouted to my internal webserver and the webpage will look broken. That is why I need to have just the information necessary for the single webpage to be displayed correctly but also have it predictable so that I am able to be sure that when i navigate to the directory I cloned the website from the webpage will be displayed (with proper links to images,css etc..) | Clone a single webpage (with images) and save to index.html | 1.2 | 0 | 0 | 2,154 |
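A hedged requests/BeautifulSoup sketch of the scraper idea in the last answer: fetch the page, download each image next to a local index.html, and rewrite the img tags to point at the local copies. It ignores CSS/JS and assumes the requests and beautifulsoup4 packages are installed; the URL is a placeholder.

```python
import os
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def clone_page(url, out_dir="clone"):
    os.makedirs(out_dir, exist_ok=True)
    soup = BeautifulSoup(requests.get(url).text, "html.parser")

    for i, img in enumerate(soup.find_all("img", src=True)):
        img_url = urljoin(url, img["src"])
        ext = os.path.splitext(urlparse(img_url).path)[1] or ".img"
        name = "img_%d%s" % (i, ext)
        with open(os.path.join(out_dir, name), "wb") as fh:
            fh.write(requests.get(img_url).content)
        img["src"] = name                      # point the tag at the local copy

    with open(os.path.join(out_dir, "index.html"), "w", encoding="utf-8") as fh:
        fh.write(str(soup))

# clone_page("http://example.com")  # placeholder URL
```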
44,992,913 | 2017-07-09T04:14:00.000 | 2 | 0 | 1 | 0 | python,tkinter | 44,992,975 | 1 | true | 0 | 1 | This is going to be a really generic answer and most of the answers to this will be opinionated anyways. Speaking of which,the answer will likely be downvoted and closed because of this.
Anyways... Let's say you have a big GUI with a bunch of complicated logic sure you could write one huge file with hundreds, if not thousands of lines, and proxy a bunch of stuff through different functions and make it work. But, the logic is messy.
What if you could compartmentalize different sections of the GUI and all the logic surrounding them, then take those components and aggregate them into the whole that makes up the GUI?
This is exactly what you can use classes for in Tkinter. More generally, this is essentially what you use classes for - abstracting things into (reusable - instances) objects which provide a useful utility.
Example:
An app I built ages ago with Tkinter when I first learned it was a file moving program. The file moving program let you select the source / destination directory, had logging capabilities, search functions, monitoring processes for when downloads complete, and regex renaming options, unzipping archives, etcetera. Basically, everything I could think of for moving files.
So, what I did was I split the app up like this (at a high level)
1) Have a main which is the aggregate of the components forming the main GUI
Aggregates were essentially a sidebar, buttons / labels for selection various options split into their own sections as needed, and a scrolled text area for operation logging + search.
So, the main components were split like this:
2) A sidebar which had the following components
Section which contained the options for monitoring processes
Section which contained options for custom regular expressions or premade ones for renaming files
Section for various flag such as unpacking
3) A logging / text area section with search functionality build in + the options to dump (save) log files or view them.
That's a high level description of the "big" components which were comprised from the smaller components which were their own classes. So, by using classes I was able to wrap the complicated logic up into small pieces that were self contained.
Granted, you can do the same thing with functions, but you have "pieces" of a GUI which you can consider objects (classes) which fit together. So, it just makes for cleaner code / logic. | 1 | 2 | 0 | Studying Tkinter and I've only found tutorials on Tkinter without OOP, but looking at the Python.org documentation it looks like it's all in OOP. What's the benefit of using classes? It seems like more work and the syntax looks night and day from what I've learned so far. | Tkinter with or without OOP | 1.2 | 0 | 0 | 916 |
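A toy Python 3 sketch of the structure that answer describes: each GUI section is its own Frame subclass and the top-level App just aggregates them; the widget names are illustrative.

```python
import tkinter as tk
from tkinter import scrolledtext

class LogArea(tk.Frame):
    """Scrolled text area that other components write into."""
    def __init__(self, master):
        super().__init__(master)
        self.text = scrolledtext.ScrolledText(self, width=40, height=10)
        self.text.pack(fill="both", expand=True)

    def write(self, message):
        self.text.insert("end", message)

class Sidebar(tk.Frame):
    """Buttons/options live here; logic stays inside the component."""
    def __init__(self, master, log):
        super().__init__(master, borderwidth=2, relief="groove")
        self.log = log
        tk.Button(self, text="Do something", command=self.do_something).pack(padx=5, pady=5)

    def do_something(self):
        self.log.write("button pressed\n")

class App(tk.Tk):
    """Aggregates the components into the full GUI."""
    def __init__(self):
        super().__init__()
        log = LogArea(self)
        Sidebar(self, log).pack(side="left", fill="y")
        log.pack(side="right", fill="both", expand=True)

if __name__ == "__main__":
    App().mainloop()
```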
44,994,358 | 2017-07-09T08:11:00.000 | 0 | 0 | 0 | 0 | python,django,session,cookies,session-cookies | 44,994,487 | 2 | false | 1 | 0 | Well you can't know if users disconnected their internet or WiFi.
But you can check if user is still online and browsing the website.
to achieve that you can use javascript to send a request every 10 second (less or more) and check if user is still on the site. and if user is not online anymore you can make some changes or etc but in general you can't access to the users device and check the status for wifi or ... | 1 | 1 | 0 | I have a website and i want to destroy some session or cookie in django when user discoonet suddenly or get offline (wifi discoonect or disconnect mobil data).
But I don't know how to do this!
Is there any default library to do this? | Destroy session or cookie in django when user get offline | 0 | 0 | 0 | 748 |
44,996,544 | 2017-07-09T12:47:00.000 | 0 | 0 | 1 | 0 | python,class,ide | 44,996,622 | 1 | false | 0 | 0 | You can split PyCharm's window and open the same file in several tabs, viewing the part of the file you want in each.
Open the file once, right click on the tab's title, then click Split Vertically / Horizontally. | 1 | 2 | 0 | So I'm fairly new in the coding world and I'm currently using PyCharm in developing a personal project.
This project has started to be somewhat "large" with many classes and I thought about splitting them up into module files like having MyClass in MyClass.py and use "from MyClass import *" in my main .py-file.
But I rather quickly realized that circular importing was a pain in the ass and that it was not really helping me in making the code more structured. I also found I could hide/collapse classes in my main file with the [-] which basically solved the bulk of my initial problem.
However, it got me thinking. I still liked the idea of visually having the classes in separate windows so I wondered if there's any IDE that actually do or allow this.
TL;DR: Is there an IDE which visually allow me to visually separate parts of code, like classes, in separate windows/pages, but still have them in the same py-file "physically"? Naturally with fully functional refactoring and code completion maintained. | IDE show classes independently | 0 | 0 | 0 | 16 |
44,996,563 | 2017-07-09T12:49:00.000 | 0 | 0 | 0 | 1 | python,ssl,cloud9-ide | 44,998,382 | 1 | false | 0 | 0 | Cloud9 runs your app behind an https proxy, so you need to just use http, since cloud9 proxy won't accept your self signed certificate. | 1 | 0 | 0 | I am migrating my personal hobby python web application from 127.0.0.1 to cloud 9 lately, but found myself completely new to the idea of setting up ssl certificate. I did some online research on openssl and its python wrapper but still couldn't find any definitive guide on how to set it up in practice, specifically for the cloud 9 IDE platform.
Could someone please give a walkthrough, or point out some reference links here? Thanks.
By the way, I'm using cherrypy for the python server.
EDIT: specifically, I have the following questions:
Is it required to run openssl on the server (in my case, the Cloud9 bash), or can I run openssl on my local laptop and then upload the generated key and cert?
does it make any sense to use passphrase to protect the key? I don't see any point here, correct me if I'm wrong please
how to install it to cloud9? | Could someone please provide a walkthrough on how to setup a self signing ssl certificate on cloud 9? | 0 | 0 | 0 | 184 |
44,996,829 | 2017-07-09T13:16:00.000 | 4 | 0 | 1 | 0 | python,r,text,latex | 44,996,919 | 5 | true | 0 | 0 | sed -i -e 's/-/–/g' /myfolder/* should work.
The expression does a search globally and replaces all - inside the files the shell expands from /myfolder/* with –. Sed does the change in-place, that is, overwriting the original file (you need to explicitly specify a backup-file on MacOS, I can't remember the parameter though).
Absolutely no care is taken about whether or not the - is a verbatim hyphen or part of the LaTeX syntax. Be aware of that. | 1 | 5 | 0 | I have a folder /myfolder containing many latex tables.
I need to replace a character in each of them, namely replacing any minus sign -, by an en dash –.
Just to be sure: we are replacing hyphens INSIDE all of the tex files in that folder. I don't care about the tex file names.
Doing that manually would be a nightmare (too many files, too many minuses). Is there a way to loop over the files automatically and do the replacement? A solution in Python/R would be great.
Thanks! | how to replace a character INSIDE the text content of many files automatically? | 1.2 | 0 | 0 | 473 |
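Since the question also asks for a Python option, a minimal sketch of the same blind replacement; the folder path is a placeholder and, as the sed answer warns, every hyphen is touched, LaTeX syntax included.

```python
from pathlib import Path

folder = Path("/myfolder")            # placeholder folder of .tex tables

for tex_file in folder.glob("*.tex"):
    text = tex_file.read_text(encoding="utf-8")
    # Blindly swaps every hyphen for an en dash, LaTeX syntax included,
    # so review the diff before trusting the result.
    tex_file.write_text(text.replace("-", "\u2013"), encoding="utf-8")
```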
44,997,542 | 2017-07-09T14:36:00.000 | 2 | 0 | 1 | 0 | python,anaconda | 44,997,598 | 4 | false | 0 | 0 | I don't know about conda but it probably installed Seaborn in a directory other than Python's directory. Try installing it with pip instead.
pip install Seaborn | 1 | 0 | 0 | I've installed the package using conda install seaborn in my terminal. This stated that the package was already installed.
When I try to import Seaborn into my editor (I'm using Canopy) and run a simple program, I am met with the following error:
ImportError: No module named seaborn.
What is the reason for this and how can I solve it? | Python does not import seaborn-package, although installed | 0.099668 | 0 | 0 | 4,383 |
44,997,969 | 2017-07-09T15:20:00.000 | 0 | 0 | 1 | 1 | python,cmake,anaconda | 44,999,337 | 1 | false | 0 | 0 | Since the "REQUIRED" option to find_package() is not working, you can be explicit about which Python library using CMake options with cache variables:
cmake -DPYTHON_INCLUDE_DIR=C:\Python36\include -DPYTHON_LIBRARY=C:\Python36\libs\python36.lib .. | 1 | 1 | 0 | FindPythonLibs.cmake is somehow finding Python versions that don't exist/were uninstalled.
When I run find_package(PythonLibs 3 REQUIRED) CMake properly finds my Python3.6 installation and adds its include path, but then I get the error
No rule to make target 'C:/Users/ultim/Anaconda2/libs/python27.lib', needed by 'minotaur-cpp.exe'. Stop.
This directory doesn't exist, and I recently uninstalled Anaconda and the python that came with it. I've looked through my environment variables and registry, but find no reference to this location.
Would anyone know where there might still be a reference to this location? | CMake's find packages finds nonexisting python library | 0 | 0 | 0 | 114 |
44,999,814 | 2017-07-09T18:45:00.000 | 2 | 0 | 0 | 0 | python,pyspark,parquet,dask | 45,001,380 | 1 | false | 0 | 0 | If you are doing a groupby-aggregation with a known aggregation like count or mean then your partitioning won't make that much of a difference. This should be relatively fast regardless.
If you are doing a groupby-apply with a non-trivial apply function (like running an sklearn model on each group) then you will have a much faster experience if you store your data so that the grouping column is sorted in parquet.
Edit:
That being said, even though groupby-count doesn't especially encourage smart partitioning it's still nice to switch to Parquet. You'll find that you can read the relevant columns much more quickly.
As a quick disclaimer, dask.dataframe doesn't currently use the count statistics within parquet to accelerate queries, except by filtering within the read_parquet function and to help identify sorted columns. | 1 | 2 | 1 | We have a 1.5BM records spread out in several csv files. We need to groupby on several columns in order to generate a count aggregate.
Our current strategy is to:
Load them into a dataframe (using Dask or pyspark)
Aggregate columns in order to generate 2 columns as key:value (we are not sure if this is worthwhile)
Save file as Parquet
Read the Parquet file (Dask or pyspark) and run a groupby on the index of the dataframe.
What is the best practice for an efficient groupby on a Parquet file?
How beneficial is it to perform the groupby on the index rather than on a column (or a group of columns)?
We understand that there is a partition that can assist - but in our case we need to groupby on the entire dataset - so we don't think it is relevant. | Best practice for groupby on Parquet file | 0.379949 | 0 | 0 | 1,447 |
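A hedged sketch of the groupby-count the answer says should be fast regardless of partitioning, using Dask; the Parquet file name and column names are assumptions.

import dask.dataframe as dd

df = dd.read_parquet("data.parquet", columns=["key_a", "key_b"])  # read only the needed columns
counts = df.groupby(["key_a", "key_b"]).size().compute()
print(counts.head())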
45,002,397 | 2017-07-10T00:48:00.000 | 0 | 0 | 0 | 1 | python,anaconda,navigator | 67,814,968 | 2 | false | 0 | 0 | I just had this problem! The cause was in a corrupted registry affecting the launch of CMD.EXE (Used by Navigator to launch applications).
Solution: Empty the HKEY_CURRENT_USER\Software\Microsoft\Command Processor\AutoRun key
(It contained several occurrences of "if exist"!?)
If you used "conda init", then you will need to re-run the command. | 2 | 3 | 0 | For whatever reason, when I click on launch for any app in the Navigator, will say launching app with the status bar moving but it will eventually stop and the app never opens. I have tried starting with administrative privileges and I have even tried uninstalling and reinstalling and then rebooting but still the apps never launch. | Anaconda Navigator Applications not launching apps | 0 | 0 | 0 | 1,414 |
45,002,397 | 2017-07-09T14:36:00.000 | 0 | 0 | 0 | 1 | python,anaconda,navigator | 59,505,434 | 2 | false | 0 | 0 | If you are using a Debian-based distribution, you can simply run the following command in your shell after installing the Anaconda distribution.
export PATH=YOUR_ANACONDA_INSTALLATION_FOLDER_LOCATION/bin:$PATH
then run,
anaconda-navigator
For example, my anaconda3 was installed at /root/anaconda3, so I used the following commands:
┌──[root@kali]─[/]
└──╼ # export PATH=/root/anaconda3/bin:$PATH
┌─[root@kali]─[/]
└──╼ # anaconda-navigator
and that worked fine for me:) | 2 | 3 | 0 | For whatever reason, when I click on launch for any app in the Navigator, will say launching app with the status bar moving but it will eventually stop and the app never opens. I have tried starting with administrative privileges and I have even tried uninstalling and reinstalling and then rebooting but still the apps never launch. | Anaconda Navigator Applications not launching apps | 0 | 0 | 0 | 1,414 |
45,003,301 | 2017-07-10T03:15:00.000 | 1 | 0 | 0 | 0 | python,pandas,matplotlib,pyspark-sql | 66,233,233 | 2 | false | 0 | 0 | For small data, you can use .select() and .collect() on the pyspark DataFrame. collect will give a python list of pyspark.sql.types.Row, which can be indexed. From there you can plot using matplotlib without Pandas, however using Pandas dataframes with df.toPandas() is probably easier. | 1 | 15 | 1 | I am new to pyspark. I want to plot the result using matplotlib, but not sure which function to use. I searched for a way to convert sql result to pandas and then use plot. | How to use matplotlib to plot pyspark sql results | 0.099668 | 1 | 0 | 30,940 |
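A minimal sketch of the toPandas() route the answer recommends; the toy DataFrame stands in for whatever your SQL query returns.

import matplotlib.pyplot as plt
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("NY", 3), ("CA", 5)], ["state", "n"])  # stand-in for your query result
pdf = df.toPandas()
pdf.plot(x="state", y="n", kind="bar")
plt.show()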
45,004,514 | 2017-07-10T05:46:00.000 | 0 | 0 | 0 | 0 | python-3.x,tensorflow,neural-network,deep-learning,data-science | 45,005,490 | 2 | false | 0 | 0 | You can treat this as a multi-label problem, and append the sentiment and the tone labels together.
Now since the network has to predict multiple outputs (2 in this case) you need to use an activation function like sigmoid and not softmax. And your prediction can be made using tf.round(tf.sigmoid(logits)). | 1 | 0 | 1 | I am trying to classify the sentiment of movie reviews and predict the genres of those movies based on the review itself. Now Sentiment is a Binary Classification problem, whereas Genres can be a Multi-Label Classification problem.
Another example to clarify the problem is classifying Sentiment of a sentence and also predicting whether the tone of the sentence is happy, sarcastic, sad, pitiful, angry or fearful.
In addition, I want to perform this classification using a Tensorflow CNN. My problem is in structuring the y_label and training the data such that the output helps me retrieve the Sentiment as well as the genres.
Eg Data Y Label: [[0,1],[0,1,0,1,0]] for sentiment as Negative and mood as sarcastic and angry
How do you suggest I tackle this? | How to classify both sentiment and genres from movie reviews using CNN Tensorflow | 0 | 0 | 0 | 254 |
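A hedged TensorFlow 1.x sketch of the multi-label setup the answer describes (sigmoid instead of softmax); the 7-wide label vector (2 sentiment slots + 5 tone slots) and the placeholder standing in for the CNN's output are assumptions.

import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 7])  # stand-in for the CNN's final layer output
labels = tf.placeholder(tf.float32, [None, 7])  # e.g. sentiment and tone labels appended together
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
predictions = tf.round(tf.sigmoid(logits))      # independent 0/1 decision per label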
45,010,036 | 2017-07-10T10:45:00.000 | 1 | 0 | 1 | 0 | python-3.x | 45,010,179 | 1 | true | 0 | 0 | 23*89 = 2047, so it's not a prime and certainly not a Mersenne prime (even though it is one less than a power of 2.) There must be a mistake in Codeeval. | 1 | 0 | 0 | I would like to know if 2047 is a Mersenne prime number ?
Codeeval accepts solutions that treat 2047 as a Mersenne prime number.
However, when I searched online I found that 2047 is not a Mersenne prime number. Could someone tell me which is correct? | Codeeval challenge : Confusion for a beginner(Mersenne Prime number) | 1.2 | 0 | 0 | 47
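A small check confirming the answer's arithmetic:

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

print(2047 == 2 ** 11 - 1)  # True: 2047 is one less than a power of two
print(is_prime(2047))       # False: 2047 = 23 * 89, so it is not a Mersenne prime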
45,010,745 | 2017-07-10T11:22:00.000 | 1 | 1 | 0 | 0 | python,selenium,automated-tests,cucumber,python-behave | 46,268,432 | 2 | false | 1 | 0 | Your functions in "environment.py" can have any parameter that you like them to have. Only the hooks have a specified signature (as any API function). Therefore, if the feature object is sufficient for your processing, you should avoid requiring somebody to pass the context object, too. | 1 | 0 | 0 | I am attempting to put a custom method that would send the results of automated tests to JIRA into Behave's environment.py. It would be in after_scenario() or after_feature(). So I want it to send the results to JIRA after the tests finish.
It seems that those methods in environment.py only take in objects that are part of the context class. Is that right? Is there any workaround for this issue? | Put custom methods in Behave's environment.py | 0.099668 | 0 | 0 | 452
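A minimal environment.py sketch along the lines the answer suggests; send_to_jira is a hypothetical helper standing in for whatever JIRA client is used.

# environment.py
def after_scenario(context, scenario):
    results = getattr(context, "results", [])
    results.append({"name": scenario.name, "status": str(scenario.status)})
    context.results = results

def after_all(context):
    send_to_jira(getattr(context, "results", []))  # hypothetical helper that posts to JIRA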
45,012,005 | 2017-07-10T12:25:00.000 | 1 | 0 | 0 | 0 | python,jdbc,teradata,snowflake-cloud-data-platform | 46,125,739 | 1 | false | 0 | 0 | If your table has 170M records, then using JDBC INSERT to Snowflake is not feasible. It would perform millions of separate insert commands to the database, each requiring a round-trip to the cloud service, which would require hundreds of hours.
Your most efficient strategy would be to export from Teradata into multiple delimited files -- say with 1 - 10 million rows each. You can then either use the Amazon's client API to move the files to S3 using parallelism, or use Snowflake's own PUT command to upload the files to Snowflake's staging area for your target table. Either way, you can then load the files very rapidly using Snowflake's COPY command once they are in your S3 bucket or Snowflake's staging area. | 1 | 1 | 0 | I am trying to write a data migration script moving data from one database to another (Teradata to snowflake) using JDBC cursors.
The table I am working on has about 170 million records, and I am running into the issue that when I execute the batch insert I get "maximum number of expressions in a list exceeded, expected at most 16,384, got 170,000,000".
I was wondering if there was any way around this, or if there was a better way to batch migrate records without exporting the records to a file and moving it to S3 to be consumed by Snowflake. | JDBC limitation on lists | 0.197375 | 1 | 0 | 215
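A hedged sketch of the PUT/COPY route the answer describes, using the Snowflake Python connector; credentials, file paths and the table name are placeholders.

import snowflake.connector

conn = snowflake.connector.connect(user="USER", password="PASS", account="ACCOUNT")
cur = conn.cursor()
cur.execute("PUT file:///tmp/export_*.csv @%target_table")        # upload the exported files to the table stage
cur.execute("COPY INTO target_table FILE_FORMAT = (TYPE = CSV)")  # bulk load the staged files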
45,012,914 | 2017-07-10T13:07:00.000 | 1 | 0 | 0 | 0 | python,ruby-on-rails,ruby | 45,013,053 | 1 | false | 1 | 0 | Ruby on Rails 5 supports API-only apps if you are using this app for APIs only; otherwise just create a normal app and expose API endpoints that render JSON output. Use Active Record for mapping your MySQL database and jbuilder for JSON views (both of which are available by default when you create a new app). You will find lots of tutorials if you google "Use Ruby on Rails as API app".
Good luck with it.. :) | 1 | 0 | 0 | I am to develop an app, and I have to choose between Ruby on Rails and Python+Django. So far I want to do it with Ruby on Rails, because I feel more comfortable doing what I have to do with it.
But there is a problem: there would be a client app written in Python that has to communicate with mine.
First: I think if it is just a matter of communicating with the MySQL database there wouldn't be an issue, because the Python app is able to query the MySQL server with proper authentication, right?
Second, and the more important question: if I have a Ruby-written API to ease the queries, could the Python app invoke functions in that API and get the results? If it is possible, how could I achieve that? | How to communicate from a Python client to Ruby on Rails application? | 0.197375 | 0 | 0 | 344
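A minimal sketch of that second point: the Python client simply calls the Rails JSON endpoints over HTTP. The URL and credentials below are assumptions.

import requests

resp = requests.get("http://localhost:3000/api/items.json", auth=("user", "secret"))
resp.raise_for_status()
for item in resp.json():
    print(item)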
45,014,226 | 2017-07-10T14:07:00.000 | 3 | 0 | 1 | 1 | python,boot | 45,014,631 | 1 | false | 0 | 0 | Linux, being a UNIX type OS, has the concept of runlevels. Each runlevel has a certain number of services stopped or started, giving the user control over the behavior of the machine. As far as I know for Linux, seven runlevels exist, numbered from zero to six. The "Operating System Desktop" becomes available at run level 5. At boot time the system will pass through several other runlevels before getting to 5. At level 3 the system will have Multi-User Mode with Networking, and this would be a good level to run whatever python script you need. Maybe check into configuring Linux init scripts. | 1 | 2 | 0 | I was on a freelancer website and I found this work proposal:
Project Description
Hello
We need experience developer in python.
Only bit that person who has a experience in python and Linux.
I want to execute python code in Booting time before execute Operating
System Desktop.
I know that unless I apply as a candidate, I won't have any details about the project, but anyway it seems odd to me.
From my understanding python is interpreted, which means that it needs a virtual environment, and that's what makes it platform independent. Therefore how can a python script (which doesn't convert 1:1 to machine instructions) run before the operating system? Since I know little about what's going on at boot time (I guess some pre-defined instructions lying in the motherboard ROM are executed, then the bootloader loads the OS into RAM, and the program counter holds the address of the entry point of the OS itself, but I am just supposing) I ask you whether such a thing could be possible. | Python before booting? | 0.53705 | 0 | 0 | 112
45,014,245 | 2017-07-10T14:08:00.000 | 0 | 0 | 0 | 0 | python,go,stream,buffer,transfer | 45,017,089 | 1 | false | 1 | 0 | There is no single "Best" way. If the protocol has to go over ports 80/443 on the open internet, you could use web-sockets. You could also POST base64 encoded chunks of data from python back to your server.
If the robot and server are on the same network, you could send UDP packets from the robot to your server. (Usually a missing packet or two on audio is not a problem). Even if you have a web based Go server, you can still fire off a go routine to listen on UDP for incoming packets.
If you could be more specific, maybe I or someone else could give a better answer? | 1 | 0 | 0 | I am a beginner trying to find a way to stream audio on a local server. I have a Python script that creates some binary data from a robot's microphone, and I want to send this data to be displayed on a local Go server I created.
I read somewhere that web sockets could be a solution.
But what's the simplest way to upload the audio buffers from the Python script? And how would I retrieve this raw binary data so that it can be streamed from the web app?
Many many thanks. | Transfer audio buffers from a Python script to a Go server? | 0 | 0 | 1 | 268 |
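A minimal sketch of the UDP option mentioned in the answer; the server address and the dummy audio chunk are placeholders.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
chunk = b"\x00" * 960                    # stand-in for one buffer of microphone samples
sock.sendto(chunk, ("127.0.0.1", 9999))  # hypothetical address of the Go server listening on UDP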
45,014,964 | 2017-07-10T14:38:00.000 | 0 | 1 | 0 | 1 | python,linux,lambda,active-directory,aws-lambda | 45,064,312 | 1 | false | 0 | 0 | I can see 2 aspects that you should pay attention to.
Lambda runs in a Linux environment. So, if you have some library that uses internal resources from Windows, it won't work in the AWS Lambda environment. You should look for another option, like python-ldap or something similar.
The Lambda environment provides only basic python modules. pyad and python-ldap are certainly not included, so if you want to use one of them, make sure you add the module to your Lambda zip file. | 1 | 0 | 0 | I am trying to create a function using Lambda in Python on Linux. I have tried to use pyad, but it gave me Exception: Must be running Windows in order to use pyad.
What other way can I create a user and group in AD?
Thanks | How to create user and group in AD using Lambda | 0 | 0 | 0 | 1,731 |
45,015,116 | 2017-07-10T14:46:00.000 | 3 | 0 | 0 | 1 | python,airflow | 45,117,737 | 1 | false | 1 | 0 | We use docker to run the code with different dependencies and DockerOperator in airflow DAG, which can run docker containers, also on remote machines (with docker daemon already running). We actually have only one airflow server to run jobs but more machines with docker daemon running, which the airflow executors call.
For continuous integration we use gitlab CI with the Gitlab container registry for each repository. This should be easily doable with Jenkins. | 1 | 6 | 0 | I'm thinking of starting to use Apache Airflow for a project and am wondering how people manage continuous integration and dependencies with airflow. More specifically
Say I have the following set up
3 Airflow servers: dev staging and production.
I have two python DAGs whose source code I want to keep in separate repos.
The DAGs themselves are simple; basically they just use a Python operator to call main(*args, **kwargs). However, the actual code that's run by main is very large and stretches across several files/modules.
Each python code base has different dependencies
for example,
Dag1 uses Python2.7 pandas==0.18.1, requests=2.13.0
Dag2 uses Python3.6 pandas==0.20.0 and Numba==0.27 as well as some cythonized code that needs to be compiled
How do I manage Airflow running these two Dag's with completely different dependencies?
Also, how do I manage the continuous integration of the code for both these DAGs into each different Airflow environment (dev, staging, prod)? Do I just get Jenkins or something to ssh to the airflow server and do something like git pull origin BRANCH?
Hopefully this question isn't too vague and people see the problems I'm having. | Apache Airflow Continuous Integration Workflow and Dependency management | 0.53705 | 0 | 0 | 1,922
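A hedged Airflow 1.x sketch of the DockerOperator approach from the answer; the image name, schedule and command are assumptions, with each DAG's dependencies baked into its own image by CI.

from datetime import datetime
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator

dag = DAG("dag1", start_date=datetime(2017, 7, 1), schedule_interval="@daily")
run_dag1 = DockerOperator(
    task_id="run_dag1_code",
    image="myregistry/dag1:latest",      # built by CI with its own Python and pandas versions
    command="python -m mypackage.main",  # hypothetical entry point
    dag=dag,
)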
45,015,568 | 2017-07-10T15:06:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,python-3.x,compatibility | 45,015,646 | 1 | false | 0 | 0 | Generally speaking, modules that are provided for python 3 are absolutely not compatible with python 2. The special future import allows you to create a module that is compatible by importing into python 2 some of the behaviors of python 3 that would otherwise be breaking, such as division.
So most of the time, there is no simple way to import a 3.x module in 2.x, sorry. | 1 | 0 | 0 | I need to use a python library which is available only in 3.4, but my framework runs on python 2.7 and I want to use that lib to build an extension. I searched a lot but found very little info about forward compatibility.
I checked the lib called future but didn't get much info on docs.python.
If anyone has any clue, please share. | How we can import/use python 3.4 installed library in python2.7? | 0.379949 | 0 | 0 | 27
45,017,617 | 2017-07-10T16:48:00.000 | 4 | 0 | 0 | 0 | python,cplex,gurobi,pulp | 45,036,043 | 1 | false | 0 | 0 | I don't think there are artificial limits on the size of models you can generate with PuLP
For larger, more difficult problems, commercial solvers like Cplex or Gurobi typically are much faster and more reliable than open source solvers. Of course you can use an open source solver like glpk or CBC for prototyping, even if the final model is large. Note also that Cplex and Gurobi come with their own Python based modeling interfaces (these may offer access to the more esoteric aspects of the solvers). One advantage of Pulp is that you can develop the model with an open source solver and then switch to a commercial solver without changing the model code. | 1 | 0 | 0 | I am interested to develop a code who uses PulP.
I have some questions and I will be very grateful if you can help me.
• Does PuLP have a restriction on the number of linear constraints or integer variables?
• If I have a problem with many constraints or integer variables, do I have to buy a solver like CPLEX or Gurobi?
I really thank you for your time. | PULP , CPLEX or GUROBI for Mixed Integer Programming (MIP) | 0.664037 | 0 | 0 | 2,461 |
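A tiny PuLP model illustrating the answer's point that the same model code can run on an open source solver and later on CPLEX or Gurobi; the objective and constraint are made up for illustration.

import pulp

prob = pulp.LpProblem("demo", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0, cat="Integer")
y = pulp.LpVariable("y", lowBound=0, cat="Integer")
prob += 3 * x + 2 * y        # objective
prob += 2 * x + y <= 10      # constraint
prob.solve()                 # default open source CBC solver
# prob.solve(pulp.CPLEX_CMD())   # same model, commercial solver, no code changes
print(pulp.value(x), pulp.value(y))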
45,022,643 | 2017-07-10T22:32:00.000 | 0 | 0 | 0 | 0 | python,google-chrome,amazon-web-services,selenium,selenium-chromedriver | 45,022,691 | 1 | false | 1 | 0 | There are many things that could happen. You should check your logs and add them to the question. One thing I am sure of is that you don't use any virtual display, and you don't have a display on AWS. So you should google for "python running selenium headless" | 1 | 1 | 0 | I built a program to go on a website and click a link which automatically downloads a file. It works when I run it on my mac (Chrome), but when I use the exact same code on AWS, nothing gets downloaded.
If it helps at all, I tried to expedite the process and found the raw link, I could download that via wget but not through python (on any computer). | Link doesn't work on AWS with Selenium | 0 | 0 | 1 | 39 |
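One concrete version of the "headless" route the answer points to, using a virtual display (pyvirtualdisplay plus Xvfb) rather than a real screen; that those packages and chromedriver are installed on the AWS box is an assumption.

from pyvirtualdisplay import Display
from selenium import webdriver

display = Display(visible=0, size=(1280, 800))
display.start()
driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder for the page with the download link
driver.quit()
display.stop()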
45,024,466 | 2017-07-11T02:38:00.000 | 0 | 0 | 0 | 0 | python,django,git,github | 45,024,535 | 4 | false | 1 | 0 | I generally use a different settings.py for each stage (development, testing and production). The only one that I keep in version control is the one corresponding to development. The other settings.py files are internal and, when required, they are copied to each instance of the server (testing and production).
Hope this helps. | 2 | 10 | 0 | How can I handle the security of web frameworks like django on github or any other public domain version control site.
The settings.py can and will often contain sensitive database information, passwords and secret keys, which must not be uploaded on the repository and in plain view.
What is the common practice and least hassle way of handling that? | Django Projects and git | 0 | 0 | 0 | 1,794 |
45,024,466 | 2017-07-11T02:38:00.000 | 0 | 0 | 0 | 0 | python,django,git,github | 45,024,539 | 4 | false | 1 | 0 | Easy answer: Add it to your .gitignore. That said, if you're intending to share your Django app, you'll want to provide at least the parts you edited for your app . | 2 | 10 | 0 | How can I handle the security of web frameworks like django on github or any other public domain version control site.
The settings.py can and will often contain sensitive database information, passwords and secret keys, which must not be uploaded on the repository and in plain view.
What is the common practice and least hassle way of handling that? | Django Projects and git | 0 | 0 | 0 | 1,794 |
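A common pattern matching both answers above: keep the committed settings.py free of secrets and read them from the environment (or an untracked local file). The names and defaults here are placeholders.

# settings.py
import os

SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", "dev-only-not-secret")
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": os.environ.get("DB_NAME", "dev"),
        "USER": os.environ.get("DB_USER", "dev"),
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
    }
}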
45,026,092 | 2017-07-11T05:31:00.000 | 1 | 0 | 0 | 0 | python,api,ibm-cloud-infrastructure | 45,037,499 | 1 | true | 0 | 0 | Package: 835 (Public Virtual Server) is a new package that will be released very soon, meanwhile I recommend to continue using Package: 46 | 1 | 0 | 0 | background
Use sl_product_order.placeOrder api to order a vsi.
'dataCenter': 'ams03'
exception
SoftLayerAPIError(SoftLayer_Exception_Order_InvalidData): Invalid data on the order for property: packageId. Package (835) requires a preset configuration.
question
Package (835) requires a preset configuration means what?
How can I check which param is invalid ? | Softlayer Api: Exception: Invalid data on the order for property: packageId. Package (835) requires a preset configuration | 1.2 | 0 | 0 | 177 |
45,030,827 | 2017-07-11T09:39:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,matrix-inverse,bigdata | 45,031,046 | 2 | false | 0 | 0 | You mean you need to swap rows and columns? If that's the case then you might use tf.transpose. | 1 | 0 | 1 | I'm a Beginner in bigdata.
I had learned Python.
I want to get the inverse of a matrix with tensorflow (an n*n matrix as input); my boss wants it done with tensorflow, and I want to do it without using the adjoint matrix.
help me, please.
thank you In advance. <3 | Reverse a matrix with tensorflow | 0 | 0 | 0 | 435 |
45,034,266 | 2017-07-11T12:11:00.000 | 0 | 0 | 1 | 0 | python,import | 45,034,487 | 3 | false | 0 | 0 | Short answer is NO... But you could and should catch ImportError for when the module is not there, and handle it then. Otherwise replacing all import statements with something else is the clever thing to do. | 1 | 7 | 1 | Is it possible to somehow override import so that I can do some more sophisticated operations on a module before it gets imported?
As an example: I have a larger application that uses matplotlib for secondary features that are not vital for the overall functionality of the application. In case that matplotlib is not installed I just want to mock the functionality so that the import and all calls to matplotlib functions appear to be working, just without actually doing anything. A simple warning should then just indicate that the module is not installed, though that fact would not impair the core functionality of the application. I already have an import function that, in the case that matplotlib is not installed, returns a MagicMock object instead of the actual module which just mimics the behavior of the matplotlib API.
So, all import matplotlib... or from matplotlib import... statements should then be automatically overridden by the corresponding function call. I could replace all import and from ... import expressions by hand, but there are a lot of them. I'd rather have this functionality automatically by overriding import.
Is that possible? | Override `import` for more sophisticated module import | 0 | 0 | 0 | 2,180 |
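A minimal sketch of the ImportError route the answer mentions, falling back to a MagicMock when matplotlib is missing:

import warnings
from unittest import mock

try:
    import matplotlib
except ImportError:
    warnings.warn("matplotlib is not installed; plotting features are disabled")
    matplotlib = mock.MagicMock()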
45,036,243 | 2017-07-11T13:41:00.000 | 0 | 0 | 1 | 1 | python,extract,rpm | 45,277,220 | 1 | false | 0 | 0 | Most solutions are variants of rpm2cpio | cpio which has the disadvantage that the entire cpio ball is unpacked on the file system rather than extracting a single file onto a pipeline that can be read into a python variable. This is largely a limitation on the cpio(1) archive file selection mechanism.
These days GNU tar can/will handle cpio formats, and so a GNU tar (which can extract a single file from an archive) rpm2cpio | tar pipeline might be able to extract a single file without unpacking the entire archive. | 1 | 0 | 0 | In python I want to get a file located in an rpm package. Can I open this package, get the file and save it in a python variable? (without extracting the whole package to /tmp) | python get file in rpm | 0 | 0 | 0 | 986
45,036,714 | 2017-07-11T14:00:00.000 | 1 | 0 | 0 | 0 | python,json,database,oracle | 45,044,543 | 1 | true | 0 | 0 | As you described querying the db too many times is not an option. OK in that case I would do this the following way :
When your program starts you get the data for all tools as a set of JSON-Files per tool right? OK. I am not sure how you get the data by querying the tools directly or by querying the db .. does not matter.
You check if you have old data in the "cache-dictionary" for that tool. If yes do your compare and store the "new data" as "previous data" in the cache. Ready for the next run. Do this for all tools. This loops forever :-)
This "cache dictionary" now can be implemented in memory or on disk. For your amount of data I think memory is just fine.
With that approach you do not have to query the db for the old data. The case that you cannot do the compare if you do not have old data in the "cache" at program start could be handled that you try to get it from db (risking long query times but what to do :-) | 1 | 1 | 0 | Background:
I have an application written in Python to monitor the status of tools. The tools send their data from specific runs and it all gets stored in an Oracle database as JSON files.
My Problem/Solution:
Instead of connecting to the DB and then querying it repeatedly when I want to compare the current run data to the previous run's data, I want to make a copy of the database query so that I can compare the new run data to the copy that I made instead of to the results of the query.
The reason I want to do this is because constantly querying the server for the previous run's data is slow and puts unwanted load/usage on the server.
For the previous run's data there are multiple files associated with it (because there are multiple tools) and therefore each query has more than one file that would need to be copied. Locally storing the copies of the files in the query is what I intended to do, but I was wondering what the best way to go about this was since I am relatively new to doing something like this.
So any help and suggestions on how to efficiently store the results of a query, which are multiple JSON files, would be greatly appreciated! | How To Store Query Results (Using Python) | 1.2 | 1 | 0 | 465 |
45,037,029 | 2017-07-11T14:13:00.000 | 0 | 0 | 1 | 0 | python,google-chrome-os | 58,956,274 | 2 | false | 0 | 0 | On some chromebooks, like the one I'm using now, there is a Linux(beta) option in the options menu. Alternatively, you can use repl.it instead, although be aware that playing sound and using geocoder will work server-side instead, so the sound will not play, and the ip adress will be in New York. | 1 | 0 | 0 | I just recently bought an Acer Chromebook 11 and I would like to do some Python programming on it. How do I run Python from a USB stick on an Acer Chromebook 11? (Also, I don't have access to wifi at the place I want to use it.) | Programming with Python using Chrome OS | 0 | 0 | 1 | 455 |
45,037,851 | 2017-07-11T14:46:00.000 | 0 | 1 | 0 | 0 | python,azure,azure-cosmosdb | 45,041,022 | 1 | true | 0 | 0 | The RU calculation is based on a variety of factors one of which is document size. Right now you're trying to do micro optimizations on RU when you should be designing based on your read/write patterns and ensuring that you can efficiently access the data you need. The difference between 10*0.1 and 1.0 in this case should take a backseat as the RU cost difference will be negligible. | 1 | 0 | 0 | I am trying to optimize my usage of Request Units. Say in a span of one minute, is it better to upload 10 0.1MB documents or a single 1MB document? I heard that if the total amount of data is the same, then the RU usage would be the same, but it makes sense to me that if I access the database to write to it more frequently then it would be more costly in terms of RUs.
Thanks. | Azure CosmosDB - Most efficient way to upload documents (size, frequency) | 1.2 | 0 | 0 | 122 |
45,039,917 | 2017-07-11T16:23:00.000 | 1 | 0 | 0 | 0 | python,django | 45,046,493 | 2 | false | 1 | 0 | Is it a static message? I'd just override the necessary admin templates and not use django.contrib.messages as you may be tempted to - it may be confusing to the user. | 1 | 0 | 0 | I don't want it to show up after performing an action, I want it to show up on all pages of the Django admin panel.
Is this possible? | How can I display a global alert in Django admin panel? | 0.099668 | 0 | 0 | 553 |
45,041,154 | 2017-07-11T17:31:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,python-3.x,user-interface,kivy | 45,068,664 | 1 | false | 0 | 1 | i think you should first try and convert the video to an image format (gif) and then load it in the Image class in the kv file and then use clock to schedule it to load a new screen(login) after some seconds depending on the duration of the gif | 1 | 0 | 0 | I have a GUI that starts off with a video written in Kivy. That GUI is supposed to then begin loading the whole program in the background while the clip is playing, and after the clip, a window for login is supposed to come up. How do I load the whole program and at the same time load the video to play at the start of the program?
I used event dispatcher but it didn't work.
Additionally, how do I tell the window to open from the video to the login to the first page of the GUI without being separate GUIs to load from?
Thank you very much. | Placing a Video at Start of GUI to Transition to Main Code Kivy | 0 | 0 | 0 | 28 |
45,042,402 | 2017-07-11T18:47:00.000 | 0 | 0 | 0 | 0 | python,django,forms | 45,043,143 | 1 | true | 1 | 0 | Figured it out! I just used a Textarea instead of a TextField. | 1 | 0 | 0 | I am in the process of updating a project so that, in one of the apps, the user can copy a vertical list of values from a text/excel file/[file format] into a form. I want the form to hold as many values as the user will paste (so hopefully it will be dynamic in length).
When the user submits the form, my views.py will process the data. What field type should I use to do this? (If it is even possible to do this) | Django Form Field for Copying and Pasting Vertical List | 1.2 | 0 | 0 | 329 |
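A minimal version of the accepted approach (a text area whose pasted lines are split in the view); the form and field names are placeholders.

from django import forms

class PasteForm(forms.Form):
    values = forms.CharField(widget=forms.Textarea)  # the user pastes one value per line

    def cleaned_lines(self):
        # call after is_valid(); splits the pasted block into individual values
        return [line.strip() for line in self.cleaned_data["values"].splitlines() if line.strip()]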
45,043,181 | 2017-07-11T19:37:00.000 | 0 | 0 | 0 | 1 | python,windows,eclipse,odoo-10 | 56,924,563 | 2 | false | 1 | 0 | You can try below
python ./odoo-bin -c odoo.conf
Hope this helps you | 1 | 1 | 0 | I'm running ODOO 10 from source code in Eclipse on Windows 10. It's running ok in the web interface (on localhost).
I want to control odoo via the command line at the same time. Can I do so while it's running in the web interface?
If so how do I invoke the odoo commands to the server? | How to control odoo 10 from command line while its running in the web | 0 | 0 | 0 | 1,605 |
45,043,654 | 2017-07-11T20:09:00.000 | 0 | 0 | 0 | 0 | python,flask,wtforms,flask-wtforms | 45,047,577 | 1 | false | 1 | 0 | The file was uploaded from the user, the browser gets it and keeps it on the sky, so you can save it with a path. That's why you can't get its full path.
If an apple flies in the sky, how do you know which apple tree he comes from? | 1 | 0 | 0 | When you make a file field with WTForms in Flask, it only returns the filename. Does anyone know how to get it to return the full path of the file? | Python Flask Wtforms File Field full path | 0 | 0 | 0 | 786 |
45,043,961 | 2017-07-11T20:31:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,protocol-buffers,bazel | 45,048,559 | 2 | false | 0 | 0 | Are you using load in the BUILD file you're building?
load("@protobuf//:protobuf.bzl", "py_proto_library")?
The error seems to indicate the symbol py_proto_library isn't loaded into skylark. | 1 | 0 | 0 | I'm getting the following error when trying to run
$ bazel build object_detection/...
And I'm getting ~20 of the same error (1 for each time it attempts to build that). I think it's something with the way I need to configure bazel to recognize the py_proto_library, but I don't know where, or how I would do this.
/src/github.com/tensorflow/tensorflow_models/object_detection/protos/BUILD:325:1: name 'py_proto_library' is not defined (did you mean 'cc_proto_library'?).
I also think it could be an issue with the fact that initially I had installed the cpp version of tensorflow, and then I built it for python. | Bazel has no definition for py_proto_library | 0 | 0 | 0 | 802 |
45,044,482 | 2017-07-11T21:07:00.000 | 1 | 0 | 1 | 0 | python,regex,python-3.x | 45,044,560 | 2 | false | 0 | 0 | final = '(' + date + ')|(' + date + ')(' + timestamp + ')'
If we also suppose we have a regex for the separator between the date and timestamp, we can just use
final = '(' + date + ')|((' + date + ')(' + separator + ')(' + timestamp + '))'
If this doesn't work for you, please explain why. | 1 | 0 | 0 | This community has helped me immensely with my previous regex questions, I do have a question on combining these two regular expressions.
My goal is to have the regex to be: date OR date timestamp
date = (\d{1,2}|[a-zA-Z]{2,8})(?:[/-]{1})(\d{1,2}|[a-zA-Z]{2,8})(?:[/-]{1})(\d*)
timestamp = (\d{1,2})(?:[:]{1})(\d{1,2})(?:[:]{1})(\d{1,2})
I am not able to combine the two of these into one single regex statement. Any help would be great! | Combining these two regex statements into a single statement? | 0.099668 | 0 | 0 | 95 |
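Putting the answer's combination together with the question's two patterns; the whitespace separator between date and timestamp is an assumption.

import re

date = r"(\d{1,2}|[a-zA-Z]{2,8})(?:[/-]{1})(\d{1,2}|[a-zA-Z]{2,8})(?:[/-]{1})(\d*)"
timestamp = r"(\d{1,2})(?:[:]{1})(\d{1,2})(?:[:]{1})(\d{1,2})"
separator = r"\s+"  # assumed separator between the date and the timestamp
final = "(" + date + ")|((" + date + ")(" + separator + ")(" + timestamp + "))"

print(bool(re.fullmatch(final, "12/31/2016")))           # date only
print(bool(re.fullmatch(final, "12/31/2016 23:59:59")))  # date plus timestamp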
45,045,043 | 2017-07-11T21:51:00.000 | 0 | 0 | 0 | 0 | python,pandas | 45,047,193 | 1 | false | 0 | 0 | Read your file using,
df = pd.read_excel('zipfilename 2017-06-28.xlsx',compression='zip', header=1, names=cols) | 1 | 1 | 1 | I am creating a new dataframe in pandas as below:
df = pd.read_excel(zipfile.open('zipfilename 2017-06-28.xlsx'), header=1, names=cols)
The single .xlsx within the .zip is dynamically named (so changes based on the date).
This means I need to change the name of the .xlsx in my code each time I open the .zip to account for the dynamically named .xlsx.
Is there a way to make pandas read the file within the .zip, regardless of the name of the file? Or to return the name of the .xlsx within the line of code somehow?
Thanks | pandas - reading dynamically named file within a .zip | 0 | 0 | 0 | 71 |
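A sketch of reading whatever .xlsx happens to be inside the zip, so the dynamic file name does not matter; the zip name is a placeholder.

import zipfile
import pandas as pd

with zipfile.ZipFile("archive.zip") as zf:
    xlsx_name = next(n for n in zf.namelist() if n.endswith(".xlsx"))  # first .xlsx in the archive
    df = pd.read_excel(zf.open(xlsx_name), header=1)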
45,045,165 | 2017-07-11T22:01:00.000 | 0 | 0 | 0 | 0 | python,django,reactjs,django-rest-framework | 45,045,751 | 2 | false | 1 | 0 | For dev:
You can run both of them in two different shells. By default, your django rest api will be at 127.0.0.1:8000 and React will be at 127.0.0.1:8081. I do not think there will be any issues for the two to communicate via the fetch api. Just make sure you have ALLOWED_HOSTS=['127.0.0.1'] in your django settings file.
For production:
It will work like any other mobile app or web app does! Host your RESTful API on an application server (aws, heroku or whatever you choose) and create and host your React App separately. Use Javascript's fetch api to send requests to your django endpoints and use the json/xml response to render your views in React. | 1 | 3 | 0 | I have an application backend in Django-rest-framework, and I have a reactjs app.
How can I make them work together?
For development I open 2 terminals and run them separately. Is there some way to make them work together?
Also to deploy it to production I have no idea how I can do that.
I tried to look for a ready-made github project, but couldn't find anything.
Thanks! | How to configure Django Rest Framework + React | 0 | 0 | 0 | 999 |
45,048,005 | 2017-07-12T03:39:00.000 | 0 | 0 | 1 | 0 | python-3.x | 52,459,802 | 2 | false | 0 | 0 | As far as I know, in order to store the value of a variable permanently, you should either save it to a file or to a database. The first option would be the easiest one as you can simply create a .txt file within your directory and save the variable value. This can be updated, deleted, read anytime you want. | 1 | 1 | 0 | As the title says, I want to set a permanent variable in python, no matter if the program is restarted. I'm not talking about saving the variable in a document, instead of that, I mean doing like when you type "copyright" or "credits".
For example: in the command prompt of Windows, we have stuff like %USERNAME%. This is a permanent thing, but you can enter regedit and set a new permanent stuff like %PYTHON% and it will do whatever you write there.
Is this possible to do something similar in python? | Assign permanent variable in Python 3 | 0 | 0 | 0 | 3,790 |
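A minimal sketch of the file-based option from the answer, using JSON so the value survives restarts; the file name and key are placeholders.

import json
import os

STATE_FILE = "state.json"

def load_state():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {}

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

state = load_state()
state["PYTHON"] = "some permanent value"
save_state(state)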
45,050,839 | 2017-07-12T07:15:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,pyspark | 45,052,125 | 1 | false | 0 | 0 | I don't think so, you use pyspark.ml.regression.GeneralizedLinearRegression to train, and then you get a pyspark.ml.regression.GeneralizedLinearRegressionModel, that is what you have saved.
AFAIK, the model can't be refitted; you have to run the regression fit again to get a new model. | 1 | 1 | 1 | I trained a linear regression model using pyspark ml and saved it. Now I want to re-train it on the basis of a new data batch. Is it possible? | how to re-train Saved linear regression ML model in pyspark when new data is coming | 0 | 0 | 0 | 126
45,053,733 | 2017-07-12T09:27:00.000 | 1 | 1 | 1 | 0 | python,amazon-web-services,lambda,virtualenv,travis-ci | 45,054,273 | 1 | true | 0 | 0 | Solved it. I was installing the Python modules into a subdirectory of my project root, rather than in the project root itself.
Essentially was doing this:
pip install -r requirements.txt ./virtualenv/
when I should have been doing this:
pip install -r requirements.txt ./ | 1 | 0 | 0 | I have an AWS Lambda handler in Python 2.7 that is deployed from Travis CI. However, when I try running the function I received an error from AWS saying that it cannot import the enum module (enum34). Is there a simple way to resolve this? Should Travis CI include the virtual environment that Python is running in? If not, how do I include that virtualenv?
Additionally, when I deploy from Travis CI, it seems to prepend an "index." onto the handler_name field. Does anyone know why this happens, or how to disable it? I can't seem to find an answer. | Enum Module with AWS Lambda Python 2.7, Deployed with Travis CI | 1.2 | 0 | 0 | 297 |
45,056,037 | 2017-07-12T11:10:00.000 | 1 | 0 | 1 | 0 | python,conda | 45,056,341 | 1 | true | 0 | 0 | Anaconda and miniconda are designed to be installed by each user individually, into each users $HOME/miniconda directory. If you installed it as a shared install as root, all users would need to access /root/miniconda.
Also, environments will be created in $HOME/miniconda/envs, so environments of several people will interfere with each other (plus the whole issue of permissions, file ownership etc.).
Bottom line: Don't install it as root, install it as yourself.
Any third party dependencies you'd still install as root using apt-get, but once they're installed they're accessible by everyone, no matter if they use miniconda or not. | 1 | 0 | 1 | I've always used virtualenv(wrapper) for my python needs, but now I'm considering trying conda for new projects, mainly because theano docs "strongly" recommend it, and hoping that it will save me some hassle with pygpu config. I'm on linux mint 16( I guess, kernel in uname is from ubuntu 14.04) and there are no system packages for conda/miniconda so I'll have to use their shell script for installation.
Now I have a dilemma - should I install as my user or as root? What is likely to give me less hassle in the future (given that I'm going to use (nvidia) GPU for computation). | Installing miniconda for theano with gpuarray: as root or as user? | 1.2 | 0 | 0 | 94 |
45,060,419 | 2017-07-12T14:24:00.000 | 0 | 0 | 0 | 0 | python,matlab | 45,061,095 | 1 | false | 0 | 0 | Matlab's radon() function is not circular. This was the problem. Although the output image sizes do still differ, I am getting essentially the result I want. | 1 | 0 | 1 | I am trying to translate some matlab code to python. In the matlab code, I have a radon transform function. I start with a 146x146 image, feed it into the radon() function, and get a 211x90 image. When I feed the same image into my python radon() function, I get a 146x90 image. The documentation for the python radon () function says it is a circular radon transform. Is the matlab function also circular? Why are these returning different shaped images and how can I get the outputs to match? | is the Matlab radon() function a "circular" radon transform? | 0 | 0 | 0 | 269 |
45,061,306 | 2017-07-12T15:00:00.000 | 0 | 0 | 0 | 0 | python,pandas,google-cloud-datastore,google-bigquery,google-cloud-platform | 45,395,282 | 3 | false | 0 | 0 | As far as I can tell there is no support for Datastore in Pandas. This might affect your decision. | 1 | 0 | 1 | Currently we are uploading the data retrieved from vendor APIs into Google Datastore. Wanted to know what is the best approach with data storage and querying the data.
I will be need to query millions of rows of data and will be extracting custom engineered features from the data. So wondering whether I should load the data into BigQuery directly and query it for faster processing or store it in Datastore and then move it to BigQuery for querying?. I will be using pandas for performing statistics on stored data. | Is Google Cloud Datastore or Google BigQuery better suited for analytical queries? | 0 | 1 | 0 | 577 |
45,062,817 | 2017-07-12T16:09:00.000 | 2 | 0 | 0 | 1 | python,sockets,curses | 45,063,103 | 1 | true | 0 | 0 | This probably won't work well. curses has to know what sort of terminal (or terminal emulator, these days) it's talking to, in order to choose the appropriate control characters for working with it. If you simply redirect stdin/stdout, it's going to have no way of knowing what's at the other end of the connection.
The normal way of doing something like this is to leave the program's stdin/stdout alone, and just run it over a remote login. The remote access software (telnet, ssh, or whatever) will take care of identifying the remote terminal type, and letting the program know about it via environment variables. | 1 | 2 | 0 | So I've decided to learn Python and after getting a handle on the basic syntax of the language, decided to write a "practice" program that utilizes various modules.
I have a basic curses interface made already, but before I get too far I want to make sure that I can redirect standard input and output over a network connection. In effect, I want to be able to "serve" this curses application over a TCP/IP connection.
Is this possible and if so, how can I redirect the input and output of curses over a network socket? | Python3 - curses input/output over a network socket? | 1.2 | 0 | 1 | 779 |
45,063,425 | 2017-07-12T16:45:00.000 | 2 | 0 | 0 | 0 | python,pandas | 45,063,449 | 1 | false | 0 | 0 | Use df['your column name'].value_counts()['your value name']. | 1 | 0 | 1 | I have a pandas df with 5 columns, one of them being State. I want to find the number of times each state appears in the State column. I'm guessing I might need to use groupby, but I haven't been able to figure out the exact command. | Using pandas, how can I return the number of times an element appears in a column? | 0.379949 | 0 | 0 | 354 |
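A tiny demonstration of the answer's one-liner on toy data:

import pandas as pd

df = pd.DataFrame({"State": ["NY", "CA", "NY", "TX"]})
print(df["State"].value_counts())        # counts for every state
print(df["State"].value_counts()["NY"])  # count for a single state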
45,064,690 | 2017-07-12T18:02:00.000 | 0 | 0 | 1 | 0 | python,garbage-collection | 45,065,084 | 1 | false | 0 | 0 | The way this is handled is dependent on the python implementation. The reference implementation, the one you're probably using, is sometimes called CPython, because it is written in C.
CPython uses reference counting to clean up objects which are obviously no longer used. However, every once in a while, it pauses execution of the program, and begins with the objects directly referenced by variables alive in the program. Then, it follows all references as long as it can, marking which objects have been visited. Once it has followed all references, it finds all the objects which aren't reachable from the main program, and deletes them. This is called tracing garbage collection, of which mark and sweep is a particular implementation.
If you want, and you're sure your program has no circular references, you can turn this feature off to improve performance. If you have circular references, however, you'll accidentally cause memory leaks, so it's usually not worth doing unless you're really worried about performance. | 1 | 0 | 0 | I know that python uses reference counting for garbage collection.
Every object that is allocated on the heap has a counter that counts the number of objects that refer to it; when the counter hits zero, the object is deleted.
But how does python handle circular references?
If one of them is deleted, the second still has a reference count of 1 but needs to be deleted. | how python handle with circle on GC? | 0 | 0 | 0 | 72
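A small demonstration of the cyclic collector described in the answer:

import gc

class Node:
    pass

a, b = Node(), Node()
a.other, b.other = b, a  # reference cycle: refcounts never drop to zero on their own
del a, b
print(gc.collect())      # the tracing collector finds and frees the unreachable cycle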
45,065,232 | 2017-07-12T18:34:00.000 | 0 | 0 | 0 | 0 | python,django,django-filter | 45,172,083 | 1 | false | 1 | 0 | Filters actually construct an underlying Django form Field in order to perform input validation, rendering, etc... Arguments that the filter does not expect are passed to the form field's constructor. If you want to use a plain text input, then you can simply pass the appropriate widget to the Filter. | 1 | 0 | 0 | I am using Django_filters to create filters. I have to use ModelChoiceFilter due to foreign key in models. The filter automatically returns a drop down list, is there a way to display a text input box instead of drop down list when I use ModelChoiceFilter? Thanks
code:
GPA = django_filters.ModelChoiceFilter(name='persontoschool__GPA', queryset=PersonToSchool.objects.values_list('GPA',flat=True).distinct(),to_field_name='GPA', lookup_expr='gte') | Django_filters ModelChoiceFilter display text input | 0 | 0 | 0 | 306 |
45,068,082 | 2017-07-12T21:50:00.000 | 3 | 0 | 1 | 1 | python,anaconda | 45,068,095 | 2 | false | 0 | 0 | Found a solution
This is what to do:
run: $ source activate root and then
$ anaconda-navigator | 1 | 2 | 0 | Trying to start Anaconda navigator in Linux. Getting this error:
byte indices must be integers or slices, not str | Anaconda-navigator: byte indices must be integers or slices, not str | 0.291313 | 0 | 0 | 1,155 |
45,069,855 | 2017-07-13T01:07:00.000 | 0 | 0 | 0 | 1 | python,operating-system | 45,069,902 | 3 | false | 0 | 0 | Have you tried os.system(‘\r\n’)? I think that’s the newline character on windows.
Edit: Your answer also used a forward slash instead of a backslash--definitely try the other way too, unless that’s just a typo. | 2 | 4 | 0 | I need to use an old-fashioned DOS/Windows executable (the source is not available). It uses two input files and produces one output file.
I have to run this several thousands times, using different input files. I wrote a simple python script looping over input files to automate this.
The problem is that this exe finishes every single run with the immortal "press Enter".
I start the script, keep the key pressed, 'returns' accumulate in the buffer and the script runs for a while producing several outputs.
Is there any more elegant way to proceed (i.e. without using the finger and staring at the monitor)?
I have already tried some obvious solutions (e.g. os.system('return'), os.system('\n')) but they do not work.
Next day edit:
@Eric, many thanks for the code, it works. I also thank others who contributed, and sorry for the sloppily written question and unformatted code in the comment (it was 3.30 am :) | Python (os): how to simulate pressing Enter while executing an external application | 0 | 0 | 0 | 4,732
45,069,855 | 2017-07-13T01:07:00.000 | 1 | 0 | 0 | 1 | python,operating-system | 45,070,691 | 3 | false | 0 | 0 | Use Python's subprocess module and run your executable with Popen.
Then you can send "enter" to the process with communicate. | 2 | 4 | 0 | I need to use an old-fashioned DOS/Windows executable (the source is not available). It uses two input files and produces one output file.
I have to run this several thousands times, using different input files. I wrote a simple python script looping over input files to automate this.
The problem is that this exe finishes every single run with the immortal "press Enter".
I start the script, keep the key pressed, 'returns' accumulate in the buffer and the script runs for a while producing several outputs.
Is there any more elegant way to proceed (i.e. without using the finger and staring at the monitor)?
I have already tried some obvious solutions (e.g. os.system('return'), os.system('\n')) but they do not work.
Next day edit:
@Eric, many thanks for the code, it works. I also thank others who contributed, and sorry for the sloppily written question and unformatted code in the comment (it was 3.30 am :) | Python (os): how to simulate pressing Enter while executing an external application | 0.066568 | 0 | 0 | 4,732
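A minimal sketch of the subprocess answer above; the executable and input file names are placeholders.

import subprocess

p = subprocess.Popen(["old_tool.exe", "input1.dat", "input2.dat"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = p.communicate(input=b"\r\n")  # feeds the final "press Enter"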
45,070,186 | 2017-07-13T01:49:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,text-classification,multilabel-classification | 45,070,245 | 2 | false | 0 | 0 | Not sure what you want.
If the point is to use just company names, maybe break names into syllables/phonemes, and train on that data.
If the point is to use Word2Vec, I'd recommend pulling the Wikipedia page for each company (easier to automate than an 'about me'). | 1 | 2 | 1 | What I'm trying to do is to ask the user to input a company name, for example Microsoft, and be able to predict that it is in the Computer Software industry. I have around 150 000 names and 60+ industries. Some of the names are not English company names.
I have tried training a Word2Vec model using Gensim based on company names only and averaged up the word vectors before feeding it into SKlearn's logistic regression but had terrible results. My questions are:
Has anyone tried these kind of tasks? Googling on short text classification shows me results on classifying short sentences instead of pure names. If anyone had tried this before, mind sharing a few keywords or research papers regarding this task?
Would it be better if I have a brief description for each company instead of only using their names? How much would it help for my Word2Vec model rather than using only the company names? | Machine learning to classify company names to their industries | 0 | 0 | 0 | 2,136 |
45,071,567 | 2017-07-13T04:43:00.000 | 1 | 1 | 0 | 0 | python,grpc | 70,484,074 | 3 | false | 0 | 0 | If your metadata has a single key/value pair you can use a list (e.g. [(key, value)]). If your metadata has multiple key/value pairs you should use a list (e.g. [(key1, value1), (key2, value2)]) or a tuple (e.g. ((key1, value1), (key2, value2))). | 1 | 9 | 0 | I want to know how to send custom header (or metadata) using Python gRPC. I looked into documents and I couldn't find anything. | How to send custom header (metadata) with Python gRPC? | 0 | 0 | 1 | 12,988
45,075,568 | 2017-07-13T08:40:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 45,084,990 | 1 | false | 0 | 0 | During training, restoring from checkpoints, etc, you want to use the saver. You only want to use the saved model if you're loading your exported model for inference. | 1 | 4 | 1 | I'd like to know the differences between tf.train.Saver().restore() and
tf.saved_model.loader().
As far as I know, tf.train.Saver().restore() restores the previously saved variables from the checkpoint file; and tf.saved_model.loader() loads the graph def from the pb file.
But I have no idea when I should choose restore() and when loader(). | what the differences between tf.train.Saver().restore() and tf.saved_model.loader | 0 | 0 | 0 | 323
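A hedged TensorFlow 1.x sketch of the split the answer describes: the Saver for checkpoints during training, the saved_model loader only for serving an exported model. Paths and the toy variable are placeholders.

import tensorflow as tf

w = tf.Variable(tf.zeros([1]), name="w")
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "/tmp/model.ckpt")     # checkpoint while training
    saver.restore(sess, "/tmp/model.ckpt")  # resume from the checkpoint

# For inference on an exported SavedModel you would instead do something like:
# tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], "export_dir")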
45,082,883 | 2017-07-13T13:58:00.000 | 3 | 0 | 1 | 0 | python,django,virtualenv | 45,082,989 | 2 | true | 1 | 0 | Don't do this. OneDrive - and similar systems like Dropbox - are meant for sharing documents. They are not meant for code, and even less for installed libraries.
Store your code in a version control system like git, and push it up regularly to a host like Github. Then on each of your computers, clone the repo and install the dependencies locally inside a virtualenv. | 1 | 0 | 0 | I have my project stored on OneDrive. It sometimes works on my pc and laptop both of which have Windows 10. The project on both is in the same directory- C:/OneDrive/code/etc...
When I use virtualenv and download different packages, it works fine, but when I use my laptop nothing works at all (same applies the other way around). I get the following error:
Could not import runpy module ImportError:
No module named 'runpy'
What can I do to fix this problem on my laptop and PC? Anyone experiencing a similar issue? | Python virtualenv can`t work through OneDrive | 1.2 | 0 | 0 | 2,283 |
45,085,345 | 2017-07-13T15:36:00.000 | 6 | 0 | 1 | 0 | python,datetime,timezone | 45,085,371 | 2 | false | 0 | 0 | How about making a copy of the column (say, a Series called time_series) and removing the timezone using time_series = time_series.apply(lambda d: d.replace(tzinfo=None))? | 1 | 2 | 0 | How could I remove Timezone information from my time column without converting it into any other timezone when column datatype is object:
Sat Jun 10 2017 22:50:45 GMT+0300 (IDT)
Request result: 2017-6-10 22:50:45
Request dtype: datetime64[ns] | Remove Timezone info | 1 | 0 | 0 | 8,571 |