Dataset columns (type and value/length range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
45,087,840
2017-07-13T17:53:00.000
0
0
0
0
python,activemq,qpid,jms-topic
45,198,167
2
false
0
0
Found out that the issue was: in the on_start method we have to use event.container.create_receiver(), and the URL has to be in the format topic://
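A minimal sketch of what that looks like with the Qpid Proton reactive API (assuming the proton package; the broker URL, credentials and topic name are placeholders):

```python
from proton.handlers import MessagingHandler
from proton.reactor import Container

class TopicReceiver(MessagingHandler):
    def __init__(self, url, address):
        super(TopicReceiver, self).__init__()
        self.url = url
        self.address = address

    def on_start(self, event):
        conn = event.container.connect(self.url)
        # The topic:// prefix makes ActiveMQ bind to a topic instead of
        # auto-creating a queue with that name
        event.container.create_receiver(conn, source=self.address)

    def on_message(self, event):
        print(event.message.body)

# Placeholder credentials, host and topic name
Container(TopicReceiver("amqp://username:password@hostname:5672", "topic://topicName")).run()
```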
2
0
0
Using Qpid for Python, I am using the Container to connect to ActiveMQ with the connector URL as: username:password@hostname:5672/topicName. In the web console I can see that the AMQP connection is up, but instead of subscribing to the existing topic it creates a new queue with that name. Can someone help me with the format that has to be given to subscribe to a topic? Or if I am missing something, please point me in the right direction. Thank you.
Qpid for Python: not able to subscribe to a topic
0
0
1
278
45,087,840
2017-07-13T17:53:00.000
0
0
0
0
python,activemq,qpid,jms-topic
45,104,859
2
false
0
0
I'm not entirely sure of the Qpid for python URI syntax but from the ActiveMQ side a destination is addressed directly by using a destination prefix. For a topic the prefix is topic:// and for queue it is queue:// unsurprisingly. In the absence of a prefix the broker defaults the address in question to a Queue type as that is generally the preference. So to fix your issue you need to construct a URI that uses the correct prefix which in your case would be something using topic://some-name and that should get you the results you expect.
2
0
0
Using Qpid for Python, I am using the Container to connect to ActiveMQ with the connector URL as: username:password@hostname:5672/topicName. In the web console I can see that the AMQP connection is up, but instead of subscribing to the existing topic it creates a new queue with that name. Can someone help me with the format that has to be given to subscribe to a topic? Or if I am missing something, please point me in the right direction. Thank you.
Qpid for Python: not able to subscribe to a topic
0
0
1
278
45,088,254
2017-07-13T18:17:00.000
0
0
1
0
python,django,localhost
45,088,312
2
false
1
0
You started a new project and copied the settings.py from another project? If so, just update your database and install the required packages with pip. To update the database, run python manage.py makemigrations and then python manage.py migrate.
1
1
0
I started a new project in Django, but the local environment settings come from the previous project. How can I reset the local environment settings? Thank you.
Django Local Environment Settings
0
0
0
1,354
45,089,263
2017-07-13T19:15:00.000
1
0
0
0
python,bots,discord
45,089,303
1
false
0
0
You are doing a comprehension that results in a generator. You can probably fix it by doing len([s for s in self.servers]). EDIT: a generator is an object that does not hold its elements in memory, but you can still loop over it. Since it doesn't create a list whose length could be queried, you can't call len() on it.
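A small self-contained illustration of the difference (a plain list stands in for self.servers):

```python
servers = ["guild-a", "guild-b", "guild-c"]      # stand-in for self.servers

gen = ([s] for s in servers)                     # a generator: len(gen) raises TypeError

guild_count = len([s for s in servers])          # materialize a list first, then len() works
# or count without building an intermediate list:
guild_count = sum(1 for _ in servers)
print(guild_count)                               # 3
```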
1
0
0
Attempting to count how many servers/guilds my bot is in. I've checked a few forums before, and it seems that to do it I need to use len(). I tried it with the following command: Guilds = len([s] for s in self.servers) When doing it, I get the following error: "TypeError: object of type 'generator' has no len()" I'm not sure what I'm doing wrong. Could someone help me?
Counting a Discord bot's guilds (in Python)?
0.197375
0
1
706
45,089,805
2017-07-13T19:50:00.000
0
0
1
0
python,windows,pip
71,182,264
2
false
0
0
I had the same problem on Fusion, which was resolved by upgrading pip.
1
8
0
I get the following error when running pip install cryptography: build\temp.win32-2.7\Release\_openssl.c(434) : fatal error C1083: Cannot open include file: 'openssl/opensslv.h': No such file or directory I'm running Windows 10, 64-bit, with Python 2.7. I'm trying to install cryptography 1.9.
Pip install cryptography on Windows
0
0
0
15,008
45,090,277
2017-07-13T20:23:00.000
2
0
1
1
python
45,090,303
2
false
0
0
You cannot do that. Child processes inherit environment of parent processes by default, but the opposite is not possible. Indeed, after the child process has started, the memory of all other processes is protected from it (segregation of context execution). This means it cannot modify the state (memory) of another process. At best it can send IPC signals... More technical note: you need to debug a process (such as with gdb) in order to control its internal state. Debuggers use special kernel APIs to read and write memory or control execution of other processes.
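A short demonstration of the inheritance direction described above (a child process sees the variable, but the parent shell never does):

```python
import os
import subprocess
import sys

os.environ["DEMO_VAR"] = "42"        # set in this Python process only

# A child process inherits the variable from its parent...
subprocess.run([sys.executable, "-c", "import os; print(os.environ['DEMO_VAR'])"])

# ...but once this script exits, the shell that launched it still has no DEMO_VAR,
# because a process can never modify its parent's environment.
```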
1
1
0
Is there a way within Python to set an OS environment variable that lives on after the Python script has ended? So if I assign a variable within the Python script and the script ends, I want it to be available once I run "printenv" in the terminal. I've tried the sh library and os.system, but once the program finishes that variable is not available via "printenv".
Set OS env var via python
0.197375
0
0
264
45,090,427
2017-07-13T20:33:00.000
0
0
0
0
xml,python-pptx
45,090,679
1
false
0
0
The short answer is no, you cannot determine the rendered height of a table you have just created with python-pptx. This is because any height adjustments for "fit" are made by the PowerPoint rendering engine, which is not available until you open the file with PowerPoint. You may be able to use the approach you mention (adding up row heights), but it would only be accurate after opening the presentation with PowerPoint and possibly visiting that slide and perhaps making a small adjustment to the table. You'll have to experiment to see what, if anything, causes PowerPoint to rewrite the table row heights to their rendered values. Note that this behavior may be different between PowerPoint and LibreOffice.
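A minimal sketch of the row-height-summing approach mentioned above, assuming table is the graphicframe.table object from the question; keep in mind these are the heights python-pptx stored, not necessarily what PowerPoint will render:

```python
from pptx.util import Emu

# Sum the <a:tr h="..."> values exposed as row.height (EMU units)
total_height = Emu(sum(row.height for row in table.rows))
print(total_height.inches)
```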
1
0
0
I am working with a table and would like to query its final height after it has been populated with values. This way I will be able to adjust its position on the slide (graphicframe.top) according to how large it is. I add the table with these lines: graphicframe = shapes.add_table(rows = 41, cols = 5, left = Inches(0), top = Inches(2), width = Inches(5.7), height = Inches(4)) table = graphicframe.table And then loop through its cells to populate them individually. The table height changes to accommodate the input (for example: text may wrap on to 2 lines, increasing the height, a large number of rows may push the table to be taller than the height I originally specified, etc). graphicframe.height returns the original height which I inputted to add_table, and not the final height after data population. table.rows[i].height does the same. I found what looks to be the height of each row in the XML file, and those also correspond to the original height from add_table, not the empirical final height of the table after data population. From the XML file, there are 41 lines that specify height h, corresponding to the 41 rows in my table. They look like this: < a:tr h="89209" > ...< /a:tr > Do you know if the empirical height of my table (after it is populated) is recorded anywhere?? Thank you!!
Find height of table after creating & populating it in python-pptx
0
0
0
349
45,090,562
2017-07-13T20:43:00.000
0
0
0
0
python,bokeh,glyph
45,105,205
1
false
0
0
In order to avoid tripling (or quintupling) memory usage in the browser, Bokeh only supports setting "single values" for non-selection colors and alphas. That is, non-selection properties can't be vectorized by pointing them at a ColumnDataSource column. So there are only two options I can think of: Split the glyphs into different groups, each with a different nonselection_color. This might be feasible if you only have a few groups. Of course, now you have to partition your data to have e.g. five calls to p.circle instead of one, but it would entirely avoid JS. Use a tiny amount of JavaScript in a CustomJS callback. You can have an additional column in the CDS that provides the non-selected colors. When a selection happens, the CustomJS callback switches the glyph's normal color field to point to the other column, and when a selection is cleared, changes it back to the "normal" field.
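A rough sketch of the first option (per-group non-selection styling; the data and colours are placeholders, and this dims each group uniformly rather than by distance to the selected point, which would need the CustomJS route):

```python
from bokeh.plotting import figure, show

p = figure(tools="tap")

# Two hypothetical groups of stations, each with its own dimmed appearance
# whenever something else is selected
p.circle([1, 2, 3], [1, 2, 3], size=15, color="navy",
         nonselection_fill_color="gray", nonselection_fill_alpha=0.8)
p.circle([1, 2, 3], [3, 2, 1], size=15, color="firebrick",
         nonselection_fill_color="gray", nonselection_fill_alpha=0.2)

show(p)
```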
1
0
1
I have a glyph that's a series of Circles. I want to click on one point and change the colour/alpha of the unselected glyphs such that each unselected glyph has a custom colour based on its relationship with the selected point. For example, I'd want the points closest to the selected point to have alpha near 1 and the furthest to have alpha near 0. I've seen other questions where the unselected glyphs have different alphas, but the alphas are independent of what is selected. Is it possible to do this without using JavaScript? Edited for more details: The specific dataset I'm working on is a dataset of a bike sharing system, with data on trips made between specific stations. When I click on a specific station, I want to show the destination stations to which users go when they start from the selected station. For n stations, the data thus has an n * n format: for each station, we have the probability of going to every other station. Ideally, this probability will be the alpha of the unselected stations, such that the most popular destinations would have alpha near 1 and the less popular ones alpha near 0.
Visual properties of unselected glyphs in Bokeh based on what is selected
0
0
0
195
45,091,955
2017-07-13T22:36:00.000
1
0
0
0
python,flask
45,095,487
2
true
1
0
When, on every request that your Flask app receives for a CRUD change, you make a request to the node server, you have effectively created a webhook (one server requesting or posting to another). You may want to offload this to a background thread or a job system like beanstalkd, giving you asynchronous webhook calls. If you also want the page you monitor to update, you might be interested in WebSockets.
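A minimal sketch of that idea with Flask and requests (the dashboard URL, route and payload are placeholders; a real job queue would replace the bare thread):

```python
import threading

import requests
from flask import Flask

app = Flask(__name__)
DASHBOARD_HOOK = "http://localhost:3000/api/db-changed"   # hypothetical node endpoint

def notify_dashboard(payload):
    # Fire the webhook off the request thread so the CRUD response isn't delayed
    threading.Thread(target=requests.post,
                     args=(DASHBOARD_HOOK,),
                     kwargs={"json": payload}).start()

@app.route("/records", methods=["POST"])
def create_record():
    # ... insert the record into the database here ...
    notify_dashboard({"event": "created"})
    return "", 201
```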
1
1
0
I have a chrome extension ingesting data from various web pages I visit and storing it to a database (Python/Flask) I also have a dashboard visualizing that database (using react-create-app node/react/redux). I want the dashboard to be automatically updated every time I add/delete/modify a record in the database. From what I understand that is specifically what a webhook is for. What I want to do is create a "listener" on the database so that every time a change is made, it will fire off a request to the node server. A few things: 1.) How do I create "something" to listen for changes in a database? 2.) Normally my webpage initiates a web request and listens for data in the call back. How do I structure it so it just "listens" for new updates?
Creating a webhook in Flask
1.2
0
0
2,099
45,092,884
2017-07-14T00:37:00.000
1
1
0
0
python,amazon-web-services,aws-lambda,alexa,alexa-skills-kit
45,093,014
1
false
1
0
You can use the session object to save data, for example, save the state of the conversation.
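A hypothetical intent handler showing how the session attributes round-trip (the handler and attribute names are made up; the response dict follows the Alexa Skills Kit JSON format):

```python
def on_intent(intent_request, session):
    # `session` is part of the JSON Alexa sends with every request; its
    # 'attributes' dict is echoed back and forth for the life of the conversation
    session_attributes = session.get('attributes', {})
    session_attributes['last_intent'] = intent_request['intent']['name']

    return {
        'version': '1.0',
        'sessionAttributes': session_attributes,   # Alexa returns these on the next turn
        'response': {
            'outputSpeech': {'type': 'PlainText', 'text': 'Okay.'},
            'shouldEndSession': False,
        },
    }
```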
1
0
0
When I read the code to make an Alexa skill in python, I am confused by session. Can you tell me what session means in the function? (session_attribute, or session.get('attributes', {})) Thank you
What does session mean in the function for Alexa skills?
0.197375
0
0
103
45,096,004
2017-07-14T06:20:00.000
0
0
0
0
macos,python-3.x,pyqt5,cx-freeze,py2app
45,117,297
2
true
0
1
Most of the default styling of Qt widgets is derived from the OS or the interface style that you are using. Try changing some style sheet properties to get the desired layout; since there is no code in the question, I can't pinpoint exactly what needs to change.
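A tiny sketch of overriding platform styling with an explicit style sheet (the widget and the properties are arbitrary examples):

```python
import sys

from PyQt5.QtWidgets import QApplication, QPushButton

app = QApplication(sys.argv)

btn = QPushButton("OK")
# An explicit style sheet overrides platform-native metrics, so the widget
# is rendered the same way on Windows and macOS
btn.setStyleSheet("QPushButton { min-width: 120px; padding: 6px; font-size: 13px; }")
btn.show()

sys.exit(app.exec_())
```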
1
0
0
I have developed an application with PyQt5 and it works fine on Windows, but when I run this app on macOS its graphics are not right: the layout of the buttons, labels and other widgets is not shown properly. I created the app in PyQt5 and used cx_Freeze to make executables for Windows as well as macOS. I also tried py2app, and on the Mac side my application is still not displaying correctly.
My PyQt5 application works fine on Windows, but on Mac the graphics do not render the same
1.2
0
0
1,149
45,098,004
2017-07-14T08:14:00.000
0
0
0
0
python,amazon-web-services,amazon-sqs,amazon-aurora
45,124,304
3
false
1
0
There isn't any built-in integration that allows SQS to interact with Aurora. Obviously you can do this externally, with a queue consumer that reads from the queue and invokes the procedures, but that doesn't appear to be relevant, here.
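A sketch of the external queue consumer described above, using boto3 and pymysql (the queue URL, connection details and procedure name are placeholders standing in for the stored procedure from the question):

```python
import boto3
import pymysql

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/export-jobs"  # placeholder

sqs = boto3.client("sqs")
conn = pymysql.connect(host="aurora-endpoint", user="user", password="pw", db="mydb")

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        table_name = msg["Body"]                 # message body names the table to export
        with conn.cursor() as cur:
            # call the stored procedure that exports this table to S3
            cur.callproc("export_table_to_s3", (table_name,))
        conn.commit()
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```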
2
0
0
I am trying to export data from Aurora into S3, and I have created a stored procedure to perform this action. I can schedule it on the Aurora scheduler to run at a particular point in time. However, I have multiple tables (it could go up to 100), so I want my process controller, which is a Python script sitting in Lambda, to send a queue message; based on this queue message the stored procedure in Aurora will be started. I am looking at this for the following reasons: I do not want too much time lag between starting two exports, and I also do not want two exports overlapping in execution time.
Does anyone know if we can start a stored procedure in Aurora based on SQS
0
1
0
679
45,098,004
2017-07-14T08:14:00.000
0
0
0
0
python,amazon-web-services,amazon-sqs,amazon-aurora
55,167,030
3
false
1
0
I have used Lambda with the alembic package to create schemas and structures. I know we could create users and execute other database commands, and in the same way execute a stored procedure. Lambda could prove to be expensive; we could probably use a container to do it instead.
2
0
0
I am trying to export data from Aurora into S3, and I have created a stored procedure to perform this action. I can schedule it on the Aurora scheduler to run at a particular point in time. However, I have multiple tables (it could go up to 100), so I want my process controller, which is a Python script sitting in Lambda, to send a queue message; based on this queue message the stored procedure in Aurora will be started. I am looking at this for the following reasons: I do not want too much time lag between starting two exports, and I also do not want two exports overlapping in execution time.
Does anyone know if we can start a stored procedure in Aurora based on SQS
0
1
0
679
45,102,833
2017-07-14T12:18:00.000
0
0
0
0
python,pdf,adobe,photoshop,acrobat
45,153,829
1
false
1
0
Max Wyss has the best suggestion; using Acrobat's Print Production Tools. Specifically you want to use the 'Convert Colors' Tool. The key is the 'Conversion Attributes'. Convert Command: Convert to Profile Conversion Profile: (Choose from:) Dot Gain ??% or Gray Gamma ?.? Rendering Intent: for a gray-scale conversion like this, I lean toward 'Perceptual', but it's not a strong preference. You might want to check the 'preserve black' option. Test your settings with a small page range before doing the entire document.
1
0
0
So I have this 700 page pdf that's in black, white, and hot pink. The hot pink is both text and non-text (design elements). Is there any way to edit that one color on all the pages at once? I can do it element-by-element on adobe acrobat and page-by-page in photoshop, but those methods just seem impossibly tedious. I don't know much about coding but can maybe pull together something on python if I'm given great instructions. Thanks!
PDF: Change 1 color on all pages?
0
0
0
982
45,104,190
2017-07-14T13:25:00.000
1
0
0
0
python,django,unit-testing
45,110,882
1
false
1
0
There are a couple of different ways you could handle this: Stub out the loading of the Elasticsearch dependency dynamically using unittest.mock.patch. Create some sort of "seam" into your view that allows the test to import the view and replace the Elasticsearch class with a test implementation. Define the Elasticsearch class in settings as a module path and switch it out for the test runs (Django actually does this with a lot of its dependencies).
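A minimal sketch of the first approach; the patched module path ("myapp.views.Elasticsearch"), the form URL and the expected redirect are hypothetical and must match how the view actually imports and uses the client:

```python
from unittest import mock

from django.test import TestCase

class SubmitFormViewTests(TestCase):
    @mock.patch("myapp.views.Elasticsearch")   # patch where the view looks it up
    def test_form_submission_indexes_document(self, MockES):
        response = self.client.post("/submit/", {"title": "test"})

        self.assertEqual(response.status_code, 302)
        # The view should have indexed exactly one document, and the real
        # development index was never touched
        MockES.return_value.index.assert_called_once()
```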
1
0
0
I have a Django view where a user fills out and submits a form. The data is taken from this form and a document is created in an Elasticsearch index. My question, is how can I test this view without impacting my Elasticsearch index? It is a development index, but I would prefer not to muddle it with unit test data. One option would be to create the record in the unit test and then delete it during tear down - but, if possible, I would really like to avoid touching the index at all. Are there any other options?
Django View Unit Tests when the View Accesses an Elasticsearch Index
0.197375
0
0
181
45,104,994
2017-07-14T14:05:00.000
0
0
1
0
python-3.x,matplotlib,statistics,histogram
45,105,114
2
false
0
0
As you pointed out, len(set(list)) is the number of unique values for the "delivery days" variable. This is not the same thing as the bin size; it's the number of distinct bins. I would use "bin size" to describe the number of items in one bin; "bin count" would be a better name for the number of bins. If you want to generate a histogram, supposing the original list of days is called days_list, a quick high-level approach is: Make a new set unique_days = set(days_list) Iterate over each value day in unique_days For the current day, set the height of the bar (or size of the bin) in the histogram to be equal to days_list.count(day). This will tell you the number of times the current "day" value for number of delivery days appeared in the days_list list of delivery times. Does this make sense? If the problem is not that you're manually calculating the histogram wrong but that pyplot is doing something wrong, it would help if you included some code for how you are using pyplot.
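A small sketch of one-bin-per-day binning with matplotlib (the short days_list here stands in for the real 20,000-entry list):

```python
import matplotlib.pyplot as plt
import numpy as np

days_list = [0, 0, 1, 2, 2, 2, 5, 7]     # stand-in for the real delivery-day data

# Put bin edges at -0.5, 0.5, 1.5, ... so every integer day gets its own bar
edges = np.arange(min(days_list) - 0.5, max(days_list) + 1.5, 1)
counts, _, _ = plt.hist(days_list, bins=edges)

# counts[0] now equals days_list.count(0)
plt.xlabel("Delivery days")
plt.ylabel("Frequency")
plt.show()
```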
1
0
0
I have this list of delivery times in days for cars that are 0 years old. The list contains nearly 20,000 delivery days, with many days being repeated. My question is how do I get the histogram to show bins with a width of 1 day. I have set the number of bins to the number of unique delivery days via len(set(list)), but when I generate the histogram the frequency of 0 delivery days is over 5000, whereas when I do list.count(0) it returns 4500.
Histogram bin size equal to 1 day - pyplot
0
0
0
1,029
45,106,240
2017-07-14T15:06:00.000
1
0
0
0
python,numpy
45,106,390
1
true
0
0
You can check the dtype, or iterate through and check whether the values are not in the set {True, False}, as well as checking whether the values are not in the set {0, 1}. Boolean masks must be the same shape as the array they are intended to index into, so that's another check. But there's no hard and fast way to distinguish a priori whether an array consisting only of values in {0, 1} is one or the other without additional knowledge.
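A minimal sketch of the dtype/shape checks (the helper name is arbitrary):

```python
import numpy as np

def looks_like_boolean_mask(idx, target):
    idx = np.asarray(idx)
    # dtype is the strongest signal; a valid mask must also match the target's shape
    return idx.dtype == np.bool_ and idx.shape == target.shape

a = np.arange(10)
print(looks_like_boolean_mask(a % 2 == 0, a))   # True  (boolean mask)
print(looks_like_boolean_mask([0, 1], a))       # False (integer index array)
```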
1
0
1
If I am given an array of indices but I don't know whether it is a regular index array or a boolean mask, what is the best way to determine which it is?
Best way to differentiate an array of indices vs a boolean mask
1.2
0
0
57
45,106,431
2017-07-14T15:15:00.000
1
0
0
0
python,pycharm
69,032,680
2
false
0
0
If you put a cursor on the text field just below the displayed dataframe and hit Enter, it'll update itself.
2
17
1
When you select "View as DataFrame" in the variables pane it has a nice spreadsheet like view of the DataFrame. That said, as the DataFrame itself changes, the Data View does not auto update and you need to reclick the View as DataFrame to see it again. Is there a way to make PyCharm autoupdate this? Seems like such a basic feature.
PyCharm: Is there a way to make the "data view" auto update when dataframe is changed?
0.099668
0
0
835
45,106,431
2017-07-14T15:15:00.000
1
0
0
0
python,pycharm
66,154,095
2
false
0
0
Unfortunately, no. The only thing you can do is use 'watches' to watch the variable and open when you want it. It requires a lot of background processing and memory usage to display the dataframe.
2
17
1
When you select "View as DataFrame" in the variables pane it has a nice spreadsheet like view of the DataFrame. That said, as the DataFrame itself changes, the Data View does not auto update and you need to reclick the View as DataFrame to see it again. Is there a way to make PyCharm autoupdate this? Seems like such a basic feature.
PyCharm: Is there a way to make the "data view" auto update when dataframe is changed?
0.099668
0
0
835
45,106,689
2017-07-14T15:28:00.000
0
0
1
0
python,python-2.7,list
45,106,760
6
false
0
0
[(l[i] + l[i + 1]) / 2.0 for i in range(len(l) - 1)] (dividing by 2.0 so that Python 2.7's integer division does not truncate the halves)
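Applied to the list from the question (float division keeps the .5 values under Python 2.7):

```python
l = [0, 2, 3, 5, 6, 7, 9]
midpoints = [(l[i] + l[i + 1]) / 2.0 for i in range(len(l) - 1)]
print(midpoints)   # [1.0, 2.5, 4.0, 5.5, 6.5, 8.0]
```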
1
3
0
I have a list of values: [0,2,3,5,6,7,9] and want to get a list of the numbers in the middle in between each number: [1, 2.5, 4, 5.5, 6.5, 8]. Is there a neat way in python to do that?
Getting list of middle values in between values of list
0
0
0
227
45,107,012
2017-07-14T15:46:00.000
2
0
0
1
python,docker,docker-compose
45,107,280
2
true
0
0
Docker builds the container image from cache if nothing has changed. When it finds a change in a line, it re-executes all the lines from that point on. So if you need to add libraries, just add more lines at the end of the Dockerfile.
1
0
0
I have a docker image that takes about 45 min to build. As I'm working with it, I'm finding that I sometimes need to add Python packages to it for the code I'm working on. I want to be able to install these packages so that they persist. What's the best way to achieve this? G
Best way to enhance a Docker image for small changes that need to be persisted
1.2
0
0
185
45,108,835
2017-07-14T17:39:00.000
0
0
1
0
python,dictionary,python-import
45,108,875
4
false
0
0
Copy paste the script into the shell. Make sure there are no blank lines inside indented blocks or you'll get a SyntaxError.
3
0
0
I have a little script containing some "dictionaries". Is there any way I can take this script and import its contents into IDLE, using a command like import? What I want to do then is to edit (or view) these dictionaries "live" in IDLE.
Python: Is there any way I can take a script and import its contents into IDLE in order to edit them there?
0
0
0
57
45,108,835
2017-07-14T17:39:00.000
0
0
1
0
python,dictionary,python-import
45,108,892
4
false
0
0
Yes, import command should work: Just put "import myScript". You can add a path to your script with a line like sys.path.append(r"D:\myPath") before that statement
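A small sketch of the import route (the folder path, module name and dictionary name are placeholders):

```python
# In the IDLE shell, assuming the script is saved as myScript.py and defines a
# dictionary called my_dict (both names are hypothetical)
import sys
sys.path.append(r"D:\myPath")       # folder that contains myScript.py

import myScript

print(myScript.my_dict)             # view the dictionary "live"
myScript.my_dict["new_key"] = 1     # edit it in the running session
```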
3
0
0
I have a little script containing some "dictionaries". Is there any way I can take this script and import its contents into IDLE, using a command like import? What I want to do then is to edit (or view) these dictionaries "live" in IDLE.
Python: Is there any way I can take a script and import its contents into IDLE in order to edit them there?
0
0
0
57
45,108,835
2017-07-14T17:39:00.000
0
0
1
0
python,dictionary,python-import
45,108,900
4
false
0
0
If you are on Windows, right-click on the script > Open with > choose your IDLE if it's there. If not, click on "More" (not sure exactly what it's named; it's the last option) and there should be some more options to open the file. If your IDLE is still not there, you can browse your files, go to the folder where your IDLE is saved, and choose the exe to open the script in IDLE.
3
0
0
I have a little script containing some "dictionaries". Is there any way I can take this script and import its contents into IDLE, using a command like import? What I want to do then is to edit (or view) these dictionaries "live" in IDLE.
Python: Is there any way I can take a script and import its contents into IDLE in order to edit them there?
0
0
0
57
45,110,781
2017-07-14T20:02:00.000
0
0
0
1
python,airflow
45,120,003
1
false
1
0
If you use trigger_dag (through the command line) or the TriggerDagRunOperator (from another DAG) to trigger a DAG, you can pass any JSON as the payload. Another option would be to store the argument list in a file, and store that file location as a Variable. The DAG can then pass this file location to a PythonOperator, and the operator can handle reading that file and parsing arguments from it. If neither of these solutions works for your use case, giving more details about your DAG and the kind of arguments might help.
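A sketch of the payload route on Airflow 1.x (the dag object, task name and JSON keys are placeholders): a payload supplied at trigger time, e.g. airflow trigger_dag -c '{"table": "users"}' my_dag, arrives in the task context as dag_run.conf.

```python
from airflow.operators.python_operator import PythonOperator

def my_task(**context):
    conf = context["dag_run"].conf or {}        # JSON supplied when the DAG was triggered
    table = conf.get("table", "default_table")  # hypothetical parameter
    print("Processing", table)

task = PythonOperator(
    task_id="my_task",
    python_callable=my_task,
    provide_context=True,    # Airflow 1.x: pass the context into **context
    dag=dag,                 # assumes a `dag` object defined elsewhere in the file
)
```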
1
0
0
We have a front-end server that is executing dags directly with the dag API (DagBag(), get_dag() and then dag_run()) Dags run fine, the problem is, that we could not find a way to execute such dags with specific arguments. The closest solution was to use the Variable API, which uses set() and get() methods, but these variables are global and might have conflicts when working in concurrent operations that might use same variable names. How could we run a dag and set arguments available to its execution? We are mostly using PythonOperator. Edit 1: Our program is a Python Django front end server. So, we are speaking with Airflow through another Python program. This means we trigger dags through Python, hence, using DagBag.get_dag() to retrieve information from airflow service. run_dag() does not have a way to pass direct parameters though
How to submit parameters to a Python program in Airflow?
0
0
0
1,227
45,110,936
2017-07-14T20:12:00.000
-1
0
0
0
python,heroku
45,110,978
2
false
1
0
This is a very generic error and question without providing any code. But the most I can see is that you're missing a favicon.ico. Just make one and add it to your project root.
2
0
0
I'm getting an Application error when running my python heroku app and the logs show: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=python-blackjack.herokuapp.com request_id=38f050c2-bc7e-499a-8112-ee7d4b66bf0c fwd="90.205.68.255" dyno= connect= service= status=503 bytes= protocol=https does anyone know the issue/s thanks
Heroku web app deployment error
-0.099668
0
0
88
45,110,936
2017-07-14T20:12:00.000
0
0
0
0
python,heroku
45,112,975
2
false
1
0
This most likely has nothing to do with favicon.ico. H10 App Crashed means your server could not start, and every request sent to it is erroring out. I recommend using heroku restart and looking in the logs to see what is causing the server to die on startup.
2
0
0
I'm getting an Application error when running my python heroku app and the logs show: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=python-blackjack.herokuapp.com request_id=38f050c2-bc7e-499a-8112-ee7d4b66bf0c fwd="90.205.68.255" dyno= connect= service= status=503 bytes= protocol=https does anyone know the issue/s thanks
Heroku web app deployment error
0
0
0
88
45,111,640
2017-07-14T21:10:00.000
1
0
0
0
python,python-2.7,statistics
45,115,173
1
false
0
0
I assume fractional Logit in the question refers to using the Logit model to obtain the quasi-maximum likelihood for continuous data within the interval (0, 1) or [0, 1]. The discrete models in statsmodels like GLM, GEE, and Logit, Probit, Poisson and similar in statsmodels.discrete, do not impose an integer condition on the response or endogenous variable. So those models can be used for fractional or positive continuous data. The parameter estimates are consistent if the mean function is correctly specified. However, the covariance for the parameter estimates are not correct under quasi-maximum likelihood. The sandwich covariance is available with the fit argument, cov_type='HC0'. Also available are robust sandwich covariance matrices for cluster robust, panel robust or autocorrelation robust cases. eg. result = sm.Logit(y, x).fit(cov_type='HC0') Given that the likelihood is not assumed to be correctly specified, the reported statistics based on the resulting maximized log-likelihood, i.e. llf, ll_null and likelihood ratio tests are not valid. The only exceptions are multinomial (logit) models which might impose the integer constraint on the explanatory variable, and might or might not work with compositional data. (The support for compositional data with QMLE is still an open question because there are computational advantages to only support the standard cases.)
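A small sketch following the answer's own fit call (random stand-in data; assuming a statsmodels version where Logit accepts a fractional endogenous variable, as the answer states):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.RandomState(0)
X = sm.add_constant(rng.normal(size=(200, 2)))   # design matrix with intercept
y = rng.uniform(0, 1, size=200)                  # fractional response in [0, 1]

# Quasi-MLE via the Logit likelihood with a robust (sandwich) covariance
result = sm.Logit(y, X).fit(cov_type="HC0", disp=0)
print(result.params)
print(result.bse)    # sandwich standard errors
```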
1
0
1
Can anyone let me know what method is used to estimate the parameters of a fractional logit model in the statsmodels package for Python? And can anyone point me to the specific part of the source code for the fractional logit model?
statsmodels fractional logit model
0.197375
0
0
1,645
45,111,731
2017-07-14T21:17:00.000
0
0
0
0
python-2.7,sockets,qpython3
69,265,192
2
false
0
1
HOST = '127.0.0.1' is the loopback address, so this won't work between devices. Instead, use the server's real IP address on the network as the host, and make sure port 5000 is already open on the server.
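A minimal QPython3/Python 3 client sketch (the IP address is a placeholder for the Linux server's LAN address):

```python
import socket

HOST = "192.168.1.50"    # placeholder: the Linux server's LAN IP, not 127.0.0.1
PORT = 5000

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((HOST, PORT))
    # encode() so the Python 2.7 server receives a plain byte string
    s.sendall("hello from android".encode("utf-8"))
```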
1
0
0
Does anyone know how I can send a string over a socket from QPython3 on Android (the client) to Python 2.7 on Linux (the server)? The Python 2.7 Linux server side is fine, I know how to do that, but I don't know how to create the client with QPython3 on Android. Does anyone know? Thanks.
Sending a string via socket from QPython3 on Android (client) to Python 2.7 on Linux (server)
0
0
0
217
45,112,111
2017-07-14T21:53:00.000
2
0
1
1
python,linux,pip,virtualenv,apt-get
45,112,548
3
false
0
0
Are you sure pip install is "failing"? To me, it sounds like the directory to which pip installs executables on your machine is not in your PATH environment variable, so when virtualenv is installed, your computer has no idea where to find it when you just type virtualenv. Find where pip is installing things on your computer, and then check whether the directory where the virtualenv executable is placed is in your PATH variable (e.g. by doing echo $PATH to print your PATH variable). If it's not, you need to update your PATH variable by adding the following to your .bashrc or .bash_profile or similar: export PATH="PATH_TO_WHERE_PIP_PUTS_EXECUTABLES:$PATH"
2
1
0
I am using a form of Lubuntu called GalliumOS (optimized for Chromebooks). I installed pip using $ sudo apt-get install python-pip. I then used pip install --user virtualenv and pip install virtualenv, and then when I tried to subsequently use virtualenv venv I experienced the message bash: virtualenv: command not found. Between the pip installs above, I used pip uninstall virtualenv to get back to square one. The error remained after a reinstall. I read several other posts, but all of them seemed to deal with similar problems on MacOS. One that came close was installing python pip and virtualenv simultaneously. Since I had already installed pip, I didn't think that these quite applied to my issue. Why is pip install virtualenv not working this way on LUbuntu / GalliumOS?
bash: virtualenv: command not found "ON Linux"
0.132549
0
0
4,437
45,112,111
2017-07-14T21:53:00.000
2
0
1
1
python,linux,pip,virtualenv,apt-get
45,114,004
3
true
0
0
What finally worked for me was this: I used $ sudo apt-get install python-virtualenv. I was then able to create a virtual environment using $ virtualenv venv. I was trying to avoid $ sudo pip install virtualenv because of admonitions in other posts not to do this, which I agreed with because of the difficulties I'd run into after doing so before.
2
1
0
I am using a form of Lubuntu called GalliumOS (optimized for Chromebooks). I installed pip using $ sudo apt-get install python-pip. I then used pip install --user virtualenv and pip install virtualenv, and then when I tried to subsequently use virtualenv venv I experienced the message bash: virtualenv: command not found. Between the pip installs above, I used pip uninstall virtualenv to get back to square one. The error remained after a reinstall. I read several other posts, but all of them seemed to deal with similar problems on MacOS. One that came close was installing python pip and virtualenv simultaneously. Since I had already installed pip, I didn't think that these quite applied to my issue. Why is pip install virtualenv not working this way on LUbuntu / GalliumOS?
bash: virtualenv: command not found "ON Linux"
1.2
0
0
4,437
45,112,625
2017-07-14T22:51:00.000
8
0
1
0
python,tabs,pycharm,indentation,spaces
45,112,830
2
false
0
0
There's a setting to make white space visible in: Settings -> Editor -> General -> Appearance -> Show whitespaces
1
13
0
This may sound dumb, but I am still fairly new: is there a clear way to spot whether I have tabs or spaces in the wrong place with PyCharm? Or even to display all tabs and spaces visually? I just spent ages looking for the problem behind an 'invalid syntax' error at the def line of a function. I had thought it might still be some wrong indent, which I did have before, so I checked this painstakingly and found nothing. In the end, a ) was missing from the end of the function before this. I realised the coloured lines on the right-hand side show errors, and got this one from there. Also, I understand you can mix tabs and 4-space indents in PyCharm with no problem? But if you use a tab on one line and 4 spaces on the next with, for example, a simple text editor, Python will say 'no, I'm not going to run this because I'm a strict pedant and this is just too naughty'? Any other common-sense best-practice habits in this area?
How to 'see' / highlight tabs and spaces in PyCharm for checking indentation?
1
0
0
8,641
45,112,983
2017-07-14T23:42:00.000
1
1
0
0
python,security,pyramid,environment,dev-to-production
45,113,051
1
true
1
0
In order to instantiate the server application with that debug feature in the environment, the attacker would have to have control over your web server, most probably with administrative privileges. From an outside process, an attacker cannot modify the environment of the running server, which is loaded into memory, without at least debug capabilities and a good payload for rewriting memory. It would be easier to just reload the server or try executing a script within it. I think you are safe with this approach. If you are paranoid, make sure to remove the backdoor from production builds.
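A minimal sketch of the environment-gated check described in the question (the variable name and helper function are hypothetical; the bcrypt package is assumed):

```python
import os

import bcrypt

# Set from debug.wsgi only; production.wsgi never defines it
ALLOW_ANY_PASSWORD = os.environ.get("AUTH_DEBUG_BACKDOOR") == "1"

def check_password(stored_hash, candidate):
    if ALLOW_ANY_PASSWORD:
        # Debug-only bypass: accept any password for any user
        return True
    return bcrypt.checkpw(candidate.encode("utf-8"), stored_hash)
```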
1
2
0
In my pyramid app it's useful to be able to log in as any user (for test/debug, not in production). My normal login process is just a simple bcrypt check against the hashed password. When replicating user-submitted bug reports I found it useful to just clone the sqlite database and run a simple script which would change everyone's password to a fixed string (just for local testing). Now that I'm switching over to postgresql that's less convenient to do, and I'm thinking of installing a backdoor to my login function. Basically I wish to check os.environ (set from the debug.wsgi file which is loaded by apache through mod_wsgi) for a particular variable 'debug'. If it exists then I will allow login using any password (for any user), bypassing the password check. What are the security implications of this? As I understand it, the wsgi file is sourced once when apache loads up, so if the production.wsgi file does not set that particular variable, what's the likelihood of an attacker (or incompetent user) spoofing it?
Security implications of a pyramid/wsgi os.environ backdoor?
1.2
1
0
64
45,115,582
2017-07-15T07:16:00.000
2
0
0
0
python,keras,prediction
46,335,988
1
true
0
0
@petezurich Thanks for your comment. Calling generator.reset() before model.predict_generator() and turning off shuffling in the generator fixed the problem.
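A sketch of that setup with a Keras 2 style image generator (the directory layout, image size and batch size are placeholders; model is the already-trained model from the question):

```python
from keras.preprocessing.image import ImageDataGenerator

test_gen = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    "data/test",            # placeholder directory with one sub-folder per class
    target_size=(150, 150),
    batch_size=32,
    class_mode="binary",
    shuffle=False,          # keep file order fixed so predictions line up with filenames
)

test_gen.reset()            # rewind to the first batch before predicting
steps = (test_gen.samples + test_gen.batch_size - 1) // test_gen.batch_size
preds = model.predict_generator(test_gen, steps=steps)   # `model` is the trained model
```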
1
10
1
When I use model.predict_generator() on my test set (images) I get one set of predictions, and when I use model.predict() on the same test set I get a different set of predictions. For model.predict_generator I followed these steps to create a generator: ImageDataGenerator (no arguments here) and flow_from_directory with shuffle = False. There are no augmentations or preprocessing of images (normalization, zero-centering etc.) while training the model. I am working on a binary classification problem involving dogs and cats (from Kaggle). On the test set, I have 1000 cat images, and by using model.predict_generator() I get 87% accuracy, i.e. 870 images are classified correctly. But while using model.predict I get 83% accuracy. This is confusing because both should give identical results, right? Thanks in advance :)
difference in predictions between model.predict() and model.predict_generator() in keras
1.2
0
0
5,757
45,117,857
2017-07-15T11:49:00.000
6
0
0
0
python,django,pandas,numpy
45,118,234
1
false
1
0
You can use any framework to do so. If you have worked with Python before, I can recommend using Django, since you then have the same (clear Python) syntax throughout your project. This is good because you keep the same logic everywhere, but it should not be your major concern when it comes to choosing the right framework for your needs. So, for example, if you are a top Ruby on Rails developer, I would not suggest learning Django just because of pandas. In general: a lot of packages/libraries are written in other languages, but you will still be able to use them in Django/Python. For example, the famous Elasticsearch search backend has its roots in Java but is still used in a lot of Django apps. It also goes the other way around: Celery is written in Python but can be used from Node.js or PHP. There are hundreds of examples, but I think you get the point. I hope that I brought some light into the darkness. If you have questions, please leave them in the comments.
1
1
1
I am trying to build a web application that requires intensive mathematical calculations. Can I use Django to populate python charts and pandas dataframe?
Can Django work well with pandas and numpy?
1
0
0
7,691
45,122,724
2017-07-15T21:05:00.000
0
1
0
0
python,django,websocket,django-channels
45,270,805
5
false
1
0
DDoS = Distributed Denial of Service The 'Distributed' part is the key: you can't know you're being attacked by 'someone' in particular, because requests come from all over the place. Your server will only accept a certain number of connections. If the attacker manages to create so many connections that nobody else can connect, you're being DDoS'ed. So, in essence you need to be able to detect that a connection is not legit, or you need to be able to scale up fast to compensate for the limit in number of connections. Good luck with that! DDoS protection should really be a service from your cloud provider, at the load balancer level. Companies like OVH use sophisticated machine learning techniques to detect illegitimate traffic and ban the IPs acting out in quasi-real time. For you to build such a detection machinery is a huge investment that is probably not worth your time (unless your web site is so critical and will lose millions of $$$ if it's down for a bit)
3
21
0
Is there anything specific that can be done to help make a Django Channels server less susceptible to light or accidental DDoS attack or general load increase from websocket/HTTP clients? Since Channels is not truly asynchronous (still workers behind the scenes), I feel like it would be quite easy to take down a Channels-based website - even with fairly simple hardware. I'm currently building an application on Django Channels and will run some tests later to see how it holds up. Is there some form of throttling built in to Daphne? Should I implement some application-level throttling? This would still be slow since a worker still handles the throttled request, but the request can be much faster. Is there anything else I can do to attempt to thwart these attacks? One thought I had was to always ensure there are workers designated for specific channels - that way, if the websocket channel gets overloaded, HTTP will still respond. Edit: I'm well aware that low-level DDoS protection is an ideal solution, and I understand how DDoS attacks work. What I'm looking for is a solution built in to channels that can help handle an increased load like that. Perhaps the ability for Daphne to scale up a channel and scale down another to compensate, or a throttling method that can reduce the weight per request after a certain point. I'm looking for a daphne/channels specific answer - general answers about DDoS or general load handling are not what I'm looking for - there are lots of other questions on SO about that. I could also control throttling based on who's logged in and who is not - a throttle for users who are not logged in could help. Edit again: Please read the whole question! I am not looking for general DDoS mitigation advice or explanations of low-level approaches. I'm wondering if Daphne has support for something like: Throttling Dynamic worker assignment based on queue size Middleware to provide priority to authenticated requests Or something of that nature. I am also going to reach out to the Channels community directly on this as SO might not be the best place for this question.
Load spike protection for Django Channels
0
0
0
1,825
45,122,724
2017-07-15T21:05:00.000
0
1
0
0
python,django,websocket,django-channels
45,345,786
5
false
1
0
There's a lot you can't do about DDoS... however, there are some neat 'tricks', depending on how much resource you have at your disposal and how much somebody wants to take you offline. Are you offering a fully public service that requires a direct connection to the resource you are trying to protect? If so, you're just going to need to 'soak up' the DDoS with the resources you have, by scaling up and out... or even elastically... either way it's going to cost you money! Or make it harder for the attacker to consume your resources. There are a number of methods to do this. If your service requires some kind of authentication, then separate your authentication services from the resource you are trying to protect. In many applications, the authentication and the 'service' run on the same hardware. That's a DoS waiting to happen. Only let fully authenticated users access the resources you are trying to protect, with dynamic firewall filtering rules. If you're authenticated, the gate to the resources opens (with a restricted QoS in place)! If you're a well-known, long-term trusted user, then access the resource at full bore. Have a way of auditing users' resource behaviour (network, memory, CPU); if you see particular accounts using bizarre amounts, ban them or impose a limit, finally leading to a firewall policy that drops their traffic. Work with an ISP that has systems in place that can drop traffic to your specification at the ISP border... OVH are your best bet. An ISP that exposes filtering and traffic dropping as an API: I wish they existed... basically moving your firewall filtering rules to the AS border... niiiiice! (fantasy) It won't stop DDoS, but it will give you a few tools to keep the resources wasted and consumed by attackers at a manageable level. The DDoS then has to turn to your authentication servers... (possible), or compromise many user accounts... as already authenticated users will still have access :-) If the DDoS is consuming all your ISP bandwidth, that's a harder problem: move to a larger ISP! Or move ISPs... :-). Hide your main resource, allow it to move dynamically, keep it on the move! :-). Break the problem into pieces... apply DDoS controls on the smaller pieces. :-) I've tried to give a fairly general answer, but there are a lot of "it depends"; each DDoS mitigation requires a bit of a skin-not-tin approach... Really you need an anti-DDoS ninja on your team. ;-) Take a look at distributed protocols... DPs may be the answer for DDoS. Have fun.
3
21
0
Is there anything specific that can be done to help make a Django Channels server less susceptible to light or accidental DDoS attack or general load increase from websocket/HTTP clients? Since Channels is not truly asynchronous (still workers behind the scenes), I feel like it would be quite easy to take down a Channels-based website - even with fairly simple hardware. I'm currently building an application on Django Channels and will run some tests later to see how it holds up. Is there some form of throttling built in to Daphne? Should I implement some application-level throttling? This would still be slow since a worker still handles the throttled request, but the request can be much faster. Is there anything else I can do to attempt to thwart these attacks? One thought I had was to always ensure there are workers designated for specific channels - that way, if the websocket channel gets overloaded, HTTP will still respond. Edit: I'm well aware that low-level DDoS protection is an ideal solution, and I understand how DDoS attacks work. What I'm looking for is a solution built in to channels that can help handle an increased load like that. Perhaps the ability for Daphne to scale up a channel and scale down another to compensate, or a throttling method that can reduce the weight per request after a certain point. I'm looking for a daphne/channels specific answer - general answers about DDoS or general load handling are not what I'm looking for - there are lots of other questions on SO about that. I could also control throttling based on who's logged in and who is not - a throttle for users who are not logged in could help. Edit again: Please read the whole question! I am not looking for general DDoS mitigation advice or explanations of low-level approaches. I'm wondering if Daphne has support for something like: Throttling Dynamic worker assignment based on queue size Middleware to provide priority to authenticated requests Or something of that nature. I am also going to reach out to the Channels community directly on this as SO might not be the best place for this question.
Load spike protection for Django Channels
0
0
0
1,825
45,122,724
2017-07-15T21:05:00.000
0
1
0
0
python,django,websocket,django-channels
45,558,858
5
false
1
0
Let's apply some analysis to your question. A DDoS is like a DoS, but with friends. If you want to avoid DDoS exploitation, you need to minimize DoS possibilities. Thanks, Captain Obvious. The first thing to do is make a list of what happens in your system and which resources are affected: A TCP handshake is performed (SYN cookies are affected) An SSL handshake comes later (entropy, CPU) A connection is made to the channel layer... Then monitor each resource and try to implement a counter-measure: Protect against SYN floods by configuring your kernel params and firewall Use entropy generators Configure your firewall to limit open/closed connections in a short time (an easy way to minimize SSL handshakes) ... Separate your big problem (DDoS) into many simple and easy-to-correct tasks. The hard part is getting a detailed list of steps and resources. Excuse my poor English.
3
21
0
Is there anything specific that can be done to help make a Django Channels server less susceptible to light or accidental DDoS attack or general load increase from websocket/HTTP clients? Since Channels is not truly asynchronous (still workers behind the scenes), I feel like it would be quite easy to take down a Channels-based website - even with fairly simple hardware. I'm currently building an application on Django Channels and will run some tests later to see how it holds up. Is there some form of throttling built in to Daphne? Should I implement some application-level throttling? This would still be slow since a worker still handles the throttled request, but the request can be much faster. Is there anything else I can do to attempt to thwart these attacks? One thought I had was to always ensure there are workers designated for specific channels - that way, if the websocket channel gets overloaded, HTTP will still respond. Edit: I'm well aware that low-level DDoS protection is an ideal solution, and I understand how DDoS attacks work. What I'm looking for is a solution built in to channels that can help handle an increased load like that. Perhaps the ability for Daphne to scale up a channel and scale down another to compensate, or a throttling method that can reduce the weight per request after a certain point. I'm looking for a daphne/channels specific answer - general answers about DDoS or general load handling are not what I'm looking for - there are lots of other questions on SO about that. I could also control throttling based on who's logged in and who is not - a throttle for users who are not logged in could help. Edit again: Please read the whole question! I am not looking for general DDoS mitigation advice or explanations of low-level approaches. I'm wondering if Daphne has support for something like: Throttling Dynamic worker assignment based on queue size Middleware to provide priority to authenticated requests Or something of that nature. I am also going to reach out to the Channels community directly on this as SO might not be the best place for this question.
Load spike protection for Django Channels
0
0
0
1,825
45,125,919
2017-07-16T06:56:00.000
1
0
0
0
python,scala,mxnet
45,171,965
1
true
0
0
How about grad_dict in executor? It returns a dictionary representation of the gradient arrays.
1
0
1
Trying to understand some Scala code on network training in MXNet. I believe you can access the gradient on the executor in Scala by calling networkExecutor.gradDict("data"); what would be the equivalent of it in Python MXNet? Thanks!
MXNet - what is python equivalent of getting scala's mxnet networkExecutor.gradDict("data")
1.2
0
0
64
45,129,728
2017-07-16T14:34:00.000
0
1
1
1
python,python-3.x,automation
45,130,847
3
false
0
0
If you want to run a single script on multiple computers without installing Python everywhere, you can "compile" the script to an .exe using py2exe, cx_Freeze or PyInstaller. The "compilation" actually packs Python and the libraries into the generated .exe or accompanying files. But if you plan to run many Python scripts, you'd better install Python everywhere and distribute your scripts and libraries as Python packages (eggs or wheels).
1
4
0
I am developing some scripts that I am planning to use in my LAB. Currently I installed Python and all the required modules only locally on the station that I am working with (the development station). I would like to be able to run the scripts that I develop on each of my LAB stations. What is the best practice for doing that? Will I need to install the same environment, except for the IDE of course, on all my stations? If yes, then what is the recommended way to do that? By the way, is it mostly recommended to run those scripts from the command line (Windows), or is there any other elegant way to do that?
What is the best practice for running a python script?
0
0
0
4,897
45,130,283
2017-07-16T15:28:00.000
-1
0
0
1
python,google-app-engine,ubuntu,anaconda,google-app-engine-python
54,668,398
3
false
1
0
For those who are using Windows and still facing the same issue, the easiest way is to remove all the other python versions except version 2.7x.
1
1
0
I think that this problem is due to the Python version. I used Anaconda with Python 3.6 for learning Django. Now I have to work on Google App Engine using Python 2.7. I uninstalled Anaconda. Now when I run "python" I get: "Python 3.6.1 |Continuum Analytics, Inc.| (default, May 11 2017, 13:09:58)". Is there a way to default back to Python 2.7? I'm on Ubuntu 16.04. edit: the problem is not due to the Python version
When I run localserver on googleappengine error is "File "~/dev_appserver.py", line 102, in assert sys.version_info[0] == 2 AssertionError"
-0.066568
0
0
3,516
45,132,753
2017-07-16T19:53:00.000
1
1
0
0
python,bash,installation,sh,raspberry-pi3
45,138,161
1
false
0
0
I would configure one installation until you are satisfied and then use dd to clone your SD-card image. You can use dd again to perform the installation on another Raspberry Pi. Best regards, Georg
1
0
0
The question may not seem concrete enough, so let me explain: I programmed a web application to visualize data from different sensors in a wireless network. The sensor data is stored in an SQLite database, which is connected as a client to an MQTT broker. The whole project is implemented on a Raspberry Pi 3, which is also the central node of the network. For the whole project I used different software packages such as apache2, mosquitto and sqlite3. Furthermore, the RPi needs to be configured so that external hardware can be connected to it (GPIO, I2C, UART and some modules). I wrote an installation guide with more than 60 commands. What is the most efficient way to write a tool which installs and configures the Raspberry Pi with all needed components? (sh, bash, python ...) Maybe you can recommend me some guides which explain sh and bash.
How to program a tool which combines different components? (RPi)
0.197375
0
0
30
45,132,902
2017-07-16T20:12:00.000
2
1
1
0
python,algorithm,hash,hashmap,hashtable
45,133,269
1
true
0
0
Testing if a given key is in the hash table doesn't need to test all slots. You simply calculate the hash value for the given key (1). This hash value identifies which slot the key has to be in, if it is in the hash table. So, you simply need to compare all entries (α) in that slot with the given key, yielding Θ(1+α). You don't need to look at the other slots because the key cannot be stored in any of the other slots.
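The same point as a short derivation, under the simple uniform hashing assumption used in the textbook analysis:

```latex
T_{\text{unsuccessful}}(k)
  \;=\; \underbrace{\Theta(1)}_{\text{compute } h(k)\text{ and index that one slot}}
  \;+\; \underbrace{\mathbb{E}\!\left[\text{length of the chain in slot } h(k)\right]}_{=\,n/m\,=\,\alpha}
  \;=\; \Theta(1+\alpha).
```

Only the single chain at slot h(k) is ever scanned; the other m - 1 slots are never visited, which is where the alpha-times-m count in the question goes wrong.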
1
0
0
I get that the Θ(1) part is the time used to compute the hash, but I don't understand the Θ(α) part. In my view, the time complexity is Θ(n). Assume α is the expected chain length and the table has m slots. To ensure the key is not in the table, we need to search each slot, and each slot has expected length α, so the total time is α times m, which is Θ(n). Could anyone tell me which part I didn't understand correctly?
Why has an unsuccessful search time for a hash table a time complexity Θ(1+α)?
1.2
0
0
1,052
45,133,278
2017-07-16T21:01:00.000
1
0
0
0
python,ckan
45,255,776
1
false
1
0
Extend PackageController, define a custom route, and from there call the organization_list_for_user action, which returns the organizations the user is a member of; then choose which extras you return depending on whether the user is a member of the organization or not.
1
0
0
I have a CKAN 2.6.2 installation deployed with a few hundred datasets added using python via the API, including a number of custom fields, added with ckan.action.package_patch(id=i, extras=extra_fields). I would like to make one of these extra fields visible only if a user has logged in to the organization. I think either src/ckan/ckan/templates/package/snippets/additional_info.html or src/ckan/ckan/templates/snippets/additional_info.html are the templates used to generate the lines of HTML that I'd like to selectively filter, but I'm stuck on the next step. Can anyone help with some pointers?
Ckan - require login to view certain metadata
0.197375
0
0
137
45,136,647
2017-07-17T05:38:00.000
0
0
0
1
python,python-3.x,flask,python-asyncio,gevent
45,175,533
2
false
1
0
It's better to go with asyncio, preferably aiohttp, which is more mainstream.
2
0
0
I am going to 3.6 now.... 1) I see for my worker servers ...in 2.7 I used gevent with great success for running one worker per core with N gevent threads per core... 2) For my web dev..for low level..as close to CGI as possible I used bottle with nginx/uWSGI with the gevent loop 3) For APIs I used Flask with nginx/uWSGI with the gevent loop My API apps are screaming fast...and faster than nodejs for async calls to my backend databases... Enter 3.6 ... I am confused.... 1) It appears I can run my workers using asyncio since they are not dependent on a framework...so here I am OK 2) It appears that gevent is available for 3.6 and I assume I can still use gevent for Flask with nginx/uWSGI with the gevent loop 3) uWSGI supports asyncio 4) Flask support for asyncio does not seem to be widespread 5) I refuse to use Django ...so don't even go there.. :) So my question is: if I want to embrace asyncio with 3.6, is it bye-bye Flask in favor of e.g. aiohttp or sanic? In other words...those of you who built async APIs for Python 2.7, how did you transition to 3.6 while maintaining non-blocking calls? It appears that I can still use gevent with Flask on Python 3, but this is a monkey patch to force async non-blocking calls, whereas asyncio is native and part of the standard library... Thanks
New to python 3.6 from 2.7 - is flask still relevant for async calls with gevent?
0
0
0
840
45,136,647
2017-07-17T05:38:00.000
2
0
0
1
python,python-3.x,flask,python-asyncio,gevent
45,570,572
2
false
1
0
Flask + gevent works like a charm for python 3.6. There is no any close solution to Flask-Admin and other robust time-tested libraries (like SQLAlchemy). For real applications I can get the same amount of rps from a flask as for aiohttp or sanic or whatever.
2
0
0
I am going to 3.6 now.... 1) I see for my worker servers ...in 2.7 I used gevent with great success for running one worker per core with N gevent threads per core... 2) For my web dev..for low level..as close to CGI as possible I used bottle with nginx/uWSGI with the gevent loop 3) For APIs I used Flask with nginx/uWSGI with the gevent loop My API apps are screaming fast...and faster than nodejs for async calls to my backend databases... Enter 3.6 ... I am confused.... 1) It appears I can run my workers using asyncio since they are not dependent on a framework...so here I am OK 2) It appears that gevent is available for 3.6 and I assume I can still use gevent for Flask with nginx/uWSGI with the gevent loop 3) uWSGI supports asyncio 4) Flask support for asyncio does not seem to be widespread 5) I refuse to use Django ...so don't even go there.. :) So my question is: if I want to embrace asyncio with 3.6, is it bye-bye Flask in favor of e.g. aiohttp or sanic? In other words...those of you who built async APIs for Python 2.7, how did you transition to 3.6 while maintaining non-blocking calls? It appears that I can still use gevent with Flask on Python 3, but this is a monkey patch to force async non-blocking calls, whereas asyncio is native and part of the standard library... Thanks
New to python 3.6 from 2.7 - is flask still relevant for async calls with gevent?
0.197375
0
0
840
45,137,395
2017-07-17T06:33:00.000
4
0
1
0
python,python-3.x
45,138,688
9
false
0
0
Python 2.x and Python 3.x are different. If you would like to download a newer version of Python 2, you could just download and install the newer version. If you want to install Python 3, you could install Python 3 separately then change the path for Python 2.x to Python 3.x in Control Panel > All Control Panel Items > System > Advanced System Settings > Environment Variables.
5
205
0
I have Python 2.7.11 installed on one of my LAB stations. I would like to upgrade Python to at least 3.5. How should I do that? Should I completely uninstall 2.7.11 and then install the new one? Is there a way to update it? Is an update a good idea?
How do I upgrade the Python installation in Windows 10?
0.088656
0
0
1,031,748
45,137,395
2017-07-17T06:33:00.000
1
0
1
0
python,python-3.x
65,466,489
9
false
0
0
Just run the installer for the newest Python version; it will detect your existing Python installation, offer to upgrade it, and start the upgrade.
5
205
0
I have Python 2.7.11 installed on one of my LAB stations. I would like to upgrade Python to at least 3.5. How should I do that? Should I completely uninstall 2.7.11 and then install the new one? Is there a way to update it? Is an update a good idea?
How do I upgrade the Python installation in Windows 10?
0.022219
0
0
1,031,748
45,137,395
2017-07-17T06:33:00.000
4
0
1
0
python,python-3.x
67,872,169
9
false
0
0
A quick and painless way for me was to do the following: Do a pip freeze > requirements.txt on my affected environments (or whatever method you want for backing up your requirements) Remove the Old version of Python (in my case it was 3.8) Remove the associated environments Install the new version (3.9.5 in my case) Recreate my environments python -m venv venv or however you wish Reinstall my plug-ins/apps pip install -r requirements.txt or however you wish
5
205
0
I have Python 2.7.11 installed on one of my LAB stations. I would like to upgrade Python to at least 3.5. How should I do that? Should I completely uninstall 2.7.11 and then install the new one? Is there a way to update it? Is an update a good idea?
How do I upgrade the Python installation in Windows 10?
0.088656
0
0
1,031,748
45,137,395
2017-07-17T06:33:00.000
0
0
1
0
python,python-3.x
67,898,573
9
false
0
0
I was able to run the following command in PowerShell and the upgrade went through with no issue: python -m pip install --upgrade pip
5
205
0
I have Python 2.7.11 installed on one of my LAB stations. I would like to upgrade Python to at least 3.5. How should I do that? Should I completely uninstall 2.7.11 and then install the new one? Is there a way to update it? Is an update a good idea?
How do I upgrade the Python installation in Windows 10?
0
0
0
1,031,748
45,137,395
2017-07-17T06:33:00.000
162
0
1
0
python,python-3.x
45,138,817
9
true
0
0
Every minor version of Python, that is any 3.x and 2.x version, will install side-by-side with other versions on your computer. Only patch versions will upgrade existing installations. So if you want to keep your installed Python 2.7 around, then just let it and install a new version using the installer. If you want to get rid of Python 2.7, you can uninstall it before or after installing a newer version—there is no difference to this. Current Python 3 installations come with the py.exe launcher, which by default is installed into the system directory. This makes it available from the PATH, so you can automatically run it from any shell just by using py instead of python as the command. This avoids you having to put the current Python installation into PATH yourself. That way, you can easily have multiple Python installations side-by-side without them interfering with each other. When running, just use py script.py instead of python script.py to use the launcher. You can also specify a version using for example py -3 or py -3.6 to launch a specific version, otherwise the launcher will use the current default (which will usually be the latest 3.x). Using the launcher, you can also run Python 2 scripts (which are often syntax incompatible to Python 3), if you decide to keep your Python 2.7 installation. Just use py -2 script.py to launch a script. As for PyPI packages, every Python installation comes with its own folder where modules are installed into. So if you install a new version and you want to use modules you installed for a previous version, you will have to install them first for the new version. Current versions of the installer also offer you to install pip; it’s enabled by default, so you already have pip for every installation. Unless you explicitly add a Python installation to the PATH, you cannot just use pip though. Luckily, you can also simply use the py.exe launcher for this: py -m pip runs pip. So for example to install Beautiful Soup for Python 3.6, you could run py -3.6 -m pip install beautifulsoup4.
5
205
0
I have Python 2.7.11 installed on one of my LAB stations. I would like to upgrade Python to at least 3.5. How should I do that? Should I completely uninstall 2.7.11 and then install the new one? Is there a way to update it? Is an update a good idea?
How do I upgrade the Python installation in Windows 10?
1.2
0
0
1,031,748
45,139,240
2017-07-17T08:23:00.000
1
0
0
0
python,cassandra,floating-point,precision,cassandra-python-driver
50,065,729
2
false
0
0
Also, if you cannot change your column definition for some reason, converting your float value to a string and passing the str to the cassandra-driver will also solve your problem. It will be able to generate the precise decimal value from the str.
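As a minimal sketch of this idea (not part of the original answer), assuming a hypothetical keyspace and table with a CQL decimal column, passing a Decimal (or its string form) keeps the exact value:

    from decimal import Decimal
    from cassandra.cluster import Cluster

    # Hypothetical keyspace "shop" and table "prices(id int PRIMARY KEY, amount decimal)".
    session = Cluster(['127.0.0.1']).connect('shop')

    # Decimal('955.99') (or the string '955.99') round-trips exactly, unlike a binary float.
    session.execute(
        "INSERT INTO prices (id, amount) VALUES (%s, %s)",
        (1, Decimal('955.99')),
    )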
1
4
0
I'm sending data back and forth Python and Cassandra. I'm using both builtin float types in my python program and the data type for my Cassandra table. If I send a number 955.99 from python to Cassandra, in the database it shows 955.989999. When I send a query in python to return the value I just sent, it is now 955.989990234375. I understand the issue with precision loss in python, I just wanted to know if there's any built-in mechanisms in Cassandra that could prevent this issue.
Python Cassandra floating precision loss
0.099668
1
0
639
45,139,710
2017-07-17T08:50:00.000
0
0
0
0
python,excel,automation,vba
45,140,227
1
false
0
0
I found a solution!! I changed the name of my macro to "Auto_open" and now it runs automatically when I open the file. Thanks all!
1
0
0
I have many xlsm files that I create from an existing xlsm file and add to it data from csv using python. every file has a macro that need to be run (keyshortcut: ctrl+q) Is there a way to make it run automatically for every file and save the file after the macro was running? Thanks!
Run macro automatically
0
0
0
211
45,144,525
2017-07-17T12:37:00.000
1
0
1
1
python,python-3.x,pythonpath
45,145,568
2
false
0
0
I assume you are using Linux. Before executing your application you can prefix the command with the variable, e.g. PYTHONPATH=/path/to/packages python your_script.py. A more elegant way is to use virtualenv, where you can have different packages for each application: run workon env before execution and deactivate when you are done. Python 3 ships an equivalent (venv) by default.
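As an illustrative sketch (the directory and module name below are hypothetical), the same effect can be had from inside the program by extending sys.path before importing:

    import sys

    # Directory that contains the package you want to import (hypothetical path).
    sys.path.insert(0, '/home/me/mypackages')

    import mymodule  # hypothetical module living in that directory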
1
0
0
Can anyone let me know how to set PYTHONPATH? Do we need to set it in the environment variables (is it system specific) or we can independently set the PYTHONPATH and use it to run any independent python application? i need to pick the module from a package available in directory which is different from the directory from which I am running my application . How to include these packages in my application
how to use PYTHONPATH for independent python application
0.099668
0
0
371
45,145,427
2017-07-17T13:20:00.000
0
0
1
0
python,python-2.7,add-on,screen-readers,nvda
71,015,909
1
false
0
0
While you might need to work with a specific TTS module of some sort to produce spoken output, a trick I use to let me know when tasks/activities have finished in the background is print(chr(7)), which triggers the default Windows beep sound. That assumes you are working under Windows, but since you mention using NVDA I presume you are; I use it myself, currently under Windows 10.
1
1
0
I want NVDA to speak out whenever I start a for loop in my python code. How can I do this programmatically as an addon? Any kind of help appreciated. For instance, I type in the text editor: for i in range(0,3): print "hello world" As soon as I am done writing for, NVDA should speak out that the for loop has started.
Programming using NVDA screen reader
0
0
0
373
45,146,249
2017-07-17T13:57:00.000
1
1
1
0
python,types,paradigms
45,202,415
1
true
0
0
It is not possible to implement a list of heterogeneous types if you don't know all the types you'll need at compile time. Example: you use input() to load a user script that defines a value of a new type declared there, and then you want to insert that value into a list in your program. I guess a lot of things that come from interactions with input() are impossible to implement.
1
0
0
There have been many questions and answers about the relative benefits of static and dynamic typing. Each has their camp, and there are obviously good reasons for both. I'm wondering though, are there any specific features of Python that wouldn't be possible to implement or use in a statically-typed language? I would expect that some of the more functional parts would be challenging, but we obviously have Haskell and C++14 and onward. Again, specific examples would be appreciated!
What Python features wouldn't be possible with static typing?
1.2
0
0
67
45,148,750
2017-07-17T16:00:00.000
1
0
1
0
python,airflow
55,362,147
1
false
0
0
YAML is a good combination of human readability and a solid document storage format.
1
6
0
We are looking for the best approach for setting up the configuration file for each DAG, I know we can use JSON in Variable, but want to see if there are suggested approaches in Airflow or other format (i.e.YAML). Thanks!
What is the suggested way for DAG config file in Airflow
0.197375
0
0
814
45,150,223
2017-07-17T17:25:00.000
1
0
1
1
python-2.7,pyinstaller,advanced-installer
45,391,696
1
false
0
0
According to Microsoft, the SetDllDirectory function, which is used by PyInstaller, is currently not supported by UWP, and according to PyInstaller experts there is no provision to change this in the near future. So right now this is not the way to go. If anyone knows something better, now is the time to speak up.
1
2
0
When I try to create a Windows AppX with Advanced Installer at the digital signing stage the program stops saying "The application calls the SetDllDirectory function which is currently not supported by windows UWP applications. A digitally unsigned exe or msi installer works perfectly but the AppX, as not digitally signed, does not run! Is there a work around to this problem? I searched the Pyinstaller docs and also asked a question on the Pyinstaller Google groups. They did not even list my question.
Digital signing halts with the error "SetDllDirectory function in Pyinstaller is not currently supported by Windows UWP applications"
0.197375
0
0
89
45,150,977
2017-07-17T18:13:00.000
0
0
1
0
python,pip,artifactory,pypi
45,487,020
1
true
0
0
Adding "**/*mypackage*" to the blacklist fixed the issue. This might cause problems if you have packages like "mypackage2" but it works for my usecase. As advised by JFrog Support
1
6
0
I'd like to be able to override some packages from upstream PyPI transparently for our users. I have the following Artifactory set up: Local repository X-local Remote repository X-remote (pointing to PyPI) Virtual repository X-virtual For some specificities with my environment, I'd like to ensure that users only download package 'mypackage' from X-local. At the moment I have included a rule to forbid the expression "**/mypackage-*" in X-remote and I publish my internal version of "mypackage" to X-local. This all works great until "mypackage" has wheels or a new version is published. It seems that when pip goes to list all artifacts of "mypackage" in "X-virtual" it does not only finds the ones in X-local but also the ones in X-Remote. Is there any way to block that? In brief, to prevent all packages from a remote from being listed.
Full overriding artifactory PyPI package
1.2
0
0
690
45,151,146
2017-07-17T18:25:00.000
2
0
1
0
python,linux,deployment,python-venv
45,151,176
1
true
0
0
You can include the dependencies in your package. That is, download the library and copy its contents into your package directory. And yes, virtual environments are useful in production, though not so much in your example. If you were deploying multiple webapps on a single server they would be very useful.
1
7
0
I'm trying to fully grasp how virtual environments are used with Python. I understand what it is they accomplish for the programmer - allowing you to install different dependencies locally for different projects without them conflicting. However, what I don't understand is how this translates into deploying a production Python program to an end user. Let's say I've made a program and it works and it's all debugged and ready to go. I want to make this available to people. Do people have to download this, put it all into its own virtual environment, pip install from there and then go source the activate script every time they want to run the program? I feel like, using Linux, I must have at least some Python programs on my machine and I know I don't do this - I just sudo apt install the program and it runs.
Are Python virtual environments needed in production?
1.2
0
0
3,245
45,153,178
2017-07-17T20:31:00.000
2
0
1
0
python,pip,virtualenv
45,153,345
2
true
0
0
Does installing packages in the home directory of a user with pip install --user provide the same level of protection against system-breaking changes as using a virtualenv? By "system-breaking changes", I suppose you mean packages installed by the operating system's package manager tool. With the --user option of pip, packages will be installed in the user's home directory. And since the package manager is not supposed to depend on user directories, but only on packages installed at the designated location in the system, independent of users' home directories, a properly managed system should not be breakable using pip install --user. However, if you work on more than one Python project with your user, it makes sense to always use virtualenv consistently, to prevent versioning conflicts between the projects.
2
2
0
Does installing packages in the home directory of a user with pip install --user provide the same level of protection against system-breaking changes as using a virtualenv?
is using the pip --user option as safe as creating a virtualenv?
1.2
0
0
6,573
45,153,178
2017-07-17T20:31:00.000
4
0
1
0
python,pip,virtualenv
45,153,469
2
false
0
0
Using a virtualenv is preferable for a few small reasons, and one big reason. virtualenv has a "relocate" option (Note: this feature has been flagged as having issues and may not work in all circumstances). Using --user you would need to reinstall all packages if you tried to relocate your project to another machine. Unless you change the PYTHONPATH so that modules in site-packages are not loaded, and reinstall every module in your user directory, python will continue to search for modules that are installed in the system directory. If you are considering using --user, I assume you either don't have permission to install system packages, or you are worried about breaking links in the future. Unlike --user, virtualenv keeps track of all modules (including system-wide modules and modules installed in the virtualenv), and as such I'd imagine it will be less likely to "break something" (or, at least, it will be easier to identify what the problem is) if you're using virtualenv. These problems could be nuisances, but they are surmountable. The biggest difference between --user and virtualenv is that virtualenv will allow you to store one version of each package for every environment you create, thereby eliminating versioning concerns (i.e., you build a project to work with one version of a package, then later you upgrade the package to work on a new project using some new feature and find that your old project is now broken). This is a pretty big deal and --user does nothing to help in this respect (unless you want to create a new user account on your machine for each project you work on, which I don't recommend).
2
2
0
Does installing packages in the home directory of a user with pip install --user provide the same level of protection against system-breaking changes as using a virtualenv?
is using the pip --user option as safe as creating a virtualenv?
0.379949
0
0
6,573
45,154,180
2017-07-17T21:47:00.000
0
0
0
0
python,tensorflow,neural-network,keras,theano
45,154,694
3
false
0
0
Can you use tf.stop_gradient to conditionally freeze weights?
2
8
1
I would like to train a GAN in Keras. My final target is BEGAN, but I'm starting with the simplest one. Understanding how to freeze weights properly is necessary here and that's what I'm struggling with. During the generator training time the discriminator weights might not be updated. I would like to freeze and unfreeze discriminator alternately for training generator and discriminator alternately. The problem is that setting trainable parameter to false on discriminator model or even on its' weights doesn't stop model to train (and weights to update). On the other hand when I compile the model after setting trainable to False the weights become unfreezable. I can't compile the model after each iteration because that negates the idea of whole training. Because of that problem it seems that many Keras implementations are bugged or they work because of some non-intuitive trick in old version or something.
How to dynamically freeze weights after compiling model in Keras?
0
0
0
7,178
45,154,180
2017-07-17T21:47:00.000
0
0
0
0
python,tensorflow,neural-network,keras,theano
47,122,897
3
false
0
0
Maybe your adversarial net (generator plus discriminator) is written as a single 'Model'. However, even if you set d.trainable = False, only the standalone discriminator net becomes non-trainable; the discriminator inside the whole adversarial net is still trainable. You can call d_on_g.summary() before and then after setting d.trainable = False and you will see what I mean (pay attention to the trainable parameter counts).
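A minimal sketch of the usual freeze pattern in Keras (toy layer sizes, not the asker's actual models): compile the discriminator on its own for its training step, then set trainable=False before compiling the combined model so it stays frozen there:

    from keras.models import Sequential, Model
    from keras.layers import Dense, Input

    # Toy generator/discriminator just to keep the sketch self-contained.
    generator = Sequential([Dense(28, activation='tanh', input_dim=100)])
    discriminator = Sequential([Dense(1, activation='sigmoid', input_dim=28)])

    # Compiled standalone: used for the discriminator training step.
    discriminator.compile(optimizer='adam', loss='binary_crossentropy')

    # Freeze D, then build and compile the combined model for the generator step.
    discriminator.trainable = False
    z = Input(shape=(100,))
    gan = Model(z, discriminator(generator(z)))
    gan.compile(optimizer='adam', loss='binary_crossentropy')
    # gan.summary() now reports the discriminator weights as non-trainable,
    # while the standalone discriminator still updates when trained directly.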
2
8
1
I would like to train a GAN in Keras. My final target is BEGAN, but I'm starting with the simplest one. Understanding how to freeze weights properly is necessary here and that's what I'm struggling with. During the generator training time the discriminator weights might not be updated. I would like to freeze and unfreeze discriminator alternately for training generator and discriminator alternately. The problem is that setting trainable parameter to false on discriminator model or even on its' weights doesn't stop model to train (and weights to update). On the other hand when I compile the model after setting trainable to False the weights become unfreezable. I can't compile the model after each iteration because that negates the idea of whole training. Because of that problem it seems that many Keras implementations are bugged or they work because of some non-intuitive trick in old version or something.
How to dynamically freeze weights after compiling model in Keras?
0
0
0
7,178
45,154,390
2017-07-17T22:05:00.000
0
1
1
0
python,redis
45,154,433
1
false
0
0
None of the Redis client modules are part of the Python standard library. To work with Redis, you will need to install a client module, I recommend redis-py, using your preferred package manager.
1
0
0
Is there a way to determine which Redis Python modules are installed by default? I do not have a Redis installation, but would like to know for planning purposes. Thank you.
Redis Python modules installed by default
0
0
0
104
45,154,751
2017-07-17T22:41:00.000
0
0
0
0
python,machine-learning,scikit-learn,xgboost
54,628,350
1
false
0
0
I don't think the sklearn wrapper has an option to incrementally train a model. This can be achieved to some extent using a warm_start parameter, but the sklearn wrapper for XGBoost doesn't have that parameter. So if you want incremental training you might have to switch to the official (native) API of xgboost.
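As a hedged sketch of that suggestion (random stand-in data, arbitrary parameters), the native API's xgb_model argument continues boosting from an existing booster:

    import numpy as np
    import xgboost as xgb

    X1, y1 = np.random.rand(200, 5), np.random.randint(0, 2, 200)
    X2, y2 = np.random.rand(200, 5), np.random.randint(0, 2, 200)
    params = {'objective': 'binary:logistic', 'max_depth': 3}

    booster = xgb.train(params, xgb.DMatrix(X1, label=y1), num_boost_round=10)
    # Continue training the same model on a new batch of data.
    booster = xgb.train(params, xgb.DMatrix(X2, label=y2),
                        num_boost_round=10, xgb_model=booster)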
1
4
1
According to the API, it seems like the normal xgboost interface allows for this option: xgboost.train(params, dtrain, num_boost_round=10, evals=(), obj=None, feval=None, maximize=False, early_stopping_rounds=None, evals_result=None, verbose_eval=True, xgb_model=None, callbacks=None, learning_rates=None). In this option, one can input xgb_model to allow continued training on the same model. However, I'm using the scikit learn API of xgboost so I can put the classifier in a scikit pipeline, along with other nice tools such as random search for hyperparameter tuning. So does anyone know of any (albeit hacky) way of allowing online training for the scikitlearn api for xgboost?
Scikit learn API xgboost allow for online training?
0
0
0
571
45,154,854
2017-07-17T22:53:00.000
0
0
0
0
python,opencv,edge-detection
45,161,655
1
false
0
0
The result of the Hough line transform is an array of (rho, theta) parameter pairs. The line represented by such a pair satisfies x*cos(theta) + y*sin(theta) - rho = 0 (equivalently, y + x/tan(theta) - rho/sin(theta) = 0). You can check whether the (x, y) coordinates of the point satisfy this condition to find the lines that pass through the point (in practice, compare against a small tolerance instead of exactly 0).
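A small sketch of that check (the 2-pixel tolerance is an arbitrary choice, and lines is assumed to be the (N, 1, 2) array returned by cv2.HoughLines):

    import numpy as np

    def lines_through_point(lines, px, py, tol=2.0):
        """Keep only the (rho, theta) pairs whose line passes within tol pixels of (px, py)."""
        kept = []
        for rho, theta in lines[:, 0, :]:
            if abs(px * np.cos(theta) + py * np.sin(theta) - rho) < tol:
                kept.append((rho, theta))
        return kept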
1
0
1
I've got a script that uses the Canny method and probabilistic Hough transform to identify line segments in an image. I need to be able to filter out all line segments that are NOT connected to a specific pixel. How would one tackle this problem?
OpenCV Python, filter edges to only include those connected to a specific pixel
0
0
0
276
45,155,117
2017-07-17T23:23:00.000
2
0
0
0
python,google-bigquery,import-from-csv
45,156,763
1
false
0
0
When you import a CSV into BigQuery, the columns will be mapped in the order the CSV presents them - the first row (the titles) has no effect on the order in which the subsequent rows are read. Note that if you were importing JSON files instead, BigQuery would match columns by name and ignore the order.
1
0
1
I have a python script that execute a gbq job to import a csv file from Google cloud storage to an existing table on BigQuery. How can I set the job properties to import to the right columns provided in the first row of the csv file? I set parameter 'allowJaggedRows' to TRUE, but it import columns in order regardless of column names in the header of csv file.
How to import CSV to an existing table on BigQuery using columns names from first row?
0.379949
1
0
3,297
45,155,616
2017-07-18T00:30:00.000
0
0
0
1
python,azure,azure-blob-storage,azure-functions
45,165,615
3
false
0
0
Storing secrets can (also) be done using App Settings. In Azure, go to your Azure Functions App Service, Then click "Application Settings". Then, scroll down to the "App Settings" list. This list consists of Key-Value pairs. Add your key, for example MY_CON_STR and the actual connection string as the value. Don't forget to click save at this point Now, in your application (your Function for this example), you can load the stored value using its key. For example, in python, you can use: os.environ['MY_CON_STR'] Note that since the setting isn't saved locally, you have to execute it from within Azure. Unfortunately, Azure Functions applications do not contain a web.config file.
1
2
0
I'm using a queue trigger to pass in some data about a job that I want to run with Azure Functions(I'm using python). Part of the data is the name of a file that I want to pull from blob storage. Because of this, declaring a file path/name in an input binding doesn't seem like the right direction, since the function won't have the file name until it gets the queue trigger. One approach I've tried is to use the azure-storage sdk, but I'm unsure of how to handle authentication from within the Azure Function. Is there another way to approach this?
Access Blob storage without binding?
0
0
0
1,135
45,156,080
2017-07-18T01:36:00.000
44
0
1
0
python-3.x,anaconda
45,167,575
5
true
0
0
There are several ways to achieve this, I'm describing one here, which should be relatively straight forward, even if your default python variable is not anaconda's. Check what is your desired anaconda environment (if you're not sure what does this mean, it probably means that you are using root, the default environment) Run: conda info --envs to see the path where your environment is installed Go to that path, and find the absolute path to python.exe, for example: "C:\Program Files\Anaconda3\python.exe" Now, run the following command: <absolute path to python.exe> -m pip install <path to tar.gz> for example: C:\Program Files\Anaconda3\python.exe -m pip install c:\mymodule\great.tar.gz Note that <path to tar.gz> can be relative, absolute and even an online link.
3
30
0
When I want to install modules to Anaconda, I run conda install. However, now I have a .tar.gz file and want to install this. How to do?
Installing modules to Anaconda from .tar.gz
1.2
0
0
80,104
45,156,080
2017-07-18T01:36:00.000
0
0
1
0
python-3.x,anaconda
52,236,101
5
false
0
0
If you are using Anaconda and downloaded the package from Anaconda Cloud, then you can place your "package.tar.bz2" files in the path shown in Anaconda prompt (Eg. C:\Users) and type in the following command in Anaconda Prompt conda install package.tar.bz2 I believe it will work for .tar.gz files too.
3
30
0
When I want to install modules to Anaconda, I run conda install. However, now I have a .tar.gz file and want to install this. How to do?
Installing modules to Anaconda from .tar.gz
0
0
0
80,104
45,156,080
2017-07-18T01:36:00.000
0
0
1
0
python-3.x,anaconda
66,228,093
5
false
0
0
Just a PSA: please don't use conda install <pkg.tar> when updating Python itself from a tar.bz2. This has the potential to break Anaconda.
3
30
0
When I want to install modules to Anaconda, I run conda install. However, now I have a .tar.gz file and want to install this. How to do?
Installing modules to Anaconda from .tar.gz
0
0
0
80,104
45,156,282
2017-07-18T02:00:00.000
1
0
1
0
python,multithreading,thread-safety,python-import,reload
45,157,091
2
true
0
0
Is Python reload thread safe? No. The reload() executes all the pure python code in the module. Any pure python step can thread-switch at any time. So, this definitely isn't safe.
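A minimal sketch of guarding it with a lock (the module and function names are hypothetical; importlib.reload is the Python 3 spelling):

    import importlib
    import threading

    import crawler1  # hypothetical crawler module

    _lock = threading.Lock()

    def refresh():
        global crawler1
        with _lock:
            crawler1 = importlib.reload(crawler1)

    def crawl(url):
        # Take the same lock so a reload cannot interleave with a call in progress.
        with _lock:
            return crawler1.run(url)  # hypothetical entry point in the module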
1
1
0
Using python, I am writing a nasty cralwer system that cralws something from the websites of each local government, and total websites count to over 100, just in case their webpage changes, I have to use reload to do hot-update. But I am wondering if reload is thread safe. because say, I am reloading moudle Cralwer1 in thread 1, but at the same time, thread 2 is using Cralwer1. Will thread 1's reload cause thread 2 to fail? If so, I have to do a lock or something, otherwise, I can happily do the reload without extra work. Can any one help me out?
Is Python reload thread safe?
1.2
0
0
781
45,156,570
2017-07-18T02:37:00.000
0
0
1
1
python,opencv3.0
45,159,452
1
false
0
0
Try using pip. You can create a pip-reqs.txt file and pin things to a specific version, then just run pip install -r pip-reqs.txt. pip will then take care of installing OpenCV for you for the Python version that is currently configured.
1
0
1
Python 3.5.2 is installed, and I need to ensure it doesn't upgrade to 3.6 due to some other dependencies. When I install OpenCV 3 via brew (see below), brew invokes python3 and upgrades to Python 3.6, the latest build: brew install opencv3 --with-python3 How can I install OpenCV 3 without changing my Python build?
Install OpenCV 3 Without Upgrading Python
0
0
0
150
45,156,592
2017-07-18T02:39:00.000
0
0
0
0
python,proxy,python-requests,urllib
45,156,637
1
false
0
0
Check if there is any proxy setting in chrome
1
0
0
I'm writing this application where the user can perform a web search to obtain some information from a particular website. Everything works well except when I'm connected to the Internet via Proxy (it's a corporate proxy). The thing is, it works sometimes. By sometimes I mean that if it stops working, all I have to do is to use any web browser (Chrome, IE, etc.) to surf the internet and then python's requests start working as before. The error I get is: OSError('Tunnel connection failed: 407 Proxy Authentication Required',) My guess is that some sort of credentials are validated and the proxy tunnel is up again. I tried with the proxies handlers but it remains the same. My doubts are: How do I know if the proxy need authentication, and if so, how to do it without hardcoding the username and password since this application will be used by others? Is there a way to use the Windows default proxy configuration so it will work like the browsers do? What do you think that happens when I surf the internet and then the python requests start working again? I tried with requests and urllib.request Any help is appreciated. Thank you!
Python URL Request under corporate proxy
0
0
1
1,617
45,156,934
2017-07-18T03:23:00.000
2
0
0
0
python-3.x,amazon-web-services,boto3,amazon-iam
57,577,221
4
false
0
0
Upon further testing, I've come up with the following which runs in Lambda. This function in python3.6 will email users if their IAM keys are 90 days or older. Pre-requisites all IAM users have an email tag with a proper email address as the value. Example; IAM user tag key: email IAM user tag value: [email protected] every email used, needs to be confirmed in SES import boto3, os, time, datetime, sys, json from datetime import date from botocore.exceptions import ClientError iam = boto3.client('iam') email_list = [] def lambda_handler(event, context): print("All IAM user emails that have AccessKeys 90 days or older") for userlist in iam.list_users()['Users']: userKeys = iam.list_access_keys(UserName=userlist['UserName']) for keyValue in userKeys['AccessKeyMetadata']: if keyValue['Status'] == 'Active': currentdate = date.today() active_days = currentdate - \ keyValue['CreateDate'].date() if active_days >= datetime.timedelta(days=90): userTags = iam.list_user_tags( UserName=keyValue['UserName']) email_tag = list(filter(lambda tag: tag['Key'] == 'email', userTags['Tags'])) if(len(email_tag) == 1): email = email_tag[0]['Value'] email_list.append(email) print(email) email_unique = list(set(email_list)) print(email_unique) RECIPIENTS = email_unique SENDER = "AWS SECURITY " AWS_REGION = os.environ['region'] SUBJECT = "IAM Access Key Rotation" BODY_TEXT = ("Your IAM Access Key need to be rotated in AWS Account: 123456789 as it is 3 months or older.\r\n" "Log into AWS and go to your IAM user to fix: https://console.aws.amazon.com/iam/home?#security_credential" ) BODY_HTML = """ AWS Security: IAM Access Key Rotation: Your IAM Access Key need to be rotated in AWS Account: 123456789 as it is 3 months or older. Log into AWS and go to your https://console.aws.amazon.com/iam/home?#security_credential to create a new set of keys. Ensure to disable / remove your previous key pair. """ CHARSET = "UTF-8" client = boto3.client('ses',region_name=AWS_REGION) try: response = client.send_email( Destination={ 'ToAddresses': RECIPIENTS, }, Message={ 'Body': { 'Html': { 'Charset': CHARSET, 'Data': BODY_HTML, }, 'Text': { 'Charset': CHARSET, 'Data': BODY_TEXT, }, }, 'Subject': { 'Charset': CHARSET, 'Data': SUBJECT, }, }, Source=SENDER, ) except ClientError as e: print(e.response['Error']['Message']) else: print("Email sent! Message ID:"), print(response['MessageId'])
1
2
0
I am trying to figure out a way to get a users access key age through an aws lambda function using Python 3.6 and Boto 3. My issue is that I can't seem to find the right api call to use if any exists for this purpose. The two closest that I can seem to find are list_access_keys which I can use to find the creation date of the key. And get_access_key_last_used which can give me the day the key was last used. However neither or others I can seem to find give simply the access key age like is shown in the AWS IAM console users view. Does a way exist to get simply the Access key age?
Getting access key age AWS Boto3
0.099668
0
1
5,998
45,157,059
2017-07-18T03:38:00.000
0
0
1
0
python
45,157,080
1
true
0
0
3.6.1. "RC" indicates a release candidate--something that is still officially along the lines of a beta, and not something that should be used for a production system. It still has to go through final testing before being declared the stable version.
1
0
0
I know 3.6.2 is already released, and in most cases it makes few differences. I'm just curious, in the stage that 3.6.2rc has just released, should I use 3.6.1 or 3.6.2rc if I want the latest stable version? In other words, which one is supposed to be more stable, i.e., has less bugs?
When an rc version is available, which python version is supposed to use, 3.6.1 or 3.6.2rc?
1.2
0
0
63
45,163,450
2017-07-18T10:00:00.000
0
0
0
0
java,android,python,testing,monkeyrunner
49,088,774
2
false
1
1
In addition to @ohbo's solution, copying AdbWinApi.dll, AdbWinUsbApi.dll into framework folder solved my problem.
1
2
0
i try to run my android test script by "monkeyrunner cameraTest.py" but it can't work, the cmd show me this SWT folder '..\framework\x86' does not exist. Please set ANDROID_SWT to point to the folder containing swt.jar for your platform. anyone know how to deal with this?thanks
SWT folder '..\framework\x86' does not exist. Please set ANDROID_SWT to point to the folder containing swt.jar for your platform
0
0
0
2,025
45,167,237
2017-07-18T12:46:00.000
5
0
0
0
python,scipy,sparse-matrix
49,570,892
1
true
0
0
This will depend a lot on the detail of the implementation of these different ways of exponentiating the matrix. In general terms, I would expect the eigen-decomposition (expm2) to be poorly suited to sparse matrices, because it is likely to remove the sparseness. It will also be more difficult to apply to non-symmetric matrices, because this will require the use of complex arithmetic and more expensive algorithms to compute the eigen-decomposition. For the Taylor-series approach (expm3), this sounds risky if there are a fixed number of terms independent of the norm of the matrix. When computing e^x for a scalar x, the largest terms in the Taylor series are around that for which n is close to x. However, the implementation details of these (deprecated) functions may use tricks like diagonally loading the matrix so as to improve the stability of these series expansion.
1
12
1
Matrix exponentiation can be performed in python using functions within the scipy.linalg library, namely expm, expm2, expm3. expm makes use of a Pade approximation; expm2 uses the eigenvalue decomposition approach and expm3 makes use of a Taylor series with a default number of terms of 20. In SciPy 0.13.0 release notes it is stated that: The matrix exponential functions scipy.linalg.expm2 and scipy.linalg.expm3 are deprecated. All users should use the numerically more robust scipy.linalg.expm function instead. Although expm2 and expm3 are deprecated since release version SciPy 0.13.0, I have found that in many situations these implementations are faster than expm. From this, some questions arise: In what situations could expm2 and expm3 result in numerical instabilities? In what situations (e.g. sparse matrices, symmetric, ...) is each of the algorithms faster/more precise?
Matrix exponentiation with scipy: expm, expm2 and expm3
1.2
0
0
1,827
45,168,222
2017-07-18T13:29:00.000
0
1
0
1
python,linux,vim
45,168,297
3
false
0
0
By default, Ctrl-P in insert mode does keyword completion using all the words already present in the file you're editing.
1
4
0
I am coding Python3 in Vim and would like to enable autocompletion. I must use different computers without internet access. Every computer is running Linux with Vim preinstalled. I dont want to have something to install, I just want the simplest way to enable python3 completion (even if it is not the best completion), just something easy to enable from scratch on a new Linux computer. Many thanks
Enable python autocompletion in Vim without any install
0
0
0
2,519
45,168,408
2017-07-18T13:36:00.000
12
0
1
1
python,pypi
45,175,607
1
true
0
0
python setup.py sdist creates a .tar.gz source archive and doesn't create eggs. That's probably what you want instead of python setup.py install.
1
5
0
I have a project for which I want to create a tar.gz with python setup.py install. Problem is, I only get egg files in my dist/ folder when running python setup.py install. I need the project.tar.gz file so that I can easily make it installable from conda. How do I make python setup.py install create a tar.gz (I do not need any egg files, really). What I ultimately want is a tar.gz archive showing on pypi with a download link and md5, which I used to get before the PYPI update.
Creating tar.gz in dist folder with python setup.py install
1.2
0
0
6,897
45,168,747
2017-07-18T13:49:00.000
1
0
0
0
python,statistics,deep-learning
45,189,807
1
true
0
0
The mean value of the dataset is the mean value of the pixels of all the images across all the colour channels (e.g. RBG). Grey scale images will have just one mean value and colour images like ImageNet will have 3 mean values. Usually mean is calculated on the training set and the same mean is used to normalize both training and test images.
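For illustration, a minimal NumPy sketch with random stand-in data shaped (num_images, height, width, 3):

    import numpy as np

    train_images = np.random.randint(0, 256, size=(1000, 32, 32, 3)).astype(np.float64)

    # One mean per colour channel, computed over all training pixels.
    channel_mean = train_images.mean(axis=(0, 1, 2))

    # The same training-set mean is subtracted from training and test images alike.
    train_centered = train_images - channel_mean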
1
2
1
In deep learning experiments,there is a consensus that mean subtraction from the data set could improve the accuracy.For example,the mean value of ImageNet is [104.0 117.0 124.0],so before feeding the network,the mean value will be subtracted from the image. My question is How the mean value is calculated? Should I calculate the mean value on training and testing data set separately?
Calculate mean value of an image dataset
1.2
0
0
2,860
45,169,563
2017-07-18T14:24:00.000
0
1
0
0
python,unit-testing,testing,tdd
45,198,217
3
false
1
0
Agree with the previous answer. Yes, you can start using TDD at any point in development. The 'red' tests that are expected to fail at the beginning simply confirm the missing functionality, so there are no surprises there. When you start applying TDD to a big system with a lot of existing code, first check whether some functionality already works by writing a test (or a group of tests) for it; if it passes, carry on until the tests start exposing the limits of the code. I would add that if you start now and decide to cover not only the new features but the old ones as well (as mentioned above), you will most likely spend more time on the old features. Strictly speaking that is no longer pure TDD, since some tests are written after the code, but if it suits the project it is also an option (and could definitely be worth it). It is still better to test something now than at the beta-testing stage, if only because it will be easier to track down the cause when something goes wrong with your old features. The later you test, the harder and more time-consuming it becomes to solve the issues. And if people write tests long after the code has been written, it turns into double work, because you have to re-think how the code works. That is why in TDD writing the tests goes almost hand in hand with writing the code: at that point it is much easier to remember what you were coding.
2
3
0
I am halfway in the development of a large Flask web application. Back in the beginning I chose not to use test-driven-development, for learning curve vs project deadline reasons. Thus these last weeks I had the opportunity to learn more about it, and I am quite exited about starting to use it. However, regarding I didn't wrap my project with TDD from the start, is there any cons of starting to use it now ? I would not refactor my entire app, only the new features I am going to design in the near future.
Starting TDD halfway a project, good practice?
0
0
0
140
45,169,563
2017-07-18T14:24:00.000
3
1
0
0
python,unit-testing,testing,tdd
45,183,671
3
false
1
0
TDD "done right" is always good practice - no matter at which point in time you start using it. But in order to avoid getting to a situation where one part of the code base is nicely unit-tested and others are not: look for ways to not only cover new features. In other words: if time allows don't only test those parts of a file/class that a new feature will go into - but instead try to get that whole unit into "tests". The core aspect of unit tests is that they allow you to make changes to your code base without breaking things. Unit tests enable you to go with higher speed. Thus look for ways to ensure that you can go with "high speed" for the majority of your code base.
2
3
0
I am halfway in the development of a large Flask web application. Back in the beginning I chose not to use test-driven-development, for learning curve vs project deadline reasons. Thus these last weeks I had the opportunity to learn more about it, and I am quite exited about starting to use it. However, regarding I didn't wrap my project with TDD from the start, is there any cons of starting to use it now ? I would not refactor my entire app, only the new features I am going to design in the near future.
Starting TDD halfway a project, good practice?
0.197375
0
0
140
45,170,586
2017-07-18T15:05:00.000
1
0
0
0
python,machine-learning,regression
45,170,697
1
false
0
0
No, you shouldn't do it. If your model always predicts 1.5 times the actual values, that means your model is just not performing well and the data cannot be fitted linearly. To prevent this, you should look at other models that are able to capture the structure of your data, or you might have outliers whose removal would help the linear regression model.
1
0
1
I have a question about regression model in machine learning and I am wondering if my way is correct or not. I have built my regression model and already trained it with my data, but my model always predict 1.5 times more than actual values. I understood that this is my model's habit, consider as it is predict alway 1.5 times. After considering as it is, I divided predicted value by 1.5 times. Let's say, my model predict 100 in some case, and I calculated 100/1.5 and get approximately 66.6 in a result. Actually 66.6 is not predicted value and I manipulated it. Is this manipulation acceptable for regression? Can I supply this 66.6 to my customer?
Way to predict with regression model
0.197375
0
0
64
45,170,589
2017-07-18T15:05:00.000
1
0
1
0
python,gensim,word2vec
45,172,785
1
false
0
0
The window is trimmed to the edges of the current text example. So, the first word of a text only gets its context words from subsequent words in the same text. (No words are retained from previous examples.) Similarly, the last word in a text only gets its context words from previous words in the same text. (No words are pulled in from the next text example.) Each text example (aka sentence) stands alone.
1
0
1
When training, what will word2vec do to cope with the words at the end of a sentence . Will it use the exact words at the beginning of another sentence as the context words of the center words which is at the end of last sentence.
How word2vec deal with the end of a sentence
0.197375
0
0
438
45,173,476
2017-07-18T17:29:00.000
1
0
0
0
python,cluster-analysis
45,178,600
1
true
0
0
You can use the silhouette score with DTW as the distance function. But don't forget that this is just a heuristic: a different k can still be better for your particular use case.
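A hedged sketch of that idea: compute a pairwise DTW distance matrix (a tiny reference DTW here, not an optimised one) and feed it to silhouette_score with metric='precomputed'; the series and labels below are stand-ins for the real clustering output:

    import numpy as np
    from sklearn.metrics import silhouette_score

    def dtw(a, b):
        # Plain O(len(a)*len(b)) dynamic-programming DTW distance.
        cost = np.full((len(a) + 1, len(b) + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[-1, -1]

    series = [np.sin(np.linspace(0, 6, 50)) + 0.1 * np.random.randn(50) for _ in range(20)]
    labels = np.array([0, 1] * 10)  # stand-in cluster assignments from k-means

    dist = np.array([[dtw(s, t) for t in series] for s in series])
    print(silhouette_score(dist, labels, metric='precomputed'))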
1
1
1
I am using dynamic time warping (DTW) as a similarity metric to cluster ~3500 time series using the k-means algorithm in Python. I am looking for a similar metric to the popular silhouette score used in sklearn.metrics.silhouette_score but relevant to DTW. Wondering if anyone can provide any help?
Metric for appropriate number of DTW clusters
1.2
0
0
240
45,177,528
2017-07-18T21:36:00.000
0
0
1
0
python,parsing
45,177,569
1
false
0
0
If you're looking to maintain backwards compatibility, you'll want clear modules and clear functions. Don't have one section of code trying to do too much. The spec shouldn't change too much, so you could just use if-else statements within your functions. Make sure the actual behavior of the functions doesn't evolve, though.
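A minimal sketch of keeping the versions explicit while reusing helpers (the field names and version numbers are invented for illustration):

    # One small parser per schema version, registered in a dispatch table.
    def parse_v1(record):
        return {'id': record['id'], 'time': record['ts']}

    def parse_v2(record):
        # v2 renamed 'ts' to 'timestamp'; everything else matches v1.
        return {'id': record['id'], 'time': record['timestamp']}

    PARSERS = {1: parse_v1, 2: parse_v2}

    def parse(record, schema_version):
        return PARSERS[schema_version](record)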
1
0
0
I plan to write a parser that parses input data, selects and modifies a subset of fields and output them into a different format. That's the easy part. The schema of the input data might change in future and I want my parser to be able to handle last n input schemas for backward compatibility. Hopefully, the output schema doesn't have to change but if it does, I'd like it to keep it to the minimum. My question is - how should I organize the parser code to handle such incremental change to the input schema while reusing code as much as possible. I'd also like to keep it simple for a new guy to come in and easily add support for the next version. If it matters, the input data has records with types and sub-types (so modular parsing possible). The programming language will be python (so reflection possible). Input format is message pack and output format is json. There are few options on my mind. Open to whatever suggestions - Have completely different versions of parser and maintain a mapping of input schema to parser version. Copy paste code as needed. Have a single parser with switch case on input schema version, within the code as needed. Have inheritance based structure where the new version of parser inherits from the older version of parser, overrides whatever functions necessary.
How to design a parser for backward compatibility?
0
0
0
97
45,177,975
2017-07-18T22:12:00.000
1
0
1
0
python,audio,pyaudio,channels
46,686,683
1
false
0
0
My solution is not very elegant, but it does work. Open separate streams with the appropriate input_device_index for each. stream1 = audio.open(input_device_index = 1 ...) stream2 = audio.open(input_device_index = 2 ...)
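A rough sketch of that setup (the device indices, sample rate and frame size are assumptions about the local hardware):

    import pyaudio

    audio = pyaudio.PyAudio()
    common = dict(format=pyaudio.paInt16, channels=1, rate=44100,
                  input=True, frames_per_buffer=1024)

    stream1 = audio.open(input_device_index=1, **common)
    stream2 = audio.open(input_device_index=2, **common)

    chunk1 = stream1.read(1024)  # read and run silence detection per stream
    chunk2 = stream2.read(1024)
    # Each stream can then be stopped and closed independently once its channel goes silent.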
1
1
0
I need to open a multi-channel audio file (two or more microphones) and record the audio of each of them on a different file. With PyAudio I know how to open a multi-channel file (open method) and stop when 1.5 seconds of silence are recorded, but eventually I end up with a single (multi-channel) file. I would like to work live on each of input channels separately: record them on a separate file when a pause is detected. For instance if channel 1 has a silence after 5 seconds I stop its recording on a file, while I keep on recording channel 2 until a silence on that channel is detected as well (e.g., after 10 seconds). Could anyone tell me if this is possible with PyAudio, or point me to the right (Python) library if not?
Read different streams separately
0.197375
0
0
476
45,178,197
2017-07-18T22:33:00.000
0
0
1
0
python-2.7,opencv,anaconda
45,392,458
1
true
0
0
Anaconda is maintained by Continuum so it seems like they have not had a chance to update to the newer version of OpenCV. I will try to see if I can bring it to their attention.
1
0
1
I am trying to download the latest version of OpenCV using anaconda, but Anaconda only has version 3.1.0. I ended up installing it with pip, but can someone explain why anaconda does not have 3.2.0 version of OpenCV. Also, I am using Python 2.7. Thanks
Anaconda and OpenCV using old version
1.2
0
0
355
45,181,769
2017-07-19T05:38:00.000
0
0
0
0
python,database,pandas,csv
45,181,917
2
false
0
0
For me, I usually merge the files into a DataFrame and save it as a pickle, but if you merge them the result will be pretty big and use up a lot of RAM when you load it; still, it is the fastest way if your machine has a lot of RAM. Storing the data in a database is better in the long term, but you will spend time uploading the CSVs to the database and then even more time retrieving them. In my experience you use the database when you want to query specific things from the table, such as a log from date A to date B; if you are going to pull everything back into pandas anyway, that method is not very good. Sometimes, depending on your use case, you might not even need to merge everything: use the filename as a way to select the right logs to process (via the filesystem), then merge only the log files your analysis is concerned with, and save that as a pickle for further processing in the future.
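A short sketch of the merge-then-pickle route (the glob pattern is an assumption about where the logs live):

    import glob
    import pandas as pd

    frames = []
    for path in glob.glob('logs/*.csv'):
        df = pd.read_csv(path)
        df['source_file'] = path  # keep the filename so you can still filter per log
        frames.append(df)

    logs = pd.concat(frames, ignore_index=True)
    logs.to_pickle('all_logs.pkl')  # reload later with pd.read_pickle('all_logs.pkl')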
1
1
1
I have nearly 60-70 timing log files(all are .csv files, with a total size of nearly 100MB). I need to analyse these files at a single go. Till now, I've tried the following methods : Merged all these files into a single file and stored it in a DataFrame (Pandas Python) and analysed them. Stored all the csv files in a database table and analysed them. My doubt is, which of these two methods is better? Or is there any other way to process and analyse these files? Thanks.
How to analyse multiple csv files very efficiently?
0
0
0
160
45,182,755
2017-07-19T06:42:00.000
1
0
0
0
python,flask
45,182,963
2
false
1
0
Instead of trying to create a standalone executable for a flask app in windows, you should use docker in order to containerize/package your flask app (if it must be packaged). Docker will give you something that is system agnostic, is well understood to work, and is in line with best practices.
1
2
0
There are a bunch of tools out there like pyInstaller, py2exe, etc. But none of them seems to work while making a Windows standalone executable for a Flask application. Can someone guide me with proper instructions as to how one can create executable for a Flask application? I want to distribute the application as single clickable exe for end users. To be specific, the application has following dependencies. - Flask - Sqlalchemy - Requests - Httplib2 - Database used is SQLite I don't want to open the code as of now, but if someone has a good solution, I can send the project repo privately for testing purposes.
How to make windows standalone exe for python flask/jinja/sqlite application
0.099668
0
0
2,222
45,184,482
2017-07-19T08:05:00.000
0
0
1
1
json,python-2.7,google-app-engine,google-cloud-datastore
45,204,510
1
false
1
0
I don't know what your exact searching needs are, but the datastore API allows for decent querying, provided you give the datastore the correct indexes. Plus it's very easy to take the entities in the datastore and pull them back out as .json files.
1
1
0
I need to store json objects on the google cloud platform. I have considered a number of options: Store them in a bucket as a text (.json) file. Store them as text in datastore using json.dumps(obj). Unpack it into a hierarchy of objects in datastore. Option 1: Rejected because it has no organising principles other than the filename and cannot be searched across. Option 2: Is easy to implement, but you cannot search using dql. Option 3: Got it to work after a lot of wrangling with the key and parent key structures. While it is searchable, the resulting objects have been split up and held together by the parent key relationships. It is really ugly! Is there any way to store and search across a deeply structured json object on the google cloud platform - other than to set up mongodb in a compute instance?
Storing json objects in google datastore
0
1
0
1,421
45,184,741
2017-07-19T08:17:00.000
2
0
0
0
python,deep-learning,pytorch
45,187,882
2
false
0
0
Seems to me like your sigmoids are saturating the activation maps. The images are not properly normalised, or some batch normalisation layers are missing. If you have an implementation that works with other images, check the image loader and make sure it does not saturate the pixel values. This usually happens with 16-bit channels. Can you share some of the input images? PS: Sorry for commenting in the answer; this is a new account and I am not allowed to comment yet.
1
1
1
I'm implementing a UNet for binary segmentation while using Sigmoid and BCELoss. The problem is that after several iterations the network tries to predict very small values per pixel while for some regions it should predict values close to one (for ground truth mask region). Does it give any intuition about the wrong behavior? Besides, there exist NLLLoss2d which is used for pixel-wise loss. Currently, I'm simply ignoring this and I'm using MSELoss() directly. Should I use NLLLoss2d with Sigmoid activation layer? Thanks
BCELoss for binary pixel-wise segmentation pytorch
0.197375
0
0
4,348
45,184,791
2017-07-19T08:19:00.000
0
0
1
0
python,gps,utc,gps-time
53,649,771
2
false
0
0
Use the gpsd service daemon and then read gpsd.utc from the client code.
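If gpsd is not available, a standard-library-only sketch that pulls the time (field 1) and date (field 9) straight out of the $GPRMC sentence:

    from datetime import datetime

    sentence = "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"
    fields = sentence.split(',')
    hhmmss, ddmmyy = fields[1], fields[9]

    utc = datetime.strptime(ddmmyy + hhmmss, "%d%m%y%H%M%S")
    print(utc.strftime("%Y-%m-%d %H:%M:%S"))  # 1994-03-23 12:35:19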
1
0
0
I have been researching everywhere for a solution to this problem. Any help would be much appreciated. I would like to take the time from the GPS data and need to convert that to a human readable format, may be like YYYY-MM-DD HH:MM:SS. Following is the GPS dataformat: $GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A Here, 123519 is fix taken at 12:35:19 UTC and 230394 is the date - 23rd of March 1994 Thanks.
Python - Convertion of GPS time to UTC
0
0
0
1,646
45,186,425
2017-07-19T09:29:00.000
1
0
1
0
compiler-construction,python-import,abstract-syntax-tree,interpreter
45,187,365
2
false
0
0
The solution does depend on whether the includes are signatures/headers for functions (like C) or source bodies (like Java). Includes as headers or signatures: lex and check for any include statements; load each include, create a symbol table for it and put it on a stack (subjective implementation warning) - there is no need to build a full AST for the includes, as only the symbols matter for resolving references; then, during parsing and AST emission, use the stack of symbol tables for lookups. The assumption is that whatever makes up the include has already been parsed, verified and had code emitted from its source. Includes as source: when you include source you are extending your compilation unit, and the imported source is considered in scope. This entails: lex the base source; identify, load and lex the included source, prepending its tokens to your token list unless your language semantics specify a different behaviour; then proceed to generate your one (1) AST. It will just be bigger.
2
0
0
I am implementing an interpreter as hobby project nowadays using tutorials. Basically, I lex, parse code and produce an abstract syntax tree and evaluate it. From one source code file, I generate 1 tree. Now I want to add a functionality similar to Python's import or C's #include statement. What are my approaches? How other languages overcome this problem? Are they creating two separate trees and combine them or just copy a file's content to the actual file and create one big tree? Can you give me any tutorial or paper that explains the possible approaches?
What are the approaches to add an import or include support to a programming language?
0.099668
0
0
49
45,186,425
2017-07-19T09:29:00.000
1
0
1
0
compiler-construction,python-import,abstract-syntax-tree,interpreter
45,221,571
2
false
0
0
There are at least two general approaches compilers take to the "include" problem. One is to literally include the text of the file; that is the approach C and enhanced FORTRAN 77 compilers used. The other is to compile the "include" separately and just include the definitions; that is the approach Ada takes. Some compilers (e.g. some Pascals) support both methods.
2
0
0
I am implementing an interpreter as hobby project nowadays using tutorials. Basically, I lex, parse code and produce an abstract syntax tree and evaluate it. From one source code file, I generate 1 tree. Now I want to add a functionality similar to Python's import or C's #include statement. What are my approaches? How other languages overcome this problem? Are they creating two separate trees and combine them or just copy a file's content to the actual file and create one big tree? Can you give me any tutorial or paper that explains the possible approaches?
What are the approaches to add an import or include support to a programming language?
0.099668
0
0
49
45,188,890
2017-07-19T11:13:00.000
3
0
0
0
python,loops,numpy,matrix
45,188,959
1
true
0
0
a = np.arange(1000).reshape(100, 10) should do it.
1
0
1
Good morning to all. I want to generate an array for example with 10 columns and 100 rows with the vector a = np.arange (1,1001), but I do not want to use loop since my web page gets saturated if I put a loop. Someone knows of some numpy or math command or another. Thank you very much for your attention.
How do I generate a matrix with x dimension and a vector and without using loops? Python
1.2
0
0
86
45,190,226
2017-07-19T12:12:00.000
0
0
0
0
python,eclipse,user-interface,pydev
45,400,633
1
false
0
1
It seems to be an issue in PyDev itself when dealing with wx... maybe wx broke that API. You can try just commenting out that line and checking if other things work -- as that code in the debugger is pure python, you can tweak it in place ;)
1
0
0
I am trying to configure Pydev in Eclipse. Even PYTHONPATH with python.exe file and downloaded wxpython package as well to interact with GUI environment But, getting an error when i am starting python console in the "console" environment in Eclipse Am wandering about "Thread_IsMain" error AttributeError: module 'wx' has no attribute 'Thread_IsMain' Traceback (most recent call last): File "D:\Software\eclipse\plugins\org.python.pydev_5.8.0.201706061859\pysrc\pydevconsole.py", line 194, in process_exec_queue inputhook() File "D:\Software\eclipse\plugins\org.python.pydev_5.8.0.201706061859\pysrc\pydev_ipython\inputhookwx.py", line 117, in inputhook_wx3 assert wx.Thread_IsMain() # @UndefinedVariable
Error While Configuring pydev in eclipse
0
0
0
92
45,190,558
2017-07-19T12:25:00.000
0
0
1
0
python,numpy,cython,numpy-random
45,191,351
2
false
0
0
Found an answer thanks to kazemakase: _rand is accessible directly, I'd just need to import mtrand. But __self__ may be more future proof, if syntax doesn't change.
1
3
1
Is there are more direct way to access the RandomState object created on import other than np.random.<some function>.__self__? Both np.random._rand and getattr(np.random, "_rand") raise AttributeError. The former works fine but doesn't seem very transparent/Pythonic, though the most transparent might just be creating a separate RandomState object. The purpose is passing the interal_state variable to a cython function that calls randomkit functions directly.
Direct way to access Numpy RandomState object
0
0
0
238