Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
45,190,707 | 2017-07-19T12:31:00.000 | 2 | 0 | 0 | 0 | python,deep-learning,keras | 45,191,429 | 3 | false | 0 | 0 | Given just this information it is hard to tell what might be the underlying problem. In general, the machine learning engineer is always working with a direct trade-off between overfitting and model complexity. If the model isn't complex enough, it may not be powerful enough to capture all of the useful information necessary to solve a problem. However, if our model is very complex (especially if we have a limited amount of data at our disposal), we run the risk of overfitting. Deep learning takes the approach of solving very complex problems with complex models and taking additional countermeasures to prevent overfitting.
Three of the most common ways to do that are (a brief sketch follows this list):
Regularization
Dropout
Data augmentation
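For instance, a minimal Keras-flavoured sketch of those three countermeasures (the layer sizes, the 36-class output and the augmentation settings are illustrative assumptions, not code from this thread):
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.regularizers import l2
from keras.preprocessing.image import ImageDataGenerator
model = Sequential()
model.add(Dense(128, activation='relu', kernel_regularizer=l2(0.01), input_shape=(2048,)))  # weight regularization
model.add(Dropout(0.5))                                                                     # dropout
model.add(Dense(36, activation='softmax'))
augmenter = ImageDataGenerator(rotation_range=20, horizontal_flip=True)                     # data augmentation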
If your model is not complex enough:
Make it bigger (easy)
Make it smarter (hard) | 3 | 0 | 1 | I am trying to train InceptionV3 network with my custom dataset (36 classes, 130 samples per each). And some parameters for my network: | Keras deep learning why validation accuracy stuck in a value every time? | 0.132549 | 0 | 0 | 3,006 |
45,190,707 | 2017-07-19T12:31:00.000 | 0 | 0 | 0 | 0 | python,deep-learning,keras | 45,191,017 | 3 | false | 0 | 0 | Be more specific on the example, post the code you used to build the Sequential Model.
At the moment I can say that your problem could be in the initial dataset.
You have 130 samples for 36 classes; that means about 3.6 examples per class? | 3 | 0 | 1 | I am trying to train InceptionV3 network with my custom dataset (36 classes, 130 samples per each). And some parameters for my network: | Keras deep learning why validation accuracy stuck in a value every time? | 0 | 0 | 0 | 3,006
45,190,707 | 2017-07-19T12:31:00.000 | 1 | 0 | 0 | 0 | python,deep-learning,keras | 45,190,981 | 3 | false | 0 | 0 | It could mean that the model has learned everything possible and can't improve further.
One of the possible ways to improve accuracy is to get new data. You have ~4 samples per class, which is rather low. Try to get more samples or use data augmentation techniques. | 3 | 0 | 1 | I am trying to train InceptionV3 network with my custom dataset (36 classes, 130 samples per each). And some parameters for my network: | Keras deep learning why validation accuracy stuck in a value every time? | 0.066568 | 0 | 0 | 3,006
45,191,428 | 2017-07-19T13:00:00.000 | 3 | 0 | 1 | 0 | python,python-2.7,python-3.x,numpy | 45,191,580 | 1 | true | 0 | 0 | Yes, you need to install them with pip3 as well, since Python 3.4 bundles pip alongside Python. | 1 | 2 | 1 | I have brew installed python2 on OS X, then installed numpy and scipy using pip install.
I also need python3, so I brew installed python3, but when I import numpy under python3 an import error occurs.
I know I can fix this by installing numpy with pip3 install numpy, but do I have to do this? Since I already have the package installed for python2, can I just tell python3 where it is and then use it? | Do I have to install numpy, scipy again for python3? | 1.2 | 0 | 0 | 494
45,193,502 | 2017-07-19T14:24:00.000 | 1 | 0 | 0 | 0 | python,pandas,data-science | 45,193,581 | 1 | true | 0 | 0 | I don't think you should use customerID as a variable. It is a unique value for each customer, so it can be used as an index, to know which customer a prediction belongs to.
So you'd better drop this column from the training/test data. | 1 | 0 | 1 | I am trying to do Segmentation in Customer Data in Python using Pandas. I have a customer ID variable in my dataset. I am confused over here, even though it won't be considered as a variable that affects the Output variable. How do we actually treat this variable if needed, a Categorical or a Numerical?
Also, Is there a business case that you could think of where the customerID will be considered? | Unique Key - CustomerID, A Categorical or a Numerical Variable? | 1.2 | 0 | 0 | 655 |
45,194,182 | 2017-07-19T14:52:00.000 | 0 | 0 | 0 | 0 | python,django,django-1.11 | 71,277,812 | 5 | false | 1 | 0 | Also, if you have a service like gunicorn between your localhost and your nginx or apache2 server, remember to restart it too.
sudo systemctl restart gunicorn | 1 | 14 | 0 | I tried to launch a Django 1.11 project on production server. When I start the app I see the following error:
Invalid HTTP_HOST header: 'bla-bla-bla.bla-bla-vla.com'. You may need to add u'bla-bla-bla.bla-bla-vla.com' to ALLOWED_HOSTS
But, host "bla-bla-bla.bla-bla-vla.com" has been added to ALLOWED_HOSTS in settings.py already!
I tried to switch DEBUG from False to True and back. It works fine, then.
What am I doing wrong? | ALLOWED_HOSTS and Django | 0 | 0 | 0 | 57,463 |
45,197,710 | 2017-07-19T17:51:00.000 | -1 | 0 | 1 | 0 | python,django,installation | 46,387,180 | 1 | false | 1 | 0 | I've had the same problem as you.
If you have installed django with the easy_install method in PowerShell and django is installed in C:\Program Files\Python36\Lib\site-packages\django-1.11.5-py3.6.egg,
then I've solved the issue by adding the line below between import sys and import django in manage.py:
sys.path.append("C:\Program Files\Python36\Lib\site-packages\django-1.11.5-py3.6.egg")
or else try executing the following command :
python -m pip install django | 1 | 0 | 0 | I've been learning the basics of python (first language for me) over the last few months.
I now want to try doing something practical and get into using Django. I'm finding the setup process extremely difficult (thank god for youtube tutorials).
I've installed python, pip, django and virtualenv. EDIT: Have double checked and these are all installed.
I activated my first project:
PS C:\Users---\Desktop\first_project> virtualenv first_project
Using base prefix 'c:\users\---\anaconda3'
As soon as I try to run the server:
python manage.py runserver
I get the ImportError - "ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?".
I'm using Windows 10..any idea what the problem might be?
Thanks in advance. | Setting up Django first_project - "ImportError - Did you forget to activate a virtual environment?" | -0.197375 | 0 | 0 | 2,725 |
45,197,851 | 2017-07-19T17:59:00.000 | 0 | 0 | 0 | 0 | python,sql-server,pandas,sqlalchemy | 45,197,852 | 1 | false | 0 | 0 | I had to work around the datetime column from my SQL query itself just so SQLAlchemy/Pandas can stop reading it as a NaN value.
In my SQL query, I used CONVERT() to convert the datetime column to a string. This was read with no issue, and then I used pandas.to_datetime() to convert it back into datetime.
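A minimal sketch of that workaround (the table name my_table and the SQLAlchemy engine variable are assumptions; style 121 is used here so the milliseconds are kept):
import pandas as pd
df = pd.read_sql("SELECT CONVERT(varchar(23), occurtime, 121) AS occurtime FROM my_table", engine)
df['occurtime'] = pd.to_datetime(df['occurtime'])  # back to a real datetime column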
Anyone else with a better solution or know what's really going on, please share your answer, I'd really appreciate it!!! | 1 | 0 | 1 | I encountered the following irregularities and wanted to share my solution.
I'm reading a sql table from Microsoft SQL Server in Python using Pandas and SQLALCHEMY. There is a column called "occurtime" with the following format: "2017-01-01 01:01:11.000". Using SQLAlchemy to read the "occurtime" column, everything was returned as NaN. I tried to set the parse_date parameter in the pandas.read_sql() method but with no success.
Is anyone else encountering issue reading a datetime column from a SQL table using SQLAlchemy/Pandas? | Pandas read_sql | 0 | 1 | 0 | 752 |
45,197,995 | 2017-07-19T18:07:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 45,219,444 | 1 | false | 0 | 0 | You need to build the tool inside the tensorflow directory, not the model directory, and then run the tool on the model. | 1 | 0 | 1 | I have frozen the tensorflow graph and I have noticed that using the graph in real time for prediction is too slow, so I would like to do graph optimization.
Within my project file, I have got a folder called model which contains the model file (pb file).
Now I tried running the following command inside the model dir
bazel build tensorflow/python/tools:strip_unused
But it is firing the following error,
bazel build tensorflow/python/tools:strip_unused
ERROR: no such package 'tensorflow/python/tools': BUILD file not found on package path. | Optimizing the frozen graph | 0 | 0 | 0 | 170 |
45,198,564 | 2017-07-19T18:37:00.000 | 1 | 0 | 1 | 0 | python,pandas,dataframe | 45,198,690 | 5 | false | 0 | 0 | If you have a string you can always just choose parts of it by writing:
foo = 'abcdefg'
foo2 = foo[2:4]
print foo2
then the output would be:
cd | 1 | 1 | 1 | I have a column in my dataframe (call it 'FY') which has financial year values in the format: 2015/2016 or 2016/2017.
I want to convert the whole column so it says 15/16 or 16/17 etc instead.
I presume you somehow only take the 3rd, 4th and 5th character from the string, as well as the 8th and 9th, but haven't got a clue how to do it.
Could anyone help me? Thank you. | Python Pandas - Dataframe column - Convert FY in format '2015/2016' to '15/16' | 0.039979 | 0 | 0 | 138 |
45,198,591 | 2017-07-19T18:39:00.000 | 4 | 0 | 1 | 0 | python,python-3.x | 45,198,626 | 1 | true | 0 | 0 | The * is used to unpack argument lists when calling a function. In this case it unpacks your list of names. | 1 | 3 | 0 | I have a list of names and I want to print each element of the list in a different line without a for loop. So, after some research I found this example: print(*names, sep='\n'), witch results in exactly what I want. But what does this * character before the list name means? | * before iterable inside a print() in Python | 1.2 | 0 | 0 | 180 |
45,198,716 | 2017-07-19T18:46:00.000 | 1 | 0 | 0 | 0 | python,django,django-queryset | 45,198,887 | 1 | true | 1 | 0 | I had it wrapped in the Q() parentheses. Duplicating the query and storing it in the variable list(lesson_ids) fixed it. | 1 | 1 | 0 | After doing a query in a Django script I wrote, I was given this Q object.
<Q: (AND: ('lesson_object_id__in', [322, 327, 328, 329, 330, 332, 1120, 1176]))>
I want to break down that Q object so I only have the list [322, 327, 328, 329, 330, 332, 1120, 1176]. How would I go by doing that? Thanks | Break Down Django Query object (Q) | 1.2 | 0 | 0 | 110 |
45,198,936 | 2017-07-19T18:58:00.000 | 5 | 0 | 1 | 0 | python,string,python-3.x | 45,198,998 | 1 | false | 0 | 0 | CPython implements string slicing by making a new string object containing the extracted characters. That takes time proportional to the number of characters copied, so takes time proportional to j-i (the number of characters copied). | 1 | 1 | 0 | Given a string s of length n, the slicing operation s[i : j] in Python 3, where
(0 <=i <= j <= n), takes how much time in Big-O notation?
Is it O(n) or O(1) or something else?
Edit
Also is there any implementation difference in slicing of a list and a string in python 3? | Time Complexity on string slicing operation in python 3 | 0.761594 | 0 | 0 | 3,964 |
45,200,604 | 2017-07-19T20:40:00.000 | 0 | 1 | 0 | 0 | python,python-2.7,psychopy | 45,201,798 | 1 | false | 0 | 0 | Windows 10 enterprise can sometimes limit cpu bandwidth to certain programs at times. | 1 | 0 | 0 | I'm afraid this issue might be too idiosyncratic but maybe someone can point me in the right direction via questions or hypotheses.
I'm using psychoPy 1.84.2 which uses python 2.7. The computers I am hoping to use for my study are dells running windows 10 enterprise.
The issue is that ~80% of the time mouse clicks are completely missed and keyboard presses take 3-4 seconds to have an effect in my program. 20% of the time, there is no issue. While this is happening, I can still see the mouse cursor moving and I can open up programs and folders on the desktop and click around the psychoPy GUI without any lag. Additionally, videos played in the python window run fine.
These same files run perfectly well on two different macs but fail on the 10 dell desktops I was hoping to use for my study. Lastly, I did try using psychoPy 1.85.1 but had the same issues (plus a couple more).
Thanks! | mouse and keyboard presses are missed or slow in python window | 0 | 0 | 0 | 312 |
45,205,162 | 2017-07-20T04:25:00.000 | 0 | 0 | 0 | 0 | python,database,excel,filemaker,data-extraction | 45,215,938 | 2 | false | 0 | 0 | You can also save records as a spreadsheet for use in Microsoft Excel. For more information, see Saving and sending records as an Excel file in the FileMaker Help file. Use export when you want to export records in the current found set or export in a format other than an Excel spreadsheet. Use Save as Excel when you want to create an Excel spreadsheet that contains all the fields you have access to on the current layout.
If your FileMaker Pro source file contains summary fields, you can group by a sorted field in order to export subsummary values, such as subtotals generated by a report with grouped data. This process exports one record for each group. For example, if you have a report that totals sales by region, you can export one summary value for each region. | 1 | 1 | 0 | As a user of the database, are there any quicker ways of exporting data from Filemaker using languages like python or java? Perhaps to an Excel.
My job involves exporting selected data constantly from our company's Filemaker database. However, the software is super slow, and the design of our app is bad which makes selecting which data to export a pain. (I have to manually select data one by one by opening the full record of each data. There's no batch export function.)
Please provide me with alternative methods. I feel very stupid in doing this. | Any quick way to export data from Filemaker? | 0 | 1 | 0 | 1,900 |
45,205,559 | 2017-07-20T05:02:00.000 | 0 | 0 | 0 | 0 | python | 45,205,670 | 1 | false | 1 | 1 | Probably the most conventional way to go about this would be to write some server-side code to communicate with the device, and save responses (all of them? the most recent n?) to a database. You'd expose this to a REST API, which you'd then call at a specified interval using AJAX from the browser. | 1 | 0 | 0 | I am trying to create a web interface to display some data that is being extracted live from a piece of hardware.
I have already written a program using pyQt that extracts data every second. I was wondering if it is possible to simultaneously push this data to a web interface with Python.
I want the webpage to not continuously refresh as data is coming in every second. Charts are essential, I have to have the ability to plot the data that is being pushed.
Can anyone suggest me how I would go about doing this? | creating a web interface that displays live data from hardware | 0 | 0 | 0 | 183 |
45,207,569 | 2017-07-20T07:03:00.000 | 15 | 0 | 0 | 0 | python,openai-gym | 45,640,420 | 4 | false | 0 | 0 | There's a couple ways of understanding the ram option.
Let's say you wanted to learn pong. If you train from the pixels, you'll likely use a convolutional net of several layers. Interestingly, the final output of the convnet is a 1D array of features. These you pass to a fully connected layer and maybe output the correct 'action' based on the features the convnet recognized in the image(s). Or you might use a reinforcement layer working on the 1D array of features.
Now let's say it occurs to you that pong is very simple, and could probably be represented in a 16x16 image instead of 160x160. Straight downsampling doesn't give you enough detail, so you use OpenCV to extract the position of the ball and paddles, and create your mini version of 16x16 pong, with nice, crisp pixels. The computation needed is way less than your deep net to represent the essence of the game, and your new convnet is nice and small. Then you realize you don't even need your convnet any more; you can just do a fully connected layer to each of your 16x16 pixels.
So, think of what you have. Now you have 2 different ways of getting a simple representation of the game, to train your fully-connected layer on. (or RL algo)
Your deep convnet goes through several layers and outputs a 1D array, say of 256 features in the final layer. You pass that to the fully connected layer.
Your manual feature extraction extracts the blobs (paddles/ball) with OpenCV, to make a 16x16 pong. By passing that to your fully connected layer, it's really just a set of 16x16=256 'extracted features'.
So the pattern is that you find a simple way to 'represent' the state of the game, then pass that to your fully connected layers.
Enter option 3. The RAM of the game may just be a 256 byte array. But you know this contains the 'state' of the game, so it's like your 16x16 version of pong. It's most likely a 'better' representation than your 16x16 because it probably has info about the direction of the ball etc.
So now you have 3 different ways to simplify the state of the game, in order to train your fully connected layer, or your reinforcement algorithm.
So, what OpenAI has done by giving you the RAM is helping you avoid the task of learning a 'representation' of the game, and that lets you move directly to learning a 'policy' or what to do based on the state of the game.
OpenAI may provide a way to 'see' the visual output on the ram version. If they don't, you could ask them to make that available. But that's the best you will get. They are not going to reverse engineer the code to 'render' the RAM, nor are they going to reverse engineer the code to 'generate' 'RAM' based on pixels, which is not actually possible, since pixels are only part of the state of the game.
They simply provide the ram if it's easily available to them, so that you can try algorithms that learn what to do assuming there is something giving them a good state representation.
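A quick illustrative check of the two observation formats side by side (assuming the Atari environments are installed):
import gym
pixel_env = gym.make('Breakout-v0')
ram_env = gym.make('Breakout-ram-v0')
print(pixel_env.observation_space)  # Box(210, 160, 3): the raw RGB screen
print(ram_env.observation_space)    # Box(128,): the 128 bytes of console RAM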
There is no (easy) way to do what you asked, as in translate pixels to RAM, but most likely there is a way to ask the Atari system to give you both the ram, and the pixels, so you can work on ram but show pixels. | 1 | 6 | 1 | In some OpenAI gym environments, there is a "ram" version. For example: Breakout-v0 and Breakout-ram-v0.
Using Breakout-ram-v0, each observation is an array of length 128.
Question: How can I transform an observation of Breakout-v0 (which is a 160 x 210 image) into the form of an observation of Breakout-ram-v0 (which is an array of length 128)?
My idea is to train a model on the Breakout-ram-v0 and display the trained model playing using the Breakout-v0 environment. | How to interpret the observations of RAM environments in OpenAI gym? | 1 | 0 | 0 | 5,417 |
45,208,088 | 2017-07-20T07:26:00.000 | 0 | 0 | 1 | 1 | python,virtualenv | 45,208,247 | 1 | true | 0 | 0 | the purpose of virtualenv is to separate your python environment, so one virtualenv may have, let's say Django version 1.11 with Python 3, while another virtualenv have django 1.10 with python 2.7 , Using the same python virtualenv in different OS system is doable, pip inside the virtualenv will handle the libraries while the python will handle OS differences when you install python 2.7 or python 3.0 inside the virtualenv.
One example use of two separate virtualenvs is, let's say, an Nginx server using a different python environment for each domain (one nginx server can handle many domains). | 1 | 0 | 0 | What will be the effect if I use the same python virtualenv in different OS system?
If not possible, why?
What can confirm whether the virtualenv can be used in the OS system? | What will be the effect if I use the same python virtualenv in different OS system | 1.2 | 0 | 0 | 48
45,210,357 | 2017-07-20T09:14:00.000 | 3 | 0 | 1 | 0 | python,django,pip,virtualenv | 45,210,417 | 1 | true | 1 | 0 | No. The libraries are not part of your code and shouldn't live in your project directory. They're dependencies, and should be installed by pip when you deploy just as in your development environment. | 1 | 0 | 0 | I'm new to django and have a virtualenv outside my django project directory.
When I download open source django apps like python_social_auth using pip install, the apps reside in the virtualenv's site-packages directory and not in the project root. But I import them in my projects. Should I keep a copy of the downloaded apps in my projects root? Would that be necessary if I wanted to deploy the project? | I'm confused with django project root and the virtualenv. Please guide | 1.2 | 0 | 0 | 237 |
45,214,909 | 2017-07-20T12:35:00.000 | 4 | 0 | 1 | 0 | python,cython,python-asyncio | 45,299,732 | 1 | true | 0 | 0 | To answer my own question, as far as I can see there is no obligations to use abstractions provided by Protocol or Transport in asyncio for structuring applications. The best modeling for this I found is to use a regular class with its methods defined as async. The class then can be made to look like whatever pattern fits your requirement. This is especially relevant if the code you are wrapping doesn’t have same overall use case as a socket. The asyncio provided abstractions themselves are pretty barebones.
For things that are complicated like Cython wrapped C++ blocking code, you will need to deal with it with multiprocessing. This is to avoid hanging the interpreter. Asyncio does not make it possible to run blocking code without changes. The code must be specifically written to be asyncio compatible.
What I did was put the entire blocking code, including the construction of the object, into a function that was executed with event_loop.run_in_executor. In addition to this I used a unix socket to communicate with the process for commands and callback data. Due to using unix sockets you can use asyncio methods in your main application; the same goes for pipes.
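A minimal sketch of that pattern (the function name and argument below are illustrative placeholders, not the original code):
import asyncio
from concurrent.futures import ProcessPoolExecutor
def run_blocking_library(config):
    ...  # construct the Cython-wrapped object here and run its blocking event handler
async def main():
    loop = asyncio.get_event_loop()
    with ProcessPoolExecutor(max_workers=1) as pool:
        await loop.run_in_executor(pool, run_blocking_library, 'some-config')
asyncio.get_event_loop().run_until_complete(main())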
Here are some results I got from sending 128 bytes from the multiprocess Process producer to the asyncio main process. The data was generated at a 10-millisecond interval. The duration was timed using time.perf_counter(). Results below are in nanoseconds. The machine itself was Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz running Linux kernel 4.10.17.
Asyncio with uvloop
count 10001.000000
mean 76435.956504
std 8887.459462
min 63608.000000
25% 71709.000000
50% 74104.000000
75% 79496.000000
max 287204.000000
Standard Asyncio event loop
count 10001.000000
mean 199741.937506
std 27900.377114
min 173321.000000
25% 185545.000000
50% 191839.000000
75% 205279.000000
max 529246.000000 | 1 | 3 | 0 | I've got an external library in C++ that has been wrapped by Cython. This C++ library itself I cannot change. I would like to combine the library to be used as part of Python application that uses asyncio as its primary process control.
The Cython library essentially does network work with a proprietary protocol. The Cython library however is blocking when the event handler for the library is started in python. I've gotten it to a stage where I can pass a Python function and receive callbacks for events received from the C++ library. I can resolve the library hanging the application at the library event handler if I run the event handler within event_loop.run_in_executor.
My question is, how can I best model this to work with asyncio in a way that fits well with its interfaces rather than hack up ad hoc solutions to use the Cython library methods? I had a look into writing this as an asyncio.Protocol and asyncio.Transport that then uses the Cython library as its underlying communication mechanism. However, it looks like it's a lot of effort with some monkey patching to make it look like a socket. Is there a better way or abstraction to put a wrapper on external libraries to make it work with asyncio? | Using Python asyncio interfaces with Cython libraries | 1.2 | 0 | 0 | 1,770
45,215,219 | 2017-07-20T12:47:00.000 | 2 | 0 | 1 | 0 | python,performance,python-3.x | 45,215,597 | 7 | false | 0 | 0 | If you use the + operator, the whole string is created first and only after that is it printed; with , it is created and printed part by part. And as a lot of users already said, there is another factor: the concatenation of the " " (white space) separator takes time too. | 2 | 19 | 0 | I just noticed that if I use + for concatenation in the print() function and run the code in the shell the text appears more quickly. Whereas, when using ,, the text appears much slower as if with a "being typed" animation effect.
Is there an efficiency difference between these two? | Is concatenating with "+" more efficient than separating with "," when using print? | 0.057081 | 0 | 0 | 5,428
45,215,219 | 2017-07-20T12:47:00.000 | 3 | 0 | 1 | 0 | python,performance,python-3.x | 45,215,740 | 7 | false | 0 | 0 | print(a + b + c), evaluates an expression and passes a single argument to print. The second, print(a, b, c), just passes three arguments to print. This is almost the same output purely by chance: that's how print works. The second is not doing any concatenation, and is simply outputting each value separated by a space (which is print's default behavior).
the plus in the expression is doing concatenation, but the comma is not doing concatenation.
Concatenation creates each string in memory, and then combines them together at their ends in a new string (so this may not be very memory friendly), and then prints them to your output at the same time | 2 | 19 | 0 | I just noticed that if I use + for concatenation in the print() function and run the code in the shell the text appears more quickly. Whereas, when using ,, the text appears much slower as if with a "being typed" animation effect.
Is there an efficiency difference between these two? | Is concatenating with "+" more efficient than separating with "," when using print? | 0.085505 | 0 | 0 | 5,428
45,215,741 | 2017-07-20T13:09:00.000 | 1 | 0 | 1 | 0 | python,parallel-processing,multiprocessing | 45,217,471 | 1 | true | 0 | 0 | As an out-of-the-box solution, no.
What you are describing is a grid. Grid computing does exist, but it is not something that is delivered alongside Python or any programming language.
You would need a server, where worker clients connect and request for more work when they are idle, and deliver results. You could use an existing framework (Boinc for example) or build your own client and server, but this would not be a simple task to make it work right.
For example, your server needs to handle a situation where it hands a task to a worker, but the worker does not seem to be sending back any results. At which stage you would declare the worker dead and resubmit the task to another worker? If the first worker was just slow, you would have two results delivered for the same task. What to do then, especially if the results differ?
This would also require co-operation from administrators of that network segment. They would need to maintain the array of workers and install your code alongside any other software they manage in the computers. You would also need to convince your information security people that your ability to run arbitrary code in every computer in the network does not compromise security. You could mitigate this by containerisation, but this adds another layer of complexity.
Then there is the question of performance. Grid computing has a tail effect. A grid starts processing tasks quickly, but when you approach the end of your run, the last tasks take a long time to complete. This is because of unresponsive and very slow workers doing the last tasks. At some point you would need to declare that you do not want to wait any longer, and actually start processing the last tasks locally to mitigate this. Again, not very complicated but building the logic to do so adds to your program.
All this requires quite a lot of work and maintenance, and programming resources from you to make your program and its tasks suitable for grid (tasks need to be embarrassingly parallel).
If this is something you do 24/7 and you want to use computing resources standing idle overnight, it might be useful. If this is not a constantly running thing, it might be cheaper to just buy CPU power from a cloud and run parts of your tasks there. Sure, there is a cost attached, but you will easily spend that money to coffee and biscuits for meetings with your infosec people. Adding work from you and administrators on top of that, there might not be a business case to do this.
Having said that, there are a couple of grid computing projects for Python if you decide to follow this route. The wiki page mentioned in comments lists a couple of them that are still active. You might get some help from there in task and job management. | 1 | 0 | 0 | I am running many multi-threading python programs and my CPU is almost always on 100%.
I'm not an expert in this area, but is it possible to use CPU from other employees' machines that are in my network? The only issue on my PC is that I need more cores to process data faster; the RAM is OK, 10 GB of the 16 GB is used.
We are using Windows 7,8.1 and 10. | How can I use available CPU from other computers in my network | 1.2 | 0 | 0 | 361 |
45,218,374 | 2017-07-20T14:57:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,object-detection | 45,222,284 | 2 | false | 0 | 0 | To find what to use for output_node_names, just checkout the graph.pbtxt file. In this case it was Softmax | 1 | 1 | 1 | I've followed the pet detector tutorial, i have exported the model using "export_inference_graph.py".
However, when I try to freeze the graph using the provided "freeze_graph.py", I am not sure what --output_node_names to use.
Does anyone know which I should use, or more importantly how to find out what to use when I train my own model? | freezing the "tensorflow object detection api pet detector" graph | 0 | 0 | 0 | 1,483
45,219,177 | 2017-07-20T15:31:00.000 | 0 | 0 | 1 | 0 | python,probability,sympy | 45,219,608 | 1 | false | 0 | 0 | Let x=stats.Bernoulli('x', 1/2, succ=1, fail=-1). Then the E and variance are what I want. So if I compute any expression with only the first two moments and take E and variance, I get the answer I want. | 1 | 1 | 1 | I have a symbol, x, in my sympy code and have been computing a number of expressions with it. Ultimately the expression I have is very long and I am only interested in its expectation under the assumption E(x) = 0, E(x^2) = 1. Is there some way to set the expectation and variance of x in advance and then ask sympy to compute expectation of my entire expression? | sympy set expectation and variance of random variable | 0 | 0 | 0 | 93 |
45,223,111 | 2017-07-20T18:55:00.000 | 2 | 0 | 1 | 0 | python,regex | 45,223,297 | 3 | true | 0 | 0 | Try the RegEx \d\.\d\.\d-[a-z]\d\.\d
The \d will match digits, the \. will match periods, the - will match the hyphen, and the [a-z] will match lowercase letters. | 1 | 1 | 0 | I have extracted a string from a device within which I have to find and match its version using regex. The string contains the version number of a software as well as other text.
Something of a form 1.2.3.x.z-2
Basically it contains a mixture of numbers alphabets colons hyphens etc.
What do I match for in regex.
I tried taking the index of the first and last number in the string and printing all the data between the two indices.
But these are not the only numbers present in the string. For example, it may have something like a date.
Even if I filter the other numbers later, What can I match for something that follows this kind of pattern ? | How to match version of a software using regex | 1.2 | 0 | 0 | 2,147 |
45,224,390 | 2017-07-20T20:14:00.000 | 0 | 0 | 0 | 0 | python,gunicorn | 45,306,560 | 1 | true | 1 | 0 | My app is running on a container group on the Bluemix cloud, and there is a 120s time out limit for the load balancer in front of the container. This was overriding whatever timeout I had in my app. | 1 | 1 | 0 | I am running a Gunicorn server with a Flask web app. I use the following command to start the server
gunicorn -c /config/serverConfig.py app:APP
Inside serverConfig.py, I have timeout = 180
However, the long-running calls time out after 120 seconds.
Is there something I should be doing differently? | Gunicorn timeout not working | 1.2 | 0 | 0 | 399 |
45,225,010 | 2017-07-20T20:56:00.000 | 1 | 0 | 0 | 0 | python,excel,keyboard | 55,856,869 | 1 | false | 0 | 0 | If this still helps, you can use from pywin32 (which should be a default package) to use win32com.client.
Sample code:
import win32com.client
xl = win32com.client.Dispatch("Excel.Application")
xl.sendkeys("^+s") # saves file
Use "%" to access alt so you can get hotkeys. | 1 | 1 | 0 | I need to send keys to excel to refresh formulas.
What are my best options?
I am already using Openpyxl but it does not satisfy all my needs. | Using python to send keys to active Excel Window | 0.197375 | 1 | 0 | 1,564 |
45,225,500 | 2017-07-20T21:27:00.000 | 1 | 0 | 0 | 0 | python-3.x,pandas,hdf5,pytables,blaze | 45,231,429 | 1 | false | 0 | 0 | good question! one option you may consider is to not use any of the libraries aformentioned, but instead read and process your file chunk-by-chunk, something like this:
csv="""\path\to\file.csv"""
pandas allows to read data from (large) files chunk-wise via a file-iterator:
it = pd.read_csv(csv, iterator=True, chunksize=20000000 / 10)
for i, chunk in enumerate(it):
... | 1 | 0 | 1 | I am working on an exploratory data analysis using python on a huge Dataset (~20 Million records and 10 columns). I would be segmenting, aggregating data and create some visualizations, I might as well create some decision trees liner regression models using that dataset.
Because of the large data set I need to use a data-frame that allows out of core data storage. Since I am relatively new to Python and working with large data-sets, i want to use a method which would allow me to easily use sklearn on my data-sets. I'm confused weather to use Py-tables, Blaze or s-Frame for this exercise. If someone could help me understand what are their pros and cons. What are the factors that are important in this kind of decision making that would be much appreciated. | Py-tables vs Blaze vs S-Frames | 0.197375 | 0 | 0 | 170 |
45,225,526 | 2017-07-20T21:29:00.000 | 0 | 0 | 1 | 0 | python,exe,cx-freeze,file-not-found | 45,226,640 | 1 | true | 0 | 0 | please give the absolute path for 'XYZ.txt' in your python code. I guess that might solve the problem. | 1 | 0 | 0 | I have python exe file which has been made using cx_Freeze. Basically The script uses the text file "XYZ.txt" to generate a word cloud. When I run the python exe file everything is fine and the code runs well, but I am planning to run it from an external program (a game which generates the text file) and when I'm trying to run it from that game, python exe runs but gives me the FileNotFoundError: [Error 2] No such file or directory: 'XYZ.txt'. But the 'XYZ.txt' actually is in the same folder and at the same time. Besides, I tried to run the text file from that game and it opened but the python exe cannot find it.
I have also tried to run my game as administrator and then running the python exe and then I got: "dll load failed the specified module could not be found" instead of the previous error.
I would be thankful if anyone could help me with that. | python exe errors when running by an external program | 1.2 | 0 | 0 | 103 |
45,227,546 | 2017-07-21T01:26:00.000 | 1 | 0 | 0 | 0 | python,amazon-web-services,amazon-dynamodb,alexa-skills-kit | 45,266,031 | 5 | false | 1 | 0 | DynamoDB is a NoSQL-like, document database, or key-value store; that means, you may need to think about your tables differently from RDBMS. From what I understand from your question, for each user, you want to store information about their preferences on a list of objects; therefore, keep your primary key simple, that is, the user ID. Then, have a singe "column" where you store all the preferences. That can either be a list of of tuples (object,color) OR a dictionary of unique {object:color}.
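A small illustrative sketch of that single-item-per-user layout with boto3 (the table name, attribute values and the user_id variable are assumptions):
import boto3
table = boto3.resource('dynamodb').Table('AlexaPreferences')
table.put_item(Item={'userId': user_id, 'preferences': {'couch': 'red', 'lamp': 'blue'}})  # one map attribute per user
prefs = table.get_item(Key={'userId': user_id})['Item']['preferences']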
When you explore the items in the web UI, it will show these complex data structures as json-like documents which you can expand as you will. | 2 | 9 | 0 | I am developing a skill for Amazon Alexa and I'm using DynamoDB for storing information about the users favorite objects. I would like 3 columns in the database:
Alexa userId
Object
Color
I currently have the Alexa userId as the primary key. The problem that I am running into is that if I try to add an entry into the db with the same userId, it overwrites the entry already in there. How can I allow a user to have multiple objects associated with them in the db by having multiple rows? I want to be able to query the db by the userId and receive all the objects that they have specified.
If I create a unique id for every entry, and there are multiple users, I can't possibly know the id to query by to get the active users' objects. | How to create multiple DynamoDB entries under the same primary key? | 0.039979 | 1 | 0 | 10,290 |
45,227,546 | 2017-07-21T01:26:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,amazon-dynamodb,alexa-skills-kit | 45,693,902 | 5 | false | 1 | 0 | you cannot create multiple entries with same primary key. Please create composite keys (multiple keys together as primary key). Please note you cannot have multiple records of same combination | 2 | 9 | 0 | I am developing a skill for Amazon Alexa and I'm using DynamoDB for storing information about the users favorite objects. I would like 3 columns in the database:
Alexa userId
Object
Color
I currently have the Alexa userId as the primary key. The problem that I am running into is that if I try to add an entry into the db with the same userId, it overwrites the entry already in there. How can I allow a user to have multiple objects associated with them in the db by having multiple rows? I want to be able to query the db by the userId and receive all the objects that they have specified.
If I create a unique id for every entry, and there are multiple users, I can't possibly know the id to query by to get the active users' objects. | How to create multiple DynamoDB entries under the same primary key? | 0 | 1 | 0 | 10,290 |
45,227,791 | 2017-07-21T01:56:00.000 | 1 | 0 | 1 | 0 | python,grammar | 45,239,345 | 1 | false | 0 | 0 | You can certainly do it without AI. Just step through each position in the string, and ask the checker if a grammatical sentence begins at that point. That's assuming the checker can return success when there's still some (unparseable) input remaining -- if not, you'd have to iterate over start and end positions, asking if the specified substring constitutes a grammatical sentence.
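A brute-force sketch of that idea (is_grammatical stands in for whatever checker you use; it is a hypothetical callable):
def find_sentences(text, is_grammatical):
    found = []
    for start in range(len(text)):
        for end in range(len(text), start, -1):  # try the longest candidate first
            if is_grammatical(text[start:end]):
                found.append(text[start:end])
                break
    return found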
That's if the checker is a black box. If you know more about the grammar/language, you could improve efficiency. E.g., if, as you suggest, you have a list of all valid words in the language, then you could first find all spans of valid words separated by whitespace, then only submit those to checking. | 1 | 1 | 0 | If I have a random string that has a sentence in it somewhere.
"e,ktdo.ba Hello, my name is CodeMaker efq ,z unqusiug.."
Can I use a grammar checker to find the sentence without knowing what it is ? I know finding individual words is as easy as using a list that has all the words used in the language and check if any of the words are in the string (or sometimes, even most words will do) but I don't think it's even worthy to discuss making a list of all sentences. I want to know if grammar checkers can "understand" sentence structures and find sentences in random strings (I would prefer a python library but if there isn't any that does what I wan't, I will have to use another language). Or maybe there is a library that isn't a grammar checker but can do what I want (which is doubtful because what I want to do is pretty specific). Is this even possible without AI ? | Can a grammar checking library be used to find meaningful sentences in random strings? | 0.197375 | 0 | 0 | 372 |
45,228,994 | 2017-07-21T04:29:00.000 | 1 | 0 | 0 | 0 | python,robotframework,robotframework-ide | 45,238,381 | 1 | true | 1 | 0 | RIDE only supports text (*.txt, or *.robot), TSV (tab separated values), and HTML files.
The .py or .pyi files are on our wish list.
The .pyc does not make any sense to open in an Editor, they are byte code. | 1 | 0 | 0 | How can I see *.py and *.pyc in RIDE (robot framework ide)?
Does ride support them?
Thank you in advance. | How to see py and pyc in robot framework RIDE | 1.2 | 0 | 0 | 189 |
45,234,336 | 2017-07-21T09:42:00.000 | 3 | 0 | 0 | 0 | python,scikit-learn,k-means | 45,234,643 | 2 | false | 0 | 0 | The cluster centre value is the value of the centroid. At the end of k-means clustering, you'll have three individual clusters and three centroids, with each centroid being located at the centre of each cluster. The centroid doesn't necessarily have to coincide with an existing data point. | 1 | 6 | 1 | On doing K means fit on some vectors with 3 clusters, I was able to get the labels for the input data.
KMeans.cluster_centers_ returns the coordinates of the centers and so shouldn't there be some vector corresponding to that? How can I find the value at the centroid of these clusters? | Value at KMeans.cluster_centers_ in sklearn KMeans | 0.291313 | 0 | 0 | 17,090 |
45,238,671 | 2017-07-21T13:15:00.000 | 0 | 0 | 0 | 0 | python,arrays,scikit-learn | 45,238,795 | 2 | false | 0 | 0 | Look into the pandas package which allows you to import CSV files into a dataframe. pandas is supported by scikit-learn. | 1 | 0 | 1 | How do I apply scikit-learn to a numpy array with 4 columns each representing a different attribute?
Basically, I'm wanting to teach it how to recognize a healthy patient from these 4 characteristics and then see if it can identify an abnormal one.
Thanks in advance! | Using Scikit-learn on a csv dataset | 0 | 0 | 0 | 773 |
45,240,311 | 2017-07-21T14:32:00.000 | 1 | 0 | 0 | 0 | python,mysql,django,django-models,django-south | 45,638,008 | 2 | true | 1 | 0 | I did some more tests and problem only happens when you are using de development server python manage.py runserver. In that case, it forces a connection with the database.
Using an actual WSGI server it doesn't happen as @Alasdair informed.
@JohnMoutafis in the end I didn't test your solution, but that could work. | 1 | 6 | 0 | As far as I know Django apps can't start if any of the databases set in the settings.py are down at the start of the application. Is there anyway to make Django "lazyload" the initial database connection?
I have two databases configured and one of them is a little unstable and sometimes it can be down for some seconds, but it's only used for some specific use cases of the application. As you can imagine I don't want that all the application can't start because of that. Is there any solution for that?
I'm using Django 1.6.11, and we also use Django South for database migrations (in case that it's related somehow). | Django: How to disable Database status check at startup? | 1.2 | 1 | 0 | 2,860 |
45,241,027 | 2017-07-21T15:08:00.000 | 0 | 0 | 1 | 0 | python,python-import,matplotlib-basemap | 45,241,186 | 2 | false | 0 | 0 | Move your library to the site-packages folder (in your python install directory) and you should be able to import the modules normally from elsewhere, without having to change PATH. | 2 | 0 | 0 | I'm wondering whether it is possible to use classes and methods contained within a library without:
Importing the library through pip install
Moving folders into site-packages
Editing the system PATH
So far the answers I've seen on stack overflow use the methods listed above but my specific use-case requires that I don't pip install things and that the system PATH remains unchanged.
The specific library I'm interested in is Basemap
Is this possible and if so, how? | Python: Import library from local folder without editing system path | 0 | 0 | 0 | 876 |
45,241,027 | 2017-07-21T15:08:00.000 | 0 | 0 | 1 | 0 | python,python-import,matplotlib-basemap | 45,242,874 | 2 | true | 0 | 0 | I was able to solve my problem by installing Basemap and pyproj on a separate computer using pip install. I then took those folders contained within Python\Python35\Lib\site-packages, copied them over to the computer I can't edit stuff on and into the folder containing my main script.
From there I can call stuff pretty much like normal, I just changed from mpl_toolkits.basemap import Basemap to from basemap import Basemap | 2 | 0 | 0 | I'm wondering whether it is possible to use classes and methods contained within a library without:
Importing the library through pip install
Moving folders into site-packages
Editing the system PATH
So far the answers I've seen on stack overflow use the methods listed above but my specific use-case requires that I don't pip install things and that the system PATH remains unchanged.
The specific library I'm interested in is Basemap
Is this possible and if so, how? | Python: Import library from local folder without editing system path | 1.2 | 0 | 0 | 876 |
45,241,469 | 2017-07-21T15:30:00.000 | 0 | 0 | 1 | 0 | python,3d,rotation | 45,242,163 | 1 | false | 0 | 1 | EDITED to include angle calculations.
Since you do not provide any real code in your question, I'll just provide an algorithm and angles. If you improve your question with code I can add some code of my own.
To clarify, we must have length AC equal to length BC, and the midpoint of side AB must be the origin. That latter means that the coordinates of points A and B are negatives of each other. (Actually, the real requirements are that the vector OC where O is the origin is perpendicular to the vector AB and that origin is in line AB. Your conditions are more strict than this.)
Let's say that at any point the coordinates of A are (Ax, Ay, Az) and similarly for B and C.
First, move point A to the xy plane. Do this with a rotation of all three points around the x-axis. Due to the conditions, point B will also be in the xy plane. One possible angle of the rotation is -atan2(Az, Ay) though others are also possible. Check that the resulting values of Az and Bz are zero or close to it in floating-point precision.
Second, do a rotation around the z-axis to move point A to the x-axis (and the other points appropriately). Point B will now also be on the x-axis. One angle of rotation for this is -atan2(Ay, Ax). Check the resulting Ay and By.
Third and last, do a rotation around the x-axis to move point C to the y-axis. Points A and B will not be affected by this last rotation. One angle of rotation for this is -atan2(Cz, Cy). Check the resulting Cx (which should have been zero before this last rotation) and Cz.
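A standalone numpy sketch of those three steps (this assumes A, B, C are numpy arrays and uses plain rotation matrices rather than the asker's T.rotate API):
import numpy as np
def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])  # right-hand rotation about x
def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # right-hand rotation about z
R = rot_x(-np.arctan2(A[2], A[1])); A, B, C = R @ A, R @ B, R @ C  # step 1: A (and B) into the xy plane
R = rot_z(-np.arctan2(A[1], A[0])); A, B, C = R @ A, R @ B, R @ C  # step 2: A and B onto the x axis
R = rot_x(-np.arctan2(C[2], C[1])); A, B, C = R @ A, R @ B, R @ C  # step 3: C onto the y axis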
Your triangle is now in the desired position, provided your original triangle actually met the conditions. Note that this algorithm did not use any rotation around the y-axis: it was not needed, though you could replace my initial rotation around x with one around y if you want. | 1 | 0 | 0 | I have 3 points: A, B, C in 3D space. AC = BC in length. They represent a triangle object called T. Each point is a tuple of floats representing it's coordinates.
T is placed such that the median point of A and B is already at the axes origin.
In my API, I can rotate T globally, that is with respect to any one global axis at a time, for rotation.
Pseudocode for this API is like:
T.rotate('x', angle) for rotating T around global x axis of angle value, with right hand rule.
My question is for the code to rotate T such as:
A and B are on x axis
C is on y axis
I suppose I will need 3 calls in succession, one rotation around each of the axes. But I have trouble figuring out the angles from the initial points' coordinates. | Triangle rotation with Python | 0 | 0 | 0 | 1,165
45,244,683 | 2017-07-21T18:42:00.000 | 2 | 0 | 1 | 0 | iis,concurrency,python-requests | 45,244,851 | 3 | false | 0 | 0 | These 2 properties are not the same as I think you are implying they are.
MaxConcurrentRequestsPerCPU
Controls the number of incoming requests being handled per CPU
maxconnection
Controls the maximum number of outgoing HTTP connections that you can initiate from a client to a specific IP address. | 1 | 10 | 0 | How many concurrent requests can be executed in IIS 8.5?
I could not find proper values for how many concurrent requests can be executed in IIS 8.5
As I found out below 2 different values:
By default IIS 8.5 can handle 5000 concurrent requests as per MaxConcurrentRequestsPerCPU settings in aspnet.config
In machine.config, the maxconnection is 2 per CPU by default. So if I have a 6-core CPU, then 12 concurrent requests are possible by default.
So I would like to know that Point 1 is right or Point 2 is right for concurrent requests for IIS 8.5. | how many concurrent requests settings for IIS 8.5 | 0.132549 | 0 | 0 | 62,133 |
45,246,185 | 2017-07-21T20:25:00.000 | 0 | 0 | 1 | 1 | python,jenkins,tfs | 45,246,242 | 2 | false | 0 | 0 | You could create a Jenkins job that checks the modified time (If the scripts are under version control, the commit id) periodically, and if there's a change since the last build, kick off the new build. | 2 | 0 | 0 | Inside a TFS Project, I store some set of python scripts inside a specific folder. So I want to trigger a Jenkins build if a change is made on scripts on the particular folder and not others. Is there anyway to do this ? | Trigger a Jenkins build when a checkin is made on TFS | 0 | 0 | 0 | 683 |
45,246,185 | 2017-07-21T20:25:00.000 | 0 | 0 | 1 | 1 | python,jenkins,tfs | 45,294,907 | 2 | false | 0 | 0 | You can use Jenkins service hook in TFS:
Go to team project admin page
Select Service Hooks
Click + to add a new service
Select Jenkins> Next
Select Code checked in event and specify filter path >Next
Specify Jenkins base URL, User name, Password, Build job etc…> Finish
After that, it will trigger a Jenkins build when a checkin is made on specified path. | 2 | 0 | 0 | Inside a TFS Project, I store some set of python scripts inside a specific folder. So I want to trigger a Jenkins build if a change is made on scripts on the particular folder and not others. Is there anyway to do this ? | Trigger a Jenkins build when a checkin is made on TFS | 0 | 0 | 0 | 683 |
45,247,320 | 2017-07-21T22:02:00.000 | 1 | 0 | 1 | 0 | python,datetime,time | 45,247,574 | 2 | false | 0 | 0 | Assuming that you mean 6.5 hours of elapsed time, that would be a timedelta. The time object is for time-of-day, as on a 24-hour clock. These are different concepts, and shouldn't be mixed.
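If you do mean an elapsed duration, a tiny illustrative sketch with timedelta:
from datetime import timedelta
elapsed = timedelta(hours=6, minutes=30)
print(elapsed.total_seconds() / 3600)  # 6.5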
You should also not think of time-of-day as "time elapsed since midnight", as some days include daylight saving time transitions, which can increase or decrease this value. For example, for most locations of the United States, on 2017-11-05 the time you gave of 06:30:00 will have 7.5 hours elapsed since midnight, as the hour between 1 and 2 repeats for the fall-back transition.
So the answer to your question is - don't. | 1 | 6 | 0 | I have a datetime stamp (e.g. time(6,30)) which would return 06:30:00.
I was wondering how I could then convert this into 6.5 hrs.
Kind regards | Convert datetime into number of hours? | 0.099668 | 0 | 0 | 5,634 |
45,249,896 | 2017-07-22T01:59:00.000 | 1 | 0 | 1 | 0 | python,flask,installation | 45,250,691 | 1 | false | 1 | 0 | The problem comes mainly if you want to run multiple different applications/projects, they would need to use the exact same version of flask and its dependencies. Anything else would cause conflict among the libraries.
There's also ease of packaging for each application, by installing only what you need for the current application in the virtualenv, it acts as a delimiter (for instance, when using pip freeze --local) so you don't include global packages you might not need.
The principle of least privilege might come into play as well. It would be rare for even global libraries to go beyond their scope but, hey, reducing the attack surface only to what your virtualenv is can't hurt. | 1 | 0 | 0 | The Flask documentation describes how to install it either inside a virtualenv or system-wide. The documentation for a system-wide installation states
This is possible as well, though I do not recommend it.
Why is a system-wide installation not recommended? What problems can occur with such an installation? | System-Wide Flask Installation | 0.197375 | 0 | 0 | 117
45,250,632 | 2017-07-22T04:21:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,unicode | 45,251,129 | 4 | false | 0 | 0 | There are at least two reasons:
the str type has an important property "one element = one character".
the str type does not depend on encoding.
Just imagine how would you implement a simple operation like reversing a string (rword = word[::-1]) if word were a bytestring with some encoding. | 2 | 5 | 0 | Python3 has unicode strings (str) and bytes. We already have bytestring literals and methods. Why do we need two different types, instead of just byte strings of various encodings? | Why do we need str type? Why not just byte-strings? | 0.049958 | 0 | 0 | 1,123 |
45,250,632 | 2017-07-22T04:21:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,unicode | 45,251,113 | 4 | false | 0 | 0 | Byte strings with different encodings are incompatible with each other, but until Python 3 there was nothing in the language to remind you of this fact. It turns out that mixing different character encodings is a surprisingly common problem in today's world, leading to far too many bugs.
Also it's often just easier to work with whole characters, without having to worry that you just modified a byte that accidentally rendered your 4-byte character into an invalid sequence. | 2 | 5 | 0 | Python3 has unicode strings (str) and bytes. We already have bytestring literals and methods. Why do we need two different types, instead of just byte strings of various encodings? | Why do we need str type? Why not just byte-strings? | 0.099668 | 0 | 0 | 1,123 |
45,250,797 | 2017-07-22T04:48:00.000 | 2 | 0 | 1 | 0 | python,linux,bash,python-2.7,python-3.x | 45,251,074 | 1 | true | 0 | 0 | In your Python 2 pip, run pip freeze > requirements.txt. This will write all your installed packages to a text file.
Then, using your Python 3 pip (perhaps pip3), run pip install -r /path/to/requirements.txt. This will install all of the packages as listed in the requirements.txt file. | 1 | 1 | 0 | I installed Anaconda with Python 2.7 and then later installed the Python 3.6 kernel. I have lots of Python 2 packages and I don't want to have to manually install all of the packages for Python 3. Has anyone written, or does anyone know how to write, a bash script that will go through all my Python 2 packages and just run pip3 install [PACKAGE NAME]? | How can I install all my python 2 packages for python 3? | 1.2 | 0 | 0 | 120 |
45,251,198 | 2017-07-22T05:53:00.000 | 1 | 0 | 1 | 0 | python,pygame | 45,264,734 | 1 | false | 0 | 1 | To generate random coordinates, you can use a while loop and random.randrange to get new x and y coords, check if the coords are blocked, if yes, continue to generate new coords and check again if the area is blocked, if it's not blocked return the coords. Of course if the whole map is filled, this loop will run infinitely, so you'd need a way to break out of it in this special case or quit the game.
Alternatively you could create a list of the non-blocked coords and then use random.choice to pick one coord pair. | 1 | 1 | 0 | I am making a snake game with Python. I am following Sentdex's Pygame tutorials, but I wanted to try to make obstacles, so I went off and tinkered with the code a bit.
How do I make the apple stop spawning inside other foreign obstacles? | Python Pygame Snake Apple Spawning Inside obstacles | 0.197375 | 0 | 0 | 258 |
45,254,244 | 2017-07-22T11:53:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,google-cloud-datastore,google-cloud-platform,google-cloud-storage | 45,258,584 | 1 | true | 1 | 0 | We need more information and see some code to be able to help you better but in general the work you describe should be able to get done via http and you don't need any special C libraries, hence you could go with appengine and create task queues for your jobs.
Be prepared that using only appengine can be trickier than having an operating system that you can leverage. There is no operating system with appengine once you've deployed, you must use only the functionality supplied in appengine.
But yes, as far as I can tell from the information you provide, an appengine app should be able to do the work you describe. Try writing some code, deploy the appengine app and get back here and ask if you have specific trouble.
You can always add compute engine to your appengine project if you need it later. | 1 | 2 | 0 | Sorry, I don't really have much technical background and I know it sounds like a confused question. However, I will try my best to explain what I want to do in here.
My daily routine tasks involved lots of digital marketing data (very large data >20GB+) from different types of platform. As you can see, when I try to analyze these data, I need to aggregate these data into a similar format. The tedious part of my job is it normally involved lots of manual downloads, lots of data cleansing, and lots of upload(I upload the cleaned data to Google Cloud Storage so I can use BigQuery!).
I feel doing these tasks manually is extremely inefficient, and I think the only logical choice is to automate these tasks on Google Cloud Platform.
After months of effort, I have managed to do these tasks in semi-auto fashion, where I wrote some python programs and made a schtask batch for the following:
Download (A python program makes API calls to download platform data to my local drive)
Cleansing (A python program cleansing these data locally)
Upload to Cloud Storage (A python program upload "cleaned" data using gsutil)
Although, it saved lots of my time, but everything is still done locally on my desktop PC.
Here are my real questions, I am sure there is a way to manage all these tasks (download, cleansing, upload) in Google Cloud without touching my local PC, where should I start?
How can I run these Python program on Google Cloud? I know that I can deploy these Python programs in App Engine, however, to allow these program to do their jobs, do I also need a compute engine? or simple deployment would do the job?
How do I schtask for these apps on Google Cloud?
I know Cloud Storage is only one of many ways to store the data on GCP, since I have these data from different types of platform, and they are all in different formats and metrics. So what would be the best way to store these data efficiently on Google Cloud? CloudSQL, Datastore or BigTable?
Thanks! | Google Cloud: Do we need a compute engine to run a deployed python code? | 1.2 | 0 | 0 | 118 |
45,254,735 | 2017-07-22T12:47:00.000 | 6 | 0 | 1 | 0 | python,node.js,heroku | 45,299,479 | 1 | true | 1 | 0 | All I had to do was add a requirements.txt file to my project and run: heroku buildpacks:add --index 1 heroku/python -a appname | 1 | 3 | 0 | I have a basic nodejs app that works fine on its own on heroku, but I want to add a python script that nodejs will call that uses a numpy package. I have gotten it to work on my local host but I am struggling to get it to work on heroku as it does not recognise the numpy package and I cannot seem to install it with pip as it does not get recognised either. | How to deploy a nodejs app that uses python scripts on heroku | 1.2 | 0 | 0 | 717 |
45,256,140 | 2017-07-22T15:20:00.000 | 0 | 0 | 0 | 0 | python,graphlab,sframe | 45,256,348 | 2 | false | 0 | 0 | The note tells you that the operation (filtering in your case) isn't applied to the whole data set right away, but only to some portion of it. This is to save resources -- in case the operation doesn't do what you intended, resources won't be wasted by applying the operation to the whole, possibly large, data set but only to the needed portion (the head in your case, which is output by default). Materialization forces propagation of the operation on the whole data set. | 1 | 1 | 1 | When I was trying to get the rows of my dataset belonging to column of userid =1 through graphlab's sframe datastructure, sf[sf['userid'] == 1],
I got the rows,however I also got this message, [? rows x 6 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
I have gone through the documentation, yet I can't understand what sf.materialize() does! Could someone help me out here. | What is use of SFrame.materialize() in Graphlab? | 0 | 0 | 0 | 661
45,257,330 | 2017-07-22T17:23:00.000 | 0 | 1 | 0 | 0 | wordpress,python-3.x,rest | 45,266,601 | 2 | false | 0 | 0 | You cannot run tkinter applications via a website. | 2 | 2 | 0 | I'm pretty new to programming and I've made a small Python application with tkinter and would like to host it on my GoDaddy website. I just can't seem to figure out how to connect the two. | How to get a Python script running on GoDaddy hosting? | 0 | 0 | 0 | 1,443 |
45,257,330 | 2017-07-22T17:23:00.000 | 1 | 1 | 0 | 0 | wordpress,python-3.x,rest | 45,258,321 | 2 | true | 0 | 0 | If you have a shared hosting account, then you cannot use Python scripts on GoDaddy, because both cPanel and Plesk shared hosting accounts do not support Python. If you have a deluxe or premium type hosting account then yes, you can use Python scripts. Even there, you can't use modules which require a compiler within the virtual environment.
You have to enable SSH. For more, you must contact their help team. | 2 | 2 | 0 | I'm pretty new to programming and I've made a small Python application with tkinter and would like to host it on my GoDaddy website. I just can't seem to figure out how to connect the two. | How to get a Python script running on GoDaddy hosting? | 1.2 | 0 | 0 | 1,443
45,258,333 | 2017-07-22T19:23:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,image-processing,bitmap,python-imaging-library | 45,258,561 | 2 | false | 0 | 0 | You can't. Simple answer. Python on its own does not have any libraries pre-installed or pre-imported for manipulating pictures. Why wouldn't you use libraries? If you are building a big project or even a smaller one, I think you will need to import a library. Python is very dependent on libraries! Maybe in the future if python has a pre-installed & pre-imported library for picture editing! :) | 1 | 0 | 0 | I don't want to use any library. And learn popular image processing algorithms and implement it. Purely using Python.
It's my project in college. Not anything fancy, just want to implement simple image processing algorithms on Bitmap. | How should I make a Image Editor using python, without using any library? | 0 | 0 | 0 | 259 |
45,259,272 | 2017-07-22T21:11:00.000 | 1 | 1 | 0 | 0 | python,raspberry-pi,debian,raspberry-pi3 | 45,259,716 | 2 | true | 0 | 1 | For editing text using the terminal vim is an excellent choice (vim mygame.py). Initially it is going to be confusing, because it has two different modes and it is easy to forget which one you are in. But in the long term learning it will pay off, because it can do some incredible things for you. Once you get used to it, it will make nano look like a bad joke. And this is probably the best time for your daughter to learn it: later on it is only going to get more difficult to learn a more abstract, and more powerful editor.
The first thing to remember is that initially, after starting vim, you are in command mode so you cannot type text like you would expect. To change into editing mode, just press i (without a colon), then you can type text like in any other editor, until you press Esc, which goes back to command mode. Commands start with a colon. For example you can quit vim by typing :q (with the colon) and then pressing Enter. You write the file (i.e. save your changes) using :w. You can give it a filename too, which works exactly like "Save as...". To open another file for editing you can use :e otherfile.py.
These were the most essential things I could think of, but there are other modes for selecting lines, characters, rectangular blocks. For copy & pasting, and other things I would recommend going through a tutorial, or just searching for vim copy paste or whatever is needed. I cannot emphasize enough that it is worth learning these, because of the advanced features of the editor, especially if you are planning to use the editor for coding! As a quick example, you can completely reindent your whole code by typing gg=G in command mode.
The default settings of vim will give you a very basic look and feel, but you can download (and later customize) a .vimrc file which simply goes into your home directory, and from then on this will be used at every start. If you just Google vimrc, you will find a lots of good examples to start with, which will turn on syntax highlighting with pretty colours, and give you some more sensible settings in general. I would recommend downloading one or two versions of the .vimrc file early on, and trying out the difference it can make.
Another option would be emacs, which is equally powerful, and equally confusing for a beginner. If you want an editor that is intuitive using the terminal, nano is probably your best bet, from those that are installed by default. Yes, nano counts as intuitive. Anything else will be somewhat more difficult, and far more powerful. | 2 | 0 | 0 | I just purchased a "Kano" (raspberry pi) for my daughter, and we're trying to create a python script using the terminal. I've been using the nano text editor, and so far it's been going well, but I know that there are better code editors for python.
Does anyone have a recommendation for a code editor for python that I can launch from the LXTerminal? For example, in a manner similar to how the nano editor is launched to edit a python script ("nano mygame.py")
Ideally, I want something that comes re-installed with kano/Debian that I can use out of the box, which is very user-friendly. I feel like always having to resort to ^O and ^X etc. to save and exit is really not user-friendly. Also, nano doesn't seem to have good syntax highlighting and indentation, etc. which would be nice for coding.
I have the Pi 3 with all the latest software updates (as of the writing of this post)
thanks,
Darren | user-friendly python code editor in debian (raspberry pi) | 1.2 | 0 | 0 | 455 |
45,259,272 | 2017-07-22T21:11:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi,debian,raspberry-pi3 | 45,260,780 | 2 | false | 0 | 1 | Geany is a nice little GUI editor in Raspbian. I use it over nano every time. No frills. but familiar menu commands and easy interface. | 2 | 0 | 0 | I just purchased a "Kano" (raspberry pi) for my daughter, and we're trying to create a python script using the terminal. I've been using the nano text editor, and so far it's been going well, but I know that there are better code editors for python.
Does anyone have a recommendation for a code editor for python that I can launch from the LXTerminal? For example, in a manner similar to how the nano editor is launched to edit a python script ("nano mygame.py")
Ideally, I want something that comes re-installed with kano/Debian that I can use out of the box, which is very user-friendly. I feel like always having to resort to ^O and ^X etc. to save and exit is really not user-friendly. Also, nano doesn't seem to have good syntax highlighting and indentation, etc. which would be nice for coding.
I have the Pi 3 with all the latest software updates (as of the writing of this post)
thanks,
Darren | user-friendly python code editor in debian (raspberry pi) | 0 | 0 | 0 | 455 |
45,259,985 | 2017-07-22T22:54:00.000 | 0 | 0 | 1 | 0 | python,sympy | 45,288,121 | 3 | false | 0 | 0 | I think the most straightforward way to do this is to use sympy.symarray, like so:
x = sympy.symarray("x",(5,5,5))
This creates an accordingly sized (numpy) array - here the size is 5x5x5 - that contains sympy variables, more specifically these variables are prefixed with whatever you chose - here "x"- and have as many indices as you provided dimensions, here 3. Of course you can make as many of these arrays as you need - perhaps it makes sense to use different prefixes for different groups of variables for readability etc.
You can then use these in your code by using e.g. x[i,j,k]:
In [6]: x[0,1,4]
Out[6]: x_0_1_4
(note that you can not access the elements via x_i_j_k - I found this a bit counterintuitive when I started using sympy, but once you get the hang on python vs. sympy variables, it makes perfect sense.)
You can of course also use slicing on the array, e.g. x[:,0,0].
If you need a python list of your variables, you can use e.g. x.flatten().tolist().
This is in my opinion preferable to using sympy.MatrixSymbol because (a) you get to decide the number of indices you want, and (b) the elements are "normal" sympy.Symbols, meaning you can be sure you can do anything with them you could also do with them if you declared them as regular symbols.
(I'm not sure this is still the case in sympy 1.1, but in sympy 1.0 it used to be that not all functionality was implemented for MatrixElement.) | 1 | 2 | 1 | I need to do symbolic manipulations to very large systems of equations, and end up with well over 200 variables that I need to do computations with. The problem is, one would usually name their variables x, y, possibly z when solving small system of equations. Even starting at a, b, ... you only get 26 unique variables this way.
Is there a nice way of fixing this problem? Say for instance I wanted to fill up a 14x14 matrix with a different variable in each spot. How would I go about doing this? | Is there a good way to keep track of large numbers of symbols in scipy? | 0 | 0 | 0 | 109 |
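For the concrete 14x14 case mentioned in the question, a short sketch (the prefix 'a' is arbitrary):

```python
import sympy

# 14x14 array of distinct symbols a_0_0 ... a_13_13
A = sympy.symarray('a', (14, 14))

# wrap it in a sympy Matrix if matrix operations are needed
M = sympy.Matrix(14, 14, lambda i, j: A[i, j])

print(M[0, 0], M[13, 13])   # a_0_0 a_13_13
print(M.shape)              # (14, 14)
```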
45,261,303 | 2017-07-23T03:45:00.000 | 11 | 0 | 0 | 0 | python,django,postgresql,django-models,migration | 45,261,424 | 2 | true | 1 | 0 | AFAIK, there's no officially supported way to do this, because fields are supposed to be atomic and it shouldn't be relevant. However, it messes with my obsessive-compulsive side as well, and I like my columns to be ordered for when I need to debug things in dbshell, for example. Here's what I've found you can do:
Make a migration with python manage.py makemigrations
Edit the migration file and reorder the fields in migrations.CreateModel
Good luck! | 2 | 5 | 0 | I use Django 1.11, PostgreSQL 9.6 and Django migration tool. I couldn't have found a way to specify the column orders. In the initial migration, changing the ordering of the fields is fine but what about migrations.AddField() calls? AddField calls can also happen for the foreign key additions for the initial migration. Is there any way to specify the ordering or am I just obsessed with the order but I shouldn't be?
Update after the discussion
PostgreSQL DBMS doesn't support positional column addition. So it is practically meaningless to expect this facility from the migration tool for column addition. | Django Migration Database Column Order | 1.2 | 1 | 0 | 5,299 |
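A sketch of what the migration-editing step described above looks like inside the generated file; the model and field names here are invented for illustration, and only the order of the tuples in the fields list is changed by hand:

```python
# 0001_initial.py (generated by makemigrations, then hand-edited)
from django.db import migrations, models


class Migration(migrations.Migration):
    initial = True
    dependencies = []
    operations = [
        migrations.CreateModel(
            name='Book',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True,
                                        serialize=False, verbose_name='ID')),
                # reorder these tuples to control the column order in the table
                ('title', models.CharField(max_length=200)),
                ('author', models.CharField(max_length=100)),
                ('published', models.DateField(null=True)),
            ],
        ),
    ]
```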
45,261,303 | 2017-07-23T03:45:00.000 | 0 | 0 | 0 | 0 | python,django,postgresql,django-models,migration | 59,406,349 | 2 | false | 1 | 0 | I am not 100% sure about the PostgreSQL syntax but this is what it looks like in SQL after you have created the database. I'm sure PostgreSQL would have an equivalent:
ALTER TABLE yourtable.yourmodel
CHANGE COLUMN columntochange columntochange INT(11) NOT NULL AFTER columntoplaceunder;
Or if you have a GUI (mysql workbench in my case) you can go to the table settings and simply drag and drop colums as you wish and click APPLY. | 2 | 5 | 0 | I use Django 1.11, PostgreSQL 9.6 and Django migration tool. I couldn't have found a way to specify the column orders. In the initial migration, changing the ordering of the fields is fine but what about migrations.AddField() calls? AddField calls can also happen for the foreign key additions for the initial migration. Is there any way to specify the ordering or am I just obsessed with the order but I shouldn't be?
Update after the discussion
PostgreSQL DBMS doesn't support positional column addition. So it is practically meaningless to expect this facility from the migration tool for column addition. | Django Migration Database Column Order | 0 | 1 | 0 | 5,299 |
45,261,395 | 2017-07-23T04:07:00.000 | 0 | 0 | 0 | 0 | python,apache-spark,pyspark | 45,265,597 | 1 | false | 0 | 0 | More information about the job would help:
Some of the generic suggestions:
The arrangement of operators is very important. Not all arrangements will result in the same performance. Operators should be arranged so as to reduce the number of shuffles and the amount of data shuffled. Shuffles are fairly expensive operations; all shuffle data must be written to disk and then transferred over the network.
repartition, join, cogroup, and any of the *By or *ByKey transformations can result in shuffles.
rdd.groupByKey().mapValues(_.sum) will produce the same results as rdd.reduceByKey(_ + _). However, the former will transfer the entire dataset across the network, while the latter will compute local sums for each key in each partition and combine those local sums into larger sums after shuffling.
You can avoid shuffles when joining two datasets by taking advantage
of broadcast variables.
Avoid the flatMap-join-groupBy pattern.
Avoid reduceByKey when the input and output value types are different.
This is not exhaustive. And you should also consider tuning your configuration of Spark.
I hope it helps. | 1 | 0 | 1 | I am new to spark and want to know about optimisations on spark jobs.
My job is a simple transformation type job merging 2 rows based on a condition. What are the various types of optimisations one can perform over such jobs? | Optimisations on spark jobs | 0 | 0 | 0 | 49 |
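A small PySpark sketch of two of the suggestions above (preferring reduceByKey over groupByKey, and broadcasting a small lookup table instead of joining against it); the sample data is made up:

```python
from pyspark import SparkContext

sc = SparkContext(appName="shuffle-demo")

pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3), ("b", 4)])

# Prefer reduceByKey: partial sums are combined per partition before the shuffle
sums = pairs.reduceByKey(lambda x, y: x + y)       # per-key sums: a -> 4, b -> 6

# Avoid: groupByKey ships every individual value across the network first
sums_slow = pairs.groupByKey().mapValues(sum)

# Broadcast a small lookup table instead of joining against it
lookup = sc.broadcast({"a": "alpha", "b": "beta"})
labelled = sums.map(lambda kv: (lookup.value[kv[0]], kv[1]))

print(sums.collect(), labelled.collect())
```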
45,263,156 | 2017-07-23T08:40:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,tensorflow,neural-network,conv-neural-network | 45,263,297 | 1 | true | 0 | 0 | Theoretically there is no limit on the size of images being fed into a CNN. The most significant problem with larger image sizes is the increased memory footprint, especially with large batches. Moreover, you would need to use more convolutional layers to down sample the input image. Downsizing an image is a possibility of course, but you will lose discriminative information, naturally. For downsampling you can use scipy's scipy.misc.imresize. | 1 | 1 | 1 | I understand reating convolutional nerural network for 32 x 32 x 3 image, but i am planning to use larger image with different pixels. How can I reduce the image size to the required size ? does the pixel reduction impact accuracy in tensor flow ? | Building Convolutional Neural Network using large images? | 1.2 | 0 | 0 | 971 |
45,263,446 | 2017-07-23T09:15:00.000 | 3 | 1 | 1 | 0 | python,pip,pygsheets | 46,321,626 | 2 | false | 0 | 0 | Use
pip3 install command to access it | 2 | 1 | 0 | I installed pygsheets module with this command: pip install https://github.com/nithinmurali/pygsheets/archive/master.zip
When I tried to execute script, I got following error:
Traceback (most recent call last): File
"/usr/local/bin/speedtest-to-google", line 7, in
import pygsheets ImportError: No module named 'pygsheets'
I executed pip list and found: pygsheets (v1.1.2). | ImportError: No module named 'pygsheets' | 0.291313 | 0 | 0 | 5,115 |
45,263,446 | 2017-07-23T09:15:00.000 | 1 | 1 | 1 | 0 | python,pip,pygsheets | 45,689,214 | 2 | true | 0 | 0 | Script uses Python3 packages, so command pip3 install has to be used. | 2 | 1 | 0 | I installed pygsheets module with this command: pip install https://github.com/nithinmurali/pygsheets/archive/master.zip
When I tried to execute script, I got following error:
Traceback (most recent call last): File
"/usr/local/bin/speedtest-to-google", line 7, in
import pygsheets ImportError: No module named 'pygsheets'
I executed pip list and found: pygsheets (v1.1.2). | ImportError: No module named 'pygsheets' | 1.2 | 0 | 0 | 5,115 |
45,264,225 | 2017-07-23T10:49:00.000 | 0 | 0 | 0 | 1 | android,appium,python-appium | 45,297,308 | 1 | false | 0 | 0 | If you are using the appium command line tool, then you can update to the desired version using the command below:
npm update -g appium
If you want to install version 1.6.3:
Command: npm install -g appium@1.6.3
Also, if you want to update through the appium UI, then you can update by clicking on the 'Check for Updates' button available under the 'File' menu. | 1 | 0 | 0 | I need to update the latest version of the appium, alas, in the Russian forums there is not much information for it. I have a 1.1.0 beta | How to update appium 1.1.0 beta | 0 | 0 | 0 | 83
45,266,976 | 2017-07-23T15:46:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,batch-processing | 45,868,691 | 1 | true | 0 | 0 | I solved the ratio problem! I create two batches, one for validation and one for training. Then I concatenate them with image_batch = tf.concat([image_validation_batch, image_train_batch], 0). This is only for the image batch; I will investigate the labels next. | 1 | 1 | 1 | I can't find an appropriate question title, sorry.
I have a graph composed by two main data flow: image classification and label cleaning. I have two type of data:
(image_data, noisy_label, verified_label) from validation set
(image_data, noisy_label) from train set
The first is used to train the label cleaning part of the graph.
The second is used to train the image classification after its noisy label is cleaned.
Every batch need to have a ratio of 1:9.
How can i create this type of batch?? is it possible in tensorflow?? | Tensorflow: how to create batch with different type of data from different source (folder)? | 1.2 | 0 | 0 | 258 |
45,270,129 | 2017-07-23T21:38:00.000 | 5 | 0 | 1 | 0 | python,python-3.x | 45,270,151 | 4 | false | 0 | 0 | Your error indicates that you either did for i in x (which makes no sense) or for i in s (where s is an element of x, a string). What you meant to do was for i in range(len(s)). Even better would be c.isalpha() for c in s. | 1 | 0 | 0 | Say that list(x) = ["12/12/12", "Jul-23-2017"]
I want to count the number of letters (which in this case is 0) and the number of digits (which in this case is 6).
I tried calling x[i].isalpha() and x[i].isnumeric() while iterating through a for loop and an error was thrown stating
"TypeError: list indices must be integers or slices, not str"
Any help would be greatly appreciated! | Count the number of digits and letters inside a list? | 0.244919 | 0 | 0 | 3,190 |
45,270,138 | 2017-07-23T21:39:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,sublimetext3,sublimetext,sublimerepl | 45,355,836 | 1 | true | 0 | 0 | This probably means that the program is still running in the background, one way to end the instance is to right-click somewhere on the open tab that is running the Python instance and select Kill from the contextual menu, instead of going to the Task Manager. | 1 | 3 | 0 | I'm running Python 3.6 and Sublime Text 3. I tend to use Sublimerepl to quickly run my code and verify that everything works whenever I make a few changes and close the tab right afterwords. This does mean that when I exit Sublime, I also need to go into Task Manager and end > 10 instances of python ususally. Is there a way to make closing the sublimerepl tab also close the instance of Python that it created? | How do I make sublimerepl close the instance of Python it created when I close the sublimerepl tab? | 1.2 | 0 | 0 | 459 |
45,270,330 | 2017-07-23T22:10:00.000 | 1 | 1 | 1 | 0 | python-3.x,aws-lambda | 46,840,601 | 1 | false | 1 | 0 | I don't think you can write in the /var/task/ folder. If you want to write something to disk inside of the lambda runtime try the /tmp folder. | 1 | 1 | 0 | In the configuration for my Pyhon 3.6 AWS Lambda function I set the environment variable "PYTHONVERBOSE" with a setting of 1
Then in the Cloudwatch logs for my function it shows lots of messages similar to:
could not create '/var/task/__pycache__/auth.cpython-36.pyc': OSError(30, 'Read-only file system')
Is this important? Do I need to fix it? | AWS Lambda Python lots of "could not create '/var/task/__pycache__/FILENAMEpyc'" messages | 0.197375 | 0 | 0 | 868 |
45,274,732 | 2017-07-24T07:37:00.000 | 2 | 0 | 1 | 0 | python,pip | 45,274,832 | 1 | true | 0 | 0 | The simplest way to ensure this is to adjust your PATH so the directory with pip2 in it comes before the one with pip3 in it.
Another alternative is to run the pip2 command instead of pip.
Nowadays, however, it's safest to use the command python -m pip rather than just pip. As long as you know which python you are running this guarantees it uses the right version of pip.
So you might consider python2 -m pip and python3 -m pip to keep the environments separate from each other. | 1 | 0 | 0 | I have python 2.7 and 3.5 installed on my Ubuntu. There are also corresponding pip installed. Just pip refers to the pip of python 3.5 , or in other words pip=pip3. I want pip to refer to pip2. How do I do it? | Pip version change | 1.2 | 0 | 0 | 1,188 |
45,275,166 | 2017-07-24T08:01:00.000 | 1 | 0 | 1 | 0 | python,machine-learning,sentiment-analysis,vader | 57,711,059 | 2 | false | 0 | 0 | I tried NLTK Vader in another language. It works fairly well with German - after all, the languages are not too far from each other.
There is some work involved - we can't just translate the lexicon:
Change the vader_lexicon.txt
Change the NEGATE words in the code
Change the BOOSTER words in the code
Change SPECIAL_CASE_IDIOMS in the code
In general, negations work, but there are cases which involve some additional work which I haven't figured out yet. | 1 | 6 | 0 | I'm stuck in sentiment analysis and I found Vader solution which is the best I could find so far. My issue is that I don't find any doc on how to feed it with languages other than English. | Is Vader SentimentIntensityAnalyzer Multilingual? | 0.099668 | 0 | 0 | 11,919 |
45,276,669 | 2017-07-24T09:17:00.000 | 1 | 0 | 0 | 0 | python,neural-network | 45,276,971 | 1 | false | 0 | 0 | A hard drive could simply store a file that represents your network. Perhaps a JSON file with the thousands of optimized weight values, etc. Or if it is optimized, simply a template on layers and neurons, depending on what you hope to do (test or train?). Then the program you have on your computer can load this file, and test/train it. The fact it is on a hard drive makes no difference. | 1 | 0 | 1 | I was wondering if anybody either had experience or suggestions regarding the
possibility of storing a neural network (nodes, synapses) on an external hard-drive, and using a computer to run it. I do not know whether this is possible.
I would like to be able to run a convolutional Neural Network while not loading my
computer up. Thanks. | Neural Network on an external harddrive | 0.197375 | 0 | 0 | 180 |
45,277,365 | 2017-07-24T09:50:00.000 | 0 | 0 | 0 | 1 | python,django,apache,dispy,dask-distributed | 45,279,560 | 1 | false | 1 | 0 | Simple answer : stop wasting your time trying to do complicated things that will never work right with your typical web server and store your data in a database (doesn't have to be a mysql database FWIW).
Longest answer: in a production environment you typically have several parallel (sub)processes handling incoming requests, and any of those processes can serve any user at any time, so keeping your data in memory in a process will never work reliably. This is by design and it is a sane design, so trying to fight against it is just a waste of time and energy. Web server processes are not meant to persist data between requests; that's what your database is for, so use it. | 1 | 0 | 0 | I'm developing with Apache and Django a web application where users interact with a data model (C++ implementation wrapped into Python).
To avoid load / save data in a file or database after each user operation, I prefer keep data model in memory as long as the user is connected to the app.
Until now, data models are stored into a variable attached to web service. As Python running under Apache has sometimes strange behavior, I'd prefer execute user operation into separated python process, today on same server, maybe tomorrow on a different node.
I'm under the impression that Distributed computing library (dispy, dask distributed) does not enable to keep memory attached to a node. Does anyone have a solution / idea about what libraries could I use ? | Node process with dedicated memory in Python | 0 | 0 | 0 | 65 |
45,279,148 | 2017-07-24T11:13:00.000 | 0 | 0 | 0 | 0 | django,python-3.x,rest,soap,django-rest-framework | 45,458,723 | 2 | false | 1 | 0 | Let's discuss both approaches and their pros and cons.
Separate SOAP service
Reusing the same code - if you are sure the code changes will not impact the two code flows, it is good to go.
Extension of features - if you are sure that extending features will not impact other parts, it is again fine.
Scalability - if the new APIs are part of the same application and you are sure it will be scalable under more load, it is again a good option.
Extension - if you are sure that adding more APIs in the future will not create a mess of code, it is again good to go for.
SOAP wrapper using Python (my favourite and suggested way to go)
Separation of concerns - with this you can make sure that whatever code you write is separate from the main logic, and you can easily plug new things in and out.
The answer to all the above questions in this case is YES.
Your call.
Comments and critisicsm are most welcome | 1 | 15 | 0 | I have existing REST APIs, written using Django Rest Framework and now due to some client requirements I have to expose some of them as SOAP web services.
I want to know how to go about writing a wrapper in python so that I can expose some of my REST APIs as SOAP web services. OR should I make SOAP web services separately and reuse code ?
I know this is an odd situation but any help would be greatly appreciated. | Write a wrapper to expose existing REST APIs as SOAP web services? | 0 | 0 | 1 | 3,292 |
45,280,020 | 2017-07-24T11:56:00.000 | 2 | 0 | 0 | 0 | python,k-means,gensim,word2vec | 45,289,131 | 1 | false | 0 | 0 | In gensim's Word2Vec model, the raw number_of_words x number_of_features numpy array of word vectors is in model.wv.vectors. (In older Gensim versions, the .vectors property was named .syn0 matching the original Google word2vec.c naming).
You can use the model.wv.key_to_index dict (previously .vocab) to learn the string-token-to-array-slot assignment, or the model.wv.index_to_key list (previously .index2word) to learn the array-slot-to-word assignment.
The pairwise distances aren't pre-calculated, so you'd have to create that yourself. And with typical vocabulary sizes, it may be impractically large. (For example, with a 100,000 word vocabulary, storing all pairwise distances in the most efficient way possible would require roughly 100,000^2 * 4 bytes/float / 2 = 20GB of addressable space.) | 1 | 0 | 1 | I have generated a word2vec model using gensim for a huge corpus and I need to cluster the vocabularies using k-means clustering, and for that I need:
cosine distance matrix (word to word, so the size of the matrix the number_of_words x number_of_words )
features matrix (word to features, so the size of the matrix is the number_of_words x number_of_features(200) )
for the feature matrix i tried to use x=model.wv and I got the object type as gensim.models.keyedvectors.KeyedVectors and its much smaller than what I expected a feature matrix will be
is there a way to use this object directly to generate the k-means clustering ? | getting distance matrix and features matrix from word2vec model | 0.379949 | 0 | 0 | 1,198 |
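A hedged sketch of building the feature matrix and the cosine distance matrix from a trained model and clustering it. The toy corpus and cluster count are invented so the example runs end to end, and the argument/property names follow Gensim 4 (on older versions use size=, .syn0 and .index2word instead):

```python
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_distances

# Tiny toy corpus just so the example is self-contained
sentences = [["human", "machine", "interface"],
             ["graph", "trees", "survey"],
             ["machine", "learning", "graph"]]
model = Word2Vec(sentences, vector_size=20, min_count=1)

features = model.wv.vectors                 # (number_of_words, 20) feature matrix
words = list(model.wv.index_to_key)         # row i of `features` belongs to words[i]

kmeans = KMeans(n_clusters=2, random_state=0).fit(features)
clusters = dict(zip(words, kmeans.labels_))

dist = cosine_distances(features)           # word-to-word cosine distance matrix
print(features.shape, dist.shape, clusters)
```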
45,280,650 | 2017-07-24T12:27:00.000 | 2 | 0 | 0 | 1 | python,airflow | 45,298,761 | 8 | false | 0 | 0 | In this case I would use a PythonOperator from which you are able to get a Hook on your database connection using
hook = PostgresHook(postgres_conn_id=postgres_conn_id). You can then call get_connection on this hook which will give you a Connection object from which you can get the host, login and password for your database connection.
Finally, use for example subprocess.call([your_script, connection_string]), passing the connection details as parameters.
This method is a bit convoluted but it does allow you to keep the encryption for database connections in Airflow. Also, you should be able to pull this strategy into a separate Operator class inheriting the base behaviour from PythonOperator but adding the logic for getting the hook and calling the bash script. | 1 | 37 | 0 | We are using Airflow as a scheduler. I want to invoke a simple bash operator in a DAG. The bash script needs a password as an argument to do further processing.
How can I store a password securely in Airflow (config/variables/connection) and access it in dag definition file?
I am new to Airflow and Python so a code snippet will be appreciated. | Store and access password using Apache airflow | 0.049958 | 0 | 0 | 49,127 |
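A hedged sketch of the approach described above, using the Airflow 1.x module paths current at the time; the connection id, DAG name and script path are invented for illustration:

```python
import subprocess
from datetime import datetime

from airflow import DAG
from airflow.hooks.postgres_hook import PostgresHook
from airflow.operators.python_operator import PythonOperator


def run_script():
    # The credentials stay encrypted in Airflow's metadata DB until here
    conn = PostgresHook(postgres_conn_id='my_postgres').get_connection('my_postgres')
    subprocess.check_call(['/path/to/your_script.sh',
                           conn.host, conn.login, conn.password])


dag = DAG('secure_script',
          start_date=datetime(2017, 7, 1),
          schedule_interval='@daily')

run_task = PythonOperator(task_id='run_script',
                          python_callable=run_script,
                          dag=dag)
```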
45,281,971 | 2017-07-24T13:29:00.000 | 1 | 0 | 0 | 0 | python,django | 45,282,175 | 1 | true | 1 | 0 | It totally depends on your application.
If you are the only developer working on the project,
it is advisable to write one view for each web page or event.
If you have multiple developers in house, you can split the view if you want to make a part of it reusable or something like that.
Again, it's all about how your team works; better to stick to the same style for the entire project.
all the best | 1 | 0 | 0 | This is a non-specific question about best practice in Django. Also note when I say "app" I'm referring to Django's definition of apps within a project.
How should you go about deciding when to use a new view and when to create an entirely new app? In theory, you can have a simple webapp running entirely on one views.py for an existing app.
So how do you go about deciding when to branch off to a new app or just add a new function in your views.py? Is it just whatever makes the most sense? | When To Use A View Vs. A New Project | 1.2 | 0 | 0 | 28 |
45,282,194 | 2017-07-24T13:39:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 45,282,483 | 1 | true | 0 | 0 | This is impossible. You have tensor, which contains batch_size * max_word_length
elements and tensor which contains batch_size * predicted_label elements. Hence there are
batch_size * (max_word_length + predicted_label)
elements. And now you want to create new tensor [batch_size, max_word_length, predicted_label] with
batch_size * max_word_length * predicted_label
elements. You don't have enough elements for this. | 1 | 0 | 1 | I am doing some sentiment analysis with Tensorflow, but there is a problem I can't solve:
I have one tensor (input) shaped as [?, 38] [batch_size, max_word_length] and one (prediction) shaped as [?, 3] [batch_size, predicted_label].
My goal is to combine both tensors into a single tensor with the shape of [?, 38, 3].
This tensor is used as the input of my second stage.
Seems easy, but i can't find a way of doing it.
Can (and will) you tell me how to do this? | Tensorflow: combining two tensors with dimension X into one tensor with dimension X+1 | 1.2 | 0 | 0 | 834 |
45,282,330 | 2017-07-24T13:46:00.000 | 1 | 0 | 1 | 0 | python,import,pyqt5,pyqtgraph | 45,329,119 | 1 | true | 0 | 1 | In the documentation it implies that the promoted name can be anything, however, this seems to be untrue (at least at the moment).
Under “Promoted class name”, enter the class name you wish to use (“PlotWidget”, “GraphicsLayoutWidget”, etc).
After re-doing my QGraphicsWidget for the 6th time, I decided to name it one of the example names in the tutorial, which seems to have solved the problem.
In other words name your widget and promoted widget "PlotWidget", "ImageView", "GraphicsLayoutView", or "GraphicsView". (Please keep in mind I have only tested "PlotWidget".) | 1 | 0 | 0 | I'm trying to make a program using PyQt5 and pyqtgraph, however, I keep running into this error:
ImportError: cannot import name 'QGraphWidget'
I used QtDesigner to make a form and promoted a QGraphicsWidget. I know I did it correctly (I've done it at least 10 times to try and resolve the issue), but the error persists.
I'm using Windows 7, Anaconda, and PyCharm, but I tried running the code in other environments and still got the error. | pyqtgraph QGraphicsWidget import error | 1.2 | 0 | 0 | 644 |
45,283,914 | 2017-07-24T15:01:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,user-interface,tkinter,py2app | 45,290,904 | 1 | false | 0 | 1 | So if someone else is facing the same issue, the best way to figure out what is wrong is to run the console version of the app by going to the MacOS folder. In this specific case, Sierra was not letting my app create a log file, citing lack of permissions, which was weird; it seems like Sierra has some extra security feature that doesn't let 3rd party apps create new files (just a guess, could be some other reason too), so when I opened the app from terminal with 'sudo', the problem was fixed. I had to do this only once though; even after restarts, double clicking the icon opened up the application and updated the log file too. Hope this helps if you came in search of answers here. | 1 | 0 | 0 | I built a py2app and Tkinter based application, and sent it to a friend, it does not seem to be working on the friends laptop that runs OSX sierra. Is there anything I can do?
When I try to open the application on my friends computer it just says Hook Error (name of the application is hook). | py2app tkinter application built on El capitan not working on a sierra | 0 | 0 | 0 | 181 |
45,285,743 | 2017-07-24T16:32:00.000 | 1 | 0 | 0 | 0 | python,pandas,numpy | 45,285,878 | 5 | false | 0 | 0 | Numpy is very fast with arrays, matrices, and math.
Pandas Series have indexes; sometimes that is very useful for sorting or joining data.
Dictionaries are a slow beast, but sometimes they are very handy too.
So, as was already mentioned, it depends on the use case which data types and tools to use. | 1 | 48 | 1 | I am new to learning Python, and some of its libraries (numpy, pandas).
I have found a lot of documentation on how numpy ndarrays, pandas series and python dictionaries work.
But owing to my inexperience with Python, I have had a really hard time determining when to use each one of them. And I haven't found any best-practices that will help me understand and decide when it is better to use each type of data structure.
As a general matter, are there any best practices for deciding which, if any, of these three data structures a specific data set should be loaded into?
Thanks! | When to use pandas series, numpy ndarrays or simply python dictionaries? | 0.039979 | 0 | 0 | 27,323 |
45,286,053 | 2017-07-24T16:52:00.000 | 0 | 0 | 1 | 0 | python,rocksdb | 45,400,410 | 2 | false | 0 | 0 | RocksDB is a key-value store, and both key and value are binary strings.
If you want to filter by given keys, just use the Get interface to search the DB.
If you want to filter by given key patterns, you have to use the Iterator interface to iterate over the whole DB, and filter the records with keys that match the pattern.
If you want to filter by values or value patterns, you still need to iterate over the whole DB. For each key-value pair, deserialize the value, and check if it equals the given value or matches the given pattern.
For case 1 and case 2, you don't need to deserialize all values, but only the values whose keys equal the given key or match the pattern. However, for case 3, you have to deserialize all values.
Both case 2 and case 3 are inefficient, since they need to iterate over the whole key space.
You can configure RocksDB's keys to be ordered, and RocksDB has good support for prefix indexing. So you can efficiently do range queries and prefix queries by key. Check the documentation for details.
In order to efficiently do value filter/search, you have to create a value index with RocksDB. | 1 | 2 | 0 | I want to interface with rocksdb in my python application and store arbitrary dicts in it. I gather that for that I can use something like pickle to for serialisation. But I need to be able to filter the records based on values of their keys. What's the proper approach here? | How to facilitate dict record filtering by dict key value? | 0 | 0 | 0 | 73 |
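A sketch of the exact-key lookup and the scan-and-filter cases, assuming the python-rocksdb bindings and pickled dict values (the library choice and record layout are assumptions for illustration, not something the answer prescribes):

```python
import pickle
import rocksdb

db = rocksdb.DB("records.db", rocksdb.Options(create_if_missing=True))

db.put(b"user:1", pickle.dumps({"name": "alice", "age": 30}))
db.put(b"user:2", pickle.dumps({"name": "bob", "age": 25}))

# Case 1: filter by an exact key -- cheap, single Get
print(pickle.loads(db.get(b"user:1")))

# Case 3: filter by value -- must scan the whole DB and deserialize everything
it = db.iteritems()
it.seek_to_first()
matches = []
for key, raw in it:
    record = pickle.loads(raw)
    if record["age"] > 26:
        matches.append(record)
print(matches)
```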
45,288,631 | 2017-07-24T19:31:00.000 | 2 | 0 | 0 | 1 | python,protocol-buffers,macports | 45,290,029 | 2 | false | 0 | 0 | MacPorts has a port for OLA. Installing it will automatically install the required protobuf module.
Command is sudo port install ola
Or is there some reason that you need to install a version from GitHub? | 1 | 0 | 0 | Using MacPorts to try and install OLA for work. Came across this error when trying to build OLA after getting it from github:
checking for python module: google.protobuf... no
configure: error: failed to find required module google.protobuf
Tried googling around to see if there was a solution, didn't find one. Like the title implies, I'm on a Macbook Air running Sierra, Python version is 2.7.10. | macOS 10.12; Configure: error: failed to find required module google.protobuf | 0.197375 | 0 | 0 | 593 |
45,290,682 | 2017-07-24T21:53:00.000 | 0 | 0 | 0 | 1 | python,ibm-doors | 45,373,289 | 1 | false | 0 | 0 | I would recommend trying your command string C:\\myPython.exe H:\\myscript.py in the command line first, seeing if that works. If it does then it could be a permissions error (can't tell unless you have an error code to go with the CreateProcess error, as there are several types).
It might then be worth checking you can run any kind of command in the command line from DOORs (system("notepad") should do the trick)
If that doesn't work running DOORS as an admin may fix your issue, you can do this by going to the doors.exe right-clicking Properties -> Compatibility and selecting "Run this program as an administrator". | 1 | 0 | 0 | How do you run a python script from inside DXL/DOORS? I attempted using the system() command but only gotten errors. | Running Python Script from inside DXL/DOORS | 0 | 0 | 0 | 1,617 |
45,290,798 | 2017-07-24T22:02:00.000 | 4 | 0 | 1 | 1 | python,module,directory,pip,command-prompt | 45,666,971 | 1 | false | 0 | 0 | use 'python -m pip install package-name' instead. | 1 | 2 | 0 | None of the current error inquiries addressed my specific situation (it seems like a pretty well rounded problem). I am trying to install Pillow (image module for Python). I have the correct version of the whl file, and the correct installation of Python 3.6. My paths have been confirmed.
Steps that I took:
Downloaded the whl file
Opened downloads in command window
Typed the pip path, install, and then my whl file.
Then I got the error: "Fatal error in launcher: Unable to create process using" | What does "Fatal error in launcher: Unable to create process using" mean? | 0.664037 | 0 | 0 | 4,784 |
45,294,801 | 2017-07-25T05:44:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 45,297,247 | 2 | false | 0 | 0 | By default, if the GPU version of TensorFlow is installed, TensorFlow will use all the GPUs available.
To control the GPU memory allocation, you can use the tf.ConfigProto().gpu_options. | 1 | 2 | 1 | I use two GTX 980 gpu.
When I am dealing with slim in tensorflow.
Usually, i have a problem so called 'Out of Memory'.
So, I want to use two gpu at the same time.
How can I use 2 gpu?
Oh, sorry for my poor english skill. :( | How can I use 2 gpu to calculate in tensorflow? | 0 | 0 | 0 | 599 |
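A hedged sketch for the TF 1.x API current at the time: limiting per-GPU memory through gpu_options and pinning ops to each card with tf.device (the matrix sizes are arbitrary):

```python
import tensorflow as tf

# Place one half of the work on each card explicitly
with tf.device('/gpu:0'):
    a = tf.random_normal([1000, 1000])
    part0 = tf.matmul(a, a)
with tf.device('/gpu:1'):
    b = tf.random_normal([1000, 1000])
    part1 = tf.matmul(b, b)
total = part0 + part1

config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True                    # grab memory lazily
config.gpu_options.per_process_gpu_memory_fraction = 0.9  # cap usage per GPU

with tf.Session(config=config) as sess:
    print(sess.run(total).shape)
```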
45,295,727 | 2017-07-25T06:42:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,flask,amazon-elastic-beanstalk,firebase-admin | 45,299,181 | 1 | true | 1 | 0 | I've Fixed it!
As the error states, the problem is in requirements.txt.
Instead of using pip freeze > requirements.txt,
I just listed the required packages in the requirements file, without pinning versions.
Fix: requirements.txt contains
flask
flask_sqlalchemy
firebase_admin
pymysql | 1 | 1 | 0 | I'm using python-flask and firebase-admin (for authentication) in my mobile app backend. I'm deploying my code to AWS Elastic Beanstalk. Everything is fine until I install the firebase-admin through "pip install firebase-admin".
I've committed changed to git.
Now, the deployment fails and displays the following message.
*MacBook-Pro:pets-friend-api santosh.guruju$ eb deploy
WARNING: Git is in a detached head state. Using branch "default".
WARNING: Git is in a detached head state. Using branch "default".
WARNING: Git is in a detached head state. Using branch "default".
WARNING: Git is in a detached head state. Using branch "default".
WARNING: Git is in a detached head state. Using branch "default".
Creating application version archive "app-d517-170725_142037".
Uploading PetsFrenzAPI/app-d517-170725_142037.zip to S3. This may take a while.
Upload Complete.
INFO: Environment update is starting.
INFO: Deploying new version to instance(s).
ERROR: Your requirements.txt is invalid. Snapshot your logs for details.
ERROR: [Instance: i-054100c8ffb51643c] Command failed on instance. Return code: 1 Output: (TRUNCATED)...)
File "/usr/lib64/python2.7/subprocess.py", line 541, in check_call
raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03deploy.py failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Unsuccessful command execution on instance id(s) 'i-054100c8ffb51643c'. Aborting the operation.
ERROR: Failed to deploy application.* | Firebase-admin:- Deployment Failed: ERROR: Your requirements.txt is invalid. Snapshot your logs for details. | 1.2 | 0 | 0 | 471 |
45,297,258 | 2017-07-25T07:58:00.000 | 2 | 0 | 1 | 0 | python,json,django,elasticsearch-dsl | 45,313,746 | 1 | true | 1 | 0 | You can always do MyDoc(**my_dict) | 1 | 1 | 0 | I'm using the elasticsearch-dsl library to define mappings in elasticsearch and to index django model objects. For initial indexing I, however, want to use json data for all models. Is there a way to instantiate DocType subclass objects directly from json or from a python dict? | Creating elasticsearch-dsl DocType objects from dict or json | 1.2 | 0 | 0 | 762 |
45,298,393 | 2017-07-25T08:53:00.000 | 0 | 0 | 0 | 0 | python,treeview,openerp,one2many | 45,298,797 | 1 | false | 1 | 0 | To use a one2many field you need a many2one field in products
pointing to this new model that you create. To make it easy, use a many2many
field instead - it's better that way - and use an onchange to fill it:
just search for products that have parent_id equal to the selected
product and add those records to your many2many field.
If you need to keep using the o2m field, it's better to add more code
so we can see what you did and what the many2one field is that you added
in your product pointing to your new model. | 1 | 0 | 0 | I'm writing a module in odoo. I have defined some parent products and their child products. I want to do, when I'm selecting a parent product from many2one field, this parent product's children will open in Treeview lines automatically. This tree view field is defined as one2many field.
I used an onchange_parent_product function, and also added a filter according to parent_product_id.
But the treeview shows nothing when I select a parent product.
Please help me how can I fill treeview lines automatically ? | How to autofill child produts in treeview, when parent product (BOM) is selected in odoo? | 0 | 0 | 0 | 433 |
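A hedged sketch of the onchange approach in the old-style (Odoo 8/9) new API; all model, field names and the product parent_id field are invented and must be adapted to the actual module (for Odoo 10+ import from odoo instead of openerp):

```python
from openerp import api, fields, models


class BomSelection(models.Model):
    _name = 'my.bom.selection'

    parent_product_id = fields.Many2one('product.product', string='Parent Product')
    line_ids = fields.One2many('my.bom.selection.line', 'selection_id',
                               string='Children')

    @api.onchange('parent_product_id')
    def _onchange_parent_product(self):
        # clear the existing tree lines, then add one line per child product
        self.line_ids = [(5, 0, 0)]
        if self.parent_product_id:
            children = self.env['product.product'].search(
                [('parent_id', '=', self.parent_product_id.id)])  # assumed field
            self.line_ids = [(0, 0, {'product_id': child.id}) for child in children]


class BomSelectionLine(models.Model):
    _name = 'my.bom.selection.line'

    selection_id = fields.Many2one('my.bom.selection')
    product_id = fields.Many2one('product.product', string='Child Product')
```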
45,299,079 | 2017-07-25T09:23:00.000 | 2 | 0 | 1 | 0 | python | 45,299,339 | 1 | true | 0 | 0 | Yes, that's correct. Most of the time, just think of the value as the object itself.
You could also use the word 'state' to describe the object value; for mutable objects the value can change, but in general the object, its type and identity, do not change.
Some examples:
2048 is an int with the integer value 2048. int is an immutable type, so the value will never change. You generally create a new object of the same type with a different value; 2048 + 1 produces a new int object with value 2049, with a new identity.
[42] is a list with a single reference to another object. You can change the contents, changing the list value. But the identity and type would not change.
For instances of a Python class, __class__ is writable, letting you change the type dynamically. This is rarely needed, but the option exists. | 1 | 4 | 0 | I am reading python's language reference and in the 3rd chapter 'Data model' it is said that every object has an identity, type and value. The identity I understood. The type I guess means the object referenced by __class__ (please correct if wrong). I guess that value means the attributes of the object, or in other words the objects referenced by the names in the object's namespace. Is this correct ? | What does value mean in relation to python objects? | 1.2 | 0 | 0 | 118 |
45,299,561 | 2017-07-25T09:43:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,resolution,inference,tensor | 45,456,995 | 1 | true | 0 | 0 | Okay so here is what I did:
input and output tensors now have the shape (batchsize, None, None, channels)
The training images now have to be resized outside of the network.
Important reminder: images in one batch have to have the same size, so training images must be resized to a common size. When inferencing, the batch size is 1, so the size does not matter. | 1 | 1 | 1 | I am using tensorflow to scale images by a factor of 2. But since the tensor (batchsize, height, width, channels) determines the resolution, it only accepts images of one resolution for inference and training.
For other resolutions I have to modify the code and retrain the model. Is it possible to make my code resolution independent? In theory convolutions of images are resolution independent, I don't see a reason why this wouldn't be possible.
I have no idea how to do this in tensorflow though. Is there anything out there to help me with this?
Thanks | Variable Resolution with Tensorflow for Superresolution | 1.2 | 0 | 0 | 208 |
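A minimal sketch of the shape-agnostic setup described above, using TF 1.x placeholders; the conv layer and the x2 resize are only illustrative:

```python
import numpy as np
import tensorflow as tf

# Height and width left as None so any resolution can be fed at inference time
images = tf.placeholder(tf.float32, [None, None, None, 3])

features = tf.layers.conv2d(images, filters=16, kernel_size=3, padding='same')
new_size = tf.shape(images)[1:3] * 2                       # dynamic x2 upscale
upscaled = tf.image.resize_images(features, new_size)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(upscaled, {images: np.zeros((1, 120, 80, 3), np.float32)})
    print(out.shape)                                        # (1, 240, 160, 16)
```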
45,302,681 | 2017-07-25T12:01:00.000 | 2 | 0 | 1 | 0 | python-3.4,screen-resolution,pyautogui | 69,426,639 | 2 | false | 0 | 1 | I'm the creator of PyAutoGUI. The problem you have isn't with the screen resolution, but the screen scaling. Your program will work fine on monitors at different resolutions. But at high resolutions, the text and buttons of your programs become too small and so Windows and macOS fix this with "screen scaling", which can increase the size of UI elements like text and buttons by 150% or 200%. If you have multiple monitors, each monitor can have it's own screen scaling setting; for example one monitor at 100% and another at 150%.
If you take a screenshot while a monitor is at, for example, 100% and then try to use it on a monitor that is scaled at 200%, the image won't match because the screenshot is literally half the width and length of what it would have been on the 200% monitor.
So far, there is no work around for this. Resizing the screenshot might not work because there could be subtle differences and the screenshot mechanism currently needs a pixel-perfect match. You just need to retake the screenshots on the monitor with the different screen scaling setting. | 1 | 1 | 0 | I have a python script which runs perfectly on my work computer (1600 x 900 resolution). It is on this computer that I took all the screenshot images used by pyautogui.locateOnScreen. I tried to run this program on my home laptop with a different resolution (1340 x 640) and the script does not seem to find the image location. I am guessing that it is because of the different resolution. (I have copied the script folder from my work computer to the home computer, so the path to the image file is exactly the same). Is there anything I can change in my script so that pyautogui.locateOnScreen would identify the image on any computer resolution? | Running Pyautogui on a different computer with different resolution | 0.197375 | 0 | 0 | 4,334 |
45,307,072 | 2017-07-25T15:02:00.000 | 24 | 0 | 0 | 0 | python,tensorflow,neural-network | 45,308,393 | 4 | false | 0 | 0 | If alpha < 1 (it should be), you can use tf.maximum(x, alpha * x) | 1 | 16 | 1 | How can I change G_h1 = tf.nn.relu(tf.matmul(z, G_W1) + G_b1) to leaky relu? I have tried looping over the tensor using max(value, 0,01*value) but I get TypeError: Using a tf.Tensor as a Python bool is not allowed.
I also tried to find the source code on relu on Tensorflow github so that I can modify it to leaky relu but I couldn't find it.. | using leaky relu in Tensorflow | 1 | 0 | 0 | 19,787 |
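A small sketch of the tf.maximum formulation applied to the layer from the question; alpha = 0.2 is chosen just for illustration, and G_W1, G_b1 and z are assumed to exist in the surrounding GAN code:

```python
import tensorflow as tf

def leaky_relu(x, alpha=0.2):
    # elementwise max(x, alpha * x) equals LeakyReLU for 0 < alpha < 1
    return tf.maximum(x, alpha * x)

# drop-in replacement for tf.nn.relu in the generator:
# G_h1 = leaky_relu(tf.matmul(z, G_W1) + G_b1)

x = tf.constant([-2.0, -0.5, 0.0, 3.0])
with tf.Session() as sess:
    print(sess.run(leaky_relu(x)))   # [-0.4 -0.1  0.   3. ]
```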
45,310,481 | 2017-07-25T17:55:00.000 | 3 | 0 | 0 | 0 | python,matplotlib | 45,312,185 | 2 | false | 0 | 0 | @Cedric's Answer.
Additionally, if you get the pickle error for pickling functions, add the 'dill' library to your pickling script. You just need to import it at the start, it will do the rest. | 1 | 4 | 1 | I want to create a python script that zooms in and out of matplotlib graphs along the horizontal axis. My plot is a set of horizontal bar graphs.
I also want to make that able to take any generic matplotlib graph.
I do not want to just load an image and zoom into that, I want to zoom into the graph along the horizontal axis. (I know how to do this)
Is there some way I can save and load a created graph as a data file or is there an object I can save and load later?
(typically, I would be creating my graph and then displaying it with the matplotlib plt.show, but the graph creation takes time and I do not want to recreate the graph every time I want to display it) | python matplotlib save graph as data file | 0.291313 | 0 | 0 | 6,630 |
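A minimal sketch of pickling a figure object and reloading it later, as suggested above; the bar data is made up, and import dill is only needed if the function-pickling error mentioned in the answer appears:

```python
import pickle
import matplotlib.pyplot as plt

# Build the (expensive) figure once
fig, ax = plt.subplots()
ax.barh(range(3), [3, 7, 2])
with open('graph.pkl', 'wb') as f:
    pickle.dump(fig, f)

# Later: reload, adjust the horizontal zoom, and render again
with open('graph.pkl', 'rb') as f:
    fig2 = pickle.load(f)
fig2.axes[0].set_xlim(0, 4)          # zoom along the horizontal axis
fig2.savefig('zoomed.png')
```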
45,310,817 | 2017-07-25T18:14:00.000 | 1 | 1 | 0 | 0 | python,python-2.7,python-3.x | 45,310,903 | 1 | true | 0 | 0 | To save time, I would convert the smaller of the two libraries you mention to the other version. If they're about the same size, I'd convert the 2.x library to 3. The motor hat library is only ~350 lines of code, much of which will not change in conversion to 3.x. Would be a good self-teaching exercise... | 1 | 2 | 0 | Iv written a script that uses the adafruit motor hat library to control motors when it recives 433MHz ex transmitted codes! Very short range at the moment however this is the best way for my project!
The problem is the 433MHz rx/tx library is python3 and won't work on python2
And the adafruit_motor_hat library is Python 2 and won't work on Python 3?
As I need both of these to work in the same script, how can I go about this?
Also, if I try to run from the command line it won't work (if I run python3 script.py it brings up an error with the Adafruit motor hat; if I run python script.py it brings up an error on the 433MHz library).
If anybody needs my full script I can copy and paste it here, but as the problem doesn't seem to be with the actual script it seemed pretty pointless! But if you need it I can provide it | Adafruit motor hat python3 | 1.2 | 0 | 0 | 293
45,311,601 | 2017-07-25T19:04:00.000 | 0 | 0 | 1 | 0 | python | 45,311,699 | 1 | false | 0 | 0 | Type
which python
on terminal. | 1 | 0 | 0 | I'm trying to download pygame on my computer and use it, but from what I've seen, I need the 32-bit python not the 64-bit one I have. however, I cannot find where the file is on my computer to delete it. I looked through all of my files with the name of 'python' but nothing has shown up about the 64-bit pre installed program. Anyone know how to find it? | Where is python on mac? (osx el capitan) | 0 | 0 | 0 | 98 |
45,314,387 | 2017-07-25T22:13:00.000 | 2 | 0 | 1 | 0 | python,multithreading,web-scraping,multiprocessing | 45,314,501 | 1 | true | 0 | 0 | Since the multiprocessing module was developed to be largely compatible with the threading model that pre-dates it, you should hopefully not find it too difficult to move to threaded operations in a single process.
Any blocking calls (I/O, mostly) will cause the calling thread to be suspended (become non-runnable) and other threads will therefore get a chance to use the CPU.
While it's possible to use multi-threading in multiple processes, it isn't usual to do so. | 1 | 1 | 0 | I'm building a web scraper that makes multiple requests concurrently. I'm currently using the multiprocessing module to do so, but since it's running on a Digital Ocean droplet, I'm running into processor/memory bottlenecks.
Since this is a web scraper and most of the time spent on the script is likely waiting for the network, isn't it more efficient to use threading instead in order to reduce resource usage? Does threading detect a blocking network call and release locks? Is it feasible to intertwine multiprocessing and multithreading? | Multiprocess vs multithreading for network operations | 1.2 | 0 | 1 | 384 |
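A sketch of the threaded approach using only the Python 3 standard library; the URLs are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URLS = ['https://example.com/page/%d' % i for i in range(1, 21)]

def fetch(url):
    # The GIL is released while the thread waits on the socket,
    # so other downloads proceed in parallel.
    with urlopen(url, timeout=10) as resp:
        return url, len(resp.read())

with ThreadPoolExecutor(max_workers=8) as pool:
    for url, size in pool.map(fetch, URLS):
        print(url, size)
```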
45,316,569 | 2017-07-26T02:46:00.000 | 1 | 0 | 1 | 0 | python,tensorflow,module,pip,installation | 45,316,714 | 5 | false | 0 | 0 | If you are using windows:
If you take a gander at the TensorFlow website, under Windows pip installation, the first line says:
"Pip installation on Windows
TensorFlow supports only 64-bit Python 3.5 on Windows. We have tested the pip packages with the following distributions of Python:"
Now either install python 3.5, or use the unofficial version of Tensorflow from ANACONDA.
Another way is to download and install Docker Toolbox for Windows (https://www.docker.com/docker-toolbox), open a cmd window, and type: docker run -it b.gcr.io/tensorflow/tensorflow. This should bring up a Linux shell. Type python and I think all would be well! | 1 | 31 | 1 | I try to install TensorFlow via pip (pip install tensorflow) but get this error
could not find a version that satisfies the requirement tensorflow (from versions: )
Is there a solution to this problem? I still wish to install it via pip | How to install Tensorflow on Python 2.7 on Windows? | 0.039979 | 0 | 0 | 88,521 |
45,318,019 | 2017-07-26T05:18:00.000 | 0 | 0 | 0 | 1 | linux,python-2.7,ubuntu | 45,318,450 | 2 | false | 0 | 0 | os.system can do this. For example, if you want to run 'ls' under a shell: want_run = 'ls'; os.system('bash -c ' + want_run) | 1 | 1 | 0 | There are mainly two questions that I would like to ask, thanks in advance.
(1) How can I open an external program in Linux?
I know in Windows there is a command os.startfile() to open another program, the equivalent for Ubuntu is open(), but there's no response after I run the code, and the alternative one is subprocess.call(). This works well in Windows, but in Ubuntu it fails, could someone provide a standard templete I can use for? (Similarly like to double click the icon of a program)
(2) How can I realize functions like the code is able to open the terminal and write down several commands in terminal automatically using python? | How can I open an external program using Python in Ubuntu? | 0 | 0 | 0 | 3,833 |
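A hedged sketch of the usual approaches on Ubuntu: xdg-open to mimic double-clicking a file or program, and subprocess to run shell commands; the paths here are examples only:

```python
import subprocess

# (1) Open a file or application the way a double-click would
subprocess.call(['xdg-open', '/home/user/report.pdf'])

# Or launch a specific program directly, without blocking the script
subprocess.Popen(['gedit', '/home/user/notes.txt'])

# (2) Run several shell commands and capture their output
output = subprocess.check_output('ls -l /tmp && df -h', shell=True)
print(output.decode())
```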
45,320,643 | 2017-07-26T07:50:00.000 | 0 | 0 | 0 | 0 | mysql,django,python-2.7 | 47,134,025 | 1 | false | 1 | 0 | It's not possible to do with default implementation. You need to download the source code and customize as per your needs. | 1 | 1 | 0 | I have a django sql explorer which is running with 5 queries and 3 users.
Query1
Query2
Query3
Query4
Query5
I want to give access of Query1 and Query5 to user1
and Query4 and Query2 to user2 and likewise.
my default url after somebody logins is url/explorer
based on users permission he should see only those queries but as of now all users can see all queries,
I tried to search stackoverflow and also other places through google but there is no direct answer. Can someone point me to right resource or help me with doing this. | django sql explorer - user based query access | 0 | 1 | 0 | 233 |