Q_Id (int64, 337 to 49.3M) | CreationDate (stringlengths 23 to 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (stringlengths 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (stringlengths 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (stringlengths 15 to 29k) | Title (stringlengths 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
42,977,272 | 2017-03-23T13:20:00.000 | 0 | 0 | 0 | 0 | python,rest,http,networking,eventlet | 42,987,615 | 1 | false | 0 | 0 | The most likely reason in this case: DNS server request throttling. You can easily check if that's the case by eliminating DNS resolving (request http://{ip-address}/path, and don't forget to add a proper Host: header). If you do web crawling, these steps are not optional; you absolutely must:
control concurrency automatically (without human action) based on aggregate (i.e. average) execution time. This applies at all levels independently. Back off concurrent DNS requests if DNS responses get slower. Back off TCP concurrency if response speed (body size / time) drops. Back off overall request concurrency if your CPU is overloaded - don't request more than you can process.
retry on temporary failures, increasing the wait-before-retry period each time (search for "backoff algorithm"). How do you decide if an error is temporary? Mostly research, trial and error.
run a local DNS server, and find and configure many upstream resolvers
The next common problem you'll likely face with high concurrency is the OS limit on the number of open connections and file descriptors. Search for sysctl somaxconn and ulimit nofile to fix those. | 1 | 0 | 0 | I have a python application that uses eventlet green threads (a pool of 1000 green threads) to make HTTP connections. Whenever the client fires more than 1000 parallel requests, ETIMEDOUT occurs. Can anyone help me out with the possible reason? | ETIMEDOUT occurs when client(jmeter) fired more than 1000 parallel HTTP requests | 0 | 0 | 1 | 194 |
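The retry advice above (increase the wait-before-retry period each time) can be sketched as follows; fetch_with_backoff and the choice of OSError as the "temporary" error class are illustrative assumptions, not part of the original answer:

```python
import random
import time

def fetch_with_backoff(fetch, retries=5, base_delay=1.0):
    """Call fetch(); on a temporary failure, wait and retry,
    doubling the wait each time (exponential backoff with a
    little random jitter to avoid synchronized retries)."""
    for attempt in range(retries):
        try:
            return fetch()
        except OSError:  # treating connection/DNS errors as temporary
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
    raise RuntimeError("giving up after %d attempts" % retries)
```

The same back-off-on-slowdown idea applies to the concurrency limits described above, just driven by measured response times instead of exceptions.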
42,978,349 | 2017-03-23T14:03:00.000 | -2 | 0 | 1 | 0 | python,anaconda | 62,708,019 | 7 | false | 0 | 0 | It is very simple: first, you need to be inside the virtualenv you created; then, to install a specific version of Python, say 3.5, use Anaconda: conda install python=3.5
In general you can do this for any python package you want
conda install package_name=package_version | 1 | 47 | 1 | I want to install tensorflow with Python 3.5 using Anaconda, but I don't know which Anaconda version has Python 3.5. When I go to the Anaconda download page I am presented with Anaconda 4.3.1, which has either version 3.6 or 2.7 of Python. | Anaconda version with Python 3.5 | -0.057081 | 0 | 0 | 123,158 |
42,983,117 | 2017-03-23T17:34:00.000 | 3 | 0 | 0 | 0 | python,django | 42,983,217 | 1 | true | 1 | 0 | You would need to add null=True to the field in your model that gives you this error to allow null values, or give it a default value with default=<value>, and then re-run your migration. | 1 | 0 | 0 | I am new to Django.
I was trying to design a model in Django. First I created it with some fields, then I migrated the code. Later I found that I needed some other fields, so I added new ones, let's say a CharField. Then while I was doing the migration, it showed an error like "you are trying to add a non-nullable field without a default". Can anybody tell me whether I should add a default value every time I add a new field, or is there any other way to handle this? | What is the right way to add a field in the middle in django | 1.2 | 0 | 0 | 80 |
42,983,291 | 2017-03-23T17:43:00.000 | 1 | 0 | 0 | 0 | python,sockets | 42,983,798 | 1 | true | 0 | 0 | Blocking operations are operations which cannot be handled fully locally, but where the socket might need to wait for the peer of the connection. For TCP sockets this obviously includes accept, connect and recv. But it also includes send: send might block if the local socket write buffer is full, i.e. no more data can be written to it. In that case it must wait for the peer to receive and acknowledge enough data that those data get removed from the write buffer and there is again room to write new data. | 1 | 1 | 0 | I am using the socket library to emulate sending packets over the network.
Documentation for socket.settimeout() method says..
... socket.settimeout(value)
Set a timeout on blocking socket
operations. The value argument can be a nonnegative float expressing
seconds, or None. If a float is given, subsequent socket operations
will raise a timeout exception if the timeout period value has elapsed
before the operation has completed. Setting a timeout of None disables
timeouts on socket operations. s.settimeout(0.0) is equivalent to
s.setblocking(0); s.settimeout(None) is equivalent to
s.setblocking(1).
What exactly are the blocking socket operations? Is it just recv* calls, or does it also include send calls?
Thank you in advance. | Python socket - what exactly are the "blocking" socket operations? | 1.2 | 0 | 1 | 641 |
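A small demonstration of the answer's point that recv is a blocking operation, and of what settimeout changes; it uses a local socketpair so no network is involved:

```python
import socket

# recv() on a socket with no pending data is a blocking operation;
# settimeout() makes it raise socket.timeout after the given number
# of seconds instead of waiting forever.
a, b = socket.socketpair()
a.settimeout(0.1)

timed_out = False
try:
    a.recv(1024)              # nothing has been sent yet -> times out
except socket.timeout:
    timed_out = True

b.sendall(b"hello")
data = a.recv(1024)           # data is already waiting, returns at once
a.close()
b.close()
```

As the answer notes, send can block too once the write buffer fills up; the same timeout then applies to it.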
42,983,784 | 2017-03-23T18:08:00.000 | 0 | 0 | 0 | 0 | python,bokeh | 43,007,775 | 1 | false | 0 | 0 | As of Bokeh 0.12.5 update messages are:
triggered immediately for a given property change
granular (are not batched in any way)
So, updating model.foo triggers an immediate message sent to the browser, and that message only pertains to the corresponding model.foo in the browser.
The Bokeh protocol allows for batching updates, but this capability is not really used anywhere yet. It is an open feature request to allow some kind of batching that would delay sending update messages until some grouping of them could be collected. | 1 | 1 | 1 | I have a Bokeh document with many plots/models, each of which has its own ColumnDataSource. If I update one ColumnDataSource does that trigger updates to all of my models or only to the models to which the changed source is relevant?
I ask because I have a few models, some of which are complex and change slowly and others which are simple and change quickly. I want to know if it makes sense performance-wise to scale the update frequencies on a per-plot basis or if I have to actually have different documents for this to be effective.
I am running a Bokeh server application | Does updating one Bokeh ColumnDataSource affect the entire document? | 0 | 0 | 0 | 247 |
42,986,686 | 2017-03-23T20:48:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,tensorflow,keras,conv-neural-network | 42,996,091 | 1 | false | 0 | 0 | The general approach is to use binary masks. Tensorflow provides several boolean functions such as tf.equal and tf.not_equal. For selecting only enterings which are equal to a certain value, you could use tf.equal and then multiply the loss tensor by the obtained binary mask. | 1 | 0 | 1 | I am currently building a CNN with Keras and need to define a custom loss function. I would only like to consider specific parts of my data in the loss and ignore others based on a certain parameter value. But, I am having trouble iterating over the Tensor objects that the Keras loss function expects.
Is there a simple way for me to compute the mean squared error between two Tensors, only looking at selected values in the Tensor?
For example, each Tensor in my case represents a 2D 16x16 grid, with each cell having 2 parameters - shape (16, 16, 2). I only want to compare cells where one of their parameters is equal to 1. | Selectively Iterate over Tensor | 0.197375 | 0 | 0 | 707 |
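A runnable sketch of the binary-mask idea from this answer, written with NumPy rather than TensorFlow ops to keep it dependency-free here; the shapes and the choice to compare the second parameter against 1 follow the question, and in the real Keras loss the same steps would use tf.equal and tensor multiplication:

```python
import numpy as np

def masked_mse(y_true, y_pred):
    """Mean squared error over only those cells whose second parameter
    equals 1 -- a NumPy sketch of the equal-then-multiply masking idea
    (a real Keras loss would use tf ops on tensors instead)."""
    mask = (y_true[..., 1] == 1).astype(float)        # binary mask per cell
    sq_err = ((y_true[..., 0] - y_pred[..., 0]) ** 2) * mask
    return sq_err.sum() / np.maximum(mask.sum(), 1.0)  # mean over kept cells
```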
42,986,692 | 2017-03-23T20:49:00.000 | 2 | 0 | 1 | 0 | python,gil | 42,987,681 | 2 | false | 0 | 0 | During I/O the GIL is released so other threads can run.
Also some extensions (like numpy) can release the GIL when doing calculations.
So an important purpose is to improve performance of programs that are not CPU-bound. From the Python documentation for the threading module:
CPython implementation detail: In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing or concurrent.futures.ProcessPoolExecutor. However, threading is still an appropriate model if you want to run multiple I/O-bound tasks simultaneously.
Another benefit of threading is to do long-running calculations in a GUI program without having to chop up your calculations in small enough pieces to make them fit in timeout functions.
Also keep in mind that while CPython has a GIL now, that might not always be the case in the future. | 1 | 0 | 0 | CPython has a Global Interpreter Lock (GIL).
So, multiple threads cannot concurrently run Python bytecodes.
What then is the use and relevance of the threading package in CPython ? | threading package in CPython | 0.197375 | 0 | 0 | 777 |
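A small demonstration of the I/O-bound case the documentation describes: because the GIL is released during blocking calls, the threads below finish in roughly the time of one sleep rather than four (time.sleep stands in for real I/O):

```python
import threading
import time

# time.sleep() stands in for blocking I/O here: the GIL is released
# while a thread sleeps (or waits on a socket), so four "I/O-bound"
# tasks overlap instead of running one after another.
def io_task(delay):
    time.sleep(delay)

start = time.monotonic()
threads = [threading.Thread(target=io_task, args=(0.2,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start   # roughly 0.2s, not 4 * 0.2s
```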
42,988,519 | 2017-03-23T22:59:00.000 | 0 | 0 | 0 | 0 | python,websocket,tornado,python-asyncio | 43,017,774 | 1 | true | 0 | 0 | Full-duplex servers are necessarily concurrent. In Tornado and asyncio, concurrency is based on the asynchronous programming model, so if you use a websocket library based on one of those packages, your code will need to be asynchronous.
But that's not the only way: full-duplex websockets could be implemented in a synchronous way by dedicating a thread to reading from the connection (in addition to whatever other threads you're using). I don't know if there are any python websocket implementations that support this kind of multithreading model for full-duplex, but that's how Go's websocket implementation works for example.
That said, the asynchronous/event-driven model is a natural fit for websockets (it's how everything works on the javascript side), so I would encourage you to get comfortable with that model instead of trying to find a way to work with websockets synchronously. | 1 | 0 | 0 | I'm getting started with websockets. Trying to write a python server for a browser based (javascript) client.
I have also never really done asynchronous programming before (except "events"). I was trying to avoid it - I have searched and searched for an example of websocket use that did not involve importing tornado or asyncio. But I've found nothing, even the "most basic examples" do it.
So now I'm internalising it, but clear it up for me - is "full duplex" server code necessarily asynchronous? | Does server code involving both sending and receiving have to be asynchronous? | 1.2 | 0 | 1 | 93 |
42,988,797 | 2017-03-23T23:26:00.000 | 0 | 0 | 1 | 0 | python,python-3.6,wing-ide | 43,001,067 | 1 | true | 0 | 1 | Wing has never been able to debug an unsaved file. I suspect you used Wing 101 previously and now use Wing Personal or Wing Pro where the toolbar buttons are a bit different and run things in the debugger instead of evaluating in the Python Shell.
You can still evaluate the file or a selection in the file in the Python Shell using the items in the Source menu. As of Wing 6 you can also debug things you execute in this way by enabling debug in the Python Shell's Options menu.
You can also select a range in the file, click on the active range icon in top right of the Python Shell to make it the active range and then use the cog icon that appears in top right of the Python Shell whenever you want to reevaluate the range as you edit it in the editor. | 1 | 2 | 0 | Whenever I create a new wing file in the wing IDE interface, I find the "run code" (play symbol) button is greyed out and I can't click it unless I save the file somewhere on my computer. It didn't used to to this.
This is annoying because it forces you to save every single code you write even if it's just 5 lines you've written to test something out.
I can't recall exactly when the button changed, I didn't notice it immediately. But I've updated both my wing IDE and python to the latest version. I've also shifted the directory that I keep all my python saves in, but I can't see why that would matter.
I've taken a look through settings, but I couldn't make much sense of it. I'm new to programming. | Wing ide "run" button is greyed out unless I save the file | 1.2 | 0 | 0 | 1,614 |
42,989,247 | 2017-03-24T00:16:00.000 | 1 | 1 | 0 | 0 | python,heroku | 43,195,564 | 2 | false | 1 | 0 | Not just ephemeral, but immutable, which means you can't write to the file system. You'll have to put the file in something like S3, or just put the data into a database like Postgres. | 2 | 1 | 0 | apologies upfront. I am an extreme newbie and this is probably a very easy question. After much trial and error, I set up an app on Heroku that runs a python script that scrapes data off of a website and stores it in a text file. (I may switch the output to a .csv file). The script and app are running on a Heroku Scheduler, so the scraping takes place on a schedule and the data automatically gets written to the file that is on the Heroku platform. I simply want to download the particular output file occasionally so that I can look at it. (Part of the data that is scraped is being tweeted on a twitter bot that is part of the script.)
(Not sure that this is relevant but I uploaded everything through Git.)
Many thanks in advance. | How to Access text files uploaded to Heroku that is running with Python Script | 0.099668 | 0 | 0 | 1,626 |
42,989,247 | 2017-03-24T00:16:00.000 | 1 | 1 | 0 | 0 | python,heroku | 42,989,347 | 2 | false | 1 | 0 | You can run this command heroku run cat path/to/file.txt, but keep in mind that Heroku uses ephemeral storage, so you don't have any guarantee that your file will be there.
For example, Heroku restarts your dynos every 24 hours or so. After that you won't have that file anymore. The general practice is to store files on some external storage provider like Amazon S3. | 2 | 1 | 0 | apologies upfront. I am an extreme newbie and this is probably a very easy question. After much trial and error, I set up an app on Heroku that runs a python script that scrapes data off of a website and stores it in a text file. (I may switch the output to a .csv file). The script and app are running on a Heroku Scheduler, so the scraping takes place on a schedule and the data automatically gets written to the file that is on the Heroku platform. I simply want to download the particular output file occasionally so that I can look at it. (Part of the data that is scraped is being tweeted on a twitter bot that is part of the script.)
(Not sure that this is relevant but I uploaded everything through Git.)
Many thanks in advance. | How to Access text files uploaded to Heroku that is running with Python Script | 0.099668 | 0 | 0 | 1,626 |
42,990,200 | 2017-03-24T02:07:00.000 | 0 | 0 | 1 | 0 | python-3.x | 42,990,327 | 1 | false | 0 | 0 | There are no obvious errors in the code. Here are some things to check:
1) Do the lines in the pos/neg file have just one word? If not, it needs to be split.
2) Is the case the same? If not, be sure to casefold both the target words and the input text.
3) Use of str.split() usually isn't the best way to split natural text that might contain punctuation. Consider something like re.findall(r"[A-Za-z\'\-]+", text).
4) You will get much better lookup performance if the pos/neg words are stored in sets rather than lists. | 1 | 0 | 0 | I am writing a program to keep count of 'good' and 'bad' words. The program is using two text files, one with good words and one with bad words, to detect the score. I currently have the following:
...
The program executes in Python, but I can't get it to keep count of the score. I'm not sure what's wrong. | Keeping Count Of Words | 0 | 0 | 0 | 39 |
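Putting the four checks from the answer together, a counter might look like this; sentiment_score and the word lists are illustrative, since the asker's original code is not shown:

```python
import re

def sentiment_score(text, good_words, bad_words):
    """Score +1 per good word and -1 per bad word found in text.
    Word lists go into sets for fast membership tests, and everything
    is casefolded so matching is case-insensitive."""
    good = {w.casefold() for w in good_words}
    bad = {w.casefold() for w in bad_words}
    score = 0
    # re.findall handles punctuation better than str.split()
    for word in re.findall(r"[a-z'\-]+", text.casefold()):
        if word in good:
            score += 1
        elif word in bad:
            score -= 1
    return score
```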
42,995,027 | 2017-03-24T08:49:00.000 | 1 | 0 | 0 | 0 | python,interpolation,coefficients | 42,995,646 | 2 | false | 0 | 0 | If you're doing linear interpolation you can just use the formula: the line from point (x0, y0) to (x1, y1) that interpolates them is given by y - y0 = ((y0 - y1)/(x0 - x1)) * (x - x0). You can take 2-element slices of your list using the slice syntax; for example, to get [2.5, 3.4] you would use x[1:3].
Using the slice syntax you can then implement the linear interpolation formula to calculate the coefficients of the linear polynomial interpolations. | 1 | 3 | 1 | I'm fairly new to programming and thought I'd try writing a piecewise linear interpolation function. (perhaps which is done with numpy.interp or scipy.interpolate.interp1d)
Say I am given data as follows: x= [1, 2.5, 3.4, 5.8, 6] y=[2, 4, 5.8, 4.3, 4]
I want to design a piecewise interpolation function that will give the coefficients of all the linear polynomial pieces between 1 and 2.5, 2.5 to 3.4, and so on, using Python.
Of course MATLAB has the interp1 function which does this, but I'm using Python and I want to do exactly the same job as MATLAB. Python only gives the values, not the linear polynomials' coefficients (in MATLAB we could get these with pp.coefs).
But how do I get the equivalent of pp.coefs from Python's numpy.interp? | piecewise linear interpolation function in python | 0.099668 | 0 | 0 | 4,195 |
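A sketch of such a function using the slope/intercept formula from the answer above (the function name is made up, and note the caveat in the docstring about MATLAB's local-coordinate convention):

```python
import numpy as np

def piecewise_coeffs(x, y):
    """Return (slope, intercept) of the interpolating line on each
    interval [x[i], x[i+1]], i.e. y = slope * t + intercept.  Note that
    MATLAB's pp.coefs expresses each piece relative to the interval's
    left endpoint, so its constant terms differ from these intercepts."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slopes = np.diff(y) / np.diff(x)          # rise over run per interval
    intercepts = y[:-1] - slopes * x[:-1]     # force line through (x[i], y[i])
    return list(zip(slopes, intercepts))
```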
42,995,878 | 2017-03-24T09:34:00.000 | 0 | 0 | 1 | 1 | python,pip | 43,003,863 | 1 | false | 0 | 0 | You may have two versions of pip installed. I did too, and it was a pain, but I fixed it myself with the following command: pip2 download/install (enter your package here)
That should fix the issue you have encountered. | 1 | 0 | 0 | I have Python 2.7 and 3.4 on my work computer for compatibility reasons with older scripts.
Now I wanted to install "aenum" for Py2.7, but "pip" only installs the package for Py3.4, telling me "aenum-2.0.4-py2-none-any.whl is not a supported wheel on this platform".
In the CMD terminal I changed to the designated Python's "site-packages" folder where it's installed in Py3.4.
"pip" was updated before. pip is installed in both Python folders
How can I set this up properly? | Python: Can't install .whl package for two python versions on Windows | 0 | 0 | 0 | 637 |
42,995,988 | 2017-03-24T09:40:00.000 | 2 | 0 | 0 | 0 | python,xml,printing,zebra-printers | 43,110,314 | 1 | true | 0 | 0 | Finally I found a way to do this.
With ZebraDesigner (not Pro) I designed the label template for my automated labels and exported it to a file, changing the printer's output path in the Windows preferences.
With the online ZPL viewer from Labelary and minimal knowledge of ZPL (always with the manual nearby), I modified the label to make it editable in Python with the use of .format() and {0}, {1}, etc.
Finally, having done this, I call a batch file with the command PRINT 'FILE' 'ZEBRAPORT', like PRINT C:\FILE.ZPL USB003, to print the specific modified label.
If someone want specific code of how I do this please, just ask me. | 1 | 1 | 0 | In my work place, we have a Dymo printer that picks up data from a database and place it in a template label that I made, it prints with python automatically with a program.
Recently we bought a Zebra Thermal Printer, and I need to update the program to do the same thing, but with the Zebra printer.
I was looking around and found ZebraDesigner for XML, and I designed a few labels like the ones I need, but the zebra package for Python is not able to print the XML format; I also tried to print the .lbl format but wasn't able to.
Note that .lbl files can't be edited as text... and I need to do this...
Is there any solution? | Print XML on Zebra printer using Python | 1.2 | 0 | 1 | 2,011 |
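The template-plus-.format() approach described in the accepted answer can be sketched like this; the ZPL field positions, fonts, and sample values are illustrative assumptions, not the actual label from the answer:

```python
# Build a ZPL label string from a template with str.format().
ZPL_TEMPLATE = (
    "^XA"                              # start of label
    "^FO50,50^A0N,40,40^FD{0}^FS"      # product name field
    "^FO50,110^A0N,30,30^FD{1}^FS"     # batch number field
    "^XZ"                              # end of label
)

def render_label(product, batch):
    return ZPL_TEMPLATE.format(product, batch)

label = render_label("WIDGET-01", "B1234")
# Write `label` to a file and send it to the printer port, e.g. on
# Windows:  PRINT C:\FILE.ZPL USB003
```

The Labelary online viewer mentioned above is handy for previewing the rendered template before sending it to the printer.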
42,997,677 | 2017-03-24T10:55:00.000 | 1 | 0 | 0 | 0 | python,python-xarray | 42,999,562 | 1 | true | 0 | 0 | You can use xarray.concat to achieve this:
da = xarray.DataArray(0, coords={"x": 42})
xarray.concat((da,), dim="x") | 1 | 1 | 1 | I have a DataArray for which da.dims==(). I can assign a coordinate da.assign_coords(foo=42). I would like to add a corresponding dimension with length one, such that da.dims==("foo",) and the corresponding coordinate would be foo=[42]. I cannot use assign_coords(foo=[42]), as this results in the error message cannot add coordinates with new dimensions to a DataArray.
How do I assign a new dimension of length one to a DataArray? I could do something like DataArray(da.values.reshape([1]), dims="foo", coords={"foo": [42]}) but I wonder if there is a method that does not require copying the entire object. | assign new dimension of length one | 1.2 | 0 | 0 | 281 |
43,001,729 | 2017-03-24T14:12:00.000 | 2 | 0 | 0 | 0 | python,numpy,fft | 43,012,808 | 3 | false | 0 | 0 | Also note the ordering of the coefficients in the fft output:
According to the docs: by default the 1st element is the coefficient for the 0-frequency component (effectively the sum of the array), starting from the 2nd we have the coefficients for the positive frequencies in increasing order, and from index n/2+1 onward they are the coefficients for the negative frequencies, in order of decreasing magnitude. To have a view of the frequencies for a length-10 array:
np.fft.fftfreq(10)
the output is:
array([ 0. , 0.1, 0.2, 0.3, 0.4, -0.5, -0.4, -0.3, -0.2, -0.1])
Use np.fft.fftshift(cf), where cf = np.fft.fft(array); the output is shifted so that it corresponds to this frequency ordering:
array([-0.5, -0.4, -0.3, -0.2, -0.1, 0. , 0.1, 0.2, 0.3, 0.4])
which makes more sense for plotting.
In the 2D case it is the same. And the fft2 and rfft2 difference is as explained by others. | 1 | 11 | 1 | Obviously the rfft2 function simply computes the discrete fft of the input matrix. However how do I interpret a given index of the output? Given an index of the output, which Fourier coefficient am I looking at?
I am especially confused by the sizes of the output. For an n by n matrix, the output seems to be an n by (n/2)+1 matrix (for even n). Why does a square matrix end up with a non-square Fourier transform? | How should I interpret the output of numpy.fft.rfft2? | 0.132549 | 0 | 0 | 5,101 |
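The size relationship can be checked directly: for an n-by-n real input, rfft2 returns the first n//2 + 1 columns of the full fft2 result, because the remaining columns are redundant by Hermitian symmetry:

```python
import numpy as np

# For real input, the fft2 output is Hermitian-symmetric, so rfft2
# keeps only the non-redundant half of the last axis: (n, n//2 + 1).
n = 8
a = np.random.rand(n, n)
full = np.fft.fft2(a)    # shape (8, 8); right half mirrors the left
half = np.fft.rfft2(a)   # shape (8, 5); the first n//2 + 1 columns
```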
43,002,043 | 2017-03-24T14:26:00.000 | 0 | 0 | 0 | 0 | python,rest,botframework | 43,031,505 | 1 | false | 1 | 0 | ConversationUpdate should be sent any time a bot is added to a channel with a user in it. Are you not seeing that on install? | 1 | 0 | 0 | I use microsoft bot framework and a flask based bot server in my application.
When someone installs the bot, the Bot Framework stores the JSON POSTed by Slack, including data like SLACK_TEAM_ID, SLACK_USER_ID and BOT_ACCESS_TOKEN. It's great that, from this point, whenever a user mentions or direct-messages the bot user, the Bot Framework POSTs a JSON to the Flask server.
What I would like is, right when the user installs the bot, the Bot Framework does a POST call to the flask server, so that I can (say) congratulate the user for installing my bot.
In short: How to get my flask application notified as to who installs my bot as soon as they install it? | How can my bot server know INSTANTLY when someone installed the bot using Add-to-slack button? | 0 | 0 | 1 | 51 |
43,004,341 | 2017-03-24T16:13:00.000 | 0 | 0 | 1 | 0 | python,linux,python-3.x,pip,virtualenv | 43,005,042 | 3 | false | 0 | 0 | Suppose you want to install latest Django.
Download the .gz file from pypi.python.org locally somewhere and unzip it. You should have setup.py file visible.
Now either activate your virtualenv and go to the Django folder where you see setup.py and type command python setup.py install.
Or grab the full path of the python binary/executable in your virtualenv, go to the folder where you have the setup.py, and do your-complete-path/python setup.py install | 1 | 0 | 0 | I'm working with a virtual environment that doesn't have downloads for some modules, so doing pip freeze > requirements.txt and then pip install -r requirements.txt won't work. Is there a way to avoid this?
After that, I have to copy this virtualenv to another machine, so maybe there are some paths to change or something else, right? | Clone a virtualenv without use pip freeze | 0 | 0 | 0 | 443 |
43,004,341 | 2017-03-24T16:13:00.000 | 0 | 0 | 1 | 0 | python,linux,python-3.x,pip,virtualenv | 43,004,710 | 3 | false | 0 | 0 | You could use a source control tool like git, an install script, or a combination of both. Keep the install script in your top-level directory and run it on the new machine. Use curl to download what you need into the proper directory. | 2 | 1 | 0 | I'm working with a virtual environment that doesn't have downloads for some modules, so doing pip freeze > requirements.txt and then pip install -r requirements.txt won't work. Is there a way to avoid this?
After that, I have to copy this virtualenv to another machine, so maybe there are some paths to change or something else, right? | Clone a virtualenv without use pip freeze | 0 | 0 | 0 | 443 |
43,005,000 | 2017-03-24T16:49:00.000 | 0 | 1 | 0 | 0 | python,email,outlook | 43,005,956 | 1 | false | 0 | 0 | You won't be able to fix this. In my experience Outlook does it no matter how many line breaks you put in there. You might be able to trick it by adding spaces on each line, or non-breaking space characters. It's a dumb feature of Outlook.
It is odd, because I print several headers and they all break fine, but the text fields in a table are all smooshed - unless the recipient manually clicks Outlook's "restore line breaks".
Because some come through alright, I wonder what is Outlook's criteria for "extra", and thus how to avoid it?
Do I need to format the message as HTML? | python sending email - line breaks removed by Outlook | 0 | 0 | 0 | 795 |
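Regarding the HTML question: one commonly used workaround is indeed to send the body as HTML, where line structure is explicit, so Outlook has nothing to "remove". A minimal sketch with made-up sample data:

```python
from email.mime.text import MIMEText

# Send the body as HTML: table rows and <br> tags carry the line
# structure explicitly, so Outlook's plain-text line-break removal
# does not apply.  The rows below are made-up sample data.
rows = [("host1", "OK"), ("host2", "DOWN")]
cells = "".join(
    "<tr><td>{0}</td><td>{1}</td></tr>".format(name, status)
    for name, status in rows
)
body = "<html><body><table>{}</table></body></html>".format(cells)
msg = MIMEText(body, "html")
msg["Subject"] = "Status report"
```

The resulting message can be sent with smtplib exactly like a plain-text MIMEText message.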
43,008,166 | 2017-03-24T20:00:00.000 | 1 | 0 | 0 | 0 | python-3.x,sqlalchemy,flask-sqlalchemy,sqlacodegen | 49,161,491 | 1 | false | 1 | 0 | Try specifying your database schema with the --schema option. | 1 | 1 | 0 | I am trying to generate a flask-sqlalchemy model for an existing mysql db.
I used the following command
flask-sqlacodegen --outfile rcdb.py mysql://username:password@hostname/tablename
The project uses python 3.4. Any clues?
```
Traceback (most recent call last):
  File "/var/www/devaccess/py_api/ds/venv/bin/flask-sqlacodegen", line 11, in <module>
    sys.exit(main())
  File "/var/www/devaccess/py_api/ds/venv/lib/python3.4/site-packages/sqlacodegen/main.py", line 59, in main
    args.flask, ignore_cols, args.noclasses)
  File "/var/www/devaccess/py_api/ds/venv/lib/python3.4/site-packages/sqlacodegen/codegen.py", line 606, in __init__
    model = ModelClass(table, links[table.name], inflect_engine, not nojoined)
  File "/var/www/devaccess/py_api/ds/venv/lib/python3.4/site-packages/sqlacodegen/codegen.py", line 335, in __init__
    relationship_ = ManyToManyRelationship(self.name, target_cls, association_table, inflect_engine)
  File "/var/www/devaccess/py_api/ds/venv/lib/python3.4/site-packages/sqlacodegen/codegen.py", line 501, in __init__
    self.kwargs['secondary'] = repr(assocation_table.schema + '.' + assocation_table.name)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
``` | flask-sqlacodegen supports python 3.4? | 0.197375 | 1 | 0 | 362 |
43,010,515 | 2017-03-24T23:04:00.000 | 1 | 0 | 0 | 1 | python-2.7,lftp | 43,010,672 | 1 | false | 0 | 0 | Looks like lftp supports regular Linux commands. In case anyone else runs into this, just do a du -h. | 1 | 1 | 0 | I've been using smartftp (Windows only) to work out the file size of remote directories before downloading them. I've switched over to Ubuntu and I've been looking around, but I don't see if lftp has this feature; maybe someone can show me a way to do this via the CLI, or maybe with a Python script
Thanks | lftp show remote recursive directory size | 0.197375 | 0 | 0 | 323 |
43,012,974 | 2017-03-25T05:39:00.000 | 0 | 0 | 0 | 0 | python,wxpython,wxwidgets | 43,017,687 | 1 | false | 0 | 1 | It looks like your "larger body of code" is not using the correct manifest and so uses the old, "classic" controls instead of the "themed" controls implemented by comctl32.dll v6. To check this easily, just compare the look of the buttons in the 2 cases, this should make it clear whether you use the correct manifest or not. | 1 | 0 | 0 | Ive written some wxpython code to display a progress bar/gauge. Ive noticed that if i run my code by itself, i can simply call self.gauge.Pulse() one time (with no timer running) and the gauge will pulse/move in a green bar.
However, when running my code as part of a larger body of code, the bar becomes a solid blue color and self.gauge.Pulse() does not pulse the bar. Just stays constant.
The larger body of code does contain other wxFrames.
Is there some kind of frame style flag or something else that would disable the "auto pulse" feature and turn the bar from green to blue?
this is windows 7, btw
thanks guys | wxpython gauge styles and pulse | 0 | 0 | 0 | 339 |
43,013,263 | 2017-03-25T06:23:00.000 | 1 | 0 | 1 | 0 | python,pycharm,keymapping | 63,578,877 | 4 | false | 0 | 0 | The "Insert" button. I had the same issue with the PyCharm Enter key not working. Pressing the Insert key on the keyboard solved it: just press the "Insert" button on your keyboard. | 1 | 11 | 0 | I am facing a very weird problem...
My enter key is not causing a line break in pycharm. When I press enter at the end of a line, the cursor jumps to the front of the next line, without causing a line break.
This suddenly happened, I have no idea why.
I checked my keymap and have reset it, with enter key mapping to enter but this problem still persists.
Does anyone have any solution? | Pycharm Enter Key is not working | 0.049958 | 0 | 0 | 23,565 |
43,013,263 | 2017-03-25T06:23:00.000 | 2 | 0 | 1 | 0 | python,pycharm,keymapping | 63,068,191 | 4 | false | 0 | 0 | Check if Ideavim OR Vimware is running in the bottom right corner of your PyCharm window. Disable it and you will be able to add a new line. | 1 | 11 | 0 | I am facing a very weird problem...
My enter key is not causing a line break in pycharm. When I press enter at the end of a line, the cursor jumps to the front of the next line, without causing a line break.
This suddenly happened, I have no idea why.
I checked my keymap and have reset it, with enter key mapping to enter but this problem still persists.
Does anyone have any solution? | Pycharm Enter Key is not working | 0.099668 | 0 | 0 | 23,565 |
43,013,263 | 2017-03-25T06:23:00.000 | 30 | 0 | 1 | 0 | python,pycharm,keymapping | 47,047,092 | 4 | false | 0 | 0 | I had the same issue. I figured that I had pressed the "insert" key in the keyboard by mistake. When I pressed insert again, it went back to normal. Hope this could be an easy debugging effort before going deeper as suggested above.
Cheers | 4 | 11 | 0 | I am facing a very weird problem...
My enter key is not causing a line break in pycharm. When I press enter at the end of a line, the cursor jumps to the front of the next line, without causing a line break.
This suddenly happened, I have no idea why.
I checked my keymap and have reset it, with enter key mapping to enter but this problem still persists.
Does anyone have any solution? | Pycharm Enter Key is not working | 1 | 0 | 0 | 23,565 |
43,013,263 | 2017-03-25T06:23:00.000 | 7 | 0 | 1 | 0 | python,pycharm,keymapping | 43,013,449 | 4 | true | 0 | 0 | Found the reason for this. I'd just like to post a possible solution in case anyone else is facing similar problems. It appears that my 'Use block caret' box was checked.
Go to File -> Settings -> Editor -> General -> Appearance -> 'Use block caret' and uncheck it if it's checked.
Otherwise, it might be the issue with the keymapping. Go to File -> Settings -> Keymap, search for enter and make sure that it is mapped to enter. OR, just reset the keymap by clicking the reset button on the same page. | 4 | 11 | 0 | I am facing a very weird problem...
My enter key is not causing a line break in pycharm. When I press enter at the end of a line, the cursor jumps to the front of the next line, without causing a line break.
This suddenly happened, I have no idea why.
I checked my keymap and have reset it, with enter key mapping to enter but this problem still persists.
Does anyone have any solution? | Pycharm Enter Key is not working | 1.2 | 0 | 0 | 23,565 |
43,018,609 | 2017-03-25T16:02:00.000 | 0 | 0 | 1 | 0 | python,indentation,python-idle | 43,018,783 | 1 | false | 0 | 0 | Never mind. Apparently I hadn't indented a break command, so every new line after it was automatically indented. | 1 | 0 | 0 | Whenever I break to a new line in IDLE, my cursor starts at an indentation regardless of whether a colon precedes it or not. How do I disable this? | How do I disable automatic indentation in Python/IDLE on every new line? | 0 | 0 | 0 | 655
43,018,769 | 2017-03-25T16:15:00.000 | 0 | 0 | 1 | 0 | python,arrays,list,numpy,types | 43,018,809 | 4 | false | 0 | 0 | You can't make that array. Arrays in numpy are similar to matrices in math. They have to be m rows, each having n columns. Use a list of lists, or a list of np.arrays | 2 | 1 | 1 | I have numpy arrays Z1, Z2, Z3:
Z1 = [1,2,3]
Z2 = [4,5]
Z3 = [6,7,8,9]
I want new numpy array Z that have Z1, Z2, Z3 as array like:
Z = [[1,2,3],[4,5],[6,7,8,9]]
print(type(Z),type(Z[0]))
>>> <class 'numpy.ndarray'> <class 'numpy.ndarray'>
I used np.append, hstack, vstack, insert, concatenate ...but I failed with all of them.
There is only 2 case:
Z = [1,2,3,4,5,6,7,8,9]
or ERROR
so I made a list Z first, and append list Z1, Z2, Z3 and then convert list Z into numpy array Z.
BUT
Z = [[1,2,3],[4,5],[6,7,8,9]]
print(type(Z),type(Z[0]))
>>> <class 'numpy.ndarray'> <class 'list'>
I want to do not use 'while' or 'for'. Help me please.. | Python How to make array in array? | 0 | 0 | 0 | 1,326 |
43,018,769 | 2017-03-25T16:15:00.000 | 0 | 0 | 1 | 0 | python,arrays,list,numpy,types | 43,039,088 | 4 | false | 0 | 0 | Thanks everybody! The answers are a little bit different from what I want, but eventually I solved it without using 'for' or 'while'.
First, I made numpy arrays Z1, Z2, Z3 and put them into a list Z. There are arrays in the list.
Second, I converted the list Z into a numpy array Z. It is the array of arrays that I want.
Z1 = [1,2,3]
Z2 = [4,5]
Z3 = [6,7,8,9]
I want new numpy array Z that have Z1, Z2, Z3 as array like:
Z = [[1,2,3],[4,5],[6,7,8,9]]
print(type(Z),type(Z[0]))
>>> <class 'numpy.ndarray'> <class 'numpy.ndarray'>
I used np.append, hstack, vstack, insert, concatenate ...but I failed with all of them.
There is only 2 case:
Z = [1,2,3,4,5,6,7,8,9]
or ERROR
so I made a list Z first, and append list Z1, Z2, Z3 and then convert list Z into numpy array Z.
BUT
Z = [[1,2,3],[4,5],[6,7,8,9]]
print(type(Z),type(Z[0]))
>>> <class 'numpy.ndarray'> <class 'list'>
I want to do not use 'while' or 'for'. Help me please.. | Python How to make array in array? | 0 | 0 | 0 | 1,326 |
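A minimal runnable sketch of the idea above: unequal-length rows can only live in a NumPy array of `dtype=object`, and building an empty object array first keeps NumPy from trying to broadcast the lists into a 2-D array. The variable names here are just illustrative.

```python
import numpy as np

z1 = [1, 2, 3]
z2 = [4, 5]
z3 = [6, 7, 8, 9]

# Allocate an object array first, then fill each slot with its own ndarray,
# so every element of Z is itself a numpy.ndarray (no 'for' over the data).
z = np.empty(3, dtype=object)
z[0] = np.array(z1)
z[1] = np.array(z2)
z[2] = np.array(z3)

print(type(z), type(z[0]))  # <class 'numpy.ndarray'> <class 'numpy.ndarray'>
```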
43,019,666 | 2017-03-25T17:34:00.000 | 0 | 0 | 0 | 0 | python,django | 43,020,395 | 1 | false | 1 | 0 | A choices field backed by a PositiveIntegerField will have a smaller size than a CharField
if you save cities as choices like [(1, 'NY'), (2, 'LA')] in a PositiveIntegerField; at query time I think the integer will win.
Int comparisons are faster than varchar comparisons, for the simple fact that ints take up much less space than varchars.
This holds true both for unindexed and indexed access. The fastest way to go is an indexed int column.
for postgresql, you might be interested in the space usage of different data types:
int fields occupy between 2 and 8 bytes, with 4 usually being more than enough (-2147483648 to +2147483647)
character types occupy 4 bytes plus the actual strings. | 1 | 0 | 0 | If the database (Django ORM) is based on data which is parsed, and there is a 'city' field which is often used when filtering data, is it much better to use Choices or just CharField for saving 'city' into the field? Will filtering be much faster? | Choices VS Charfield in filtering | 0 | 0 | 0 | 23
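To illustrate the integer-vs-varchar point without pulling in Django, here is a hedged stdlib sketch using sqlite3: the 'city' column stores a small integer code and the human-readable names live in a choices-style mapping. The table and mapping names are made up for the example.

```python
import sqlite3

# Choices-style mapping: integer code -> display name.
CITY_CHOICES = {1: "NY", 2: "LA"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, city INTEGER)")
conn.executemany("INSERT INTO customer (city) VALUES (?)", [(1,), (2,), (1,)])

# Filtering compares small integers instead of varchar values.
rows = conn.execute("SELECT COUNT(*) FROM customer WHERE city = ?", (1,)).fetchone()
print(rows[0], CITY_CHOICES[1])  # 2 NY
```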
43,021,624 | 2017-03-25T20:29:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn,anaconda,data-mining,prediction | 43,021,757 | 1 | true | 0 | 0 | The training set and the evaluation set must be different. The whole point of having an evaluation set is to guard against over-fitting.
In this case what you should do is take, say, 100,000 customers, picked at random. Then use the data to try and learn what it is about customers that makes them likely to purchase A. Then use the remaining 40,000 to test how well your model works. | 1 | 0 | 1 | I'm creating a model to predict the probability that customers will buy product A in a department store that sells products A through Z. The store has its own credit card with demographic and transactional information of 140,000 customers.
There is a subset of customers (say 10,000) who currently buy A. The goal is to learn from these customers 10,000 customers and score the remaining 130,000 with their probability to buy A, then target the ones with the highest scores with marketing campaigns to increase A sales.
How should I define my training and evaluation sets?
Training set:
Should it be only the 10,000 who bought A or the whole 140k customers?
Evaluation set: (where the model will be used in production)
I believe this should be the 130k who haven't bought A.
The question about time:
Another alternative is to take a photograph of the database last year, use it as a training set, then take the database as it is today and evaluate all customer's with the model created with last year's info. Is this correct? When should I do this?
Which option is correct for all sets? | Can training and evaluation sets be the same in predictive analytics? | 1.2 | 0 | 0 | 55 |
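The accepted advice above (100,000 random customers for training, the remaining 40,000 for evaluation) can be sketched with the stdlib alone; the customer ids here are placeholders:

```python
import random

# Shuffle all 140,000 customer ids, then carve off a held-out evaluation set
# so the model is never scored on data it learned from.
random.seed(0)
customers = list(range(140_000))
random.shuffle(customers)

train = customers[:100_000]     # fit the model on these
evaluate = customers[100_000:]  # measure performance on these

assert not set(train) & set(evaluate)  # no overlap between the two sets
print(len(train), len(evaluate))  # 100000 40000
```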
43,023,256 | 2017-03-25T23:43:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,user-interface | 43,023,286 | 1 | false | 0 | 0 | You can simply write the data out to an output file for the first program, then read that output file as an input for the second program. | 1 | 1 | 0 | At this moment I have a Python GUI program that outputs some data. I want to find a way to take the outputs and input them into a different running program. I've been searching for a way to do this for a while and can't seem to find a way.
Thanks all help appreciated!!!
Edit:
Program one with GUI (Designed by me) outputs strings.
Program two running in commandline (not designed by me and do not have access to source code).
I need program one's outputs to go into the command line of program two. The way I am thinking about it is that I am trying to interface between them. | Input Python Output's into a different program | 0 | 0 | 0 | 157
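Besides the output-file approach, program one can pipe its strings straight into program two's stdin with subprocess. A hedged, self-contained sketch: "program two" here is just a `python -c` one-liner standing in for the real command-line tool.

```python
import subprocess
import sys

output_from_program_one = "hello from the GUI\n"

# Feed program one's output string into the stdin of program two.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdin.read().upper(), end='')"],
    input=output_from_program_one,
    capture_output=True,
    text=True,
)
print(result.stdout)  # HELLO FROM THE GUI
```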
43,023,297 | 2017-03-25T23:49:00.000 | 1 | 0 | 0 | 0 | php,python,wordpress | 43,023,344 | 2 | false | 0 | 0 | The only safe way I can think of is to "poll" the website from the local machine. It is impractical and insecure to access the local machine from the website.
You need to find a condition that changes and can be examined using the local script and use that to determine whether to take the necessary action or not. This could be another PHP script on the WordPress site giving you state information, or just a web call locally. | 2 | 0 | 0 | I have a WordPress site running on an external server, I also have a local machine running a Python script.
I am looking for a way to trigger the python script from the WordPress install.
I know I can use the publish_post hook in WordPress to activate whenever a new post is published but I am unsure of the best way to link that to my local machine.
Anyone have an example of something similar being done? Would the WordPress Rest API be of any use here? | Trigger Python script on local machine from Website | 0.099668 | 0 | 0 | 103 |
43,023,297 | 2017-03-25T23:49:00.000 | 1 | 0 | 0 | 0 | php,python,wordpress | 43,023,353 | 2 | false | 0 | 0 | WordPress wouldn't have access to your python script. But your Python script has access to WordPress. It should periodically fetch the data from the WordPress installation URL / Blog page. And this will have to be scheduled periodically, like every 4 hours, if that's how frequently the posts are published. | 2 | 0 | 0 | I have a WordPress site running on an external server, I also have a local machine running a Python script.
I am looking for a way to trigger the python script from the WordPress install.
I know I can use the publish_post hook in WordPress to activate whenever a new post is published but I am unsure of the best way to link that to my local machine.
Anyone have an example of something similar being done? Would the WordPress Rest API be of any use here? | Trigger Python script on local machine from Website | 0.099668 | 0 | 0 | 103 |
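The polling idea from both answers can be sketched like this. `fetch_latest_id` would normally hit the WordPress site (for example its REST API posts endpoint); it is injected here as a plain callable so the example runs without a network, and all names are hypothetical.

```python
# Remember the last post id seen; fire a callback only when a newer one appears.
def poll_once(fetch_latest_id, last_seen_id, on_new_post):
    latest = fetch_latest_id()
    if last_seen_id is None or latest > last_seen_id:
        on_new_post(latest)
    return latest

seen = []
last = poll_once(lambda: 41, None, seen.append)  # first run: 41 counts as new
last = poll_once(lambda: 41, last, seen.append)  # nothing new published
last = poll_once(lambda: 42, last, seen.append)  # post 42 published
print(seen)  # [41, 42]
```

In practice you would run `poll_once` from cron or a loop with a sleep, and make `on_new_post` launch your local script.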
43,024,066 | 2017-03-26T01:47:00.000 | 0 | 0 | 0 | 0 | python,django | 43,051,626 | 2 | false | 1 | 0 | I finally got the "python manage.py runserver" command to work. The only thing I did differently was, before setting up the virtual env and installing Django, to set my execution policy to Unrestricted. Previously it had been set to RemoteSigned. I hadn't been getting any warnings or errors but thought I would try it and it worked. | 2 | 0 | 0 | I am new to stackoverflow, very new to Python and trying to learn Django.
I am on Windows 10 and running commands from powershell (as administrator).
I am in a virtual environment. I am trying to set up Django.
I have run the following commands
"pip install Django"
"django-admin.py startproject learning_log ."
"python manage.py migrate"
All of the above seemed to work okay, however, when I then try to run the command
"python manage.py runserver"
I get a popup error box that says:
Python has stopped working
A problem caused the program to stop working correctly.
Windows will close the program and notify you if a solution is available.
Can someone tell me how to resolve this issue or where to look for any error messages that might clue me in as to what is causing the problem? | Python stops working on manage.py runserver | 0 | 0 | 0 | 2,081 |
43,024,066 | 2017-03-26T01:47:00.000 | 0 | 0 | 0 | 0 | python,django | 49,569,047 | 2 | false | 1 | 0 | I encountered the same problem. After trying everything, I switched from PS to cmd, cd'd to the same directory and ran python manage.py runserver. Then it worked. Then I ctrl+C quit the server, switched back to PS, ran the command, and it still threw the same dialog window (Python stopped working). Then I went back to cmd, typed the command and the server started fine.
Conclusion: Use cmd to run the command, not PS. | 2 | 0 | 0 | I am new to stackoverflow, very new to Python and trying to learn Django.
I am on Windows 10 and running commands from powershell (as administrator).
I am in a virtual environment. I am trying to set up Django.
I have run the following commands
"pip install Django"
"django-admin.py startproject learning_log ."
"python manage.py migrate"
All of the above seemed to work okay, however, when I then try to run the command
"python manage.py runserver"
I get a popup error box that says:
Python has stopped working
A problem caused the program to stop working correctly.
Windows will close the program and notify you if a solution is available.
Can someone tell me how to resolve this issue or where to look for any error messages that might clue me in as to what is causing the problem? | Python stops working on manage.py runserver | 0 | 0 | 0 | 2,081 |
43,026,801 | 2017-03-26T08:47:00.000 | 1 | 0 | 0 | 0 | python,django,django-registration | 43,152,421 | 2 | false | 1 | 0 | I am not intimately familiar with django, but an easy solution would be to follow the default registration workflow to let your user register. Then when your user tries to login for the first time you present them with a form to fill with all the extra information you might need.
In this way you also decouple the actual account creation from asking the user for more information, creating for them an extra incentive to actually go through with this process ("Oh man why do I need to provide my name, let's not sign up" vs "Oh well, I have already registered and given them an email might as well go through with it")
If you would prefer to have them in one step, then providing what code you already have would help us give better feedback | 1 | 0 | 0 | There are already some questions on this, but most of their answers use the model-based workflow, which is not the recommended way anymore according to django-registration. I have been frustrated since last week, trying to figure out how to add first and last name fields to the registration form in the HMAC workflow. | How to add custom-fields (First and Last Name) in django-registration 2.2 (HMAC activation Workflow)? | 0.099668 | 0 | 0 | 174
43,027,980 | 2017-03-26T11:11:00.000 | 47 | 0 | 1 | 0 | python,matplotlib,jupyter-notebook,ipython | 55,266,804 | 10 | false | 0 | 0 | If you want to add plots to your Jupyter notebook, then %matplotlib inline is a standard solution. And there are other magic commands that will use matplotlib interactively within Jupyter.
%matplotlib: any plt plot command will now cause a figure window to open, and further commands can be run to update the plot. Some changes will not draw automatically, to force an update, use plt.draw()
%matplotlib notebook: will lead to interactive plots embedded within the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook | 4 | 798 | 0 | What exactly is the use of %matplotlib inline? | Purpose of "%matplotlib inline" | 1 | 0 | 0 | 773,816 |
43,027,980 | 2017-03-26T11:11:00.000 | 12 | 0 | 1 | 0 | python,matplotlib,jupyter-notebook,ipython | 59,612,389 | 10 | false | 0 | 0 | It just means that any graph which we are creating as a part of our code will appear in the same notebook and not in a separate window, which would happen if we had not used this magic statement.
43,027,980 | 2017-03-26T11:11:00.000 | 2 | 0 | 1 | 0 | python,matplotlib,jupyter-notebook,ipython | 61,199,350 | 10 | false | 0 | 0 | Provided you are running Jupyter Notebook, the %matplotlib inline command will make your plot outputs appear in the notebook, where they can also be stored. | 4 | 798 | 0 | What exactly is the use of %matplotlib inline? | Purpose of "%matplotlib inline" | 0.039979 | 0 | 0 | 773,816
43,027,980 | 2017-03-26T11:11:00.000 | -5 | 0 | 1 | 0 | python,matplotlib,jupyter-notebook,ipython | 52,795,802 | 10 | false | 0 | 0 | It is not mandatory to write that. It worked fine for me without the %matplotlib magic function.
I am using the Spyder IDE, the one that comes with Anaconda.
43,028,435 | 2017-03-26T12:01:00.000 | 0 | 0 | 0 | 1 | python,ubuntu,window,subprocess | 43,028,894 | 1 | false | 0 | 0 | The window placement is performed according to the placement policy of one's user interface. This can be influenced by add-ons, but depends on the user interface you use.
As to the continuation of the script, you could call the subprocess.Popen(...) in a thread you create for that purpose. | 1 | 0 | 0 | My question is how can I open a given path in a window and continue the script? I'd also like to select where to put that window.
This is aimed at Ubuntu, where I can set a window in any corner by pressing ctrl + alt + 1/7/9/3.
I've tried this so far, but apart from not being able to continue the script, I can't select where to position the window:
import subprocess
subprocess.Popen(["xdg-open", "/home/user/Desktop"])
Thanks | Python - Open a given path in a separate window and continue script | 0 | 0 | 0 | 40 |
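On the "continue the script" half of the question: `subprocess.Popen` already returns immediately, so the code after it keeps running while the window opens. A hedged, portable sketch (`echo` stands in for `xdg-open` so the example runs anywhere):

```python
import subprocess

# Popen does not wait for the child; the next line runs right away.
proc = subprocess.Popen(
    ["echo", "/home/user/Desktop"], stdout=subprocess.PIPE, text=True
)
print("script continues right away")  # executes while the child is still alive

out, _ = proc.communicate()  # only wait for the child when you actually need to
print(out.strip())  # /home/user/Desktop
```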
43,028,654 | 2017-03-26T12:25:00.000 | 1 | 0 | 1 | 0 | python,jupyter-notebook,decompiling | 43,162,651 | 3 | true | 0 | 0 | All that can be done directly from the Jupyter Notebook or any IPython console.
to view the contents of the script
!cat myscript.py
to run the script as a program use this built-in magic command
%run myscript.py | 2 | 0 | 0 | I have a Python Script (with .py extension) and it was created by makepy.py.
Is there a way to view and copy the code inside that script and load it into my Jupyter Notebook?
I have done a Google search but strangely, I can't find this mentioned anywhere.
Do I need a specific software to do this? Or can it be done at the Python command prompt level? | How do I view the Python Codes inside a Python Script? | 1.2 | 0 | 0 | 2,860 |
43,028,654 | 2017-03-26T12:25:00.000 | 2 | 0 | 1 | 0 | python,jupyter-notebook,decompiling | 43,028,668 | 3 | false | 0 | 0 | You can just edit the file with a text editor. Right click and select open with. Then you can copy the code. | 2 | 0 | 0 | I have a Python Script (with .py extension) and it was created by makepy.py.
Is there a way to view and copy the code inside that script and load it into my Jupyter Notebook?
I have done a Google search but strangely, I can't find this mentioned anywhere.
Do I need a specific software to do this? Or can it be done at the Python command prompt level? | How do I view the Python Codes inside a Python Script? | 0.132549 | 0 | 0 | 2,860 |
43,029,358 | 2017-03-26T13:29:00.000 | 1 | 0 | 0 | 0 | python,tensorflow | 43,032,478 | 1 | false | 0 | 0 | In terminal, type help(tf.contrib.learn.DNNRegressor). There you will see the object has methods such as predict() which returns predicted scores.
DNNRegressor does regression, not classification, so you don't get a probability distribution over classes. | 1 | 0 | 1 | For DNN Classifier there is a method predict_proba to get the probabilities, whereas for DNN Regressor it is not there. Please help. | How to get confidence levels(probabilities) for DNN Regressor in Tensorflow | 0.197375 | 0 | 0 | 827 |
43,030,168 | 2017-03-26T14:42:00.000 | -1 | 0 | 0 | 0 | linux,wxpython,wxwidgets | 43,030,860 | 2 | false | 0 | 1 | You can try to use FindWindow() and then Bind() the event handler. | 2 | 0 | 0 | Is it possible to detect a double click on another window in wxwidgets?
In my quest to switch to linux I wanna build a program that reacts to double click on the desktop and the file manager and displays a menu.
Same as listary does on Windows.
Is this something that can be done with wxwidgets (preferably wxpython) under linux? What about on Windows? | wxwidgets detect mouse click on another window | -0.099668 | 0 | 0 | 208 |
43,030,168 | 2017-03-26T14:42:00.000 | 2 | 0 | 0 | 0 | linux,wxpython,wxwidgets | 43,031,796 | 2 | false | 0 | 1 | You can't receive mouse clicks, or any other events, for windows of another process unless you capture the mouse (and never release it, which would be a bad idea). | 2 | 0 | 0 | Is it possible to detect a double click on another window in wxwidgets?
In my quest to switch to linux I wanna build a program that reacts to double click on the desktop and the file manager and displays a menu.
Same as listary does on Windows.
Is this something that can be done with wxwidgets (preferably wxpython) under linux? What about on Windows? | wxwidgets detect mouse click on another window | 0.197375 | 0 | 0 | 208 |
43,033,174 | 2017-03-26T18:57:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,fonts,colors,size | 49,737,636 | 1 | false | 0 | 0 | I know about the color part, but I came for the size.
To start off with, to close the color, use \u001b[0m. If you don't use this, all text will become the color that you started with until you close it.
Green is \u001b[32m
Black is \u001b[30m
Pink is \u001b[35m
There are many more, but if you experiment with different numbers, you can highlight and do different colors.
To make a sentence with color, format it like this:
print("\u001b[35mHi, coders. This is pink output.\u001b[0m")
Test that code; it will come out in pink. | 1 | 1 | 0 | I am using Python 3 and I am not sure if this is a possible query, because I've been searching it up and couldn't find a solution. My question is, I want to learn how to change the colour and size of my output.
How to make the size bigger or smaller?
Able to make the font size big
How to change the background colour of shell?
Able to make the background colour, for example, right now, it's all white but I want it black.
How to change the output colour of shell?
I would love to see colourful fonts operating in black background shell
I hope there is a solution to this! Thanks in advance | How to change font colour and size of Python Shell | 0.197375 | 0 | 0 | 3,009 |
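Putting the escape codes from the answer above into named constants keeps them readable. A small sketch (note: this works in most terminals but not in the default IDLE shell, which prints the escape codes literally):

```python
# ANSI escape sequences: colour on, then reset so later text is unaffected.
RESET = "\u001b[0m"
GREEN = "\u001b[32m"
PINK = "\u001b[35m"

print(f"{PINK}Hi, coders. This is pink output.{RESET}")
print(f"{GREEN}And this is green.{RESET}")
```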
43,034,716 | 2017-03-26T21:24:00.000 | 3 | 0 | 1 | 0 | python,anaconda,package,python-packaging,canopy | 43,035,582 | 4 | false | 0 | 0 | This is a bit like asking "Why doesn't every motor come with a car around it?"
While a car without a motor is pretty useless, the inverse doesn't hold: Most motors aren't even used for cars. Of course one could try selling a complete car to people who want to have a generator, but they wouldn't buy it.
Also the people designing cars might not be the best to build a motor and vice versa.
Similarly with python. Most python distributions are not used with numpy, scipy or pandas. Distributing python with those packages would create a massive overhead.
However, there is of course a strong demand for prebuilt distributions which combine those modules with a respective python and make sure everything interacts smoothly. Some examples are Anaconda, Canopy, python(x,y), winpython, etc. So an end user who simply wants a car that runs, best chooses one of those, instead of installing everything from scratch. Other users who do want to always have the newest version of everything might choose to tinker them together themselves. | 2 | 3 | 1 | What is the reason packages are distributed separately?
Why do we have separate 'add-on' packages like pandas, numpy?
Since these modules seem so important, why are these not part of Python itself?
Are the "single distributions" of Python to come pre-loaded?
If it's part of design to keep the 'core' separate from additional functionality, still in that case it should at least come 'pre-imported' as soon as you start Python.
Where can I find such distributions if they exist? | Why doesn't Python come pre-built with required libraries like pandas, numpy etc | 0.148885 | 0 | 0 | 1,040 |
43,034,716 | 2017-03-26T21:24:00.000 | -1 | 0 | 1 | 0 | python,anaconda,package,python-packaging,canopy | 43,035,255 | 4 | false | 0 | 0 | PyPi currently has over 100,000 libraries available. I'm sure someone thinks each of these is important.
Why do you need or want to pre-load libraries, considering how easy a pip install is especially in a virtual environment? | 2 | 3 | 1 | What is the reason packages are distributed separately?
Why do we have separate 'add-on' packages like pandas, numpy?
Since these modules seem so important, why are these not part of Python itself?
Are the "single distributions" of Python to come pre-loaded?
If it's part of design to keep the 'core' separate from additional functionality, still in that case it should at least come 'pre-imported' as soon as you start Python.
Where can I find such distributions if they exist? | Why doesn't Python come pre-built with required libraries like pandas, numpy etc | -0.049958 | 0 | 0 | 1,040 |
43,034,958 | 2017-03-26T21:47:00.000 | 0 | 0 | 1 | 1 | python,notifications,cron,crontab | 43,035,495 | 1 | true | 0 | 0 | Since this is a personal project, that is ok I would say. It is quick and simple, as well as using pre-existing tools available to you (crontab in this case).
The downside is that it makes the solution / programme OS dependent. What if someone ever wants or needs to use this on Windows? It would not work, as crontabs are not available on that OS.
To make it OS independent / portable, you should include the ability to manage, control and trigger notifications in your program. This would of course require it to be spawned as a server, to keep track of tasks and their notifications.
How far do you want to go? That is the question. | 1 | 0 | 0 | I'm making a small reminder/note-taking programme for myself, and I have a lot of it set up. All that I'm wondering is if it'd be correct for me to make a cron job for each note. This cron job would run notify-send whenever a note was set to take place. If this is the correct method, how would I go about doing this? | Is it correct to use a cron job for a notification programme? | 1.2 | 0 | 0 | 244 |
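As a hedged sketch of the portable alternative: the stdlib `sched` module can trigger reminders in-process instead of via cron. `notify` would normally shell out to `notify-send`; here it just records the note so the example runs on any OS, and the note texts are made up.

```python
import sched
import time

fired = []

def notify(note):
    # Real version: subprocess.run(["notify-send", note])
    fired.append(note)

scheduler = sched.scheduler(time.time, time.sleep)
scheduler.enter(0.01, 1, notify, argument=("water the plants",))
scheduler.enter(0.02, 1, notify, argument=("stand up",))
scheduler.run()  # blocks until every scheduled note has fired

print(fired)  # ['water the plants', 'stand up']
```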
43,035,543 | 2017-03-26T22:55:00.000 | 2 | 0 | 1 | 0 | python,dsl | 43,035,711 | 2 | false | 0 | 0 | Writing a usable editor is not a trivial task. That's basically a months-long project on its own if you want anything more than trivial editing functions. Embeddable editors like Scintilla can help of course, but that's on you to figure out their API.
I'd recommend a different direction: since you've got the whole grammar, generate the autocompletion and syntax highlighting as a plugin for an existing editor. Usually that functionality is abstracted pretty well. You can do that for apps like vim, vscode, or really any editor you want.
If you really do want to use an embedded editor, ask a specific question about that part. Notepad++ uses Scintilla for example and works with pretty much every language there is. It will very likely fit your use case.
In general: yes it's possible, because anything is possible. You may get better answers if you ask a question about your specific issue with including Qscintilla in your project.
PS. DSLs existed for decades. If you can't find anything relevant, look harder. SQL is a DSL for example. Everything written in LISP is pretty much its own DSL. | 1 | 0 | 0 | I wanted to know if anyone had ideas about how to create an editor / gui for Python's DSL.
So I have a grammar (based on textX project) and a class which interpret my DSL grammar. But I want to create an editor which has auto-completion and syntax highlight for the grammar of my own DSL.
Is it possible?
I went into PySide, Qscintilla, but I'm a little lost, it doesn't seem to be appropriate.
Furthermore DSL is pretty new as a concept, so there is pretty much 0 docs on the net, that's why I'm here (you never know !)
EDIT : ^Sorry apparently I'm triggering everyone about that sentence up. My bad, I'm pretty new to DSL, and I wanted to say there is almost nothing about develop a DSL in Python compared to Java (with Eclipse Modeling...)
Cya ! | Create a DSL with Python | 0.197375 | 0 | 0 | 925 |
43,037,896 | 2017-03-27T04:21:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,ethereum,coinbase-api | 48,160,272 | 3 | false | 0 | 0 | Something worked for me with a similar problem calling exchange rates.
Try changing the params in
coinbase\wallet\client.py
from
response = self._get('v2', 'prices', 'spot', data=params)
to
response = self._get('v2', 'prices', 'spot', params=params) | 1 | 10 | 0 | Using the python coinbase API-- The functions-- get_buy_price, get_sell_price, get_spot_price, get_historical_data, etc... all seem to return bitcoin prices only. Is there a way of querying Ethereum prices?
It would seem that currency_pair = 'BTC-USD' could be changed to something akin to currency_pair = 'ETH-USD' although this has no effect.
I would expect that the API simply doesn't support this, except that the official documentation explicitly states:
Get the total price to buy one bitcoin or ether
I can work around this somewhat by using the quote='true' flag in the buy/sell request. This however only works moving forward, I would like historical data. | Historical ethereum prices - Coinbase API | 0 | 0 | 1 | 9,048 |
43,037,903 | 2017-03-27T04:22:00.000 | 0 | 0 | 1 | 0 | python,scikit-learn | 44,992,636 | 5 | false | 0 | 0 | Just closed the Spyder editor and restarted. This issue got fixed. | 3 | 7 | 1 | I was trying to import sklearn.model_selection with Jupyter Notebook under an anaconda environment with python 3.5, but I was warned that I didn't have the "model_selection" module, so I did conda update scikit-learn.
After that, I received a message of ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection.
I reinstalled sklearn and scipy, but still received the same error message. May I have some advice? | ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection | 0 | 0 | 0 | 18,428 |
43,037,903 | 2017-03-27T04:22:00.000 | 1 | 0 | 1 | 0 | python,scikit-learn | 55,497,890 | 5 | false | 0 | 0 | The same error appeared when I tried to import hmm from hmmlearn; I reinstalled scipy and it worked. Hope this can be helpful. (I had tried updating all of the packages involved to solve the problem, but it did not work. My computer system is ubuntu 16.04, with anaconda3 installed.) | 3 | 7 | 1 | I was trying to import sklearn.model_selection with Jupyter Notebook under an anaconda environment with python 3.5, but I was warned that I didn't have the "model_selection" module, so I did conda update scikit-learn.
After that, I received a message of ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection.
I reinstalled sklearn and scipy, but still received the same error message. May I have some advice? | ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection | 0.039979 | 0 | 0 | 18,428 |
43,037,903 | 2017-03-27T04:22:00.000 | 10 | 0 | 1 | 0 | python,scikit-learn | 43,158,642 | 5 | true | 0 | 0 | I came across exactly the same problem just now. After I updated scikit-learn and tried to import sklearn.model_selection, the ImportError appeared.
I just restarted anaconda and ran it again.
It worked. Don't know why. | 3 | 7 | 1 | I was trying to import sklearn.model_selection with Jupyter Notebook under an anaconda environment with python 3.5, but I was warned that I didn't have the "model_selection" module, so I did conda update scikit-learn.
After that, I received a message of ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection.
I reinstalled sklearn and scipy, but still received the same error message. May I have some advice? | ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection | 1.2 | 0 | 0 | 18,428 |
43,038,122 | 2017-03-27T04:45:00.000 | 0 | 0 | 1 | 0 | python,pycharm,spyder | 68,713,312 | 2 | false | 0 | 0 | Spyder has an integrated Jupyter Notebook and can show plots inline - it comes with autocompletion that is just as good as (if not much better than) PyCharm's terminal...
It comes with a live debugger that's much easier on the eye (side-by-side) and can execute code real damn fast!!!
Spyder gives you the option to autocomplete words in the console itself - Pycharm just uses the system command prompt/Terminal. | 2 | 3 | 0 | I have installed both PyCharm and Spyder (from Anaconda2).
However, when I run the exact same code (printing a very large array) from the python console, the console opened from Spyder printed out the array in less than five seconds, whereas the console opened from PyCharm took one minute to process and then printed the array.
I am wondering what is the reason for the difference in "processing time"? I like the auto-complete feature of PyCharm, but from my experience, it is slower than Spyder. Is there a solution? | Console speed: PyCharm vs Spyder | 0 | 0 | 0 | 3,380 |
43,038,122 | 2017-03-27T04:45:00.000 | 4 | 0 | 1 | 0 | python,pycharm,spyder | 43,047,152 | 2 | true | 0 | 0 | (Spyder maintainer here) To avoid this problem in Spyder we only allow 500 lines to be shown in the console at a time (Note: This limit is configurable by the user)
So I'd say Pycharm's console doesn't have this functionality (although I can't tell that for sure). | 2 | 3 | 0 | I have installed both PyCharm and Spyder (from Anaconda2).
However, when I run the exact same code (printing a very large array) from the python console, the console opened from Spyder printed out the array in less than five seconds, whereas the console opened from PyCharm took one minute to process and then printed the array.
I am wondering what is the reason for the difference in "processing time"? I like the auto-complete feature of PyCharm, but from my experience, it is slower than Spyder. Is there a solution? | Console speed: PyCharm vs Spyder | 1.2 | 0 | 0 | 3,380 |
43,039,310 | 2017-03-27T06:29:00.000 | 2 | 1 | 1 | 0 | python,documentation | 43,040,351 | 1 | true | 0 | 0 | Use pydoc in the form $ pydoc <name_of_library>[.<name_of_module>[.<name_of_function>]]. To get information about requests's Response object, for example, you would do $ pydoc requests.models.Response.
pydoc is installed in any standard Python distribution. | 1 | 0 | 0 | How can I conveniently view class structure and method descriptions in Python? I know about the help(method) function, but it's not easy to use. E.g. I have downloaded a library, and I know that it has a class Foo. I want to see its methods and what they do; do I have to look for documentation, or is there another way? | Python class and method description | 1.2 | 0 | 0 | 363
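The same lookup is also available from inside Python via the standard pydoc module; a small sketch:

```python
import pydoc

# Render the same text that `$ pydoc len` would show on the command line;
# pydoc.plain() strips the backspace-based bold markers from the output.
doc = pydoc.plain(pydoc.render_doc(len))
print(doc.splitlines()[0])  # header line of the rendered documentation
```

This works for any importable object, so you can inspect a downloaded library's classes without leaving the interpreter.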
43,039,601 | 2017-03-27T06:48:00.000 | 1 | 0 | 0 | 0 | python,selenium | 43,041,354 | 2 | true | 0 | 0 | This is how I have done for all my projects.
Create a text file that lists all the project dependencies. Make sure you mention versions as well.
Example: requirement.txt
pytest==2.9.1
selenium==2.35.1
Create a shell script or batch file which creates a new virtual environment, installs all the dependencies, and runs the tests. | 1 | 1 | 0 | Is there a way to install selenium webdriver as a python project dependency?
I need this so that when the project is deployed to an OS that doesn't have Selenium WebDriver installed, this won't be an issue for the project to run properly on that OS.
Thank you in advance.
PS: Please take a look at my own answer to this question.
Stefan | Adding selenium webdriver as a python project dependence | 1.2 | 0 | 1 | 1,572 |
43,039,917 | 2017-03-27T07:06:00.000 | 0 | 0 | 0 | 1 | python,bash | 43,040,077 | 2 | false | 0 | 0 | A quick thing to note: $ is a bash construct. It is what evaluates the variable and returns its value. This does not happen in general when calling one program from another program. So when you invoke myprogram it is up to you to provide all the arguments in a form in which myprogram understands them. | 1 | 0 | 0 | I have a program that takes one argument.
I need to call this program in my python script and I need to pass the argument in bytecode format (like \x42\x43).
Directly in bash, I can do like this and it does work:
./myprogram $'\x42\x43'
But with subprocess.call it doesn't work:
subprocess.call(["myprogram", "$'\x42\x43'"])
The bytes are not interpreted.
I try to call my program with /bin/bash but my program returns a segfault! | Intrepret bytecode with subprocess.call argument | 0 | 0 | 0 | 59 |
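For what it's worth, the usual fix is a one-liner (a sketch, using /bin/echo in place of the real program): drop the bash-only $'…' quoting and pass the evaluated bytes yourself, since subprocess invokes the program without a shell:

```python
import subprocess

# bash's $'\x42\x43' evaluates to the two bytes 0x42 0x43 ("BC").
# subprocess does not run a shell, so pass those bytes directly:
out = subprocess.check_output(["/bin/echo", "\x42\x43"])
print(out)  # the program receives argv[1] == "BC"
```

Wrapping the value in "$'...'" literally passes the dollar sign and quotes as part of the argument, which is why the bytes appeared uninterpreted.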
43,041,759 | 2017-03-27T08:46:00.000 | 1 | 0 | 1 | 0 | python,anaconda,spyder,auto-indent | 43,041,908 | 1 | true | 0 | 0 | You can restore the default settings through terminal/CMD like so:
spyder --reset
Alternatively delete the .spyder folder, located in your users folder. | 1 | 0 | 0 | I have been using Anaconda/Spyder for years. Today auto-indent just stopped indenting after a "for" statement with a colon. I also tried it with absolutely no code above the for statement, so it isn't a syntax error in a previous line. Am I being dense? | Spyder auto-indent stopped working | 1.2 | 0 | 0 | 202 |
43,041,964 | 2017-03-27T08:56:00.000 | 0 | 0 | 0 | 0 | python-3.x,zipline | 53,089,089 | 3 | false | 0 | 0 | You can get data for non-NYSE stocks as well, like Nasdaq securities. Screens are also available by fundamentals (market, exchange, market cap). These screens can limit the stocks analyzed from the broad universe. | 1 | 0 | 1 | I want to know where Quantopian gets data from?
If I want to do an analysis on a stock market other than NYSE, will I get the data? If not, can I manually upload the data so that I can run my algorithms on it. | Using quantopian for data analysis | 0 | 0 | 0 | 312 |
43,044,659 | 2017-03-27T11:05:00.000 | 8 | 0 | 0 | 0 | python,nginx,flask,gunicorn | 43,045,007 | 5 | false | 1 | 0 | Do you know why the Django mascot is a pony? The story is that Django
comes with so many things you want: an ORM, all sorts of middleware,
the admin site… "What else do you want, a pony?" Well, Gunicorn
stands for "Green Unicorn" - obeythetestinggoat.com
Nginx is the front face for your server.
Gunicorn runs multiple django projects (each project is a wsgi
application powered by Gunicorn) in a single server (say Ubuntu).
Every request comes to nginx first; nginx decides which gunicorn
application the request should go to and forwards it there.
NOTE - Gunicorn cannot serve static files automatically as your local django server does. So you will need nginx for that again. | 3 | 46 | 0 | I want to use gunicorn for a REST API application with Flask/Python. What is the purpose of adding nginx here to gunicorn? The gunicorn site recommends using gunicorn with nginx. | What is the purpose of using nginx with gunicorn? | 1 | 0 | 0 | 31,058 |
43,044,659 | 2017-03-27T11:05:00.000 | 3 | 0 | 0 | 0 | python,nginx,flask,gunicorn | 43,044,788 | 5 | false | 1 | 0 | In production nginx works as a reverse proxy. It means users will hit nginx from the browser, and nginx will forward the call to your application.
Hope this helps. | 3 | 46 | 0 | I want to use gunicorn for a REST API application with Flask/Python. What is the purpose of adding nginx here to gunicorn? The gunicorn site recommends using gunicorn with nginx. | What is the purpose of using nginx with gunicorn? | 0.119427 | 0 | 0 | 31,058 |
43,044,659 | 2017-03-27T11:05:00.000 | 9 | 0 | 0 | 0 | python,nginx,flask,gunicorn | 43,044,819 | 5 | false | 1 | 0 | Gunicorn is an application server for running your python application instance.
NGINX is a reverse proxy. It accepts incoming connections and decides where they should go next. It is in front of Gunicorn. | 3 | 46 | 0 | I want to use gunicorn for a REST API application with Flask/Python. What is the purpose of adding nginx here to gunicorn? The gunicorn site recommends using gunicorn with nginx. | What is the purpose of using nginx with gunicorn? | 1 | 0 | 0 | 31,058 |
43,048,126 | 2017-03-27T13:42:00.000 | 0 | 0 | 0 | 0 | python,h2o | 43,051,434 | 2 | false | 0 | 0 | This refers to 2-4 times the size of the file on disk, so rather than looking at the memory in Python, look at the original file size. Also, the 2-4x recommendation varies by algorithm (GLM & DL will require less memory than tree-based models). | 1 | 0 | 1 | I am loading Spark dataframes into H2O (using Python) for building machine learning models. It has been recommended to me that I should allocate an H2O cluster with RAM 2-4x as big as the frame I will be training on, so that the analysis fits comfortably within memory. But I don't know how to precisely estimate the size of an H2O frame.
So supposing I have an H2O frame already loaded into Python, how do I actually determine its size in bytes? An approximation within 10-20% is fine. | How to determine size in bytes of H2O frame in Python? | 0 | 0 | 0 | 699 |
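Since the recommendation keys off the on-disk size rather than anything inside H2O, a trivial helper makes the arithmetic explicit (the function name is my own, for illustration):

```python
import os

def recommended_cluster_bytes(path, factor=4):
    """Rule of thumb from the answer: allocate 2-4x the raw file size."""
    return os.path.getsize(path) * factor
```

Pick factor=2 for memory-light algorithms like GLM and factor=4 for tree-based models.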
43,049,256 | 2017-03-27T14:32:00.000 | 0 | 0 | 0 | 1 | python,linux,windows | 43,050,053 | 2 | false | 0 | 0 | os.rename(src, dst)
Rename the file or directory src to dst. If dst is a directory, OSError will be raised. On Unix, if dst exists and is a file, it will be replaced silently if the user has permission. The operation may fail on some Unix flavors if src and dst are on different filesystems. If successful, the renaming will be an atomic operation (this is a POSIX requirement). On Windows, if dst already exists, OSError will be raised even if it is a file; there may be no way to implement an atomic rename when dst names an existing file.
or shutil.move(src, dst)
Recursively move a file or directory (src) to another location (dst).
If the destination is an existing directory, then src is moved inside that directory. If the destination already exists but is not a directory, it may be overwritten depending on os.rename() semantics.
If the destination is on the current filesystem, then os.rename() is used. Otherwise, src is copied (using shutil.copy2()) to dst and then removed.
If I understood you correctly, both will work for you.
By the way, I know that when you install Git you can enable Linux commands inside your CMD during the installation (pay attention to the checkbox there), but I'm not sure how it will behave and integrate with your scripts. | 2 | 0 | 0 | Is it possible to use mv in Windows python.
I want to use mv --backup=t *.pdf ..\ to make copies of existing files but don't want to overwrite them, and the Windows move command does not support suffixes with existing files.
I can run my script with mv command in Windows Bash or CygWin but not on cmd or powershell.
So is it possible to use Linux commands in Windows python?
EDIT: i'm using python 2.7 | How can I use Linux commands in python Windows | 0 | 0 | 0 | 803 |
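A pure-Python sketch of mv --backup-style behaviour that works on Windows too (the helper name and the ~N~ suffix scheme are my own, loosely modelled on GNU mv):

```python
import os
import shutil

def move_no_overwrite(src, dst_dir):
    """Move src into dst_dir; if the name is taken, append ~1~, ~2~, ..."""
    base = os.path.basename(src)
    dst = os.path.join(dst_dir, base)
    n = 1
    while os.path.exists(dst):
        dst = os.path.join(dst_dir, "%s.~%d~" % (base, n))
        n += 1
    shutil.move(src, dst)
    return dst
```

Calling it repeatedly with the same filename produces a.pdf, a.pdf.~1~, a.pdf.~2~, and so on, without shelling out to mv at all.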
43,049,256 | 2017-03-27T14:32:00.000 | 1 | 0 | 0 | 1 | python,linux,windows | 43,064,620 | 2 | true | 0 | 0 | Well, I tried a different approach: rename the existing files with a random hex at the end of the 'name',
and i'm pretty much satisfied with it :D
if os.path.isfile('../%s.pdf' % name):
os.system('magick *.jpg pdf:"%s".pdf' % name_hex)
else:
os.system('magick *.jpg pdf:"%s".pdf' % name) | 2 | 0 | 0 | Is it possible to use mv in Windows python.
I want to use mv --backup=t *.pdf ..\ to make copies of existing files but don't want to overwrite them, and the Windows move command does not support suffixes with existing files.
I can run my script with mv command in Windows Bash or CygWin but not on cmd or powershell.
So is it possible to use Linux commands in Windows python?
EDIT: i'm using python 2.7 | How can I use Linux commands in python Windows | 1.2 | 0 | 0 | 803 |
43,049,673 | 2017-03-27T14:50:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,deep-learning | 43,056,493 | 1 | false | 0 | 0 | Tensorflow provides a way to save your model: tensorflow.org/api_docs/python/tf/train/Saver. Your friend should then also use Tensorflow to load them. The language you load / save with doesn't affect how it is saved - if you save them in Tensorflow in Python they can be read in Tensorflow in C++. | 1 | 1 | 1 | I would like to save weights (and biases) from a CNN that I implemented and trained from scratch using Tensorflow (Python API).
Now I would like to save these weights in a file and share it with someone so he can use my network. But since I have a lot of weights I don't know. How can/should I do that? Is there a format recommended to do that? | Best way to save a CNN's weights in order to reuse them | 0 | 0 | 0 | 383 |
43,050,388 | 2017-03-27T15:22:00.000 | 2 | 0 | 0 | 0 | google-api,google-drive-api,google-api-client,google-api-python-client | 43,052,062 | 1 | true | 0 | 0 | No there isn't. I believe (haven't tested it) that Google will respect Accept-Encoding: gzip for content downloads. | 1 | 0 | 0 | People of Earth! Hello. As I can see in the docs, I'm able to download only one file using one API request. In order to download 10 files - I have to make 10 requests that makes me sad... Google Drive UI allows us to download archived files after selecting files and clicking on "download". Is there the same feature in the API that would allow me to download the desired number of files at once? I need Google Drive API to compress files and let me download an archive. | Is there a way to force Google Drive API to compress files and let me download an archive? | 1.2 | 0 | 1 | 332 |
43,050,855 | 2017-03-27T15:45:00.000 | 2 | 0 | 0 | 0 | python,datetime,timezone | 43,053,522 | 1 | true | 0 | 0 | You cannot convert from POSIX to Olson.
POSIX strings are a format for encoding offsets, abbreviations, and transitions. They can only represent a single "standard time" offset, an optional single "daylight time" offset, and a single pair of transitions between them.
Olson strings (aka IANA, tzdb, tzinfo, etc.) are identifiers. They reference a Zone or Link entry in the tz data. Each zone can contain any number of offsets and transitions (using Rule entries). This allows for true representation of how time zones change over time.
One can project a POSIX string for a given Olson time zone and a given point in time (such as "now"), but the reverse is not possible.
In the example you gave, given Europe/Berlin, I could compute the current POSIX string, which would look like what you showed. But the exact same string would be computed for Europe/Paris and several other distinct time zones that have their own set of changes over time. There's no way to distinguish between them. | 1 | 1 | 0 | I work with old devices which send some metadata to server. I need to manipulate date time which is in UTC. I also get timezone information where devices are located in POSIX format eg. for Berlin:
CET-1CEST,M3.5.0/2,M10.5.0/3
However, this format doesn't seem to be supported by popular libraries like pytz
Thus I thought I would simply convert this to Olson format which would be: Europe/Berlin
But I can't find a way to convert POSIX format to Olson format (which I thought was a common thing and well supported). How would you do that? | POSIX to Olson/IANA timezone format conversion | 1.2 | 0 | 0 | 959
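As an aside for the record above: on Unix, Python can at least apply the POSIX string directly (no Olson name needed) through the TZ environment variable; a sketch:

```python
import os
import time

# Hand the raw POSIX string straight to the C library; Unix only.
os.environ["TZ"] = "CET-1CEST,M3.5.0/2,M10.5.0/3"
time.tzset()
print(time.tzname)  # ('CET', 'CEST')
```

This sidesteps the impossible POSIX-to-Olson conversion entirely when all you need is correct local-time arithmetic for the device's zone.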
43,051,032 | 2017-03-27T15:53:00.000 | 0 | 0 | 0 | 0 | java,python,selenium,selenium-webdriver | 43,053,997 | 1 | true | 1 | 0 | As Nameless said, it won't solve the "make me an efficient XPath", etc. problem that you are talking about but you can install Selenium IDE (a FF plugin) and record your scenarios and then export them into various languages. It doesn't write the best code but you can get an idea of what it does with a quick download and install. | 1 | 0 | 0 | I'd like to reverse engineer Selenium WebDriver to write my tests for me as I use it. This would entail opening a WebDriver on screen, and clicking around and using it as normal. It will output instructions like self.driver.find_element_by_id('username-box') or whatnot for me, instead of the time-wasting of right-clicking the "Inspect element" each time I write a test.
Ideally this will give me a nice xpath which is more exact. How do I retrieve the Xpath/way to recreate actions when manually using Selenium WebDriver? | How to retrieve HTML information about specific actions using WebDriver | 1.2 | 0 | 1 | 43 |
43,052,100 | 2017-03-27T16:47:00.000 | 0 | 1 | 1 | 0 | python,eclipse,numpy,tensorflow,pycharm | 43,055,489 | 1 | false | 0 | 0 | Found the solution: need to define the same interpreter as in PyCharm. Thanks everyone anyway! | 1 | 0 | 0 | I have problems with my PyCharm (it is running extremely slow) so I decided to change to eclipse. However, the import which worked in PyCharm suddenly don't work in eclipse. I am talking about numpy and tensorflow (which were appropriately installed).
Please, can anyone give me a hint? | Why Python import works on pycharm but not on eclipse | 0 | 0 | 0 | 61
43,052,411 | 2017-03-27T17:02:00.000 | 2 | 0 | 1 | 0 | python,spss | 43,058,353 | 1 | true | 0 | 0 | Yes, you can run Statistics in external mode from a Python or R program. You might have to add the SPSS Python directory to your Python search path, but then just do
import spss
and run your Python code. The only thing you can't do is Viewer and user interface stuff, because there is no SPSS UI in that mode. By default, you will get output as text (which you can turn off when you get the hang of things). If you want better quality output, you can use OMS to capture output in a wide variety of formats.
Note that you need a compatible version of Python if you don't use the one installed with SPSS. That would be 2.7 for most Statistics versions. The Python installed with Statistics is not registered, but you can install a standard version from Python.org and just add the SPSS Python directory to the search path.
HTH | 1 | 0 | 0 | I'm helping my wife try and navigate IBM SPSS and python. She knows SPSS, and I kinda know python -- We might be able to work together. As it stands, I understand that I can call small snippets of python from within an SPSS syntax. While this is useful for looping and conditional branching based on data, it seems a little fuzzy to me. It almost feels like Inversion of Control, but not really.
I was wondering is it possible to have a python script, external to an spss syntax, that can still use the SPSS libraries in any meaningful way, or do I have to keep my scripts confined to the SPSS syntax and runtime? | Can python import the SPSS and SPSSAux libraries and use them to any value outside of the spss context? | 1.2 | 0 | 0 | 406 |
43,056,007 | 2017-03-27T20:35:00.000 | 0 | 1 | 0 | 1 | python,linux,amazon-web-services,amazon-ec2 | 43,056,995 | 3 | false | 0 | 0 | I read that the use of rc.local is getting deprecated. One thing to try is a line in /etc/crontab like this:
@reboot full-path-of-script
If there's a specific user you want to run the script as, you can list it after @reboot. | 1 | 2 | 0 | I have followed a few posts on here trying to run either a python or shell script on my ec2 instance after every boot not just the first boot.
I have tried the:
[scripts-user, always] to /etc/cloud/cloud.cfg file
Added script to ./scripts/per-boot folder
and
adding script to /etc/rc.local
Yes the permissions were changed to 755 for /etc/rc.local
I am attempting to pipe the output of the file into a file located in the /home/ubuntu/ directory and the file does not contain anything after boot.
If I run the scripts (.sh or .py) manually they work.
Any suggestions or request for additional info to help? | ec2 run scripts every boot | 0 | 0 | 0 | 4,018 |
43,057,477 | 2017-03-27T22:26:00.000 | 1 | 0 | 1 | 0 | python-2.5,cellular-automata | 43,203,788 | 1 | true | 0 | 0 | You can define a cellular automaton on any cell state space. Just formulate the cell update function as F:Q^n->Q where Q is your state space (here Q={0,1,2,3,4,5}) and n is the size of your neighborhood.
As a start, just write F as a majority rule, that is, 0 being the neutral state, F(c) should return the value in 1-5 with the highest count in the neighborhood, and 0 if none is present. In case of equality, you may pick one of the max at random.
As an initial state, start with a configuration with 5 relatively equidistant cells with the states 1-5 (you may build them deterministically through a fixed position that can be shifted/mirrored, or generate these points randomly).
When all cells have a value different than 0, you have your map.
Feel free to improve on the update function, for example by applying the rule with a given probability. | 1 | 0 | 0 | I am making a roguelike where the setting is open world on a procedurally generated planet. I want the distribution of each biome to be organic. There are 5 different biomes. Is there a way to organically distribute them without a huge complicated algorithm? I want the amount of space each biome takes up to be nearly equal.
I have worked with cellular automata before when I was making the terrain generators for each biome. There were 2 different states for each tile there. Is there an efficient way to do 5?
I'm using python 2.5, although specific code isn't necessary. Programming theory on it is fine.
If the question is too open ended, are there any resources out there that I could look at for this kind of problem? | Cellular automaton with more then 2 states(more than just alive or dead) | 1.2 | 0 | 0 | 208 |
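The majority-rule update F described in the answer above can be sketched as follows (the toroidal Moore neighbourhood and random tie-breaking are my own illustrative choices):

```python
import random
from collections import Counter

def step(grid):
    """One synchronous update: each cell adopts the most common
    non-zero state among its 9-cell neighbourhood (with wrap-around)."""
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            counts = Counter()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    s = grid[(y + dy) % h][(x + dx) % w]
                    if s:
                        counts[s] += 1
            if counts:
                top = max(counts.values())
                new[y][x] = random.choice(
                    [s for s, c in counts.items() if c == top])
    return new
```

Seed a zero grid with five spaced-out cells holding states 1-5 and iterate step() until no zeros remain; each seed grows into one biome region.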
43,058,703 | 2017-03-28T00:36:00.000 | 0 | 0 | 0 | 0 | python,performance,csv,pandas,datetime | 43,058,780 | 1 | false | 0 | 0 | and 2. In my experience, if processing time is not critical for your study (say you process the data once and then you run your analysis) then I would recommend you parse the dates using pd.to_datetime() and others after you have read in the data.
anything that will help Pandas reduce the set of possibilities about the types in your data WILL speed up the processing. That makes sense. The more precise you are, the faster will be the processing. | 1 | 1 | 1 | These questions about datetime parsing in pandas.read_csv() are all related.
Question 1
The parameter infer_datetime_format is False by default. Is it safe to set it to True? In other words, how accurately can Pandas infer date formats? Any insight into its algorithm would be helpful.
Question 2
Loading a CSV file with over 450,000 rows took over 10 mins when I ran pd.read_csv("file.csv", parse_dates = ["Start", "End"])
However it took only 20 seconds when I added the parameters dayfirst = True and infer_datetime_format = True. Yet if either was False, it took over 10 mins.
Why must both be True in order to speed up datetime parsing? If one is False but not the other, shouldn't it take strictly between 20 sec and 10 mins? (The answer to this question may well be the algorithm, as in Question 1.)
Question 3
Since dayfirst = True, infer_datetime_format = True speeds up datetime parsing, why is it not the default setting? Is it because Pandas cannot accurately infer date formats? | Slow datetime parsing in Pandas | 0 | 0 | 0 | 527 |
43,059,689 | 2017-03-28T02:33:00.000 | 2 | 0 | 0 | 1 | python,python-2.7,python-3.x,ubuntu-16.04,python-idle | 43,060,767 | 2 | false | 0 | 0 | Type whereis python2 on your terminal; you end up getting possibly one or more paths to python2. You can then copy-paste any of these paths onto your alias for python in .bash_aliases. | 1 | 0 | 0 | Okay, so I have python 3.5 on my system (Ubuntu 16.04). Whenever I open a .py file, Idle3 starts, thus pressing F5 will instantly run my code.
However, I need python 2.7 now for an assignment. In the terminal I've run apt-get install idle, so I can open idle and idle3 there easily.
My problem is, I can't change my .py files' default application to idle. It only sees idle3, so I can't open my files with idle(2.7) as default.
Tried to make an alias in ~/.bash_aliases as alias python=/usr/local/bin/python2.7, but typing python --version into terminal I get:
-bash: /usr/local/bin/python2.7: No such file or directory.
Typing python2 --version and python3 --version works fine.
Is there any simple workaround for that? | How to change default idle for python (Ubuntu)? | 0.197375 | 0 | 0 | 1,044 |
43,060,735 | 2017-03-28T04:40:00.000 | 0 | 0 | 1 | 0 | django,python-2.7,python-3.x,python-venv | 43,073,626 | 3 | false | 1 | 0 | Ok - so I figured out what happened. I have installed django using sudo pip install. Even though I was in the venv (created with python3) this has resulted in reference to django outside the venv. Sooo...it was an interesting thing to learn I guess. | 2 | 1 | 0 | I have created python 3.5.2 virtual environment ("python --version" confirms that)
but when i try to install django using "pip install django~=1.10.0" I get this message:
Requirement already satisfied: django~=1.10.0 in /usr/local/lib/python2.7/dist-packages
How can I get django version that agrees with the python version in my venv? | virtual env python 3.5 only finds django python 2.7 | 0 | 0 | 0 | 114 |
43,060,735 | 2017-03-28T04:40:00.000 | 0 | 0 | 1 | 0 | django,python-2.7,python-3.x,python-venv | 43,069,102 | 3 | true | 1 | 0 | Probably you have already installed django outside the venv with python2.
Just run pip list and see if django is installed.
Then uninstall it, enter the venv and reinstall django with python3. | 2 | 1 | 0 | I have created python 3.5.2 virtual environment ("python --version" confirms that)
but when i try to install django using "pip install django~=1.10.0" I get this message:
Requirement already satisfied: django~=1.10.0 in /usr/local/lib/python2.7/dist-packages
How can I get django version that agrees with the python version in my venv? | virtual env python 3.5 only finds django python 2.7 | 1.2 | 0 | 0 | 114 |
43,060,827 | 2017-03-28T04:50:00.000 | 1 | 0 | 0 | 0 | python,tensorflow | 44,095,963 | 5 | false | 0 | 0 | Adding reuse = tf.get_variable_scope().reuse to BasicLSTMCell is OK to me. | 2 | 2 | 1 | I run ptb_word_lm.py provided by Tensorflow 1.0, but it shows this message:
ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights:
'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell'; and the cell was
not constructed as BasicLSTMCell(..., reuse=True). To share the
weights of an RNNCell, simply reuse it in your second calculation, or
create a new one with the argument reuse=True.
Then I modified the code and added 'reuse=True' to BasicLSTMCell, but it shows this message:
ValueError: Variable Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/weights does not
exist, or was not created with tf.get_variable(). Did you mean to set
reuse=None in VarScope?
How could I solve these problems? | ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights | 0.039979 | 0 | 0 | 3,069
43,060,827 | 2017-03-28T04:50:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 45,557,687 | 5 | false | 0 | 0 | You can try adding scope='lstmrnn' in your tf.nn.dynamic_rnn() function. | 2 | 2 | 1 | I run ptb_word_lm.py provided by Tensorflow 1.0, but it shows this message:
ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights:
'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell'; and the cell was
not constructed as BasicLSTMCell(..., reuse=True). To share the
weights of an RNNCell, simply reuse it in your second calculation, or
create a new one with the argument reuse=True.
Then I modified the code and added 'reuse=True' to BasicLSTMCell, but it shows this message:
ValueError: Variable Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/weights does not
exist, or was not created with tf.get_variable(). Did you mean to set
reuse=None in VarScope?
How could I solve these problems? | ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights | 0 | 0 | 0 | 3,069
43,064,496 | 2017-03-28T08:32:00.000 | 1 | 0 | 1 | 0 | python,pip,virtualenv | 44,232,164 | 1 | false | 1 | 0 | You can pip download the Python packages you want to install offline and then install the .whl files in your virtualenv. Here is an example with Django and Requests:
Create a directory to store local Python packages:
mkdir local_python
Change directory: cd local_python
Download Python packages to be available offline:
pip download django requests
Install local package .whl files after you activate your virtualenv:
pip install Django-1.11.1-py2.py3-none-any.whl requests-2.16.5-py2.py3-none-any.whl | 1 | 1 | 0 | I am new to using python, so bear with me if I make any assumptions. So I have virtualenv and pip installed in my ubuntu machine. Every time I create a virtual environment I have to remotely download and install python modules (using pip install), such as django, that are already installed in the main python installation.
The problem is that I am not always connected to the internet. Is there a way I can load modules existing in the main Python to every virtual environment I create? Thanks! | Option to import global packages into virtual environment I create? | 0.197375 | 0 | 0 | 323 |
43,064,625 | 2017-03-28T08:38:00.000 | 0 | 0 | 0 | 0 | python,django | 43,066,067 | 2 | true | 1 | 0 | The middleware is the way to go for this problem, it's the only method that's truly reliable and configurable. | 1 | 0 | 0 | I need to have certain users fill out a specific form on login. Django should redirect the user to the form, whenever a certain condition is True for that user.
I used a custom middleware to do it, but I am curious if there is a better approach.
Any ideas? | Django force user to fill out form on login | 1.2 | 0 | 0 | 203 |
43,064,633 | 2017-03-28T08:39:00.000 | 0 | 0 | 0 | 1 | linux,python-2.7,google-app-engine,google-cloud-platform,centos6 | 54,142,496 | 4 | false | 0 | 0 | If you are on Windows, this is a simple solution that worked for me:
open Powershell as administrator and run this to add your Python folder to your environment's PATH:
$env:Path += ";C:\python27_x64\"
Then re-run the command that gave you the original error. It should work fine.
Alternatively you could run that original (error-causing) command within the Cloud SDK Shell. That also worked for me. | 1 | 5 | 0 | Our server OS is CentOS 6.8, I was trying to install google-cloud-sdk, even though I installed
python 2.7 in /usr/local/bin
, it is still looking at old version of
python 2.6 in /usr/bin
. I tried giving export PATH=/usr/local/bin:$PATH so it looks at /usr/local/bin before /usr/bin, but the problem persists. Please suggest a way to fix this. | google-cloud-sdk installation not finding right Python 2.7 version in CentOS /usr/local/bin | 0 | 0 | 0 | 9,333
43,066,877 | 2017-03-28T10:18:00.000 | 0 | 0 | 0 | 0 | mysql,mysql-python | 43,179,890 | 2 | false | 1 | 0 | 2 tables: user and user_uri_permission. 2 columns in the second: userID and URI. When the User-URI pair is in the table, the user has access. | 1 | 0 | 0 | What I want is that when I have looked up a user in a table, I want to list all the file urls that the user has access to. My first thought was to have a field in the table with a list of file URLs. However, I have now understood that there is no such field type.
I was then thinking that maybe ForeignKeys might work, but I am having trouble getting my head around it.
Another solution maybe is to have one table for each user, with each row representing each file.
What would you say is best practice in this case?
I am also going to expand into having shared files, but thought that I'd address this issue first. | Mapping users to all of their files(URLs) in a mysql database. | 0 | 1 | 0 | 25 |
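The two-table (junction table) design from the answer above, sketched with sqlite3 purely for illustration (the question is about MySQL, but the schema is the same; table and column names are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE user_uri_permission (
        user_id INTEGER REFERENCES user(id),
        uri     TEXT,
        PRIMARY KEY (user_id, uri)   -- one row per (user, file) pair
    );
""")
conn.execute("INSERT INTO user VALUES (1, 'alice')")
conn.execute("INSERT INTO user_uri_permission VALUES (1, '/files/a.pdf')")

# All URIs a given user may access:
rows = conn.execute(
    "SELECT uri FROM user_uri_permission WHERE user_id = ?", (1,)).fetchall()
```

Shared files then fall out for free: sharing is just inserting another (user_id, uri) row pointing at the same URI.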
43,069,291 | 2017-03-28T12:14:00.000 | 0 | 0 | 0 | 0 | python,compression,gzip,key-value-store,dbm | 43,080,032 | 1 | false | 0 | 0 | You can gzip the value or use a key/value store that supports compression, like wiredtiger. | 1 | 1 | 0 | I have a big-ol' dbm file, that's being created and used by my python program. It saves a good amount of RAM, but it's getting big, and I suspect I'll have to gzip it soon to lower the footprint.
I guess usage will involve un-gzipping it to the disk, using it, and erasing the extracted dbm when I'm done.
I was wondering whether there exists some nice way of compressing the dbm while keeping it usable somehow. In my specific use case, I only need to read from it.
Thanks. | recipe for working with compressed (any)dbm files in python | 0 | 1 | 0 | 109 |
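Compressing the values (the first option in the answer) can look like this with only the standard library (a sketch; the question's Python 2 would use anydbm and zlib the same way, this uses Python 3's dbm):

```python
import dbm
import os
import tempfile
import zlib

path = os.path.join(tempfile.mkdtemp(), "store")

# Compress each value before writing it...
with dbm.open(path, "c") as db:
    db[b"key"] = zlib.compress(b"some highly repetitive value " * 200)

# ...and decompress transparently on read.
with dbm.open(path, "r") as db:
    value = zlib.decompress(db[b"key"])
```

Keys stay uncompressed and thus remain directly addressable, so read-only access never requires unpacking the whole file.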
43,070,180 | 2017-03-28T12:55:00.000 | 1 | 0 | 1 | 0 | python,anaconda | 43,070,556 | 1 | false | 0 | 0 | I have had similar issues with Anaconda environments - pip will install to the environment in which you're 'logged in', so you need to be very careful about which environment you're in when you use pip.
If I were you, I would pip uninstall the packages in both environments, and methodically install keras in each, as having two different versions in two different environments will not be an issue. | 1 | 1 | 0 | I want to use keras 1.0 and keras 2.0 at the same time, I tried to create two environments in anaconda: keras1 and keras2.
I installed keras 1.0 in keras1; when I changed the environment to keras2, I found the keras version was 1.0. When I upgraded keras to 2.0, the keras version became 2.0 in environment keras1 as well.
What should I do to use the two versions at the same time? | how to install two python package versions in different anaconda environments? | 0.197375 | 0 | 0 | 433 |
43,070,869 | 2017-03-28T13:25:00.000 | 0 | 0 | 0 | 0 | python,eclipse,pydev,pylint | 43,074,661 | 1 | false | 0 | 1 | In PyDev you can choose the severity of each of the message types in PyLint (i.e.: Fatal, Errors, Warnings, Conventions, Refactor) -- you can do that in the PyLint preferences page inside PyDev, but there's nothing related to changing the message type of a single item into an error.
I.e.: if you need to change the message type of some message from the Warning message type to the Error message type, PyLint itself should support that (and if/when it does, you should configure the parameters to pass to PyLint in the preferences page). | 1 | 0 | 0 | I'm working with pylint under Eclipse for my python code.
I have some warnings given by pylint that I want to be marked as errors.
Graphically it consists of a red cross instead of a yellow warning panel.
Any ideas?
PS: The warning I want to make more severe is W0102, i.e. dangerous-default-value
43,072,375 | 2017-03-28T14:32:00.000 | 6 | 0 | 1 | 0 | python,tkinter | 43,073,588 | 2 | false | 0 | 1 | I'm not sure what evidence you have that says everyone says not to use place. I suspect if you're judging by stackoverflow posts, you're mostly reading my opinion a hundred times rather than a hundred different opinions.
I recommend against place mainly because it requires more work to make a UI that is responsive to changes in fonts, resolutions, and window sizes. While it's possible to write a GUI that uses place and is responsive to those things, it requires a lot of work to get right.
One advantage that both pack and grid have over place is that they allow tkinter to properly configure the size of the root and Toplevel windows. With place you must hard-code a size. Tkinter is remarkably good at making windows to be the exact right size without you having to decide on explicit sizes.
In addition, long term maintenance of applications that use place is difficult. If you want to add a new widget, you will almost certainly have to adjust every other widget. With grid and pack it's much easier to add and remove widgets without having to change the layout of all of the other widgets. If I've learned anything over years of using tk and tkinter is that my widget layout changes a lot during development.
place is mostly useful for edge cases. For example, if you want to center a single widget inside another widget, place is fantastic. Also, if you want to place a widget such that it is independent of other widgets, place is great for that too. | 1 | 9 | 0 | I have been working on a note taking program for myself and it is going well however I have had a lot of problems with getting all my widgets placed where I want them using the .pack() or .grid() options.
After looking around I found that I could use the .place() option instead. Before I decided to use .place() I found countless forum posts saying "don't use .place()!".
I was at a stand still with my other options so I decided to give .place() a try. It turns out .place() is exactly what I needed to fix my layout issues and I just don't understand why everyone is hating on .place() so much.
Is there something inherently wrong with .place()? Or do people just prefer to use .pack() and .grid() for some practical reason other than ease of use? | Why do people say "Don't use place()"? | 1 | 0 | 0 | 573 |
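As the answer above notes, centering a single widget is the edge case where place() genuinely shines. A minimal sketch of that idea (the function name is my own, and actually showing the window requires a display):

```python
import tkinter as tk

def centered_label(text="Centered with place()"):
    """Build a window whose label stays centered even when resized."""
    root = tk.Tk()
    root.geometry("300x200")
    label = tk.Label(root, text=text)
    # relx/rely are fractions of the parent's size, so the anchor point
    # is always the middle of the window, even after a resize.
    label.place(relx=0.5, rely=0.5, anchor="center")
    return root

# centered_label().mainloop()  # uncomment to run; requires a display
```

Note that because relx/rely are relative, this one use of place() survives window resizing, which is exactly what hard-coded x/y coordinates would not.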
43,075,420 | 2017-03-28T16:50:00.000 | 1 | 0 | 0 | 0 | python,django,sqlite,pycharm | 43,075,527 | 2 | false | 0 | 0 | After clicking on the View => Tools => Window => Database click on the green plus icon and then on Data Source => Sqlite (Xerial). Then, on the window that opens install the driver (it's underneath the Test Connection button) that is proposing (Sqlite (Xerial)).
That should do it both for db.sqlite3 and identifier.sqlite. I have never any problem with Sqlite database, showing on PyCharm IDE. | 1 | 2 | 0 | After updating PyCharm (version 2017.1), PyCharm does not display sqlite3 database tables anymore.
I've tested the connection and it's working.
In sqlite client I can list all tables and make queries.
Has anyone else had this problem? If so, how did you solve it? | Pycharm does not display database tables | 0.099668 | 1 | 0 | 3,401
43,076,878 | 2017-03-28T18:08:00.000 | 1 | 1 | 0 | 0 | python,chatterbot | 44,412,516 | 1 | false | 0 | 0 | ChatterBot's training time depends on the size of your input file; the bigger the training file, the longer it takes to train the bot.
There is no specific number of examples needed to train the bot well; it learns from past user inputs and responds based on your answers | 1 | 1 | 0 | How much time does it take to train the Ubuntu Dialog Corpus with chatterbot? How many examples are needed to train the bot well?
43,078,256 | 2017-03-28T19:26:00.000 | 0 | 1 | 0 | 1 | python,linux,bash | 43,078,388 | 3 | false | 0 | 0 | You should run sleep using subprocess.Popen before calling script.sh. | 1 | 1 | 0 | I'm working with a Python script and I have some problems delaying the execution of a Bash script.
My script.py lets the user choose a script.sh and, after giving them the chance to modify it, run it with various options.
One of these options is the possibility to delay the execution of the script by N seconds. I used time.sleep(N), but then script.py stops completely for N seconds; I just want to delay script.sh by N seconds while letting the user continue using script.py.
I searched for answers without success. Any ideas? | How to delay the execution of a script in Python? | 0 | 0 | 0 | 2,353
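The subprocess.Popen idea from the answer above can be sketched like this: push the sleep into the child shell so the Python process never blocks (the command below is a stand-in for the user's script.sh, not from the original post):

```python
import subprocess

def run_later(command, delay_seconds):
    """Start `command` after delay_seconds without blocking the caller.

    The sleep happens inside the child shell, so script.py keeps
    responding to the user while the delay counts down.
    """
    return subprocess.Popen(["sh", "-c", f"sleep {delay_seconds} && {command}"])

proc = run_later("echo script.sh would run now", 0)
```

The returned Popen handle can later be polled or waited on if the user wants to know when the delayed script finished.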
43,078,634 | 2017-03-28T19:50:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,oop,operator-overloading,operators | 43,078,782 | 1 | false | 0 | 0 | The definition given in "How to think..." is correct.
It isn't specific to Python; C++ has the same concept.
The programmer can give an operator a new meaning, e.g. adding two vectors with a + instead of two scalar numbers.
The mere fact that an operator can be used on multiple datatypes natively doesn't have a specific name. In almost any language + can be used to add integers or floats. There's no special word for this, and many programmers aren't even aware of the difference. | 1 | 0 | 0 | In "Think Python: How to Think Like a Computer Scientist", the author defines operator overloading as:
Changing the behavior of an operator like + so it works with a
programmer-defined type.
Is this an accurate definition of it (in programming in general, and in Python specifically)? Isn't it: "The ability to use the same operator for different operations?" For example, in Python, we can use + to add two numbers, or to concatenate two sequences. Isn't this operator overloading? Isn't the + operator overloaded here?
Does the author mean by "the behavior of an operator" raising a TypeError because it's not implemented for the given class? Because the operator still has its behavior with other types (e.g. strings).
Is the definition the author wrote a specific type of operator overloading? | Is "Changing the behavior of an operator like + so it works with a programmer-defined type," a good definition of operator overloading? | 0.197375 | 0 | 0 | 73
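To make the book's definition concrete: in Python, + gets its meaning for a programmer-defined type from the __add__ method. A minimal sketch:

```python
class Vector:
    """A programmer-defined type that overloads + via __add__."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # + now means element-wise addition for Vectors
        return Vector(self.x + other.x, self.y + other.y)

    def __repr__(self):
        return f"Vector({self.x}, {self.y})"

print(Vector(1, 2) + Vector(3, 4))  # Vector(4, 6)
```

Without __add__, Vector(1, 2) + Vector(3, 4) would raise a TypeError, which is exactly the behavior change the book's definition refers to.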
43,080,722 | 2017-03-28T22:00:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,scikit-learn,scikits | 43,080,891 | 1 | true | 0 | 0 | The i-th output is the prediction for the i-th input. Whatever you passed to .predict is a collection of objects, and the ordering of the predictions is the same as the ordering of the data passed in. | 1 | 1 | 1 | I am using sklearn's Linear Regression ML model in Python to predict. The predict function returns an array with a lot of floating point numbers (which is correct), but I don't quite understand what the floating point numbers represent. Is it possible to map them back?
For context, I am trying to predict sales of a product (the label) from the stock available. The predict function returns a large array of floating point numbers. How do I know what each floating point number represents?
For instance, the array is like [11.5, 12.0, 6.1, ...]. It seems 6.1 is the sales quantity, but which stock quantity is it associated with? | Sklearn predict function | 1.2 | 0 | 0 | 4,905
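Since predict() preserves row order, as the answer above says, pairing predictions back up with their inputs is just a matter of position. A sketch (this assumes scikit-learn is installed; the numbers are made up for illustration):

```python
from sklearn.linear_model import LinearRegression

stock = [[10], [20], [30]]   # one feature row per observation, in a known order
sales = [5.0, 11.0, 14.0]    # made-up training labels

model = LinearRegression().fit(stock, sales)
preds = model.predict(stock)

# position i of preds is the prediction for row i of stock:
for row, pred in zip(stock, preds):
    print(f"stock={row[0]} -> predicted sales={pred:.2f}")
```

So to map a value like 6.1 back, look at the row at the same index in whatever you passed to predict().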
43,082,900 | 2017-03-29T01:52:00.000 | 0 | 0 | 0 | 0 | python,linux,windows | 43,091,219 | 1 | false | 0 | 1 | My understanding is that only the foreground window gets focus and can handle keyboard input for the game. I'm not sure whether it makes sense to send input to a background window or not. | 1 | 0 | 0 | My issue currently is that of emulating keystroke input to an arbitrary program (such as a running game).
Currently I am using win32 libraries on Windows to find windows (win32gui.FindWindow) and grab focus (via win32gui.SetForegroundWindow), then send keyboard input (win32api.keybd_event).
This works if I am only sending input to a single program, but I wish to parallelize, playing multiple games simultaneously. This does not work with my current method as both applications demand "focus" for the keys to go to the right application, thus interfering with each other.
Ideally I would like something that sends input to a given window without requiring focus, and that is independent of input given to other windows or the currently focused window. | Keyboard output to multiple programs simultaneously? | -0.197375 | 0 | 0 | 658
43,084,539 | 2017-03-29T04:44:00.000 | 0 | 0 | 1 | 0 | python-3.x | 43,140,722 | 1 | false | 0 | 0 | I found the answer to my question. When you reference classes and you import as a module in a different one; the module will get call when is defined if you do not use if __name__=='__main__':. By putting this code at the end of the code will only execute the module once it is intended to be executed but not when you import the module. This way you can use the modules by themselves and also to import from other modules. | 1 | 0 | 0 | I created two classes. The first class takes an image from the working directory and then covert the image from pdf to jpg using wand. The second class takes the created jpg image and then do further manipulations with the image.
Now when I try to run the first class and then the second class right after it, Python crashes because the second class looks for the image before it has been created.
My question is: how can I run the second class only after the first class has finished executing?
class1 = imagecreation('image.jpg')
class2 = transformimage() | Open a file after it was created in Python | 0 | 0 | 0 | 22 |
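The guard described in the answer above looks like this in practice (the module and class names echo the question but are hypothetical stand-ins, and the conversion itself is stubbed out):

```python
# imagecreation.py (hypothetical module)
class ImageCreation:
    """Stands in for the class that converts a PDF page to a JPG."""

    def __init__(self, path):
        self.path = path

    def convert(self):
        print(f"converting {self.path} to jpg")

def main():
    ImageCreation("image.pdf").convert()

if __name__ == "__main__":
    # Runs only when this file is executed directly,
    # not when another module does `import imagecreation`.
    main()
```

With the guard in place, the second module can safely import the first and trigger the conversion itself, so the jpg is guaranteed to exist before any further manipulation starts.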