Dataset schema (each record below is one pipe-separated row in this column order):

| Column | Dtype | Min | Max |
|---|---|---|---|
| Q_Id | int64 | 337 | 49.3M |
| CreationDate | string | 23 chars | 23 chars |
| Users Score | int64 | -42 | 1.15k |
| Other | int64 | 0 | 1 |
| Python Basics and Environment | int64 | 0 | 1 |
| System Administration and DevOps | int64 | 0 | 1 |
| Tags | string | 6 chars | 105 chars |
| A_Id | int64 | 518 | 72.5M |
| AnswerCount | int64 | 1 | 64 |
| is_accepted | bool (2 classes) | | |
| Web Development | int64 | 0 | 1 |
| GUI and Desktop Applications | int64 | 0 | 1 |
| Answer | string | 6 chars | 11.6k chars |
| Available Count | int64 | 1 | 31 |
| Q_Score | int64 | 0 | 6.79k |
| Data Science and Machine Learning | int64 | 0 | 1 |
| Question | string | 15 chars | 29k chars |
| Title | string | 11 chars | 150 chars |
| Score | float64 | -1 | 1.2 |
| Database and SQL | int64 | 0 | 1 |
| Networking and APIs | int64 | 0 | 1 |
| ViewCount | int64 | 8 | 6.81M |
29,754,112 | 2015-04-20T17:08:00.000 | 1 | 0 | 0 | 0 | python,web-scraping,scrapy | 33,189,665 | 4 | false | 1 | 0 | Auto_throttle is specifically designed so that you don't manually adjust DOWNLOAD_DELAY. Setting DOWNLOAD_DELAY to some number sets a lower bound, meaning your AUTO_THROTTLE will not go faster than the number set in DOWNLOAD_DELAY. Since this is not what you want, your best bet would be to enable AUTO_THROTTLE for all spiders except the one you want to go faster, and manually set DOWNLOAD_DELAY for just that one spider without AUTO_THROTTLE to achieve whatever efficiency you desire. | 2 | 1 | 0 | My use case is this: I have 10 spiders and the AUTO_THROTTLE_ENABLED setting is set to True, globally. The problem is that for one of the spiders the runtime WITHOUT auto-throttling is 4 days, but the runtime WITH auto-throttling is 40 days...
I would like to find a balance and make the spider run in 15 days (3x the original amount). I've been reading through the scrapy documentation this morning but the whole thing has confused me quite a bit. Can anyone tell me how to keep auto-throttle enabled globally, and just turn down the amount to which it throttles? | How to Set Scrapy Auto_Throttle Settings | 0.049958 | 0 | 0 | 5,769 |
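A minimal sketch of the per-spider override the answer describes. `custom_settings` is the per-spider mechanism in Scrapy 1.0+ (an assumption about the version in use); the spider name and delay value are illustrative:

```python
import scrapy


class FastSpider(scrapy.Spider):
    """The one spider that should not be auto-throttled."""
    name = "fast_spider"

    # Per-spider overrides (Scrapy >= 1.0); the global settings keep
    # AUTOTHROTTLE_ENABLED = True for the other nine spiders.
    custom_settings = {
        "AUTOTHROTTLE_ENABLED": False,  # opt this spider out of auto-throttling
        "DOWNLOAD_DELAY": 0.5,          # hand-tuned fixed delay instead
    }

    def parse(self, response):
        pass  # normal parsing logic goes here
```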
29,755,274 | 2015-04-20T18:09:00.000 | 1 | 1 | 0 | 1 | python,windows,text-files,remote-access,readfile | 29,755,645 | 1 | false | 0 | 0 | You can use PowerShell for this.
First, open PowerShell with administrator privileges.
Enter this command:
Enable-PSRemoting -Force
Run this command on both computers so they trust each other:
Set-Item wsman:\localhost\client\trustedhosts *
Then restart the WinRM service on both PCs with this command:
Restart-Service WinRM
Test it with this command:
Test-WsMan computername
To execute a remote command:
Invoke-Command -ComputerName COMPUTER -ScriptBlock { COMMAND } -credential USERNAME
To start a remote session:
Enter-PSSession -ComputerName COMPUTER -Credential USER | 1 | 1 | 0 | I'm trying to find a module that will allow me to run a script locally that will:
1. Open a text file on a remote Windows Machine
2. Read the lines of the text file
3. Store the lines in a variable and be able to process the data.
This is absolutely no problem on a Linux machine via SSH, but I have no clue what module to use for a remote Windows machine. I can connect with no problem and run commands on a remote Windows machine via WMI, but WMI does not have a way to read/write files. Are there any modules out there that I can install to achieve this? | Python: Open and read remote text files on Windows | 0.197375 | 0 | 0 | 2,454 |
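A sketch of doing steps 1-3 from Python once WinRM is enabled as in the answer: the third-party pywinrm package runs a PowerShell Get-Content over that same channel. Host, credentials and the file path are placeholders:

```python
import winrm  # third-party package: pywinrm

session = winrm.Session("remote-host", auth=("USERNAME", "PASSWORD"))
result = session.run_ps(r"Get-Content C:\logs\example.txt")  # runs remotely

if result.status_code == 0:
    lines = result.std_out.decode("utf-8").splitlines()  # read + store the lines
    for line in lines:
        print(line)  # process the data here
```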
29,757,386 | 2015-04-20T20:07:00.000 | 0 | 0 | 1 | 0 | python | 31,281,691 | 1 | false | 1 | 0 | Since your goal is to be cross-architecture, any Python program which relies on native C modules will not be possible with this approach.
In general, using virtualenv to create a target environment will mean that even users who don't have permission to install new system-level software can install dependencies under their own home directory; thus, what you ask about is not often needed in practice.
However, if you wanted to do things that are considered bad practice, pure-Python modules can in fact be bundled into a script; thus, a tool of this sort would be possible for modules with only native-Python dependencies!
If I were writing such a tool, I might start the following way:
Use pickle to serialize content of modules on the "sending" side
In the loader code, use imp.create_module() to create new module objects, and assign unpickled objects to them. | 1 | 1 | 0 | In the JavaScript ecosystem, "compilers" exist which will take a program with a significant dependency chain (of other JavaScript libraries) and emit a standalone JavaScript program (often, with optimizations applied).
Does any equivalent tool exist for Python, able to generate a script with all non-standard-library dependencies inlined? Are there other tools/practices available for bundling in dependencies in Python? | Can an arbitrary Python program have its dependencies inlined? | 0 | 0 | 0 | 86 |
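A minimal sketch of the loader idea from the answer above, using types.ModuleType (a modern stand-in for imp.create_module) to materialize a bundled pure-Python module from embedded source; the `helper` module and its source are made-up examples:

```python
import sys
import types

# Source of a pure-Python dependency, embedded in the standalone script.
EMBEDDED_SOURCE = """
def greet(name):
    return 'hello, ' + name
"""

module = types.ModuleType("helper")     # create an empty module object
exec(EMBEDDED_SOURCE, module.__dict__)  # populate it from the bundled source
sys.modules["helper"] = module          # make `import helper` resolve to it

import helper
print(helper.greet("world"))            # -> hello, world
```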
29,758,142 | 2015-04-20T20:50:00.000 | 0 | 0 | 0 | 0 | python,django,backend | 29,759,759 | 2 | false | 1 | 0 | So I'm thinking this through, and one possibility I have come up with is to build databases (in mysql in my case) supported by Django that represent the data I am interested in from these data sources.
I could then override the model methods that query from/save changes to the mysql model to make calls to an external python class I write to interact directly with the data source and respective mysql database.
So, for example, in a query call, I could override the django method for doing so and prepend an operation to check if the mysql records are "stale" before calling super - if so, request an update to them before continuing.
In an update operation, I could append (post-update to mysql table), an operation to request that the external class update the external source.
This is a kind of roundabout way of doing it, but it does allow me to keep the app itself all within the django framework and if, in the future, modules are well implemented that provide a direct back-end interface to these sources, I can swap out the workaround with the direct interface easy enough.
Thoughts? Criticisms? | 1 | 4 | 0 | I have a tool I use at work, written in python, that I want to port into the Django framework to make writing a web-based management interface more seamless. I've been through the django tutorials and have a pretty solid understanding of how to write a basic django app with your own database (or databases).
The dilemma I've run into with this particular project is that I am referencing multiple data sources that:
May or may not actually be SQL databases, and some do not have any implementation as a django back-end (LDAP and Google Admin SDK for example).
Are third party data sources for which the overall "model" may change without notice, I have no control over this... Though the portions of their 'model' that I will be accessing will likely never change.
So my question is: Should I even be thinking about these external data sources as a django 'model'? Or am I better off just writing some separate interface classes for dealing with those data sources?
I can see the possibility of writing in a new 'db engine' to handle communications with these data sources so from the actual app implementation I can call all the usual methods like I am querying any database. Ideally, the core of the app I am writing needs to not care about the implementation details of each datasource that it connects to - I want to make it as pluggable as possible so implementation of new datasource types in the future doesn't involve much if any modification to the core code.
I want to know if that is the 'accepted' way of doing it though - or if, for custom situations like this, you would work around using the django back-end and just implement your own custom solution for querying information out of those data sources.
I hope this question is clear enough... If not, ask me for whatever specifics you need. Thanks! | Django Back-End Design Advice | 0 | 0 | 0 | 239 |
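A rough sketch of the workaround described in the answer above: a normal Django model backed by MySQL whose save() also pushes the change to the external source. LdapDirectory is a hypothetical wrapper class standing in for the real data-source interface:

```python
from django.db import models


class LdapDirectory(object):
    """Hypothetical wrapper around the external data source (e.g. LDAP)."""

    def update_user(self, username, email):
        pass  # push the change to the external system here


class ExternalUser(models.Model):
    username = models.CharField(max_length=100)
    email = models.EmailField()

    def save(self, *args, **kwargs):
        super(ExternalUser, self).save(*args, **kwargs)  # update local cache table
        LdapDirectory().update_user(self.username, self.email)  # then the source
```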
29,758,554 | 2015-04-20T21:15:00.000 | 1 | 0 | 0 | 0 | python,web-scraping,scrapy | 30,086,257 | 4 | true | 1 | 0 | It appears that the primary problem was not having cookies enabled. Having enabled cookies, I'm having more success now. Thanks. | 2 | 1 | 0 | I'm running Scrapy 0.24.4, and have encountered quite a few sites that shut down the crawl very quickly, typically within 5 requests. The sites return 403 or 503 for every request, and Scrapy gives up. I'm running through a pool of 100 proxies, with the RotateUserAgentMiddleware enabled.
Does anybody know how a site could identify Scrapy that quickly, even with the proxies and user agents changing? Scrapy doesn't add anything to the request headers that gives it away, does it? | Scrapy crawl blocked with 403/503 | 1.2 | 0 | 1 | 3,256 |
29,758,554 | 2015-04-20T21:15:00.000 | 0 | 0 | 0 | 0 | python,web-scraping,scrapy | 72,128,238 | 4 | false | 1 | 0 | I simply set AutoThrottle_ENABLED to True and my script was able to run. | 2 | 1 | 0 | I'm running Scrapy 0.24.4, and have encountered quite a few sites that shut down the crawl very quickly, typically within 5 requests. The sites return 403 or 503 for every request, and Scrapy gives up. I'm running through a pool of 100 proxies, with the RotateUserAgentMiddleware enabled.
Does anybody know how a site could identify Scrapy that quickly, even with the proxies and user agents changing? Scrapy doesn't add anything to the request headers that gives it away, does it? | Scrapy crawl blocked with 403/503 | 0 | 0 | 1 | 3,256 |
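A settings.py fragment combining the two answers above; values are illustrative:

```python
# Cookies are on by default in Scrapy, so mainly make sure nothing disabled them.
COOKIES_ENABLED = True

# Let Scrapy back off automatically instead of hammering the site.
AUTOTHROTTLE_ENABLED = True
DOWNLOAD_DELAY = 1.0  # illustrative baseline delay between requests
```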
29,759,754 | 2015-04-20T22:44:00.000 | 1 | 0 | 0 | 0 | python,tkinter,label | 29,759,880 | 3 | false | 0 | 1 | You can't. At least, not automatically. You can use a separate label just for the "$", or manually format the string for the label. | 1 | 0 | 0 | I have a Python\Tkinter Label formatting question.
I have a simple main window with a Label that has its textvariable bound to a DoubleVar()
I want to display the value in the DoubleVar() as a US dollar amount,
as in "$ 123.45"
How can I have the Label display the value the way I want?
I know I could change the textvariable to a StringVar() but then I have lost the precision of the DoubleVar(). I need it for other calculations in the app.
I have also tried sub-classing the Label class but couldn't figure out what method to override when the textvariable value is accessed.
How do I tell a Label how I want a number formatted? | Python tkinter label formatting | 0.066568 | 0 | 0 | 3,745 |
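A minimal sketch of the "format it manually" idea from the answer: keep the DoubleVar for calculations and mirror it into a StringVar that the Label actually displays (Python 2, as in the question):

```python
import Tkinter as tk

root = tk.Tk()
amount = tk.DoubleVar(value=123.45)  # keeps full precision for calculations
display = tk.StringVar()

def refresh(*args):
    display.set("$ {:.2f}".format(amount.get()))  # dollar formatting

amount.trace("w", refresh)  # re-format whenever the DoubleVar changes
refresh()

tk.Label(root, textvariable=display).pack()
root.mainloop()
```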
29,760,119 | 2015-04-20T23:18:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,nltk | 29,762,672 | 1 | false | 0 | 0 | It seems NLTK has a tabulate() method, which gives you the numeric data. From there on you could use pylab to generate the hist() function (or bar() for a bar plot). | 1 | 1 | 1 | I am using NLTK and FreqDist().plot(). But out of curiosity, is there a way to transform the line graph into a histogram? And how can I put labels on it in both cases?
I've searched the documentation, but sadly it doesn't cover this in detail.
Thanks in advance | FreqDist().plot() as a histogram | 0.197375 | 0 | 0 | 1,774 |
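A sketch of the tabulate/bar idea: pull the counts out of the FreqDist yourself and draw a labelled bar chart (most_common() is the NLTK 3 API; the sample text is illustrative):

```python
import matplotlib.pyplot as plt
from nltk import FreqDist

fd = FreqDist("the cat sat on the mat the end".split())
items = fd.most_common(10)            # [(word, count), ...]
words = [w for w, _ in items]
counts = [c for _, c in items]

plt.bar(range(len(words)), counts)
plt.xticks(range(len(words)), words)  # word labels on the x axis
plt.xlabel("word")
plt.ylabel("frequency")
plt.show()
```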
29,761,006 | 2015-04-21T00:54:00.000 | 0 | 0 | 1 | 1 | python-2.7,easy-install | 29,761,891 | 2 | true | 0 | 0 | I figured it out: I needed to cd c:\python27\scripts, then use the pip install tmx command. Nothing I read anywhere suggested I had to run cmd from the directory that pip was in. | 2 | 0 | 0 | I installed easy_install; it is in my scripts folder. I set my path variable. When I type python in cmd it works, but no matter what I try, if I type easy_install it says it is not recognized. I am trying to install pip and then pytmx. Is there an easier way to install pytmx? Or can someone please walk me through this so I can get it working.
new variable PY_HOME value C:\Python27
path variable %PY_HOME%;%PY_HOME%\Lib;%PY_HOME%\DLLs;%PY_HOME%\Lib\lib-tk;C:\Python27\scripts
python version 2.7.8
windows 7 professional
Update: uninstalled all versions of Python, reinstalled version 2.7.9.
Now pip is not a recognized command; python is still recognized and gives me a version number. I still cannot install pytmx. | Easy install is not recognized | 1.2 | 0 | 0 | 913 |
29,761,006 | 2015-04-21T00:54:00.000 | 0 | 0 | 1 | 1 | python-2.7,easy-install | 29,761,326 | 2 | false | 0 | 0 | If you install python 2.7.9 pip is included.
Then just pip install pytmx | 2 | 0 | 0 | I installed easy install it is in my scripts folder. I set my path variable. When I type python in cmd it works, but no matter what I try if I type easy_install it says it is not recognized. I am trying to install pip and then pytmx. is there an easier way to install pytmx? or can someone please walk me through this so I can get this working.
new variable PY_HOME value C:\Python27
path variable %PY_HOME%;%PY_HOME%\Lib;%PY_HOME%\DLLs;%PY_HOME%\Lib\lib-tk;C:\Python27\scripts
python version 2.7.8
windows 7 professional
Update: uninstalled all versions of Python, reinstalled version 2.7.9.
Now pip is not a recognized command; python is still recognized and gives me a version number. I still cannot install pytmx. | Easy install is not recognized | 0 | 0 | 0 | 913 |
29,761,728 | 2015-04-21T02:17:00.000 | 3 | 0 | 1 | 1 | python,spyder,osx-yosemite | 29,841,327 | 1 | false | 0 | 0 | (Spyder dev here) There is no simple way to do what you ask for, at least for the Python version that comes with Spyder.
I imagine you downloaded and installed our DMG package. That package comes with its own Python version as part of the application (along with several important scientific packages), so it can't be removed because that would mean removing Spyder itself :-)
I don't know how you installed IDL(E?), so I can't advise you on how to remove it. | 1 | 4 | 0 | I have installed the Python IDE Spyder. For me it's a great development environment.
Somehow in this process I have managed to install three versions of Python on my system. These can be located as follows:
Version 2.7.6 from the OS X Terminal;
Version 2.7.8 from the Spyder Console; and
Version 2.7.9rc1 from an IDL window.
The problem I have is (I think) that the multiple versions are preventing Spyder from working correctly.
So how do I confirm that 2.7.6 is the latest version supported by Apple and is there a simple way ('silver bullet') to remove other versions from my system.
I hope this is the correct forum for this question. If not I would appreciate suggestions where I could go for help.
I want to keep my life simple and to develop python software in the Spyder IDE. I am not an OS X guru and I really don't want to get into a heavy duty command line action. To that end I just want to delete/uninstall the 'unofficial versions' of Python. Surely there must be an easy way to do this - perhaps 'pip uninstall Python-2.7.9rc1' or some such. The problem is that I am hesitant to try this due to the fear that it will crash my system.
Help on this would be greatly appreciated. | How to uninstall and/or manage multiple versions of python in OS X 10.10.3 | 0.53705 | 0 | 0 | 9,789 |
29,764,235 | 2015-04-21T06:11:00.000 | 5 | 0 | 0 | 0 | python-2.7,wxpython,buttonclick,simulate | 29,778,393 | 1 | false | 0 | 1 | It depends on what you really need it to do. Do you just want to cause the same code to be executed as would be done when the user clicks the button? Or do you need to have a real system-level mouse event occur on the native UI button as if there was a real user doing it?
For the former you just need to cause the event handler function to be called from wherever you need it to be done. You can create a matching event object and use wx.PostEvent as suggested, or simply call the event handler method directly. Or for a little bit better programming style, refactor and move the guts of the event handler to a separate function and call it from the event handler and also wherever else you need to simulate the effects of clicking the button.
For having real system-level events sent to the native widget there is the wx.UIActionSimulator class, which can be used to simulate the mouse or keyboard at a lower level than posting wx events, so the UI behaves exactly the same as if there was a real user doing it. I would guess that 95% of the time this is overkill for cases like what you describe, and also more complex than the above, but it's there if you really need it. | 1 | 2 | 0 | I'm working on a Python project currently. It is an application which does not interact with the internet. The GUI is being done with wxPython. Is there code available in Python which can simulate a button click? | simulating button click in python | 0.761594 | 0 | 0 | 1,909 |
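A sketch of the first approach from the answer (posting a button event so the bound handler runs); it assumes `button` is an existing wx.Button with a handler bound via Bind(wx.EVT_BUTTON, ...):

```python
import wx

def simulate_click(button):
    """Post a click event so the button's handler runs via the event loop."""
    event = wx.CommandEvent(wx.wxEVT_COMMAND_BUTTON_CLICKED, button.GetId())
    event.SetEventObject(button)
    wx.PostEvent(button.GetEventHandler(), event)
```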
29,764,424 | 2015-04-21T06:23:00.000 | 0 | 0 | 0 | 0 | python,apache-spark,apache-spark-sql | 29,786,975 | 1 | false | 0 | 0 | I think the community is going to patch this. But for now, we can use DataFrame.rdd in ALS.train (or any other place where only RDDs are allowed). | 1 | 1 | 1 | I am trying to use Spark SQL and MLlib together to create a recommendation program (extending the movie recommendation program) in Python. It was working fine with 1.2.0.
However, in 1.3.1, Spark by default creates DataFrame objects instead of SchemaRDD objects as the output of a SQL query. Hence, the mllib ALS.train method is failing with an assertion error:
assert(ratings,RDD)
(of course ratings is not an RDD anymore :) )
Anyone facing this issue? Any workaround (I am thinking to use a map just to convert DF to RDD, but thats stupid :) ) | spark 1.3.1: Dataframe breaking MLib API | 0 | 0 | 0 | 208 |
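The workaround spelled out as a sketch (Spark 1.3.x): convert the DataFrame back to an RDD of Rating objects before calling ALS.train. `ratings_df` and its column names are assumptions about the DataFrame your SQL query returns:

```python
from pyspark.mllib.recommendation import ALS, Rating

# ratings_df: the DataFrame returned by your Spark SQL query (assumed)
ratings_rdd = ratings_df.rdd.map(
    lambda row: Rating(int(row.user), int(row.item), float(row.rating)))

model = ALS.train(ratings_rdd, rank=10, iterations=10)
```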
29,764,655 | 2015-04-21T06:36:00.000 | -1 | 0 | 1 | 0 | python,flask | 63,067,551 | 3 | false | 1 | 0 | You can try with wheel packages.
I do not have pip, virtualenv nor easy_install.
The context of this question is that I am on a tightly controlled AIX computer. I cannot install any compiled code without going through several layers of management. However, I can install python modules.
Python 2.7 is installed.
I have some existing python code that generates a report.
I want to make that report available on a web service using Flask.
I am using bottle, but I am going to want to use https, and support for https under Flask seems much more straight forward.
I would like to put the flask library (and its dependencies) into my project much like bottle is placed into the project.
What I tried: I downloaded the flask tarball and looked at it. It had quite a bit of stuff that I did not know what to do with. For instance, there was a makefile. | Install Python Flask without using pip | -0.066568 | 0 | 0 | 16,641 |
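One hedged sketch of the vendoring idea: unpack the pure-Python packages Flask needs (flask, werkzeug, jinja2, itsdangerous, markupsafe in that era) into a vendor/ folder next to your code and put it on sys.path before importing:

```python
import os
import sys

# Resolve imports from ./vendor first, so no system-wide install is needed.
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "vendor"))

from flask import Flask  # now loaded from the vendored copy

app = Flask(__name__)
```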
29,765,250 | 2015-04-21T07:08:00.000 | 0 | 0 | 1 | 0 | python,eclipse,numpy,pydev | 59,991,219 | 5 | false | 0 | 0 | Pandas can be installed after installing Python on your PC.
To install pandas, go to the command prompt and type "pip install pandas"; this command collects the packages related to pandas. Afterwards, if it asks to upgrade pip, or if pip is not recognized by the command prompt, use this command:
python -m pip install --upgrade pip. | 1 | 1 | 0 | I am trying to install a package called 'numpy'.
I have Python set up in Eclipse Luna with the help of PyDev.
How do I install numpy in PyDev?
I tried putting numpy in the site-packages folder; that doesn't seem to work. | Install Numpy in pydev(eclipse) | 0 | 0 | 0 | 16,056 |
29,765,477 | 2015-04-21T07:20:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,amazon-web-services,boto | 29,777,440 | 2 | false | 0 | 0 | This looks quite custom, so a custom script would be able to solve it, using CLI commands like describe-images.
You should also store the creation time of the image as a tag.
I assume you don't have that many images (thousands), so you can easily build an array of the different images, count them and select the latest ones in O(n) time. Then you need to call the deregister-image command.
The script can run periodically.
The best would be to run it as a Lambda function, but Lambda may not be ready for this because of the lack of triggering (no cron-style trigger at the moment). You may, however, be able to trigger the function with the creation of a new AMI. | 1 | 1 | 0 | I want to delete AMI images on the basis of their count. That is, I have defined the instance ids in the name tag of each image, so that I can know which instance an image belongs to. I want to delete the images if the image count goes over 30 for a particular instance, and the images deleted must be the older ones, not the newest ones. | Deleting the AMI Image | 0 | 0 | 0 | 1,218 |
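A hypothetical boto 2 sketch of the periodic cleanup script the answer describes; the region, the Name/CreatedAt tag conventions and the keep-30 limit are assumptions taken from the question and the answer:

```python
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")
images = conn.get_all_images(owners=["self"])

by_instance = {}
for image in images:
    instance_id = image.tags.get("Name")  # instance id kept in the Name tag
    by_instance.setdefault(instance_id, []).append(image)

for instance_id, imgs in by_instance.items():
    # Assumes a CreatedAt tag was stored at image-creation time, as suggested.
    imgs.sort(key=lambda i: i.tags.get("CreatedAt", ""), reverse=True)
    for old in imgs[30:]:                 # everything beyond the newest 30
        conn.deregister_image(old.id)
```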
29,766,605 | 2015-04-21T08:18:00.000 | 7 | 0 | 1 | 0 | python,debugging,pdb | 29,789,830 | 3 | true | 0 | 0 | I managed to fix the problem. Apparently there's another module in pip repository called pdb for shared password management. each time attempting pip install pdb I did not know my machine was installing the wrong module.
pdb module(python debugger) is shipped with product when you install it on your system or in the case of Linux Ubuntu, it is included in the distribution which are located at /usr/lib/python2.7 as opposed to the third party modules that get installed under /usr/local/lib/python2.7.
For some weird, unknown reason(I guess installing ipdb caused that), I did not have the pdb.py under my pre-shipped python modules. e.g, /usr/lib/python2.7.
what fixed my problem was downloading the pdb.py module from the python documentation website and located that file within the mentioned folder.
Hope this could help. | 1 | 3 | 0 | previously, I had pdb installed system-wide using pip install, a little after I found out about ipdb. successfully installed it with pip.
Didn't quite work well, made me decided to go back to former pdb.
Now I get this error when using import pdb; pdb.set_trace():
exceptions.AttributeError: 'module' object has no attribute 'set_trace'
Any idea what is going wrong?
EDIT:
this is the error after re-installing IPython and PDB back again:
File "/usr/local/lib/python2.7/dist-packages/IPython/core/debugger.py", line 59, in
from pdb import Pdb as OldPdb
ImportError: cannot import name Pdb | How did I mess up python pdb | 1.2 | 0 | 0 | 6,376 |
29,768,499 | 2015-04-21T09:43:00.000 | 0 | 1 | 0 | 1 | python,cron,cron-task | 29,769,403 | 2 | false | 0 | 0 | You can use a lock file to indicate that the first script is still running. | 1 | 0 | 0 | I start a script and I want to start second one immediately after the first one is completed successfully?
The problem here is that this script can take 10 minutes or 10 hours depending on the case, and I do not want to fix the start time of the second script.
Also, I am using Python to develop the script, so if you can provide me a solution with Python controlling the cron, that would be OK.
Thank you, | How can I check whether the script started with cron job is completed? | 0 | 0 | 0 | 45 |
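A minimal lock-file sketch of the answer's suggestion: the first script holds the lock while it runs; the second script's cron entry can fire often and simply exits until the lock disappears. Paths and function names are placeholders:

```python
import os
import sys

LOCK = "/tmp/first_script.lock"

# --- in the first script ---
def run_first_job(work):
    open(LOCK, "w").close()   # take the lock
    try:
        work()
    finally:
        os.remove(LOCK)       # release it even if the job fails

# --- in the second script (cron can run this every few minutes) ---
def maybe_run_second_job(work):
    if os.path.exists(LOCK):
        sys.exit(0)           # first script still running; retry on next tick
    work()
```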
29,769,633 | 2015-04-21T10:29:00.000 | 1 | 0 | 1 | 0 | python,multithreading,function,global | 29,769,721 | 2 | false | 0 | 0 | No. There will only be a single instance of the 'global' variables (presumably defined at the top level of the module).
A module is only ever imported once, importing it a second time simply adds it to the appropriate namespace. | 2 | 1 | 0 | I have a library that uses quite a few global variables, that I'd like to use in a multi-threaded application, however what I'd like to know is if I import the library inside a function, will the library's global variables etc. be separate copies, so that they don't corrupt each other? | Python Import a library inside a function for multi-threading | 0.099668 | 0 | 0 | 55 |
29,769,633 | 2015-04-21T10:29:00.000 | 1 | 0 | 1 | 0 | python,multithreading,function,global | 29,769,731 | 2 | false | 0 | 0 | No. Python modules have module scope here: the global variables you have defined in a module are shared, and if they are mutated by other threads without locking the behaviour will be unpredictable.
I would refactor your code into a set of objects, removing the use of globals, and possibly also implement locking if you intend to share the same objects. | 2 | 1 | 0 | I have a library that uses quite a few global variables, that I'd like to use in a multi-threaded application, however what I'd like to know is if I import the library inside a function, will the library's global variables etc. be separate copies, so that they don't corrupt each other? | Python Import a library inside a function for multi-threading | 0.099668 | 0 | 0 | 55 |
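A sketch of the refactoring the second answer recommends: wrap the module-level state in an object guarded by a lock instead of bare globals:

```python
import threading


class SharedState(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._counter = 0        # stand-in for the library's global variables

    def increment(self):
        with self._lock:         # only one thread mutates the state at a time
            self._counter += 1
            return self._counter
```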
29,772,530 | 2015-04-21T12:41:00.000 | 0 | 0 | 0 | 1 | python,ubuntu,selenium | 29,772,531 | 1 | true | 0 | 0 | Solved upgrading six
pip install --upgrade six | 1 | 0 | 0 | I'm trying to install selenium library for python on my Ubuntu machine using pip installer.
I receive the following error:
pip install selenium
Exception: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 304, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1230, in prepare_files
    req_to_install.run_egg_info()
  File "/usr/lib/python2.7/dist-packages/pip/req.py", line 293, in run_egg_info
    logger.notify('Running setup.py (path:%s) egg_info for package %s' % (self.setup_py, self.name))
  File "/usr/lib/python2.7/dist-packages/pip/req.py", line 285, in setup_py
    if six.PY2 and isinstance(setup_py, six.text_type):
AttributeError: 'module' object has no attribute 'PY2'
I am currently using Python 2.7.9
python --version
Python 2.7.9 | Can't install python selenium on Ubuntu 14.10/15.04(pre-release) | 1.2 | 0 | 1 | 361 |
29,773,989 | 2015-04-21T13:43:00.000 | 0 | 0 | 1 | 0 | python,list | 29,775,165 | 5 | false | 0 | 0 | "At least two elements" means that you can say the following for your list L without worrying about an IndexError being raised:
L[0] # First element is guaranteed to exist
L[1] # Second element is guaranteed to exist
L[-1] # Last element is guaranteed to exist
L[-2] # Next-to-last element is guaranteed to exist
I think "second and last" refer to L[1] and L[-1] (unless it's a poorly worded ellipsis and they mean "second-to-last" and "last"—L[-2] and L[-1]). | 1 | 0 | 0 | Given a list L with at least 2 elements, write a Python code fragment that swaps the second and the last
elements of the list:
My question: when they say a list with at least 2 elements, does that mean l=[1,3]? And, if that's what it means, could someone explain what they mean by the second and the last?
For example, even though it says at least 2 elements, what if I just want to use 2 elements? Would I just swap element 1 with element 2? | Python code fragment - List | 0 | 0 | 0 | 2,054 |
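The fragment the exercise asks for, following the indexing in the answer: tuple assignment swaps L[1] and L[-1] in place (with exactly two elements they are the same item, so nothing changes):

```python
L = [10, 20, 30, 40]
L[1], L[-1] = L[-1], L[1]
print(L)  # [10, 40, 30, 20]
```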
29,775,174 | 2015-04-21T14:30:00.000 | 0 | 0 | 1 | 1 | python,windows,command-line | 29,775,325 | 2 | false | 0 | 0 | Change the shortcut target to "cmd /k filename" (i.e. prefix the target with "cmd /k", so the command window runs the file and stays open) | 1 | 1 | 0 | I made a shortcut of a python script on my desktop.
Now I want to open this with the command line and parameters.
If I open the properties of the shortcut, I can add the parameters, but I can't force it to be opened with the command line.
The default program for these files is notepad++, but if I change it to "command line" and double click it, then just the command line opens with the respective path given in the shortcut, but it does not execute the file.
What do I need to do? | How to open a shortcut with windows command line and parameters | 0 | 0 | 0 | 4,374 |
29,775,956 | 2015-04-21T15:00:00.000 | 2 | 0 | 0 | 0 | javascript,python,django,web-applications | 29,777,880 | 2 | true | 1 | 0 | You should indeed make two views, one to only return the page showing the loading UI and one to perform the long task.
The second view will be called using an AJAX request made from the "loading" page. The response from the AJAX request will notify your "loading" page that it is time to move on.
You need to make sure the AJAX request's duration won't exceed the timeout of your server (with ~10 seconds, you should be fine). | 1 | 3 | 0 | I have a main view function for my application. After logging in successfully, this main view method is called and is expected to render the template.
But I have to perform some calculations in this view method [I am checking certain conditions about the user by making facebook graph api request.]
Thus it takes 2~4 seconds to load.
How do I show this loading scene, since the template is rendered by the return statement and thus appears only when the processing is complete?
Should I make 2 views, one for showing the loading page and the other one for the calculation, and keep making AJAX requests to the other view method to check whether the process is complete or not? | How do I make a loading page while I am processing something in the backend with Django? | 1.2 | 0 | 0 | 917 |
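A sketch of the two-view pattern from the answer (JsonResponse needs Django >= 1.7); the slow-check helper is a hypothetical stand-in for the Graph API calls:

```python
from django.http import JsonResponse
from django.shortcuts import render


def run_slow_facebook_checks(user):
    """Placeholder for the 2-4 second Graph API work from the question."""
    return {}


def main(request):
    return render(request, "loading.html")  # returns immediately


def check_user(request):
    # Called via AJAX from loading.html; the page moves on when done is true.
    result = run_slow_facebook_checks(request.user)
    return JsonResponse({"done": True, "result": result})
```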
29,778,405 | 2015-04-21T16:46:00.000 | 1 | 0 | 0 | 0 | python,linux,apache,file-permissions | 29,778,427 | 2 | false | 0 | 0 | Change the file permissions to make it executable: sudo chmod +x file.py | 2 | 0 | 0 | How do I execute a file after it gets downloaded on the client side? The file is a Python script, and the user doesn't know how to change the permissions of a file. How can I solve this issue? | File permission gets changed after file gets downloaded on client machine | 0.099668 | 0 | 1 | 48 |
29,778,405 | 2015-04-21T16:46:00.000 | 0 | 0 | 0 | 0 | python,linux,apache,file-permissions | 29,778,455 | 2 | false | 0 | 0 | Maybe you can try teaching them how to use the chmod +x command? Or, actually, it would be even simpler to change it using the GUI: right click -> Properties -> Permissions -> pick what is needed | 2 | 0 | 0 | How do I execute a file after it gets downloaded on the client side? The file is a Python script, and the user doesn't know how to change the permissions of a file. How can I solve this issue? | File permission gets changed after file gets downloaded on client machine | 0 | 0 | 1 | 48 |
29,785,134 | 2015-04-21T23:43:00.000 | 0 | 0 | 1 | 0 | python,list,pandas | 29,785,304 | 3 | false | 0 | 0 | It appears that printing the list wouldn't work, and you haven't provided us with any code to work with, or an example print of what your date time looks like. My best suggestion is to use the sort function.
dataframe.sort()
If I wanted a specific date to print, I would have to print it by index number once you have it sorted. Without knowing your computer's ability to handle print statements of this size, I suggest copying this sorted output to a text file to ensure that you are getting the proper response. | 1 | 0 | 1 | I have a list of unique dates in chronological order.
I have a dataframe with dates in it. I want to use the list of dates to get the NEXT date in the list for each date in the dataframe (find the dataframe's date in the list and return the date to the right of it, i.e. the next chronological date).
Any ideas? | get next value in list Pandas | 0 | 0 | 0 | 366 |
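One possible sketch: since the date list is sorted, bisect finds each dataframe date's position and the "next" date is the entry to its right. The `date` column name is an assumption:

```python
import bisect
import pandas as pd

dates = pd.to_datetime(["2015-01-01", "2015-02-01", "2015-03-01"]).tolist()
df = pd.DataFrame({"date": pd.to_datetime(["2015-01-01", "2015-02-01"])})

def next_date(d):
    i = bisect.bisect_right(dates, d)  # index of first entry strictly after d
    return dates[i] if i < len(dates) else None

df["next_date"] = df["date"].apply(next_date)
print(df)
```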
29,785,534 | 2015-04-22T00:24:00.000 | 1 | 0 | 1 | 1 | python,bash,debugging,pycharm,pdb | 37,883,146 | 1 | false | 0 | 0 | You can attach the debugger to a python process launched from terminal:
Use Menu Tools --> Attach to process then select python process to debug.
If you want to debug a file installed in site-packages you may need to open the file from its original location.
You can pause the program manually from the debugger and inspect the suspended thread to find your source file. | 1 | 4 | 0 | I know how to set up run configurations to pass parameters to a specific python script. There are several entry points, and I don't want a run configuration for each one, do I? What I want to do instead is launch a python script from a command line shell script and be able to attach the PyCharm debugger to the python script that is executed and have it stop at break points. I've tried to use a pre-launch condition of a utility python script that will sleep for 10 seconds so I can attempt to "attach to process" of the python script. That didn't work. I tried to import pdb and settrace to see if that would stop it for attaching to the process, but that looks to be command-line-debugging specific only. Any clues would be appreciated.
Thanks! | How to attach to PyCharm debugger when executing python script from bash? | 0.197375 | 0 | 0 | 1,147 |
29,785,815 | 2015-04-22T00:52:00.000 | 3 | 0 | 1 | 0 | python-2.7,floating-point,floating-accuracy,floating-point-precision,floating-point-conversion | 29,785,841 | 3 | false | 0 | 0 | Sum them and then divide by 100. A good rule of thumb is that you can usually minimize FP error by performing fewer operations[1]. If you sum them and then divide, you have performed 100 floating-point operations. If you divide and then sum, you have performed 199 floating-point operations.
[1] there are exceptions where the rounding error of multiple computations exactly cancels out, but this rarely happens by chance -- if this is happening it is usually because an algorithm was designed to work that way by someone who knows what they are doing. | 1 | 1 | 0 | Let's say that I have a large number of floats, e.g. 100, and I need to calculate their average.
To get the most accurate result, should I sum all the numbers and then divide by 100?
Or should I divide each number by 100, and then sum all of them?
(If it matters, I'm coding in Python 2.) | Maintain precision when averaging floats | 0.197375 | 0 | 0 | 485 |
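Following the accepted advice (one division after the sum), math.fsum adds the floats with extended precision and squeezes out a little more accuracy than the built-in sum:

```python
import math

values = [0.1] * 100                       # illustrative data
average = math.fsum(values) / len(values)  # sum first, divide once
print(average)                             # 0.1
```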
29,786,099 | 2015-04-22T01:23:00.000 | 2 | 0 | 1 | 0 | python-3.x,random | 29,786,160 | 3 | true | 0 | 0 | This will do the trick: random.random() - .5 | 1 | 2 | 0 | How to generate a random number in the interval [-0.5, 0.5] using Python random() or randrange() functions? | Generate a random number from an interval in Python | 1.2 | 0 | 0 | 12,525 |
29,786,322 | 2015-04-22T01:50:00.000 | 1 | 0 | 0 | 0 | python,sqlalchemy | 29,786,610 | 1 | false | 0 | 0 | I think a SQLAlchemy RowProxy uses _row, a tuple, to store the values. It doesn't have a __dict__, so there is no storage overhead of a __dict__ per row. Its _parent object has fields which map the column names to index positions for tuple lookup. A pretty common thing to do if you are trying to cut down on SQL fetch result sizes - the column list is always the same for each row of the same select, so you rely on a common parent to keep track of which index of the tuple holds which column rather than having your own per-row __dict__.
An additional advantage is that, at the db lib connect level, SQL cursors return (always?) their values in tuples, so you have little processing overhead. But a straight SQL fetch is just that, a cursor description & a bunch of disconnected rows with tuples in them - SQLAlchemy bridges that and allows you to use column names.
Now, as to how the unpickling process goes, you'd have to look at the actual implementation. | 1 | 0 | 0 | The same attributes stored in __dict__ are needed to restore the object, right? | Why is a pickled SQLAlchemy model object smaller than its pickled `__dict__`? | 0.197375 | 1 | 0 | 100 |
29,789,325 | 2015-04-22T06:23:00.000 | 0 | 0 | 0 | 0 | python,django,photologue | 31,057,437 | 2 | false | 1 | 0 | In the admin panel, you also need to:
Create a gallery.
Choose which photos are a part of which galleries. | 1 | 0 | 0 | I installed photologue correctly in my project (blog) and I can add images in the admin panel, but how do I display them on my main page? | How to add a gallery in photologue? | 0 | 0 | 0 | 244 |
29,791,175 | 2015-04-22T07:56:00.000 | 1 | 0 | 0 | 0 | python,visual-studio,flask | 29,791,283 | 1 | true | 1 | 0 | In Flask it is usually enough to import the files the routed functions are defined in. You can add your API methods in another file and import it. Make sure you don't have any circular imports; they are a source of problems in Flask quite often.
If things are getting more complex, it's best to use Blueprints to bundle routes together. | 1 | 0 | 0 | I'm using the Python Tools for Visual Studio (PTVS 2.2 Beta) and creating a new Flask Web Project with Visual Studio 2013. (btw I'm new to both Flask and Python...)
The views.py contains my routes. How does VS know to load this file? I don't see it in any properties or other files. Is it a default to always have the routes in views.py?
My goal is to use Flask to build a RESTful API and I'm tempted to just replace everything in the views.py with my API routes. Or, can I add another .py file for the API routes? | How does Flask know to load the views.py in the Visual Studio default Flask Solution? | 1.2 | 0 | 0 | 251 |
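A sketch of the Blueprint suggestion from the answer: keep the API routes in their own module and register them on the app; names and the URL prefix are illustrative:

```python
from flask import Blueprint, jsonify

api = Blueprint("api", __name__, url_prefix="/api")

@api.route("/reports")
def list_reports():
    return jsonify(reports=[])  # the RESTful endpoints live here

# In the app package, after `app = Flask(__name__)`:
# app.register_blueprint(api)
```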
29,792,439 | 2015-04-22T08:56:00.000 | 1 | 0 | 0 | 0 | python,django,unit-testing | 47,938,881 | 6 | false | 1 | 0 | Another possibility is that you've disconnected signals in the setUp of a test class and did not re-connect in the tearDown. This explained my issue. | 1 | 31 | 0 | I have a test in Django 1.5 that passes in these conditions:
when run by itself in isolation
when the full TestCase is run
when all of my app's tests are run
But it fails when the full test suite is run with python manage.py test. Why might this be happening?
The aberrant test uses django.test.Client to POST some data to an endpoint, and then the test checks that an object was successfully updated. Could some other app be modifying the test client or the data itself?
I have tried some print debugging and I see all of the data being sent and received as expected. The specific failure is a does-not-exist exception that is raised when I try to fetch the to-be-updated object from the db. Strangely, in the exception handler itself, I can query for all objects of that type and see that the target object does in fact exist.
Edit:
My issue was resolved when I found that I was querying for the target object by id and User and not id and UserProfile, but it's still confusing to me that this would work in some cases but fail in others.
I also found that the test would fail with python manage.py test auth <myapp> | why would a django test fail only when the full test suite is run? | 0.033321 | 0 | 0 | 8,341 |
29,794,354 | 2015-04-22T10:13:00.000 | 1 | 0 | 0 | 1 | python,esky | 30,921,894 | 1 | false | 0 | 0 | Esky uses a bootstrapping mechanism that keeps the app safe in the face of failed or partial updates.
The top-level executable is the one you should be running; it does all the business of managing which version to run. Once it has decided what to run, it will open up the exe for the correct version. | 1 | 1 | 0 | esky 0.9.8 creates 2 executables of my application.
There is an inner executable that weighs less than the outer executable.
I would like to know if esky is supposed to create 2 executables and if there are any drawbacks or advantages in creating 2 executables.
I would also like to know which executable I should be calling when I want to run my application. | Why does esky create 2 executables? | 0.197375 | 0 | 0 | 54 |
29,794,957 | 2015-04-22T10:39:00.000 | 0 | 1 | 0 | 0 | python,encode,gsutil | 29,808,376 | 1 | false | 0 | 0 | I assume you're getting this error when trying to upload (copy) your files to Google Cloud Storage using the 'gsutil' tool.
In order to resolve the issue, install a UTF-8 version of your Russian font on your computer. | 1 | 1 | 0 | I need to back up to Google Storage files whose names contain Russian characters. Is there any solution? I get this kind of error:
'ascii' codec can't encode characters in position 203-213: ordinal not in range(128) | gsutil api google storage nearline | 0 | 0 | 0 | 141 |
29,796,483 | 2015-04-22T11:43:00.000 | 1 | 0 | 0 | 0 | javascript,google-apps-script,google-api,google-api-python-client | 29,799,548 | 1 | false | 1 | 0 | Here is a way to do it without triggers/onEdit and without polling the spreadsheet API. Beware, this is hacky, but it has the advantage of not using Apps Script quotas or spreadsheet API quotas to detect changes:
1) In the spreadsheet, enable email notifications whenever the spreadsheet changes.
2) Using Gmail filters, send those notification emails to a label.
3) Using the Gmail API (which has larger quotas), look for that email by finding emails with that label.
Note that this does avoid quota issues with spreadsheets, but I really think it's much better to just use the regular triggers/onEdit/onChange and live with the 1-minute delay. Quotas will not be exhausted if you simply detect the change and call urlfetch.
Thank You :) | Is there a service that can use google api to monitor changes in google spreadsheet? | 0.197375 | 0 | 0 | 736 |
29,797,893 | 2015-04-22T12:40:00.000 | 0 | 0 | 1 | 0 | python,opencv,pycharm | 56,717,998 | 4 | false | 0 | 0 | Do the following steps:
Download and install the OpenCV executable.
Add OpenCV in the system path(%OPENCV_DIR% = /path/of/opencv/directory)
Go to C:\opencv\build\python\2.7\x86 folder and copy cv2.pyd file.
Go to C:\Python27\DLLs directory and paste the cv2.pyd file.
Go to C:\Python27\Lib\site-packages directory and paste the cv2.pyd file.
Go to PyCharm IDE and go to DefaultSettings > PythonInterpreter.
Select the Python which you have installed.
Install the packages numpy, matplotlib and pip in pycharm.
Restart your PyCharm. | 3 | 7 | 1 | I am working on a project that requires OpenCV and I am doing it in PyCharm on a Mac. I have managed to successfully install OpenCV using Homebrew, and I am able to import cv2 when I run Python (version 2.7.6) in Terminal and I get no errors. The issue arises when I try importing it in PyCharm. I get a red underline with:
no module named cv2
I assume that PyCharm is unable to locate my cv2.so file but I have the latest PyCharm version (4.0.6) and none of the forums I've looked at are helpful for this version. How do I get PyCharm to recognise my cv2 file? I went in Project Interpreter but there is no option for importing OpenCV from my own machine. Furthermore in Edit Configurations I defined an environment variable
PYTHONPATH
and set it to
/usr/local/lib/python2.7/site-packages:$PYTHONPATH
but this didn't help either.
Any ideas?
EDIT: I set up a virtualenv to no avail and figured out how to add a path to the current framework on the new PyCharm version and it turns out the path to cv2.so has already been given yet it is still complaining. | Cannot import cv2 in PyCharm | 0 | 0 | 0 | 8,253 |
29,797,893 | 2015-04-22T12:40:00.000 | 0 | 0 | 1 | 0 | python,opencv,pycharm | 44,804,084 | 4 | false | 0 | 0 | Have you selected the right version of Python?
Or rather: when you installed OpenCV with brew, the latter has probably installed a new version of Python that you can find in the Cellar directory. You can check this immediately; from the main window of PyCharm select:
Configure -> Preferences -> Project Interpreter
Click on the Project Interpreter combobox and check whether there is an instance of Python in the Cellar directory; if yes, select it and you will see cv2 in the list below. | 3 | 7 | 1 | I am working on a project that requires OpenCV and I am doing it in PyCharm on a Mac. I have managed to successfully install OpenCV using Homebrew, and I am able to import cv2 when I run Python (version 2.7.6) in Terminal and I get no errors. The issue arises when I try importing it in PyCharm. I get a red underline with:
no module named cv2
I assume that PyCharm is unable to locate my cv2.so file but I have the latest PyCharm version (4.0.6) and none of the forums I've looked at are helpful for this version. How do I get PyCharm to recognise my cv2 file? I went in Project Interpreter but there is no option for importing OpenCV from my own machine. Furthermore in Edit Configurations I defined an environment variable
PYTHONPATH
and set it to
/usr/local/lib/python2.7/site-packages:$PYTHONPATH
but this didn't help either.
Any ideas?
EDIT: I set up a virtualenv to no avail and figured out how to add a path to the current framework on the new PyCharm version and it turns out the path to cv2.so has already been given yet it is still complaining. | Cannot import cv2 in PyCharm | 0 | 0 | 0 | 8,253 |
29,797,893 | 2015-04-22T12:40:00.000 | 0 | 0 | 1 | 0 | python,opencv,pycharm | 39,482,840 | 4 | false | 0 | 0 | I got the same situation under Win7 x64 with PyCharm version 2016.1.1; after a quick glimpse into the stack frame, I think it is a bug!
PyCharm's IPython patches the import action for loading Qt, matplotlib, ..., and in the end sys.path loses its way!
Anyway, there is a workaround: copy Lib/site-packages/cv2.pyd or cv2.so to $PYTHONROOT; problem solved!
no module named cv2
I assume that PyCharm is unable to locate my cv2.so file but I have the latest PyCharm version (4.0.6) and none of the forums I've looked at are helpful for this version. How do I get PyCharm to recognise my cv2 file? I went in Project Interpreter but there is no option for importing OpenCV from my own machine. Furthermore in Edit Configurations I defined an environment variable
PYTHONPATH
and set it to
/usr/local/lib/python2.7/site-packages:$PYTHONPATH
but this didn't help either.
Any ideas?
EDIT: I set up a virtualenv to no avail and figured out how to add a path to the current framework on the new PyCharm version and it turns out the path to cv2.so has already been given yet it is still complaining. | Cannot import cv2 in PyCharm | 0 | 0 | 0 | 8,253 |
29,798,334 | 2015-04-22T12:58:00.000 | 1 | 0 | 1 | 0 | python,django,python-2.7,python-3.4 | 29,906,257 | 1 | true | 0 | 0 | By using --ignore-installed with pip, I was able to overcome these issues. By recreating shortcuts and such it works fine now. Seems that with two version of python you must take extra care in setup and PYTHONPATH or these isssue occur. | 1 | 1 | 0 | I have both Python 2.7 and 3.4 on my Windows machine. I have pip, pip2, and pip3. Pip2 is 2.7 while the others are 3.4 when running pip(,2,3) -V However, pip -V, pip2 -V and pip3 -V All show the same thing about pips location and all but pip2 show that it's for Python3.4. For all three the location they show for pip is C:\Python27\site-packages. Which is fine for pip2, but for pip3, and pip(if posssible, not 100% necessary), they should show the location as C:\Python34 instead. Not sure why this happens. But when I install things with pip3, they end up in C:\Python27\site-packages rather than C:\Python34\site-packages\etc.etc. This is an annoyance, how can I set this up correctly so it works the way I need? | Pip with Python 2.7 and 3.4 on Windows Machine | 1.2 | 0 | 0 | 495 |
29,798,471 | 2015-04-22T13:03:00.000 | 0 | 0 | 0 | 0 | django,python-3.x | 29,798,581 | 2 | false | 1 | 0 | We did this just by change the Model field to not allow null/blank and add default value, and also write a script to add default value to the already existed data whose field is null/blank. | 2 | 0 | 0 | I have a database where I have older entries which contain null/blank values for a specific field. However, from now on, I would like to not allow null/blank values to be added to the database.
Is there a way to do this in Django 1.7, python 3.4. | Django 1.7 - allow null and blank in the database, but do not allow null or blank in admin form | 0 | 0 | 0 | 41 |
29,798,471 | 2015-04-22T13:03:00.000 | 1 | 0 | 0 | 0 | django,python-3.x | 29,798,580 | 2 | true | 1 | 0 | There's really no such thing as blank in the database, or null in the admin. null controls whether the database can contain NULL values (not even empty values, just NULL). blank controls whether the admin and modelforms accept empty values.
It seems to me that what you want is simply null=True, blank=False. | 2 | 0 | 0 | I have a database where I have older entries which contain null/blank values for a specific field. However, from now on, I would like to not allow null/blank values to be added to the database.
Is there a way to do this in Django 1.7, Python 3.4? | Django 1.7 - allow null and blank in the database, but do not allow null or blank in admin form | 1.2 | 0 | 0 | 41 |
29,798,841 | 2015-04-22T13:18:00.000 | 1 | 0 | 0 | 0 | python,django,testing,migration | 29,814,121 | 2 | false | 1 | 0 | Put your your apps before Django apps in INSTALLED_APP in settings.py file | 1 | 7 | 0 | I am using the custom user model which inherits from the AbstractBaseUser class. When I try to migrate after makemigrations command
django.db.utils.ProgrammingError: relation "custom_user_users" does not exist
This is happening since Django is trying to migrate other apps first which depend upon the custom user model. I even tried changing the order of the app which contains the custom user model in INSTALLED_APPS, but no luck.
I know I can forcefully migrate the custom_user model first and then let Django migrate all other models. This solves the problem, but when running tests the migrations run in the order Django decides.
How can I alter the order in which apps are migrated during testing? Is there any other way to solve this dependency problem?
I am using Django 1.8 | Change the order in which Django migrate app during testing | 0.099668 | 0 | 0 | 2,278 |
29,799,993 | 2015-04-22T14:01:00.000 | 0 | 0 | 0 | 0 | python,salesforce,soql | 34,097,734 | 1 | false | 0 | 0 | If you truly wish to extract everything, you can use the query_all function.
query_all calls the helper function get_all_results which recursively calls query_more until query_more returns "done". The returned result is the full dictionary of all your results.
The plus: you get all of your data in a single dictionary. The rub: you get all 2.8 million records at once. They may take a while to pull back and, depending on the size of each record, that may be a significant amount of RAM. | 1 | 0 | 0 | I am using the simple_salesforce package in Python to extract data from Salesforce.
I have a table that has around 2.8 million records and I am using query_more to extract all data.
SOQL extracts 1000 rows at a time. How can I increase the batch size in Python to extract the maximum number of rows at a time? [I believe the maximum number of rows is 2000 at a time.]
Thanks | Increasing Batch size in SOQL | 0 | 1 | 0 | 122 |
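A sketch of the query_all usage the answer describes; the credentials and the SOQL query are placeholders:

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com",
                password="password",
                security_token="token")

result = sf.query_all("SELECT Id, Name FROM Account")  # pages via query_more
print(result["totalSize"], len(result["records"]))
```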
29,802,546 | 2015-04-22T15:42:00.000 | 2 | 0 | 0 | 0 | python,django | 29,804,634 | 1 | true | 1 | 0 | You most likely have a Python file called filer.py or a Python module (a folder containing an __init__.py file) called filer in your current directory (or somewhere on your PYTHONPATH) which shadows the filer package you installed.
You need to remove or rename that file/module in order to be able to use the filer package. | 1 | 0 | 0 | I have this problem: I have installed filer, and when I import only filer it's fine: import filer
That imports, but when I do from filer.fields.image import FilerImageField
it does not import and gives the error no module named fields.image.
What is the problem? I have gone through the documentation and it shows the same way of importing. | django filer no module named fields.image error | 1.2 | 0 | 0 | 809 |
29,803,463 | 2015-04-22T16:20:00.000 | 0 | 0 | 0 | 0 | python,networking,directory,sftp,distutils | 29,803,781 | 1 | false | 0 | 0 | Do you mean the "copy_tree" method in the distutils.dir_util module? If so, the answer is no, with a caveat. The code requires a directory name for both the source and destination directories. The caveat is that if you can mount the remote drive onto your local machine then it would be doable. | 1 | 0 | 0 | I'm wondering if there is a way to use the copy_tree module in Python to copy a directory tree over a network. Has anyone done this or seen something close to this? | Python : copy_tree over network? | 0 | 0 | 1 | 90 |
29,803,949 | 2015-04-22T16:44:00.000 | 0 | 0 | 1 | 1 | python,pip,virtualenv,virtualenvwrapper | 29,906,180 | 1 | true | 0 | 0 | It seems that by re-aliasing pip, starting over, and using --force-reinstall to install everything all over again (including venv), it worked. | 1 | 0 | 0 | I install to a virtualenv, yet it still runs the versions from C:\Python27\site-packages or C:\Python34\site-packages. When I try to install with pip in my venv I get "already installed" and the location is the global site-packages.
Any idea why that could be??
Also, my virtualenvwrapper commands work, but when I do workon X it doesn't activate the venv.
OS is win7. But the problem occurred on powershell and git bash.
Thanks | Installing to virtual env doesn't work | 1.2 | 0 | 0 | 768 |
29,805,701 | 2015-04-22T18:11:00.000 | 0 | 0 | 0 | 0 | python,events,audio,pygame | 29,806,090 | 1 | false | 0 | 1 | -.- I misunderstood the documentation. It says:
The type argument will be the event id sent to the queue. This can be any valid event type, but a good choice would be a value between pygame.locals.USEREVENT and pygame.locals.NUMEVENTS. If no type argument is given then the Channel will stop sending endevents.
And I understood that I should choose either USEREVENT or NUMEVENTS. I was constantly looking for a way to use some new event types but I thought it was impossible. So I chose USEREVENT, hoping that no other pygame module uses it, and thinking it's better than NUMEVENTS.
But then I came up with the idea that maybe these types are just numbers and "between" means literally between. So after I checked, it appeared that USEREVENT == 24 and NUMEVENTS == 32. So I can basically use any number from 25 to 32...
Silly me. :)
Only one thing remains: suppose I used 25 as the type for Channel.set_endevent(). How do I get my channel id now?
Edit:
Ok, now I have the full picture. It's even better, I think. If I understand correctly: pygame.USEREVENT (equal to 24) is the first of several event slots up to pygame.NUMEVENTS (equal to 32). Any of them have an attribute code that can be used for whatever I want (and Channel uses it to put its id there). So basically I can use pygame.USEREVENT + 0 for channels and pygame.USEREVENT + 1 for music.
Correct me please if I'm wrong. | 1 | 0 | 0 | In pygame I can use pygame.mixer.music to load and play long audio files (by streaming), or pygame.mixer.Sound & pygame.mixer.Channel for shorter ones (that are loaded entirely into memory) - as I understand it.
I'd like to use both of these methods. However, I'd also like to know when playback of a given Channel or Music has just finished. There are methods for that: set_endevent() - on both music and channel. When I use pygame.locals.USEREVENT as the event type, when a channel's playback is finished I receive an event with code == <channel_id>. When music's playback is finished, code is always 0. Thus I cannot tell the difference whether it is the music that stopped, or the channel with id 0.
Is there any way to tell them apart? | How to differentiate endevent of music and channel? | 0 | 0 | 0 | 196 |
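A sketch of the resolution above: give music and channels distinct event types in the USEREVENT..NUMEVENTS range so the event loop can tell them apart:

```python
import pygame

pygame.init()

MUSIC_END = pygame.USEREVENT + 1    # 25
CHANNEL_END = pygame.USEREVENT + 2  # 26

pygame.mixer.music.set_endevent(MUSIC_END)
channel = pygame.mixer.Channel(0)
channel.set_endevent(CHANNEL_END)

# inside the game loop:
for event in pygame.event.get():
    if event.type == MUSIC_END:
        print("music finished")
    elif event.type == CHANNEL_END:
        print("a channel finished")
```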
29,806,160 | 2015-04-22T18:36:00.000 | 2 | 0 | 0 | 0 | python,pyqt,qcombobox,qpushbutton,qdate | 29,907,644 | 1 | true | 0 | 1 | You can try a QToolButton with no text and the arrowType property set to Qt.DownArrow. eg: myqtoolbutton.setArrowType(Qt.DownArrow). | 1 | 0 | 0 | In PyQt4, I want to present a QPushButton that looks like the down-arrow button in QComboBox. Feasible, and if so, how?
I don't need help getting my new widget-combination acting like a QComboBox (see below). I only want the QPushButton display/graphic to look like the down-arrow button in a QComboBox - and tips/code on how to overlay that graphic (especially if said graphic comes via a file) onto my own QPushButton.
More details, context:
I'm seeking to replace a QComboBox widget with a QLineEdit + QCalendarWidget, because QDateEdit isn't as customizable as I need (I think...). The thought is to place a QPushButton immediately adjacent (on the right side) of the QLineEdit to make things look like a regular QComboBox as much as possible. Then said button will .exec_() the QCalendarWidget (which is technically wrapped by a QDialog).
Let me know if this doesn't make sense, and I can provide further or clarified context. | PyQt: display QPushButton that looks like the down-arrow button in QComboBox | 1.2 | 0 | 0 | 2,863 |
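A PyQt4 sketch of the accepted suggestion: a textless QToolButton with arrowType set to Qt.DownArrow, sitting to the right of the QLineEdit; the dialog hook is a placeholder:

```python
import sys
from PyQt4 import QtCore, QtGui

app = QtGui.QApplication(sys.argv)

def open_calendar():
    print("exec_() the QCalendarWidget dialog here")  # placeholder

line = QtGui.QLineEdit()
drop = QtGui.QToolButton()
drop.setArrowType(QtCore.Qt.DownArrow)  # looks like the combo-box arrow
drop.clicked.connect(open_calendar)

row = QtGui.QWidget()
layout = QtGui.QHBoxLayout(row)
layout.setSpacing(0)
layout.addWidget(line)
layout.addWidget(drop)
row.show()

sys.exit(app.exec_())
```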
29,806,384 | 2015-04-22T18:48:00.000 | 0 | 0 | 0 | 0 | python,google-app-engine,memcached | 29,806,800 | 1 | true | 1 | 0 | Moving objects to and from memcache will have no impact on your memory unless you destroy these objects in your code or empty collections. | 1 | 0 | 0 | I'm currently running into soft memory errors on my Google App Engine app because of high memory usage. A number of large objects are driving memory usage sky high.
A bigger problem is that memcache entities are limited to 1MB, and memcache is not guaranteed. The first of these limitations means that you cannot push very large objects into Memcache.
The second limitation means that you cannot easily replace, for example, a HashMap with memcache: it's impossible to tell whether getValue() returns null because an object is not present or because it was bumped out of memcache. So you will have to make an extra call to the datastore each time to see if an object is really not present. | 1 | 0 | 0 | I'm currently running into soft memory errors on my Google App Engine app because of high memory usage. A number of large objects are driving memory usage sky high.
I thought perhaps if I set and recalled them from memcache, that might reduce overall memory usage. Reading through the docs, this doesn't seem to be the case; rather, the benefit of memcache is to reduce HRD queries.
Does memcache impact overall memory positively or negatively?
Edit: I know I can upgrade the instance class to F2 but I'm trying to see if I can remain on the least expensive while reducing memory. | Will using memcache reduce my instance memory? | 1.2 | 1 | 0 | 130 |
29,807,381 | 2015-04-22T19:45:00.000 | 1 | 0 | 1 | 1 | python,windows,process,wmi,remote-access | 29,809,058 | 3 | true | 0 | 0 | I figured out the answer in case anybody else runs into a similar issue: you actually don't even need WMI; this can be done directly from the command prompt.
If you are on the same network, you can issue the command with the following format:
taskkill /s [Computer name or IP] /u [USER or DOMAIN\USER] /p [Password] /im [image name to kill, e.g. notepad.exe] (use /pid [process id] instead of /im to kill by PID)
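The same command can be issued from Python with subprocess; a sketch (host, credentials and image name are placeholders):
import subprocess
subprocess.call([
    "taskkill", "/s", "192.0.2.10",
    "/u", "DOMAIN\\user", "/p", "password",
    "/im", "notepad.exe",
])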
This will take a few moments but will eventually kill the process. | 1 | 2 | 0 | I know it is possible to create a process on a remote windows machine using the WMI module, but I wish to know if the same can be said for ending a process. I haven't been able to find a thread or any documentation for this, so if you can help me it would be greatly appreciated. | Possible to Kill Processes using WMI Python | 1.2 | 0 | 0 | 2,587
29,810,878 | 2015-04-22T23:36:00.000 | 0 | 0 | 0 | 0 | python,rest,python-jira | 29,880,808 | 2 | false | 0 | 0 | Just to be clear, one of the reasons it is called a REST API is that you do not have to do anything with the session. | 1 | 0 | 0 | I am trying to invalidate the session using the jira-python REST client. How can I achieve this? Is it built in, or does it need to be implemented?
I tried looking at all the APIs available in client.py and there seems to be no way to destroy or invalidate a session.
Another question that follows is: do I have to authenticate on every REST call made by the client? Currently that is what I am doing. | Destroy session using jira-python REST client | 0 | 0 | 1 | 1,174
29,811,173 | 2015-04-23T00:03:00.000 | 6 | 0 | 1 | 0 | python,image | 29,857,854 | 1 | false | 0 | 0 | It seems like this is so trivial everyone knows how to vote down but no one knows the answer, right?
OK, so the module is PIL's Image; to create a canvas we use Image.new(mode, (width, height), colour).
To put an image onto this canvas we call the canvas's paste method:
canvas.paste(image, (left, upper)) (a 4-tuple box (left, upper, right, lower) also works).
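Putting it together for the 5-per-row table the question asks about (a sketch; file paths and the "RGB" mode are assumptions):
from PIL import Image

def make_table(paths, cols=5):
    imgs = [Image.open(p) for p in paths]
    w, h = imgs[0].size  # all images share one size
    rows = (len(imgs) + cols - 1) // cols
    canvas = Image.new("RGB", (cols * w, rows * h), "white")
    for i, img in enumerate(imgs):
        canvas.paste(img, ((i % cols) * w, (i // cols) * h))
    return canvas

make_table(["img%d.png" % i for i in range(10)]).save("table.png")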
Thanks a lot for nothing. | 1 | 4 | 0 | Here is my problem: I have n images. They have the same width and height, and they are png. I want to make an image (png) that contains a table of them, 5 x (n/5) images, by simply putting them next to each other.
I never tried creating images in Python, so could you help me what package and functions to use? | Creating image table in python | 1 | 0 | 0 | 2,720 |
29,811,281 | 2015-04-23T00:13:00.000 | 0 | 0 | 0 | 0 | python,django,templates | 29,813,856 | 2 | false | 1 | 0 | Well it is not a good solution, but try hardcoding the full path. | 1 | 4 | 0 | I am trying to create a login page in my django application. I created a "templates" folder on the root directory of my application.
Then on my settings.py I wrote this code.
TEMPLATE_DIRS = (os.path.join(BASE_DIR,'templates'),)
And it is giving this feedback:
TemplateDoesNotExist at /login/
Template-loader postmortem
Django tried loading these templates, in this order:
Using loader django.template.loaders.filesystem.Loader:
Using loader django.template.loaders.app_directories.Loader:
/Users/julianasakae/Desktop/DjangoProject/demo/lib/python3.4/site-packages/django/contrib/admin/templates/login.html (File does not exist)
/Users/julianasakae/Desktop/DjangoProject/demo/lib/python3.4/site-packages/django/contrib/auth/templates/login.html (File does not exist)
/Users/julianasakae/Desktop/DjangoProject/boardgames/main/templates/login.html (File does not exist)
I tried everything; it does not seem to work.
Any suggestions? | Django's TEMPLATE_DIRS not found | 0 | 0 | 0 | 717
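For reference, a hard-coded variant of the answer's suggestion, using the project path visible in the traceback above (illustrative):
TEMPLATE_DIRS = (
    '/Users/julianasakae/Desktop/DjangoProject/boardgames/templates',
)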
29,811,423 | 2015-04-23T00:28:00.000 | 0 | 0 | 1 | 0 | python,opencv,py2exe,pyinstaller | 38,987,705 | 2 | false | 0 | 0 | I guess I will go ahead and post an answer for this, but solution was provided by @otterb in the comments to the question. I am pasting the text here:
"py2exe is not perfect so will often miss some libraries or dll, pyd etc needed. Most likely you are missing opencv_highgui249.dll and opencv_ffmpeg249.dll etc. I would use py2exe with no single executable option enabled. And, start manually copying files that might be needed for your app. After identifying them, modify setup.py for py2exe to include them automatically."
I will note however that I use pyinstaller rather than py2exe, since I get fewer problems while building. I still have to manually copy the opencv dll files though. On Windows 7 they are located here: "C:\Python27\DLLs" and they need to be copied into the distribution folder so that they are on the same path as the other dll files that go with the distribution. | 1 | 0 | 1 | I have a python program that uses OpenCV to get frames from a video file for processing. I then create a standalone executable using py2exe (also tried pyinstaller and got the same error). My computer and the target computer are both Windows 7, but the target computer does not have python installed. I use OpenCV to read the frame rate and individual images from a video file.
Problem: When I run the executable on the target computer the frame rate is returned as 0.0 and I cannot read frames.
If python is installed on the target machine then the executable runs as expected, otherwise it produces this error. So it seems that something is missing in the executable, but I get no errors when creating the executable to indicate what might be missing.
Others who have reported similar issues usually have not included the numpy dependency (and get an error indicating this), but I have included numpy. I have also tried including the entire PyQt4 module since this is listed as a dependency on the python xy site for OpenCV (I already have parts of PyQt4 for other parts of the code) and this does not solve the problem either. | OpenCV with standalone python executable (py2exe/pyinstaller) | 0 | 0 | 0 | 6,352 |
29,812,355 | 2015-04-23T02:07:00.000 | 0 | 0 | 0 | 1 | php,python,mysql,linux,webserver | 29,814,988 | 1 | false | 0 | 0 | If you expect code running on the production server to connect to a mysql db, then yes. | 1 | 1 | 0 | I am setting up a production server for the first time and would like to make sure I only have what I need for security prepress.
By "interactions" since I'm also new to programming, I think I mean "API calls".
Do I need mysql-client on a Linux (Debian) server to be able to 'talk' to mysql with any programming language? As I think there isn't any point installing the client on the production server if I can remotely send commands from mysql-client on my Mac. | Do I need mysql-client for PHP/Python etc interactions? | 0 | 0 | 0 | 35 |
29,812,987 | 2015-04-23T03:13:00.000 | 0 | 0 | 0 | 0 | python,xml,django,django-models | 29,970,413 | 1 | true | 1 | 0 | Judging by the deafening silence, I gather the answer is: do it differently.
We're rebuilding the project to consume a JSON RESTful interface instead.
I have the top two/thirds of a Django app: the view and template layers. I'd like to use an external resource for the model. How?
I'd like to use as much of the Django model layer as possible, for the ORM. The external resource is a specialized Java package, providing content via a flexible backend XML API.
My current strategy is a sort of thin model shim API: Django models without fields, instead a series of @property methods, each a function pulling data from the external resource as needed.
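A rough sketch of that shim pattern (fetch_field and the field names are hypothetical stand-ins for the XML backend calls):
from django.db import models

class RemoteArticle(models.Model):
    class Meta:
        managed = False  # no local table; the model is only a shim

    @property
    def title(self):
        return fetch_field(self.pk, "title")  # hypothetical backend call

    @property
    def body(self):
        return fetch_field(self.pk, "body")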
Is this a good idea? How else would you solve this problem? | Django: thin model to consume XML API? | 1.2 | 0 | 0 | 65 |
29,814,020 | 2015-04-23T04:57:00.000 | 1 | 0 | 0 | 1 | python,linux,windows | 29,815,298 | 2 | false | 1 | 0 | What you are looking for is a GUI toolkit with bindings to Python. Tkinter is the de facto standard for Python GUIs and is cross-platform. Qt is also a popular choice, but its license is more restrictive than Tkinter's; on the other hand, it will let you transition into C++ programming with Qt more easily if that is something you may want to do down the road. The choice is up to you. | 1 | 2 | 0 | I am pretty familiar with building web based apps using python django on my linux machine. But when I decided to try my hand at building desktop applications that can run on windows/linux I didn't know where to begin.
I know I can surely build windows desktop application on windows machine. But I am pretty comfortable with linux and don't want to get out of that comfort zone. Can anyone guide me as to what tools can I begin with to develop a simple windows desktop application. I would target windows 7 for starters.
Any guidance is hugely appreciated. | Recommended way to build cross platform desktop application using linux development machine | 0.099668 | 0 | 0 | 1,133 |
29,824,790 | 2015-04-23T13:22:00.000 | 15 | 0 | 1 | 0 | python,macos,sympy | 29,824,853 | 3 | true | 0 | 0 | Use pip list to list all packages and their versions. You can pipe it to grep to search for the package you're interested in:
pip list | grep sympy
Alternatively to get all information about that package including version you can use pip show:
pip show sympy
To upgrade it's simply:
pip install --upgrade sympy
If you need write permissions to your python installation directory don't forget to prepend the pip install command with a sudo: e.g. sudo pip install --upgrade sympy | 1 | 23 | 0 | How to check the current version of SymPy and upgrade to the latest version?
I am using macOS. The way I installed my current version is using pip install sympy. | How to check the current version of sympy and upgrade to the latest version? | 1.2 | 0 | 0 | 12,119 |
29,824,873 | 2015-04-23T13:25:00.000 | 3 | 1 | 0 | 0 | python,security,port,password-protection,portforwarding | 29,825,016 | 2 | false | 0 | 0 | You cannot password protect a port. That concept is several layers up in the network stack and not something regular internet gear has anything to do with.
You'll have to add authentication at the service/application layer. Meaning, your Pi will have to demand authentication. Whether that's possible or not depends on what's running on it.
If that's not available, you'll need an intermediary. Either you set up a proxy in front of the Pi that can handle authentication; or you set up a VPN server instead of a simple forwarded port, which would put the authentication at the point of network access. | 2 | 0 | 0 | I have a Python program that runs on my Windows 7 computer which communicates with a Raspberry Pi over the internet through a port I opened with a Port Forwarding rule on my internet modem.
I am concerned about a hacker getting through that open port and causing problems.
My question is:
Is there a way to password protect that port so anyone who tries to access that port is required to enter the correct password to get through to my Raspberry Pi?
If not, what other ways could I protect that open port?
Any help/advice is much appreciated. Thanks in advance. | Port Forwarding Security | 0.291313 | 0 | 0 | 768 |
29,824,873 | 2015-04-23T13:25:00.000 | 0 | 1 | 0 | 0 | python,security,port,password-protection,portforwarding | 29,844,068 | 2 | false | 0 | 0 | You can effectively "password protect a port" via an SSH tunnel.
Instead of opening your application's port, you open TCP port 22 and run an SSH daemon. The SSH service can be protected by a password, passphrase, a key file or a combination thereof.
When you connect to SSH from Windows using PuTTY or Plink, you can specify that a local port on your Windows box is mapped to the port on your remote Raspberry Pi.
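For example, Plink's local-forwarding flag looks like this (address and ports are illustrative):
plink -ssh -L 1234:127.0.0.1:1234 pi@203.0.113.0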
So rather than connecting to, say 203.0.113.0 on port 1234, you would connect to 127.0.0.1 on your Windows machine on port 1234 after establishing the SSH connection and this will route it through to the machine at the other end of the SSH tunnel. | 2 | 0 | 0 | I have a Python program that runs on my Windows 7 computer which communicates with a Raspberry Pi over the internet through a port I opened with a Port Forwarding rule on my internet modem.
I am concerned about a hacker getting through that open port and causing problems.
My question is:
Is there a way to password protect that port so anyone who tries to access that port is required to enter the correct password to get through to my Raspberry Pi?
If not, what other ways could I protect that open port?
Any help/advice is much appreciated. Thanks in advance. | Port Forwarding Security | 0 | 0 | 0 | 768 |
29,826,237 | 2015-04-23T14:19:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,pip,anaconda,eyed3 | 29,851,171 | 1 | false | 0 | 0 | The problem is that this file is only written for Python 2 but you are using Python 3. You should use Anaconda (vs. Anaconda3), or create a Python 2 environment with conda with conda create -n py2 anaconda python=2 and activate it with activate py2. | 1 | 2 | 0 | could you help me with that. I can't manage to install this plugin. I tried:
1) install it through pip
2) through setup.py in win console
3) through anaconda3 but still no.
4) I searched about it on the web and here, but the instructions are for older versions.
5) and also through the installation page of eyeD3.
Could you guide me on how to do this? Maybe I'm doing something wrong. First: should I use Python 2.7.9 for this, or can it be Anaconda3? | I can't install eyeD3 0.7.5 into Python in windows | 0 | 0 | 0 | 641
29,826,523 | 2015-04-23T14:30:00.000 | 2 | 0 | 0 | 0 | python,floating-point,double,precision | 29,826,612 | 2 | false | 0 | 0 | You could try the c_float type from the ctypes standard library. Alternatively, if you are capable of installing additional packages you might try the numpy package. It includes the float32 type. | 1 | 6 | 1 | I need to implement a Dynamic Programming algorithm to solve the Traveling Salesman problem in time that beats Brute Force Search for computing distances between points. For this I need to index subproblems by size and the value of each subproblem will be a float (the length of the tour). However holding the array in memory will take about 6GB RAM if I use python floats (which actually have double precision) and so to try and halve that amount (I only have 4GB RAM) I will need to use single precision floats. However I do not know how I can get single precision floats in Python (I am using Python 3). Could someone please tell me where I can find them (I was not able to find much on this on the internet). Thanks.
EDIT: I notice that numpy also has a float16 type, which will allow for even more memory savings. The distances between points are around 10000, there are 25 unique points, and my answer needs to be to the nearest integer. Will float16 provide enough accuracy, or do I need to use float32? | Python float precision float | 0.197375 | 0 | 0 | 4,158
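A small sketch of allocating such a table in single precision with numpy (the Held-Karp-style shape of 2**25 subsets by 25 cities is an assumption based on the question):
import numpy as np
table = np.full((2 ** 25, 25), np.inf, dtype=np.float32)
print(table.nbytes / 2.0 ** 30)  # about 3.1 GiB instead of ~6 GiB with float64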
29,828,922 | 2015-04-23T16:08:00.000 | 4 | 0 | 0 | 0 | python,spatial | 32,484,856 | 2 | true | 0 | 0 | Solved my problem, this is for others looking to do same analysis.
Definitely recommend using R for spatial analysis. A transfer from python is simple because all you need is coordinates of your point pattern.
Write a csv of the x, y and z coordinates of your points using Python (a short sketch follows below).
R has good functionality for reading csv via the read.csv("filename") command. Make sure the directory is set properly using the setwd command.
Convert the csv you just read to a ppp (a point pattern, which R understands) using the as.ppp command.
Continue to use Kest, Gest, etc. for the required spatial analysis.
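A minimal sketch of the Python side of the first step (assumes points is an iterable of (x, y) pairs):
import csv
points = [(0.0, 0.0), (1.5, 2.0)]  # illustrative coordinates
with open("points.csv", "w") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y"])
    writer.writerows(points)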
Cheers. | 1 | 7 | 1 | I am looking for Ripley's k function implementation in Python. But so far haven't been able to find any spatial modules implementing this in scipy or elsewhere.
I have created Voronoi tessellation of a fibre composite and need to perform analysis using Ripley's K and pair distribution functions compared to a Poisson distribution.
Cannot upload images-not enough rep. | Ripley's K Function (Second order intensity function) Python | 1.2 | 0 | 0 | 2,325 |
29,832,656 | 2015-04-23T19:28:00.000 | 1 | 0 | 0 | 0 | python,authentication,authorization,google-bigquery | 29,853,676 | 1 | false | 0 | 0 | The issue was environmental. My browser was not recognizing localhost, so when I manually modified the url to reference the ip, 127.0.0.1, then the authorization succeeded.
Thanks for the responses. | 1 | 2 | 0 | I am trying to run a sample BigQuery query using a Python Client that I downloaded from the Google Site (modified with my client secrets, project info, etc.), but I am unable to get past the browser page that is requesting access. I've tried several browsers including Chrome and Firefox. I am on a Mac if that matters.
I've tried both the native google client sample as well as Pandas GBQ API.
When I execute either of the API samples, a page is rendered in the browser saying that the client API is requesting permission to "View and manage your data in Google BigQuery".
When I click Accept, a new page is rendered with an error that indicates no data was returned from the server.
I cannot tell if this is an issue with Google or my local network blocking.
I would like to know what might be going on or how I can troubleshoot this issue so I can authenticate/authorize and run queries through my python client.
Thanks,
J.D. | Unable to Authenticate with Google BigQuery | 0.197375 | 0 | 1 | 442 |
29,837,153 | 2015-04-24T01:27:00.000 | 0 | 0 | 0 | 0 | python,pandas | 29,892,429 | 2 | false | 0 | 0 | Row by row. Pandas is not the ideal tool for this.
I would suggest you look into Map/Reduce. It is designed for exactly this. Streaming is the key to row-by-row processing. | 1 | 1 | 1 | I have a lot of time series data, almost 3 GB of csv files. The dimensions are 50k columns with 6000 rows. Now I need to process them row by row. They are time-ordered, and it's important that for each row I look at each column.
Would importing this into pandas as a pivot table and iterating over it row by row be efficient? Any suggestions? | Python Pandas Large Row Processing | 0 | 0 | 0 | 662
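In the streaming spirit of the answer above, pandas itself can read in chunks (a sketch; filename and chunk size are illustrative):
import pandas as pd
for chunk in pd.read_csv("data.csv", chunksize=1000):
    for row in chunk.itertuples():
        pass  # process one row at a time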
29,839,766 | 2015-04-24T05:56:00.000 | 3 | 0 | 0 | 0 | python,django | 42,911,596 | 2 | false | 1 | 0 | If you need it only for error reporting, the best choice would be to inherit from django.utils.log.AdminEmailHandler and override its format_subject(self, subject) method.
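A minimal sketch of such a subclass (the prefix string is illustrative; wire the handler in via the LOGGING setting):
from django.utils.log import AdminEmailHandler

class CustomSubjectEmailHandler(AdminEmailHandler):
    def format_subject(self, subject):
        # keep the parent's newline stripping, then reword the subject
        subject = super(CustomSubjectEmailHandler, self).format_subject(subject)
        return "MyApp error: " + subject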
Note that changing EMAIL_SUBJECT_PREFIX will affect not only error emails but all emails sent to admins, including system-information emails. | 1 | 7 | 0 | I want to change the subject of Django error reporting emails;
is it possible to change the subject?
Can we modify the subject of Django error reporting emails? | how to change the subject for Django error reporting emails? | 0.291313 | 0 | 0 | 1,862
29,840,006 | 2015-04-24T06:15:00.000 | 0 | 0 | 0 | 0 | python,performance,numpy,save | 29,841,705 | 1 | false | 0 | 0 | Try to make the data obsolete as fast as possible by further processing/accumulating e.g. plotting immediately.
You did not give details about the memory/storage needed. For sparse matrices there are efficient representations. If your matrices are not sparse, there are roughly 500k entries per matrix and therefore 5G entries altogether; without knowing your data type, this could typically be 40GB of memory.
I strongly suggest reviewing your algorithms to achieve a smaller memory footprint. | 1 | 0 | 1 | This is a broad question. I am running a very long simulation (in Python) that generates a sizeable amount of data (about 10,000 729*729 matrices). I only need the data to plot a couple of graphs and then I'm done with it. At the moment I save the data in (numpy) arrays. When the simulation is complete I plot the data.
One alternative would be to write the data to a file, and then access the file after simulation to plot graphs etc.
In general, is there a consensus on the best (i.e. quickest) way to manage large temporary data sets? Is either of these "best practice"? | Is it better to store temp data in arrays or save it to file for access later? | 0 | 0 | 0 | 626
29,842,709 | 2015-04-24T08:44:00.000 | 0 | 0 | 1 | 0 | python,point-cloud-library | 29,874,427 | 3 | false | 0 | 0 | PCL depend on Boost, VTK, Eigen, FLANN and QHull.
So it's difficult to bind PCL to Python.
So code PCL in C++. | 1 | 2 | 0 | How to install Point cloud library on python on Windows?
I am working on Anaconda, which is a Python distribution.
So what is the way to install PCL using pip?
If you know any other method please let me know. | Installing PCL python on Windows using PIP | 0 | 0 | 0 | 2,681 |
29,843,170 | 2015-04-24T09:05:00.000 | 1 | 0 | 0 | 0 | python,cx-oracle | 30,171,565 | 2 | false | 0 | 0 | If you use multiple cursors, cursor.close() will help you release the resources you don't need anymore.
If you just use one cursor with one connection, I think connection.close() is fine. | 1 | 4 | 0 | I'm using the cx_Oracle module in Python. Do we need to close opened cursors explicitly? What will happen if we forget to close the cursor after fetching data and close only the connection object (con.close()) without issuing cursor.close()?
Will there be any chance of memory leak in this situation? | cx_Oracle module cursor close in python | 0.099668 | 1 | 0 | 4,239 |
29,843,710 | 2015-04-24T09:28:00.000 | 0 | 1 | 0 | 1 | python,linux,passwords | 29,938,376 | 4 | true | 0 | 0 | I figured out the best way is to disable it via sudo command:
Cmnd_Alias SCRIPT =
Defaults!SCRIPT !syslog
The above lines in the sudoers file should prevent the command from being logged to syslog.
29,845,345 | 2015-04-24T10:40:00.000 | 0 | 0 | 0 | 0 | python,scrapy | 29,871,881 | 1 | true | 1 | 0 | Your question doesn't compute, because the answer depends solely on your specific use case and the goal of your scraping. The criteria/metric of spiders success is directly derived from your business logic. Just take a step back and ask yourself, what do I want to achieve with this spider? When you answer that, the metric you should monitor will be very clear. | 1 | 0 | 0 | To make sure everything is working properly, what metric should I monitor for a general spider?
Currently, I'll monitor
the number of items returned, e.g. the percentage change should be less than 1%
the number of errors in the log file, e.g. the error count should be zero
Is this enough for a general spider? | what metric should I monitor for a spider in scrapy | 1.2 | 0 | 0 | 161
29,845,685 | 2015-04-24T10:56:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine | 29,846,381 | 1 | true | 1 | 0 | The 'Datastore caching' you refer to is implemented using memcache under the hood anyway, so you won't gain from additional explicit caching of these entities.
Python's ndb API and Java's Objectify both provide memcache-based automated caching for exactly this scenario. Of course, you can still use memcache independently for additional application caching. | 1 | 0 | 0 | I have read that the datastore read request in the App engine get cached and subsequent reads performed on the same entity are fast.
So if I read an entity from the datastore, are there any tangible benefits to storing the entity in memcache explicitly for later fetches? Or would the datastore caching serve with sufficient efficiency? | App engine Datastore cache or memcache | 1.2 | 0 | 0 | 290
29,845,940 | 2015-04-24T11:08:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine | 29,846,170 | 1 | false | 1 | 0 | I'm not entirely clear on what you're looking for, but you can retrieve that type of information from the WSGI environmental variables. The method of retrieving them varies with WSGI servers, and the number of variables made available to your application depends on the web server configuration.
That being said, getting client IP address is a common task and there is likely a method on the request object of the web framework you are using.
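For reference, a bare WSGI app can read both values straight from the environ (standard PEP 3333 keys; framework request objects expose the same data):
def application(environ, start_response):
    server = environ.get("SERVER_NAME")  # host the app answered on
    client = environ.get("REMOTE_ADDR")  # caller's IP address
    start_response("200 OK", [("Content-Type", "text/plain")])
    return ["%s <- %s" % (server, client)]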
What framework are you using? | 1 | 0 | 0 | I'm hosting my app on Google App Engine. Is there any possibility to get the server IP of my app for the current request?
More info:
GAE has a specific IP addressing. All http requests go to my 3-level domain, and IP of this domain isn't fixed, it changes rather frequently and can be different on different computers at the same moment. Can I somehow find out, what IP address client is requesting now?
Thank you! | GAE: how to get current server IP? | 0 | 0 | 0 | 157 |
29,846,606 | 2015-04-24T11:41:00.000 | -1 | 0 | 0 | 0 | python,django,internationalization,translation | 29,846,744 | 3 | false | 1 | 0 | I assume that you are using Django to create an API, and you consume the API with javascript. You can check the user-agent string from the header and make the appropriate redirect according to the request. | 1 | 3 | 0 | I have django app which is backend for javascript application intended for multiple TV devices. Each device has different frontend but I don't think that creating multiple .po files is good idea for this goal, because most of translations are repetitive for these devices.
Is it possible to add additional parameters for translations? For example, in my case some function with a "device" parameter would be very useful. If not, how should I do it the Django way? | Django way for multiple translations for one language or parametrize translations | -0.066568 | 0 | 0 | 521
29,848,354 | 2015-04-24T13:07:00.000 | 0 | 0 | 0 | 0 | python,pygame,pyscripter | 29,859,267 | 1 | false | 0 | 1 | Woops I just needed to change run->python engine from internal to remote. | 1 | 0 | 0 | When PyScripter opens another window (eg. a pygame window or matplotlib graphs), I can't edit code until I close the other window. I can move around the code and delete bits, but can't type.
The problem didn't use to occur because the pygame window opened with another program (I think it was python.exe). I don't recall changing anything that would have made this happen; I have always just run a sctipt called Game.py from inside pyscripter which opens the pygame window using pygame.init().
I have python 2.7.9 32bit, pyscripter 2.6.0 32bit (the problem also occured for pyscripter 2.5.3), windows 7 64bit.
How can I either get pygame to open in python.exe, or change PyScripter so that I can edit scripts? | Pyscripter not editable when a sub window is open | 0 | 0 | 0 | 233
29,849,552 | 2015-04-24T14:03:00.000 | 1 | 1 | 0 | 0 | bash,python-2.7 | 29,849,608 | 2 | false | 0 | 0 | I suspect you're encountering output buffering, where it's waiting to get a certain number of bytes before it flushes. You can look at the unbuffer command if that is undesirable for you. | 1 | 0 | 0 | When I run my python script (content irrelevant for this question, just uses print a couple of times) interactively, it sends the output straight away.
When I use it in a pipe to tee or in an output redirection (./script.py > script.log) there is no output. What am I doing wrong? | Python only sends output interactively | 0.099668 | 0 | 0 | 13
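For reference, two standard fixes: run the interpreter unbuffered, or flush explicitly.
python -u script.py > script.log
or, inside the script:
import sys
print("progress...")
sys.stdout.flush()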
29,850,613 | 2015-04-24T14:49:00.000 | 0 | 0 | 0 | 1 | python,flask,uwsgi | 29,899,664 | 1 | true | 1 | 0 | Sorry false alarm. This was my Devops incorrectly pinging my actual application route for heartbeat. Sorry for the confusion. | 1 | 1 | 0 | I'm running a flask application using nginx and uwsgi and I noticed when I tail the logs for uwsgi it looks like its just constantly polling my app when I'm doing nothing. It also seems like it's cycling through the cores on my machine with each request so I see this in the logs.
[pid: 27182|app: 0|req: 557/784] {26 vars in 254 bytes} [09:33:38 2015] GET / => generated 1337 bytes in 11 msecs ( 200) 3 headers in 238 bytes (1 switches on core 0)
[pid: 27182|app: 0|req: 558/785]{26 vars in 254 bytes} [09:33:42 2015] GET / => generated 1337 bytes in 11 msecs ( 200) 3 headers in 238 bytes (1 switches on core 1)
[pid: 27182|app: 0|req: 559/786] {26 vars in 254 bytes} [09:33:43 2015] GET / => generated 1337 bytes in 11 msecs ( 200) 3 headers in 238 bytes (1 switches on core 2)
[pid: 27182|app: 0|req: 560/787] {26 vars in 254 bytes} [09:33:47 2015] GET / => generated 1337 bytes in 11 msecs ( 200) 3 headers in 238 bytes (1 switches on core 3)
Nginx shows something similar. It's just constantly issuing a request to my app.
It's only doing this when nginx is on. If I stop nginx the polling stops. My app is up and working but I don't know why this is happening. Is this normal behavior for nginx/uwsgi when using the uwsgi protocol?
EDIT: I'm also using uwsgi in emperor mode. | Nginx/Uwsgi log showing duplicate requests | 1.2 | 0 | 0 | 602
29,851,825 | 2015-04-24T15:41:00.000 | 2 | 0 | 0 | 0 | android,python,kivy,qpython | 30,407,066 | 1 | true | 0 | 1 | You need to have
#qpy:kivy
part in the first line.
At least that was what happened to me.
I suppose that is because QPython uses it to find out what type of app it is. | 1 | 1 | 0 | I've been programming some Kivy/Python apps on my Motorola Moto G mobile phone.
I've got a few handy little apps, that have been working OK for a few months.
Today, I launched one of the apps - through the QPython interface, and it didn't work.
So, I tried another of my apps and that failed to launch for the same reason. In fact, all of them fail to launch for the same reason.
The error shown on screen ends with:
File "/QPython/core/build/python-install/lib/python2.7/UserDict.py", line 23, in getitem
KeyError: 'ANDROID_APP_PATH'
I presume that something on the phone has taken an upgrade - and broken something.
I assume that is the case because this problem affects all of the Kivy apps I was using.
Anyone else encountered this? | KeyError: 'ANDROID_APP_PATH' | 1.2 | 0 | 0 | 543 |
29,852,509 | 2015-04-24T16:14:00.000 | 1 | 0 | 0 | 0 | python,tcp,tcp-ip | 29,898,151 | 1 | false | 0 | 0 | To open a UDP socket you'd use:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # a UDP socket
To send use:
query = craft_dns_query() # you do this part
s.sendto(query, ("8.8.8.8", 53))
To receive the response use:
response = s.recv(1024)
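As a rough sketch of what craft_dns_query() might look like (a simplified A-record query; header layout per RFC 1035, no error handling or response parsing):
import struct

def craft_dns_query(hostname, qid=0x1234):
    # header: ID, flags (recursion desired), QDCOUNT=1, other counts 0
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # question: length-prefixed labels, zero terminator, QTYPE=A, QCLASS=IN
    qname = b"".join(struct.pack("!B", len(p)) + p.encode("ascii")
                     for p in hostname.split("."))
    return header + qname + b"\x00" + struct.pack("!HH", 1, 1)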
You'll have to refer to documentation on DNS for actually crafting the messages and handling the responses. | 1 | 2 | 0 | I've been using Scapy to craft packets and test my network, but the programmer inside me is itching to know how to do this without Scapy.
For example, how do I craft a DNS Query using sockets (I assume it's sockets that would be used).
Thanks | Crafting a DNS Query Message in Python | 0.197375 | 0 | 1 | 2,153 |
29,852,919 | 2015-04-24T16:37:00.000 | 1 | 0 | 1 | 0 | python,pycharm,versions | 29,869,463 | 1 | true | 0 | 0 | Change the path in PyCharm (in Options). PyCharm shows the first one it sees, and the IDLE shows the one you are using currently. | 1 | 1 | 0 | I have installed Python 2.7.9 64 and 32 bit versions. I have also installed PyCharm for better easing with the learning (I just recently started learning Python).
The problem is that PyCharm and IDLE show different Python versions when using sys.version, even though they use the same path for the .exe.
How can I solve this? Thank you! | Different PyCharm and IDLE Python versions | 1.2 | 0 | 0 | 523 |
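A quick diagnostic to run in both environments to see which interpreter each one actually uses:
import sys
print(sys.executable)  # path of the running interpreter
print(sys.version)     # its exact version string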
29,853,949 | 2015-04-24T17:33:00.000 | 2 | 0 | 0 | 0 | python,xml,parsing,sax,elementtree | 29,855,367 | 1 | true | 0 | 0 | What is important for you here is that you need a streaming parser, which is what sax is. (There is a built in sax implementation in python and lxml provides one.) The problem is that since you are trying to modify the xml file, you will have to rewrite the xml file as you read it.
An XML file is a text file; you can't change some data in the middle of it without rewriting the entire file (unless the new data is exactly the same size, which is unlikely).
You can use SAX to read in each element and register an event to write back each element after it has been read and modified. If your changes are really simple, it may even be faster to skip the XML parsing and just match the text you are looking for.
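A stream-copy sketch of that idea with the standard library's SAX utilities (the character replacement is illustrative; memory use stays flat regardless of file size):
import xml.sax
from xml.sax.saxutils import XMLFilterBase, XMLGenerator

class Rewriter(XMLFilterBase):
    # pass every event through, editing only what you need
    def characters(self, content):
        XMLFilterBase.characters(self, content.replace("old", "new"))

with open("output.xml", "w") as out:
    rewriter = Rewriter(xml.sax.make_parser())
    rewriter.setContentHandler(XMLGenerator(out, encoding="utf-8"))
    rewriter.parse("input.xml")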
If you are doing any significant work with this large an XML file, then I would say you shouldn't be using an XML file; you should be using a database.
The problem you have run into here is the same issue that Cobol programmers on mainframes had when they were working with file-based data. | 1 | 1 | 0 | I want to parse a large XML file (25 GB) in python, and change some of its elements.
I tried ElementTree from xml.etree but it takes too much time at the first step (ElementTree.parse).
I read somewhere that SAX is fast and does not load the entire file into memory, but it is just for parsing, not modifying.
'iterparse' should also be just for parsing, not modifying.
Is there any other option which is fast and memory efficient? | memory efficient way to change and parse a large XML file in python | 1.2 | 0 | 1 | 1,129 |
29,854,433 | 2015-04-24T18:00:00.000 | 0 | 0 | 0 | 0 | python,django,object,get,models | 29,882,375 | 1 | true | 1 | 0 | I solved it by using transaction.commit() before my second query. | 1 | 0 | 0 | I'm having this weird problem when using
Model.objects.get(op1=1,op2=2)
It raises the DoesNotExist error although the record exists. Has that ever happened to anyone?
I even checked in my logs to make sure that the log happened when the id already existed in the database.
[2015-04-24 20:18:21,106] ERROR: Couldn't find the model entry: Traceback (most recent call last):
DoesNotExist: NpBilling matching query does not exist.
and in the database, the last modified date for this row specifically is 20:18:19.
How could that possibly ever happen?! The weird thing is that sometimes it works and sometimes it throws this error.
I tried to use get_or_create, but I end up with 2 entries in the database; one of them is the row that already existed.
Thanks in advance for your help. I would appreciate fast responses and suggestions. | Model.objects.get returns nothing | 1.2 | 1 | 0 | 57 |
29,855,209 | 2015-04-24T18:45:00.000 | 0 | 0 | 0 | 0 | paraview,pvpython | 29,926,488 | 1 | true | 0 | 0 | I figured that these catalyst editions are sort of 'stand-alone'. We don't have to mess with paraview installation but we should have most of what was needed as pre-requisits for paraview to build these. They consist of necessary files with cmakelists in order to build just by itself. It does not depend on ParaView.
We don't have to generate the source tree either; we only need the following step:
cd <build_dir>
<source_dir>/cmake.sh
The source_dir is the base files you downloaded and the build_dir is the empty build folder you create for the new build.
After this you just need to run:
cd <build_dir>
make
While trying to run cmake.sh I encountered this:
-- Found MPI_C: /opt/cray/craype/2.2.1/bin/cc
-- Found MPI_CXX: /opt/cray/craype/2.2.1/bin/CC
-- Could NOT find MPI_Fortran (missing: MPI_Fortran_LIBRARIES MPI_Fortran_INCLUDE_PATH)
CMake Error at VTK/CMake/vtkTestingMPISupport.cmake:31 (message):
MPIEXEC was empty.
I had to set MPI_Fortran to the ftn location (similar to cc and CC) and export MPIEXEC=mprun
to build it. | 1 | 0 | 0 | I'm trying to build Catalyst with an existing Superbuild of Paraview 4.1
I'm trying to generate the source tree as given in the wiki.
cd <source_dir>/Catalyst
python catalyze.py -i Editions/Base/ -o <output_dir>
I don't know if I can find the source in the existing build, if it is there. The files in the downloaded Catalyst Base seem different from what is in the ParaView installation. Can I locate the Catalyst source in an existing module?
Can anyone clarify this? | Paraview Catalyst Build: If I download Base of the same version and set it as catalyst_source_dir while generating the source tree I get an error | 1.2 | 0 | 0 | 173 |
29,857,593 | 2015-04-24T21:13:00.000 | 0 | 0 | 1 | 1 | python,c++ | 29,858,319 | 1 | true | 0 | 0 | You can't do that unless you relax one of the restrictions.
Relax the python dict requirement: The command line has a well defined text arguments interface, which can easily handle all the info. You can pass the json filename, the str representation of the dict, or pass name-value pairs as command line arguments.
Relax the system call requirement: Rather than building an executable from the c++ code, you can build a python c++ extension. The c++ code can export functions that take a python dict.
Relax the c++ requirement: Obviously you could code it in python. | 1 | 0 | 0 | I have a wrapper file that is reading a JSON file into a dictionary.
I am using the os.system("command") command to run a C++ code in this python file.
The C++ code takes command line inputs which are key values in the parsed dictionary.
How can I pass a Python variable as a command line input to C++ code using the os.system("command") instruction? | Passing dictionary values in python as command line inputs to c++ code | 1.2 | 0 | 0 | 211
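A minimal sketch of the first option in the answer above (write the dict to a JSON file and pass the filename; the C++ binary name is hypothetical):
import json, os
params = {"alpha": 0.5, "steps": 100}  # illustrative values
with open("params.json", "w") as f:
    json.dump(params, f)
os.system("./mysolver params.json")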
29,866,234 | 2015-04-25T14:09:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine,webapp2 | 29,866,631 | 1 | true | 1 | 0 | Use a 307 redirect. A 307 will not change the method of the redirect.
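A minimal webapp2 sketch (handler and URL are illustrative):
import webapp2

class ForwardHandler(webapp2.RequestHandler):
    def post(self):
        # 307 tells the client to resubmit the POST to the new URL
        self.redirect("/new-endpoint", code=307)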
Wikipedia: 307 temporary redirect (provides a new URL for the browser to resubmit a GET or POST request) | 1 | 1 | 0 | I am trying to redirect a POST request from a Google App Engine Python handler to another URL. The problem is that it seems the method is changed to GET. Is there any way to set the POST method when redirecting? | Webapp2 Redirect Method | 1.2 | 0 | 0 | 177
29,870,031 | 2015-04-25T19:56:00.000 | 0 | 0 | 0 | 0 | python,math,vector,line,collision | 29,870,156 | 1 | false | 0 | 0 | Possible Solutions:
Instead of using a single 1D 'line', you could construct a 2D rectangle (that is as thin as you want/need it to be) --- composed of 4 separate 'lines'. I.e. you can have collisions with any of the 4 faces of the rectangle object. Would that work?
Do some sort of corner collision -- if the ball 'hits' the start or end of a line, have it bounce off appropriately. I think the way this would be done is as follows:
i. Collision occurs if the corner falls within the radius of the ball.
ii. Define a line between the corner and the center of the ball.
iii. Reverse the component of the ball's velocity along this line. | 1 | 1 | 1 | So I have a program where a ball subject to gravity bounces off of lines created by a user with mouse clicks. These lines are normally sloped. My collision bounces work perfectly EXCEPT in the case where ball does approximately this:
->O ------
My code works by finding the normal vector of the line such that the scalar product of the incident velocity vector of the ball and the normal of the line is negative (this means the vectors have opposite directions).
Then I decompose the velocity into components parallel and perpendicular to the normal, and then reverse the direction of the parallel component.
During the edge case described above the ball moves basically along the line. How can I account for this? Any advice? | Ball-Line Segment Collision on End-Point of Line | 0 | 0 | 0 | 157
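A vector sketch of steps i-iii from the answer above (2D points as (x, y) tuples; no pygame specifics assumed):
import math

def bounce_off_corner(center, vel, corner, radius):
    dx, dy = center[0] - corner[0], center[1] - corner[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > radius:
        return vel  # no corner hit
    nx, ny = dx / dist, dy / dist  # unit normal from corner to ball center
    dot = vel[0] * nx + vel[1] * ny
    # reverse the velocity component along the corner-center line
    return (vel[0] - 2 * dot * nx, vel[1] - 2 * dot * ny)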
29,871,461 | 2015-04-25T22:31:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,sqlite,unique-constraint | 29,874,681 | 2 | true | 0 | 0 | If the primary key is part of the UNIQUE constraint that led to the violation, you already have its value.
Otherwise, the two columns in the UNIQUE constraint are an alternate key for the table, i.e., they can uniquely identify the conflicting row.
If you need the actual primary key, you need to do an additional SELECT.
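A sketch of that extra lookup (table and column names are hypothetical):
import sqlite3
conn = sqlite3.connect("example.db")
cur = conn.cursor()
a, b = "x", "y"  # the pair covered by the UNIQUE constraint
try:
    cur.execute("INSERT INTO t (a, b) VALUES (?, ?)", (a, b))
except sqlite3.IntegrityError:
    # the UNIQUE pair identifies the existing row, so fetch its primary key
    row_id = cur.execute("SELECT id FROM t WHERE a = ? AND b = ?",
                         (a, b)).fetchone()[0]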
(The primary key of the existing row is not part of the exception because it was never looked at during the INSERT attempt.) | 1 | 1 | 0 | I have UNIQUE constraint on two columns of a table in SQLite.
If I insert a record with a duplicate on these two columns into the table, I will get an exception (sqlite3.IntegrityError).
Is it possible to retrieve the primary key ID of this record upon such a violation, without doing an additional SELECT? | Return existing primary key ID upon constraint failure in sqlite3 | 1.2 | 1 | 0 | 1,878 |
29,871,669 | 2015-04-25T22:57:00.000 | 6 | 0 | 0 | 0 | python,scipy | 29,883,739 | 3 | false | 0 | 0 | scipy.sparse has a number of formats, though only a couple have an efficient set of numeric operations. Unfortunately, those are the harder ones to extend.
dok uses a tuple of the indices as dictionary keys. So that would be easy to generalize from 2d to 3d or more. coo has row, col, data attribute arrays. Conceptually then, adding a third depth(?) is easy. lil probably would require lists within lists, which could get messy.
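A tiny illustration of that dok-style generalization (a plain dict keyed by 3D index tuples; not a scipy class):
from collections import defaultdict
sparse3d = defaultdict(float)  # missing entries read as 0.0
sparse3d[2, 0, 5] = 1.5
sparse3d[0, 3, 1] += 2.0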
But csr and csc store the array in indices, indptr and data arrays. This format was worked out years ago by mathematicians working with linear algebra problems, along with efficient math operations (esp matrix multiplication). (The relevant paper is cited in the source code).
So representing 3d sparse arrays is not a problem, but implementing efficient vector operations could require some fundamental mathematical research.
Do you really need the 3d layout to do the vector operations? Could you, for example, reshape 2 of the dimensions into 1, at least temporarily?
Element by element operations (*,+,-) work just as well with the data of a flattened array as with the 2 or 3d version. np.tensordot handles nD matrix multiplication by reshaping the inputs into 2D arrays, and applying np.dot. Even when np.einsum is used on 3d arrays, the product summation is normally over just one pair of dimensions (e.g. 'ijk,jl->ikl')
3D representation can be conceptually convenient, but I can't think of a mathematical operation that requires it (instead of 2 or 1d).
Overall I think you'll get more speed from reshaping your arrays than from trying to find/implement genuine 3d sparse operations. | 1 | 14 | 1 | I am working on a project where I need to deal with 3 dimensional large array. I was using numpy 3d array but most of my entries are going to be zero, so it's lots of wastage of memory. Scipy sparse seems to allow only 2D matrix. Is there any other way I can store 3D sparse array? | Python multi dimensional sparse array | 1 | 0 | 0 | 9,257 |
29,871,970 | 2015-04-25T23:40:00.000 | 0 | 0 | 0 | 1 | python,eclipse,wxpython,pydev | 29,873,332 | 1 | true | 0 | 0 | I figured it out on my own. I deleted the wx and wxPython forced builtins and then loaded wx as an external library. Everything worked fine after that. | 1 | 0 | 0 | I'm using Eclipse Luna and the latest pydev with it. I have wxpython 3.0 installed. First, I could import wx and I tried in the console to print version, perfect, but then I do import wx.lib.pubsub -- it says unresolved. I try other variations, no dice, so I have to go into the properties of my project and add wx manually, then it worked.
Second, now all my CallAfter calls are underlined red, undefined variable from import. I know callAfter used to be it, so I tried that too, it tries to autocomplete to it -- but then underlines it. I know in 3.0, CallAfter is capitalized. Even if it wasn't, Eclipse tries to autocomplete to an old version and then says it's still bad.
I've never seen that before, I'm confused. Does anyone know what I'm doing incorrectly?
EDIT: Even weirder -- I use the console inside pydev eclipse, it autocompletes to normal CallAfter and doesn't throw any errors. | Python/Eclipse/wxPython -- CallAfter undefined variable? callAfter is as well -- confused | 1.2 | 0 | 0 | 94 |
29,876,315 | 2015-04-26T10:10:00.000 | 0 | 0 | 1 | 0 | python,apache,flask | 29,876,936 | 1 | false | 1 | 0 | Solved it by giving absolute path. I was trying all combinations of paths and also gave absolute path /var/www/arxiv/static/data/name.json and it worked. | 1 | 0 | 0 | I am using f = open('name.json','w+') to create a new file and write to it. But i am unable to create the file. Apache server logs show "No such file exists." | Unable to create a file on Ubuntu server using Flask and Python | 0 | 0 | 0 | 170 |
29,876,874 | 2015-04-26T11:11:00.000 | 0 | 0 | 0 | 0 | python,django,git | 31,610,217 | 1 | true | 1 | 0 | A possibility, as suggested, is templating the project which could suffice. However, templating projects would result in a different starting point for every project. Since templates evolve over time there's not one project that starts the same.
Having a baseline throughout all your projects seems only worth pursuing if you're big enough as a (development) company and are building enough different projects. A baseline in this case can be a custom python egg or an additional django app added to your Django project. Such an app can enforce deployment strategies or help maintain infrastructural dependencies.
For instance, propagating/configuring server information about your infrastructure or other services. Think of db, storage, auth services and other backends. When moving to a different in-house storage backend or its new API version, your app can smooth the transition, if you don't already have (re-usable) apps added to your project specifically to hook it up to the storage service.
So, in short, I think the answer to my own question is:
Yes, create reusable apps for this as far as possible but always make sure the solution is not bigger then the problem. | 1 | 0 | 0 | I'm trying to decide whether underneath 'reusable apps' it is doable to maintain a reusable bare project setup as well.
On a side note: I can destroy any of my servers and rebuild it in barely hours as long as I have a (data) backup and a blueprint (in my case saltstack, but it might just as well be puppet, chef or what not).
With a flexible infra, these deployment steps are next:
Create a virtualenv for the django application
Check out my project (always named project)
My project has a setup.py included. This sets the surroundings:
render settings files based on yaml data and templates
Possibly loads fixtures when needed
When in production, it renders apache config
etc
After that, as git submodules, you can plug your reusable apps to end up with a running webapplication.
With quite a lot of intelligence built into the bare project base structure, I ended up with my original question.
I'm wondering if the project baseline is maintainable as a separate, cross-(Web)Application git repo. Or does the project structure itself have too many specific moving parts related to the (Web)Application as a whole?
E.g. pluggable apps go in INSTALLED_APPS, etc., always ending up with too much related/changing data.
Hope it's clear. Looking forward to your comments. | Is it possible to maintain a reusable django *project* across apps? | 1.2 | 0 | 0 | 98
29,880,603 | 2015-04-26T16:57:00.000 | 7 | 0 | 1 | 0 | python,large-files | 29,880,709 | 3 | false | 0 | 0 | Multiprocessing will not really help, because your bottleneck is memory. You will need to use hashes:
Read line
Calculate hash, e.g. md5, look it up in a set of all encountered hashes.
Output line if hash not found in set and add this hash to set.
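A sketch of those three steps (md5 digests at 16 bytes each keep the set as small as possible):
import hashlib

def dedupe(in_path, out_path):
    seen = set()
    with open(in_path, "rb") as src, open(out_path, "wb") as dst:
        for line in src:
            h = hashlib.md5(line).digest()
            if h not in seen:
                seen.add(h)
                dst.write(line)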
Couple things to be mindful of:
md5 takes 128 bits, so even without overhead it is more than 2G of ram.
set and dict have large memory overhead.
So if you have 4+ gigs, it is doable. A more scalable solution would be to store encountered hashes in sorted file(s) on disk and search through them every time. This will be (a lot!) slower, but you can have as low a memory footprint as you want.
Also if you don't care about line ordering in resulting file, you can split your file into smaller files based on some hash function (lines with md5 starting with a, lines with md5 starting with b etc). This will allow you to make them small enough to just sort | uniq them (or sort in-memory with python, if you wish) and concatenate results. | 2 | 5 | 0 | I have a huge text file that has duplicate lines. The size would be about 150000000 lines. I'd like to find the most efficient way to read these lines in and eliminate duplicates. Some of the approaches I'm considering are as follows :-
Read the whole file in, do a list(set(lines)).
Read 10k lines in at a time, do a list(set(lines)) on what I have, read another 10k lines into the list, do a list(set(lines)). Repeat.
How would you approach this problem? Would any form of multiprocessing help? | Python read a huge file and eliminate duplicate lines | 1 | 0 | 0 | 1,785 |
29,880,603 | 2015-04-26T16:57:00.000 | 0 | 0 | 1 | 0 | python,large-files | 29,880,817 | 3 | false | 0 | 0 | Think about if you really need to solve this in python itself. You could
call out to sort and uniq, standard tools that are present on most POSIX systems. They will do the job, are faster, and handle edge cases (e.g. running out of memory) before you've thought about them.
The most simple solution would probably be to create an in-memory database using the sqlite-package, insert all lines into a temporary table and do a select distinct... from it. Again, sqlite will perform better than you could do yourself in pure python. | 2 | 5 | 0 | I have a huge text file that has duplicate lines. The size would be about 150000000 lines. I'd like to find the most efficient way to read these lines in and eliminate duplicates. Some of the approaches I'm considering are as follows :-
Read the whole file in, do a list(set(lines)).
Read 10k lines in at a time, do a list(set(lines)) on what I have, read another 10k lines into the list, do a list(set(lines)). Repeat.
How would you approach this problem? Would any form of multiprocessing help? | Python read a huge file and eliminate duplicate lines | 0 | 0 | 0 | 1,785 |
29,881,872 | 2015-04-26T18:40:00.000 | 0 | 0 | 0 | 0 | python,matplotlib,plot | 41,026,037 | 2 | false | 0 | 0 | The inverted arrowhead is due to a negative sign of the head_length variable. Probably you are scaling it using a negative value. Using head_length= abs(value)*somethingelse should take care of your problem. | 1 | 3 | 1 | I am trying to plot arrows pointing at a point on a curve in python using matplotlib.
On this line I need to point vertical arrows at specific points.
This is for indicating forces acting on a beam, so their direction is very important; the curve is the beam and the arrow is the force.
I know the coordinates of said point exactly, but they are of course changing with the input.
This input should also dictate whether the arrow points upwards or downwards from the line. (negative and positive forces applied).
I have tried endlessly with plt.arrow, but the scale changes drastically, and so does the quadrant in which the arrow has to be; it might have to start at y < 0 and end at a point where y > 0.
The problem is that the arrowhead length then points the wrong way, like --<. instead of -->.
So before I go bald because of this, I would like to know if there is an easy way to apply a vertical arrow (it could be infinite in the opposite direction for all I care) pointing to a point on a curve, for which I can control whether it points upwards to the curve or downwards to the curve. | Arrow pointing to a point on a curve | 0 | 0 | 0 | 1,773
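An alternative technique that sidesteps the head-length scaling issue from the answer above is plt.annotate, whose arrowhead geometry is independent of the data scale (a sketch; the point and force value are illustrative, added on top of an existing beam plot):
import matplotlib.pyplot as plt
x0, y0, force = 2.0, 1.3, -5.0  # point on the beam and a signed force
offset = 0.5 if force > 0 else -0.5
plt.annotate("", xy=(x0, y0), xytext=(x0, y0 - offset),
             arrowprops=dict(arrowstyle="->"))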
29,883,690 | 2015-04-26T21:20:00.000 | 1 | 0 | 1 | 0 | python,pycharm,lapack,blas | 39,548,461 | 5 | false | 0 | 0 | I had the same issue, and downloading Anaconda, and switching the project interpreter in PyCharm to \Anaconda3\python.exe helped solve this.
Good luck! | 1 | 11 | 1 | I'm currently having trouble installing scipy via PyCharm's package manager. I have installed numpy successfully and do have the Microsoft Visual Studio C/C++ compiler in the System Variables.
However, when it's time to install scipy in PyCharm, the following error occurs:
Executed Command: pip install scipy
Error occured: numpy.distutils.system_info.NotFoundError: no lapack/blas resources found
I have seen other resources on installing blas / lapack on windows, but I'm unsure if it will work with PyCharm's installations.
If anybody has the solution / resources to redirect me to, please let me know. | Trouble installing scipy via pyCharm windows 8 - no lapack / blas resources found | 0.039979 | 0 | 0 | 16,017 |
29,885,414 | 2015-04-27T00:42:00.000 | 1 | 0 | 0 | 0 | python,django,google-calendar-api | 29,885,432 | 1 | true | 1 | 0 | You can export it as CSV, much easier, and Google calendar supports it as Import format | 1 | 0 | 0 | I'm writing an app in Django, I would like to be able to allow users to export an event schedule to Google Calendar.
I was thinking about exporting the schedule to an iCal file, but I would have to use some unpopular third party libraries, which sounds like a lot of trouble. Does Django have a functionality like that? | How to export a schedule to Google Calendar? | 1.2 | 0 | 0 | 186 |
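A sketch of writing such a CSV from Python (the header row follows Google Calendar's commonly documented import format; the event is illustrative):
import csv

with open("events.csv", "w") as f:
    writer = csv.writer(f)
    writer.writerow(["Subject", "Start Date", "Start Time",
                     "End Date", "End Time"])
    writer.writerow(["Team meeting", "04/28/2015", "10:00 AM",
                     "04/28/2015", "11:00 AM"])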
29,886,501 | 2015-04-27T03:03:00.000 | 1 | 0 | 1 | 0 | python,datetime,time | 29,886,958 | 2 | false | 0 | 0 | If running the script in a UNIX like OS, you can use the date command -
>>>import subprocess
>>>process=subprocess.Popen(['date','-d','@1430106933', '+%Y%m%d'], stdout=subprocess.PIPE)
>>>out,err = process.communicate()
>>>print out
20150426 | 1 | 2 | 0 | Without having to convert it to datetime, how can I get the date from a Unix timestamp? In other words, I would like to remove hours, minutes and seconds from the timestamp and get the numbers that represent the date only. | How to remove hours, minutes, and seconds from a Unix timestamp? | 0.099668 | 0 | 0 | 3,529
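A pure-arithmetic alternative that needs no subprocess (assumes you want midnight UTC as the day boundary):
ts = 1430106933
day_start = ts - (ts % 86400)  # timestamp of the start of that UTC day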
29,888,233 | 2015-04-27T05:58:00.000 | 1 | 0 | 0 | 0 | python,image,neural-network | 29,889,993 | 9 | false | 0 | 0 | Draw the network with nodes as circles connected with lines. The line widths must be proportional to the weights. Very small weights can be displayed even without a line. | 1 | 29 | 1 | I want to draw a dynamic picture for a neural network to watch the weights changed and the activation of neurons during learning. How could I simulate the process in Python?
More precisely, if the network shape is: [1000, 300, 50],
then I wish to draw a three layer NN which contains 1000, 300 and 50 neurons respectively.
Further, I hope the picture could reflect the saturation of neurons on each layer during each epoch.
I've no idea about how to do it. Can someone shed some light on me? | How to visualize a neural network | 0.022219 | 0 | 0 | 35,856 |
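A compact matplotlib sketch of that idea (layer sizes are capped to a drawable sample; making line width proportional to each weight is left out since no weights are given here):
import matplotlib.pyplot as plt

layers = [1000, 300, 50]
shown = [min(n, 10) for n in layers]  # draw at most 10 nodes per layer
prev_ys = []
for x, n in enumerate(shown):
    ys = [i - n / 2.0 for i in range(n)]
    plt.scatter([x] * n, ys, s=80, zorder=2)
    for y0 in prev_ys:
        for y1 in ys:
            plt.plot([x - 1, x], [y0, y1], "k-", lw=0.3, zorder=1)
    prev_ys = ys
plt.axis("off")
plt.show()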